Journal Articles
478,703 articles found
1. Challenges to and Countermeasures for the Value Realization of Healthcare Data Elements in China (Cited by: 1)
Authors: Tianan Yang, Wenhao Deng, Ran Liu, Tianyu Wang, Yuanyuan Dai, Jianwei Deng. Health Care Science, 2025(3): 225-228.
As a new type of production factor in healthcare, healthcare data elements have been rapidly integrated into various health production processes, such as clinical assistance, health management, biological testing, and operation and supervision [1,2]. Healthcare data elements include biological and clinical data that are related to disease, environmental health data that are associated with life, and operational and healthcare management data that are related to healthcare activities (Figure 1). Activities such as the construction of a data value assessment system, the development of a data circulation and sharing platform, and the authorization of data compliance and operation products support the strong growth momentum of the market for healthcare data elements in China [3].
Keywords: China; healthcare data elements; healthcare data management; value realization
2. Standardizing Healthcare Datasets in China: Challenges and Strategies
Authors: Zheng-Yong Hu, Xiao-Lei Xiu, Jing-Yu Zhang, Wan-Fei Hu, Si-Zhu Wu. Chinese Medical Sciences Journal, 2025(4): 253-267, I0001.
Standardized datasets are foundational to healthcare informatization, enhancing data quality and unleashing the value of data elements. Using bibliometrics and content analysis, this study examines China's healthcare dataset standards from 2011 to 2025. It analyzes their evolution across types, applications, institutions, and themes, highlighting key achievements including substantial growth in quantity, optimized typology, expansion into innovative application scenarios such as health decision support, and broadened institutional involvement. The study also identifies critical challenges, including imbalanced development, insufficient quality control, and a lack of essential metadata (such as authoritative data element mappings and privacy annotations) that hampers the delivery of intelligent services. To address these challenges, the study proposes a multi-faceted strategy focused on optimizing the standard system's architecture, enhancing quality and implementation, and advancing both data governance (through authoritative tracing and privacy protection) and intelligent service provision. These strategies aim to promote the application of dataset standards, thereby fostering and securing the development of new productive forces in healthcare.
Keywords: healthcare dataset standards; data standardization; data management
3. Access and Privacy Control for Healthcare Decision Support System: A Smart Medical Data Exchange Engine (SMDEE)
Authors: Imran Khan, Javed Rashid, Anwar Ghani, Muhammad Shoaib Saleem, Muhammad Faheem, Humera Khan. CAAI Transactions on Intelligence Technology, 2025(6): 1616-1632.
Secure and automated sharing of medical information among different medical entities and stakeholders, such as patients, hospitals, doctors, law enforcement agencies, and health insurance companies, in a standard format has always been a challenging problem. Current methods for ensuring compliance with medical privacy laws require specialists who are deeply familiar with these laws' complex requirements to verify the lawful exchange of medical information. This article introduces a Smart Medical Data Exchange Engine (SMDEE) designed to automate the extraction of logical rules from medical privacy legislation using advanced techniques. These rules facilitate the secure extraction of information, safeguarding patient privacy and confidentiality. In addition, SMDEE can generate standardised clinical documents according to Health Level 7 (HL7) standards and also standardise the nomenclature of requested medical data, enabling accurate decision-making when accessing patient data. All access requests to patient information are processed through SMDEE to ensure authorised access. The proposed system's efficacy is evaluated using the Health Insurance Portability and Accountability Act (HIPAA), a fundamental privacy law in the United States; however, SMDEE's flexibility allows its application worldwide, accommodating various medical privacy laws. Beyond facilitating global information exchange, SMDEE aims to enhance international patients' timely and appropriate treatment.
Keywords: data protection; decision making; information retrieval; intelligent information processing; medical applications; privacy issues; security; security of data
4. RETRACTION: Challenges and Opportunities of Big Data Analytics in Healthcare
Health Care Science, 2025(2): 161.
RETRACTION: P. Goyal and R. Malviya, "Challenges and Opportunities of Big Data Analytics in Healthcare," Health Care Science 2, no. 5 (2023): 328-338, https://doi.org/10.1002/hcs2.66. The above article, published online on 4 October 2023 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal Editor-in-Chief, Zongjiu Zhang; Tsinghua University Press; and John Wiley & Sons Ltd.
Keywords: big data analytics; retraction; healthcare
5. Large Language Model in Healthcare for the Prediction of Genetic Variants from Unstructured Text Medicine Data Using Natural Language Processing
Authors: Noor Ayesha, Muhammad Mujahid, Abeer Rashad Mirdad, Faten S. Alamri, Amjad R. Khan. Computers, Materials & Continua, 2025(7): 1883-1899.
Large language models (LLMs) and natural language processing (NLP) hold significant promise for improving efficiency and refining healthcare decision-making and clinical results. Numerous domains, including healthcare, are rapidly adopting LLMs for the classification of biomedical textual data in medical research, since LLMs can derive insights from intricate, extensive, unstructured training data. Variants need to be accurately identified and classified to advance genetic research, provide individualized treatment, and assist physicians in making better choices. However, the sophisticated and perplexing language of medical reports is often beyond the capabilities of the tools currently in use, which may result in incorrect diagnoses that affect a patient's prognosis and course of therapy. This study evaluated the efficacy of the proposed model on publicly accessible textual clinical data. The clinical text was cleaned using various preprocessing methods, including stemming, tokenization, and stop word removal, and the important features were extracted using Bag of Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF) feature engineering. The main motive of this study is to predict genetic variants from clinical evidence using a novel method with minimal error. According to the experimental results, the random forest model achieved 61% accuracy with 67% precision for class 9 using TF-IDF features, and 63% accuracy with a 73% F1 score for class 9 using Bag of Words features. The accuracy of the proposed BERT (Bidirectional Encoder Representations from Transformers) model was 70% with 5-fold cross-validation and 71% with 10-fold cross-validation. The results provide a comprehensive overview of current LLM methods in healthcare, benefiting academics as well as professionals in the discipline.
Keywords: LLM; unstructured data; genetics; prediction; healthcare; medicine
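The preprocessing and modelling steps named in this abstract (stop-word removal, TF-IDF features, a random-forest classifier, 5-fold cross-validation) could look roughly like the sketch below. This is an illustrative outline assuming scikit-learn and pandas, not the authors' code; the file name `clinical_variants.csv` and the column names are hypothetical placeholders.

```python
# Illustrative TF-IDF + random-forest baseline for clinical-text variant
# classification, mirroring the pipeline described in the abstract above.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Hypothetical dataset: 'clinical_text' holds the evidence, 'variant_class' the label.
df = pd.read_csv("clinical_variants.csv")

model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english", max_features=20000)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=42)),
])

# 5-fold cross-validation, as in the evaluation protocol mentioned in the abstract.
scores = cross_val_score(model, df["clinical_text"], df["variant_class"],
                         cv=5, scoring="accuracy")
print(f"Mean 5-fold accuracy: {scores.mean():.3f}")
```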
6. Streamlining heart failure patient care with machine learning of thoracic cavity sound data
Authors: Rony Marethianto Santoso, Wilbert Huang, Ser Wee, Bambang Budi Siswanto, Amiliana Mardiani Soesanto, Wisnu Jatmiko, Aria Kekalih. World Journal of Cardiology, 2025(9): 33-42.
Together, the heart and lung sounds comprise the thoracic cavity sound, which provides informative details that reflect patient conditions, particularly for heart failure (HF) patients. However, due to the limitations of human hearing, only a limited amount of information can be auscultated from thoracic cavity sounds. With the aid of artificial intelligence and machine learning, these features can be analyzed to aid the care of HF patients. Machine learning of thoracic cavity sound data involves sound data pre-processing by denoising, resampling, segmentation, and normalization. The most crucial step that follows is feature extraction and selection, where relevant features are selected to train the model; the final steps are classification and model performance evaluation. This review summarizes the currently available studies that used different machine learning models, feature extraction and selection methods, and classifiers to generate the desired output. Most studies have analyzed the heart sound component of thoracic cavity sound to distinguish between normal and HF patients. Some studies have aimed to classify HF patients based on thoracic cavity sounds in their entirety, while others have focused on risk stratification and prognostic evaluation of HF patients using thoracic cavity sounds. Overall, the results from these studies demonstrate a promisingly high level of accuracy. Future prospective studies should therefore incorporate these machine learning models to expedite their integration into daily clinical practice for managing HF patients.
Keywords: machine learning; heart failure; sound data; artificial intelligence; deep learning
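As a rough illustration of the preprocessing and feature-extraction stages the review describes (resampling, normalization, feature extraction, then classification), here is a minimal sketch assuming librosa and scikit-learn. The file names, sampling rate, and choice of MFCC features are assumptions for illustration, not taken from any of the reviewed studies.

```python
# Minimal sketch: resample and normalize thoracic cavity sound recordings,
# extract MFCC features, and train a simple classifier. Illustrative only.
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier

def extract_features(path, sr=4000):
    y, _ = librosa.load(path, sr=sr)            # load and resample the recording
    y = y / (np.max(np.abs(y)) + 1e-8)          # amplitude normalization
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)                    # summarize each coefficient over time

# Hypothetical labelled recordings (1 = heart failure, 0 = control).
paths = ["hf_001.wav", "hf_002.wav", "ctl_001.wav", "ctl_002.wav"]
labels = [1, 1, 0, 0]
X = np.vstack([extract_features(p) for p in paths])
clf = GradientBoostingClassifier().fit(X, labels)
```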
7. Blockchain-Based Electronic Health Passport for Secure Storage and Sharing of Healthcare Data
Authors: Yogendra P. S. Maravi, Nishchol Mishra. Computers, Materials & Continua, 2025(6): 5517-5537.
The growing demand for international travel has highlighted the critical need for reliable tools to verify travelers' healthcare status and meet entry requirements. Personal health passports, while essential, face significant challenges related to data silos, privacy protection, and forgery risks in global sharing. To address these issues, this study proposes a blockchain-based solution designed for the secure storage, sharing, and verification of personal health passports. The approach combines on-chain and off-chain storage, leveraging searchable encryption to enhance data security and optimize blockchain storage efficiency. By reducing the storage burden on the blockchain, the system ensures both the secure handling and reliable sharing of sensitive personal health data. An optimized consensus mechanism streamlines the process into two stages, minimizing communication complexity among nodes and significantly improving the throughput of the blockchain system. Additionally, the introduction of advanced aggregate signature technology accommodates multi-user scenarios, reducing the computational overhead of signature verification and enabling swift identification of malicious forgers. Comprehensive security analyses validate the system's robustness and reliability. Simulation results demonstrate notable performance improvements over existing solutions, with reductions in computational overhead of up to 49.89% and communication overhead of up to 25.81% in multi-user scenarios. Furthermore, the optimized consensus mechanism shows substantial efficiency gains across varying node configurations. This solution represents a significant step toward addressing the pressing challenges of health passport management in a secure, scalable, and efficient manner.
Keywords: blockchain; healthcare and machine learning; healthcare device data; health passport; computational overhead; signature verification
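The hybrid on-chain/off-chain storage idea summarized above, keeping bulky (encrypted) records off-chain while anchoring only their digests on the chain, can be illustrated with a small sketch. The `OffChainStore` and `Ledger` classes below are hypothetical stand-ins, not the paper's system; the searchable-encryption and consensus layers are omitted.

```python
# Illustrative on-chain/off-chain split: the health record lives off-chain,
# and only its SHA-256 digest is anchored on a minimal "ledger".
import hashlib
import json

class OffChainStore:                          # hypothetical off-chain blob store
    def __init__(self):
        self._blobs = {}
    def put(self, blob: bytes) -> str:
        digest = hashlib.sha256(blob).hexdigest()
        self._blobs[digest] = blob
        return digest
    def get(self, digest: str) -> bytes:
        return self._blobs[digest]

class Ledger:                                 # hypothetical on-chain anchor list
    def __init__(self):
        self.entries = []
    def anchor(self, holder_id: str, digest: str):
        self.entries.append({"holder": holder_id, "digest": digest})
    def verify(self, digest: str, blob: bytes) -> bool:
        return hashlib.sha256(blob).hexdigest() == digest

store, ledger = OffChainStore(), Ledger()
record = json.dumps({"vaccination": "completed"}).encode()   # would be encrypted in practice
digest = store.put(record)
ledger.anchor("traveler-001", digest)
assert ledger.verify(digest, store.get(digest))              # integrity check on retrieval
```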
8. Enhancing Healthcare Data Privacy in Cloud IoT Networks Using Anomaly Detection and Optimization with Explainable AI (ExAI)
Authors: Jitendra Kumar Samriya, Virendra Singh, Gourav Bathla, Meena Malik, Varsha Arya, Wadee Alhalabi, Brij B. Gupta. Computers, Materials & Continua, 2025(8): 3893-3910.
The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amid the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on identifying anomalies in cloud-based IoT networks and is optimized using explainable artificial intelligence. For anomaly detection, a Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) is used to perform the privacy analysis of healthcare data, and Remora Colony Swarm Optimization is then used to optimize the network. The model's ability to identify anomalies across a variety of healthcare data is assessed in an experimental study that measures accuracy, precision, latency, Quality of Service (QoS), and scalability. The proposed model obtained 95% precision, 93% latency, 89% quality of service, 98% detection accuracy, and 96% scalability.
Keywords: healthcare data privacy analysis; anomaly detection; cloud IoT network; explainable artificial intelligence; temporal fuzzy network
9. A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
Authors: Thierry Mugenzi, Cahit Perkgoz. Computers, Materials & Continua, 2026(1): 1985-2005.
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms (Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship) under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missing mechanisms
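A minimal sketch of a composite imputation loss of the kind outlined above: a masked reconstruction error on missing entries, a noise-robustness term, and a variance penalty. The exact formulations and the weighting coefficients `lam_noise` and `lam_var` are assumptions for illustration, not the paper's definitions.

```python
# Illustrative composite loss for autoencoder-based imputation (PyTorch).
import torch

def composite_loss(x_true, x_hat, x_hat_noisy, miss_mask,
                   lam_noise=0.1, lam_var=0.01):
    # (i) guided, masked MSE: reconstruction error only on missing entries
    mse_missing = ((x_hat - x_true) ** 2 * miss_mask).sum() / miss_mask.sum().clamp(min=1)
    # (ii) noise-aware regularization: reconstructions from corrupted inputs
    #      should stay close to reconstructions from clean inputs
    noise_reg = ((x_hat_noisy - x_hat) ** 2).mean()
    # (iii) variance penalty: keep reconstruction variance close to the data's,
    #       discouraging collapsed (near-constant) imputations
    var_penalty = (x_hat.var(dim=0) - x_true.var(dim=0)).abs().mean()
    return mse_missing + lam_noise * noise_reg + lam_var * var_penalty
```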
10. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026(1): 1804-1818.
Modern intrusion detection systems (IDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with data preprocessing that incorporates both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies; HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and Random Forest as base learners; this layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness, and after balancing the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach, performed an ablation study to check the effect of each parameter, and found the proposed model to be computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalance data
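A condensed sketch of the preprocessing, resampling, and stacking stages named in the abstract, assuming scikit-learn, imbalanced-learn, and xgboost. The Harris Hawks feature-selection step is omitted, and the synthetic data, meta-learner, and hyperparameters are illustrative choices rather than the authors' configuration.

```python
# Illustrative scaling + SMOTE + stacked ensemble (XGBoost, SVM, RF), as outlined above.
from sklearn.datasets import make_classification
from sklearn.preprocessing import RobustScaler, QuantileTransformer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Synthetic imbalanced stand-in for an intrusion-detection dataset.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

scaler = make_pipeline(RobustScaler(), QuantileTransformer(output_distribution="normal"))
X_scaled = scaler.fit_transform(X)
X_res, y_res = SMOTE(random_state=42).fit_resample(X_scaled, y)   # balance the classes

stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier()),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_res, y_res)
```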
11. Enhanced Capacity Reversible Data Hiding Based on Pixel Value Ordering in Triple Stego Images
Authors: Kim Sao Nguyen, Ngoc Dung Bui. Computers, Materials & Continua, 2026(1): 1571-1586.
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared with existing triple-stego RDH approaches, advancing the field of reversible steganography.
Keywords: RDH; reversible data hiding; PVO; RDH based on three stego images
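For orientation, a simplified sketch of the basic pixel-value-ordering idea that this scheme builds on: within a sorted block, the maximum pixel is expanded or shifted according to its prediction error. The multi-bit, triple-stego-image embedding described in the abstract is considerably more elaborate than this single-bit illustration.

```python
# Simplified single-bit PVO embedding into one pixel block (classic PVO idea only).
import numpy as np

def pvo_embed_block(block, bit):
    """Embed one bit into the block's maximum pixel via prediction-error expansion."""
    flat = block.flatten().astype(np.int32)     # work on a copy, avoid uint8 overflow
    order = np.argsort(flat, kind="stable")     # ascending pixel order
    i_max, i_2nd = order[-1], order[-2]
    e = int(flat[i_max]) - int(flat[i_2nd])     # prediction error of the maximum
    if e == 1:                                  # expandable: hide the bit
        flat[i_max] += bit
    elif e > 1:                                 # not expandable: shift to keep reversibility
        flat[i_max] += 1
    # e == 0: maximum equals the second largest; leave the block unchanged
    return flat.reshape(block.shape)

block = np.array([[120, 122], [121, 123]])
stego = pvo_embed_block(block, bit=1)           # e = 123 - 122 = 1, so 123 becomes 124
```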
12. Take Care of Your Hair
Author: Ruth Devlin. 空中英语教室(初级版.大家说英语), 2026(1): 25-28, 56, 53.
Take care of your hair to help it stay clean, strong and healthy. Wash your hair when it gets dirty, but not too often. For most people, that means every two to three days. People with oily hair wash it every one to two days. Use a brush or comb to keep your hair neat and smooth. It's also important to be gentle so you don't pull or break your hair. Never go to bed with wet hair. It can break easily when you sleep. Dry it before bed!
Keywords: sleep hygiene; hair damage; washing frequency; hair type; scalp health; hair care; brushing; oily hair
13. Graph-Based Unified Settlement Framework for Complex Electricity Markets: Data Integration and Automated Refund Clearing
Authors: Xiaozhe Guo, Suyan Long, Ziyu Yue, Yifan Wang, Guanting Yin, Yuyang Wang, Zhaoyuan Wu. Energy Engineering, 2026(1): 56-90.
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools (dataset generation, formula configuration, billing templates, and task scheduling) facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
Keywords: electricity market; market settlement; data model; graph database; market refund clearing
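The flexible attribute-graph data model mentioned above can be pictured with a small sketch, here using networkx as a stand-in for the framework's graph store. The node and edge attributes (participant type, contract volume, price) are hypothetical examples, not the paper's schema.

```python
# Illustrative attribute graph for settlement data: market participants as nodes,
# contracts and metered deliveries as attributed edges.
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("gen_A", kind="generator")
G.add_node("retailer_B", kind="retailer")

# A bilateral contract and the corresponding metered delivery are kept as separate
# edges, so settlement formulas can be configured over either data source.
G.add_edge("gen_A", "retailer_B", key="contract_2026_01", volume_mwh=500, price=0.38)
G.add_edge("gen_A", "retailer_B", key="meter_2026_01", delivered_mwh=487.5)

# A settlement "operator" then amounts to a query plus a formula over edge attributes.
contract = G["gen_A"]["retailer_B"]["contract_2026_01"]
meter = G["gen_A"]["retailer_B"]["meter_2026_01"]
settled_amount = min(contract["volume_mwh"], meter["delivered_mwh"]) * contract["price"]
print(settled_amount)
```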
14. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. Computers, Materials & Continua, 2026(1): 247-274.
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied, and their effectiveness was comparatively analyzed using two ensemble models and three deep learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall improved by up to 23.0% on the UNSW-NB15 (Encrypted) dataset and by 20.26% on the CICIoT-2023 (Encrypted) dataset, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments, though the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
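The comparison of sampling techniques described above could be scripted along these lines; a minimal sketch assuming scikit-learn and imbalanced-learn, with a synthetic imbalanced dataset standing in for encrypted-traffic metadata. The sampler list and classifier are illustrative, not the paper's full set of eight techniques and five models.

```python
# Illustrative comparison of data-sampling techniques on an imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE, ADASYN, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {"none": None, "SMOTE": SMOTE(), "ADASYN": ADASYN(),
            "RandomOver": RandomOverSampler(), "RandomUnder": RandomUnderSampler()}

for name, sampler in samplers.items():
    X_res, y_res = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
    print(f"{name:>11}: f1 = {f1_score(y_te, clf.predict(X_te)):.3f}")
```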
15. A brief review on comparative analysis of IoT-based healthcare system for breast cancer prediction
Authors: Krishna Murari, Rajiv Ranjan Suman. Medical Data Mining, 2026(1): 46-58.
The integration of machine learning (ML) technology with Internet of Things (IoT) systems produces essential changes in healthcare operations. Healthcare personnel can track patients around the clock thanks to healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML might support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. The study looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications. It also examines the fundamental problems these systems face, such as data security and cyberthreats, as well as their high processing demands. Alongside this, the review discusses the advantages of all the technologies involved, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
Keywords: IoT; healthcare system; machine learning; breast cancer prediction; medical data mining; security challenges
16. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa S. Alaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. Computers, Materials & Continua, 2026(1): 2274-2301.
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R2 of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R2 to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, with training data portions increased from 5% to 50%; using just 10% of the data achieved near-peak performance with an R2 of 85.49%, an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
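The feature-selection idea in Experiments 2 and 3, ranking features by correlation with the human score, keeping the top N, and fitting a random-forest regressor with 5-fold cross-validation, can be sketched roughly as below. The feature matrix and score vector are synthetic stand-ins; the paper's actual text-, vector-, and embedding-based similarity features are computed upstream.

```python
# Illustrative top-N correlated feature selection + random-forest regression.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-in: 40 similarity features and a human-assigned score.
X = pd.DataFrame(rng.normal(size=(500, 40)),
                 columns=[f"sim_{i}" for i in range(40)])
y = X.iloc[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)

# Rank features by absolute Pearson correlation with the score, keep the top N.
N = 19                                  # the feature count the paper found optimal
corr = X.apply(lambda col: np.corrcoef(col, y)[0, 1]).abs()
top_features = corr.sort_values(ascending=False).head(N).index

r2 = cross_val_score(RandomForestRegressor(random_state=0),
                     X[top_features], y, cv=5, scoring="r2").mean()
print(f"Mean 5-fold R^2 with top-{N} features: {r2:.3f}")
```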
17. Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026(1): 389-411.
Objective expertise evaluation of individuals, as a prerequisite for team formation, has been a long-term desideratum in large software development companies. With the rapid advancement of machine learning methods, and given the reliable data already stored in project management tools' datasets, automating this evaluation becomes a natural step forward. In this context, our approach quantifies software developer expertise using metadata from task-tracking systems. We mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. We then automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
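Classifying each task's description into a zone of expertise with a transformer, as described above, might be sketched with a zero-shot classifier; here a BART-MNLI model from Hugging Face stands in for the paper's BERT-like models, and the task text and label set are hypothetical examples.

```python
# Illustrative zero-shot tagging of a task-tracker item into expertise zones.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

task = "Fix N+1 query problem in the Hibernate repository layer of the billing service"
zones = ["Java backend", "front-end development", "database", "DevOps", "mobile"]

result = classifier(task, candidate_labels=zones)
print(result["labels"][0], round(result["scores"][0], 3))   # top predicted expertise zone
```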
18. Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. Medical Data Mining, 2026(1): 32-45.
With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data owing to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements, as well as future directions in combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
19. AI-driven integration of multi-omics and multimodal data for precision medicine
Author: Heng-Rui Liu. Medical Data Mining, 2026(1): 1-2.
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Keywords: high-throughput transcriptomics; multi-omics; single cell; multimodal learning frameworks; foundation models; omics data modalities; AI-driven precision medicine
20. Multimodal artificial intelligence integrates imaging, endoscopic, and omics data for intelligent decision-making in individualized gastrointestinal tumor treatment
Authors: Hui Nian, Yi-Bin Wu, Yu Bai, Zhi-Long Zhang, Xiao-Huang Tu, Qi-Zhi Liu, De-Hua Zhou, Qian-Cheng Du. Artificial Intelligence in Gastroenterology, 2026(1): 1-19.
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to effectively tailor therapies.
Keywords: multimodal artificial intelligence; gastrointestinal tumors; individualized therapy; intelligent diagnosis; treatment optimization; prognostic prediction; data fusion; deep learning; precision medicine