The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats. In the evolving landscape of cybersecurity, the efficacy of Intrusion Detection Systems (IDS) is increasingly measured by technical performance, operational usability, and adaptability. This study introduces and rigorously evaluates a Human-Computer Interaction (HCI)-Integrated IDS, built with Convolutional Neural Network (CNN), CNN-Long Short-Term Memory (LSTM), and Random Forest (RF) models, against both a Baseline Machine Learning (ML) and a Traditional IDS model, through an extensive experimental framework encompassing many performance metrics, including detection latency, accuracy, alert prioritization, classification errors, system throughput, usability, ROC-AUC, precision-recall, confusion matrix analysis, and statistical accuracy measures. Our findings consistently demonstrate the superiority of the HCI-Integrated approach across three major datasets (CICIDS 2017, KDD Cup 1999, and UNSW-NB15). Experimental results indicate that the HCI-Integrated model outperforms its counterparts, achieving an AUC-ROC of 0.99, a precision of 0.93, and a recall of 0.96, while maintaining the lowest false positive rate (0.03) and the fastest detection time (~1.5 s). These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities, improve responsiveness, and reduce alert fatigue in critical smart city applications. It achieves markedly lower detection times, higher accuracy across all threat categories, reduced false positive and false negative rates, and enhanced system throughput under concurrent load conditions. The HCI-Integrated IDS excels in alert contextualization and prioritization, offering more actionable insights while minimizing analyst fatigue. Usability feedback underscores increased analyst confidence and operational clarity, reinforcing the importance of user-centered design. These results collectively position the HCI-Integrated IDS as a highly effective, scalable, and human-aligned solution for modern threat detection environments.
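The headline numbers in this abstract (precision 0.93, recall 0.96, false positive rate 0.03) all derive from the confusion matrix. As a minimal sketch of how such metrics are computed (the counts below are illustrative, not taken from the study):

```python
def ids_metrics(tp, fp, tn, fn):
    """Compute standard IDS evaluation metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)   # fraction of raised alerts that are true intrusions
    recall = tp / (tp + fn)      # fraction of intrusions that were detected
    fpr = fp / (fp + tn)         # fraction of benign traffic wrongly flagged
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "fpr": fpr, "f1": f1}

# Illustrative counts only -- not the paper's data.
m = ids_metrics(tp=960, fp=70, tn=2330, fn=40)
print({k: round(v, 3) for k, v in m.items()})
```

Keeping the false positive rate low while recall stays high is exactly the trade-off the alert-prioritization experiments probe.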
In the context of large language models (LLMs) reshaping software engineering education, this paper presents OSSerCopilot, an LLM-based tutoring system designed to address the critical challenges faced by newcomers (especially student contributors) in open source software (OSS) communities. Leveraging natural language processing, code semantic understanding, and learner profiling, the system functions as an intelligent tutor that scaffolds three core competency domains: contribution guideline interpretation, project architecture comprehension, and personalized task matching. By transforming traditional onboarding barriers, such as complex contribution documentation and opaque project structures, into interactive learning journeys, OSSerCopilot enables newcomers to complete their first OSS contribution more easily and confidently. This paper highlights how LLM technologies can redefine software engineering education by bridging the gap between theoretical knowledge and practical OSS participation, offering implications for curriculum design, competency assessment, and sustainable OSS ecosystem cultivation. A demonstration video of the system is available at https://figshare.com/articles/media/OSSerCopilot_Introduction_mp4/29510276.
Heart disease includes a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Numerous researchers have made progress in detecting and predicting early heart disease, but more remains to be accomplished. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease using traditional approaches. By using data fusion from several regions of the country, we intend to increase the accuracy of heart disease prediction. We adopt a statistical approach that promotes insights triggered by feature interactions, revealing intricate patterns in the data that cannot be adequately captured by any single feature. We processed the data using techniques including feature scaling, outlier detection and replacement, and null and missing value imputation to improve data quality. Furthermore, the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to construct feature interactions. To reduce dimensionality, we subsequently applied PCA retaining 95% of the variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are utilized, along with ensemble models. The models' overall prediction performance ranges from 88% to 92%. To attain state-of-the-art results, we then used a 1D CNN model, which significantly enhanced the prediction with an accuracy of 96.36%, precision of 96.45%, recall of 96.36%, specificity of 99.51%, and F1 score of 96.34%. Without feature interaction, the RF model produces the best results among all the classifiers, with accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. Our proposed 1D CNN model with feature engineering is about 7% more accurate than the same model without it. This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
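The chi-square screening of categorical features mentioned above rests on Pearson's statistic over a contingency table. A minimal sketch (the contingency counts are hypothetical, purely to illustrate the test, not the study's data):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = symptom present/absent, cols = disease yes/no.
stat = chi_square_stat([[30, 10], [20, 40]])
print(round(stat, 3))
```

A large statistic relative to the chi-square critical value signals a feature-target association worth keeping in the interaction set.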
Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken for violations of the no-cloning theorem but are proven compliant under formal analysis; they thus serve as constructive validations of the metamodel's expressiveness and correctness in representing operations that appear to challenge the theorem yet, upon rigorous analysis, comply with it. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
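As a rough illustration of the kind of invariant such a metamodel might encode (the class and attribute names below are hypothetical, invented for this sketch, and are not taken from the paper's metamodel), an OCL constraint forbidding an operation from producing an exact duplicate of an unknown state could look like:

```ocl
context CopyOperation
-- Hypothetical invariant: an operation may not yield an exact, independent
-- duplicate of an unknown (unmeasured) quantum state.
inv NoCloning:
    self.source.state.isKnown or self.target.state <> self.source.state
```

An instance model violating this invariant would be flagged at design time, before any circuit is implemented.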
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer-modeling-based blockchain frameworks within supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we examined the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, increases security by using cryptographic techniques, and improves transparency in the supply chain by providing immutable records. However, regulatory concerns, interoperability challenges, and scalability limitations all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
Employee turnover presents considerable challenges for organizations, leading to increased recruitment costs and disruptions in ongoing operations. High voluntary attrition rates can result in substantial financial losses, making it essential for Human Resource (HR) departments to prioritize turnover reduction. In this context, Artificial Intelligence (AI) has emerged as a vital tool for strengthening business strategies and people management. This paper incorporates two new representative features, introducing three types of feature engineering to enhance the analysis of employee turnover in the IBM HR Analytics dataset. Key Machine Learning (ML) techniques were subsequently employed, such as Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Extreme Gradient Boosting (XGBoost), and especially Categorical Boosting (CatBoost), a gradient boosting algorithm optimized for categorical data. Adopting this feature engineering process enables CatBoost to enhance model accuracy and robustness while effectively analyzing complex patterns within employee data. Experimental results demonstrate the effectiveness of the proposed methodology, achieving the highest accuracy of 90.14% and an F1-score of 0.88 on the IBM dataset. To assess the capability of our detection system, we also used an extended dataset, achieving an optimal accuracy of 98.10% and an F1-score of 0.98. These results strongly indicate the efficiency of the proposed methodology and highlight the impact of feature engineering on predictive performance. Moreover, by pinpointing the top ten factors influencing attrition, including "Monthly Income", "Over Time", "Total Satisfaction", and others, this research equips HR departments with insights to implement targeted retention strategies, such as enhancing compensation or job satisfaction, to retain key talent before they consider leaving.
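The abstract does not spell out how the engineered features are built. Purely as an illustration, a composite "Total Satisfaction" feature could be derived by averaging the individual satisfaction ratings; the column names below follow the public IBM HR Analytics dataset, but the averaging rule is an assumption of this sketch, not the paper's definition:

```python
def total_satisfaction(row):
    """Average several 1-4 satisfaction ratings into one composite score."""
    cols = ["EnvironmentSatisfaction", "JobSatisfaction",
            "RelationshipSatisfaction", "WorkLifeBalance"]
    return sum(row[c] for c in cols) / len(cols)

employee = {"EnvironmentSatisfaction": 3, "JobSatisfaction": 2,
            "RelationshipSatisfaction": 4, "WorkLifeBalance": 3}
print(total_satisfaction(employee))  # 3.0
```

Such composites compress correlated ratings into a single, more stable signal that tree-based models like CatBoost can split on directly.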
This study presents an advanced method for post-mortem person identification using the segmentation of skeletal structures from chest X-ray images. The proposed approach employs the Attention U-Net architecture, enhanced with gated attention mechanisms, to refine segmentation by emphasizing spatially relevant anatomical features while suppressing irrelevant details. By isolating skeletal structures, which remain stable over time compared to soft tissues, this method leverages bones as reliable biometric markers for identity verification. The model integrates custom-designed encoder and decoder blocks with attention gates, achieving high segmentation precision. To evaluate the impact of architectural choices, we conducted an ablation study comparing Attention U-Net with and without attention mechanisms, alongside an analysis of data augmentation effects. Training and evaluation were performed on a curated chest X-ray dataset, with segmentation performance measured using Dice score, precision, and loss functions, achieving over 98% precision and a 94% Dice score. The extracted bone structures were further processed to derive unique biometric patterns, enabling robust and privacy-preserving person identification. Our findings highlight the effectiveness of attention mechanisms in improving segmentation accuracy and underscore the potential of chest-bone-based biometrics in forensic and medical imaging. This work paves the way for integrating artificial intelligence into real-world forensic workflows, offering a non-invasive and reliable solution for post-mortem identification.
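Segmentation quality above is reported via the Dice score. A minimal sketch of its computation on binary masks (the toy masks are illustrative, not the study's data):

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks (flat lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * intersection / total if total else 1.0

pred   = [1, 1, 0, 1, 0, 0, 1, 0]
target = [1, 0, 0, 1, 0, 1, 1, 0]
print(dice_score(pred, target))  # 0.75
```

Unlike plain accuracy, Dice ignores the (typically dominant) true-negative background pixels, which is why it is the standard metric for anatomical segmentation.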
Hepatitis is an infection that affects the liver, transmitted through contaminated food or blood transfusions, and it has many types, ranging from mild to serious. Hepatitis is diagnosed through many blood tests and factors; Artificial Intelligence (AI) techniques have played an important role in early diagnosis and help physicians make decisions. This study evaluated the performance of Machine Learning (ML) algorithms on the hepatitis dataset. The dataset contains missing values that were processed, and outliers were removed. The dataset was balanced using the Synthetic Minority Over-sampling Technique (SMOTE). The features of the dataset were processed in two ways: first, the Recursive Feature Elimination (RFE) algorithm was applied to rank the contribution of each feature to the diagnosis of hepatitis, followed by selection of important features using the t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) algorithms. Second, the SelectKBest function was applied to score each attribute, again followed by the t-SNE and PCA algorithms. Finally, the classification algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) were fed the dataset after processing the features with the different methods (RFE with t-SNE and PCA, and SelectKBest with t-SNE and PCA). All algorithms yielded promising results for diagnosing hepatitis. The RF with RFE and PCA achieved accuracy, precision, recall, and AUC of 97.18%, 96.72%, 97.29%, and 94.2%, respectively, during the training phase. During the testing phase, it reached accuracy, precision, recall, and AUC of 96.31%, 95.23%, 97.11%, and 92.67%, respectively.
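The RFE step described above can be sketched generically: repeatedly score the remaining features and drop the least important one. The importance function below (absolute Pearson correlation with the label) is a simple stand-in for the model-based ranking a real pipeline would use, and the toy data is illustrative only:

```python
def rfe_rank(X, y, importance, n_keep=2):
    """Recursive feature elimination: repeatedly drop the least important
    feature until n_keep remain. X is a list of feature columns."""
    remaining = list(range(len(X)))
    eliminated = []
    while len(remaining) > n_keep:
        scores = {i: importance(X[i], y) for i in remaining}
        worst = min(scores, key=scores.get)   # least important feature
        remaining.remove(worst)
        eliminated.append(worst)
    return remaining, eliminated

def abs_corr(col, y):
    """Absolute Pearson correlation, used here as a simple importance proxy."""
    n = len(col)
    mx, my = sum(col) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
    vx = sum((a - mx) ** 2 for a in col) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

# Toy data: feature 0 tracks the label closely, feature 2 is mostly noise.
X = [[1, 2, 3, 4, 5, 6],      # strongly correlated with y
     [2, 1, 4, 3, 6, 5],      # moderately correlated
     [5, 5, 5, 1, 5, 5]]      # mostly constant with one dip
y = [0, 0, 0, 1, 1, 1]
kept, dropped = rfe_rank(X, y, abs_corr, n_keep=1)
print(kept, dropped)
```

The elimination order itself is the feature ranking, which the study then feeds into t-SNE and PCA for further reduction.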
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration", "Groundwater Contaminant Measurements", and "Dominica COVID-19 Mortality", further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
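For reference, the baseline Burr XII distribution that the type-I heavy-tailed family extends has, for $x > 0$ and shape parameters $c, k > 0$, the standard CDF and PDF below; the additional heavy-tail shape parameter introduced by the TIHT generator is developed in the paper itself and is not reproduced here:

```latex
F(x) = 1 - \left(1 + x^{c}\right)^{-k}, \qquad
f(x) = \frac{dF}{dx} = c\,k\,x^{c-1}\left(1 + x^{c}\right)^{-k-1}, \qquad x > 0.
```

The polynomial decay of the survival function $(1 + x^{c})^{-k}$ is what gives the family its heavy right tail, in contrast to the exponential decay of, e.g., the Weibull distribution.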
The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, this algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, thereby proposing a variant called PKEBA. A projection screening strategy is implemented to mitigate its sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction, preventing the algorithm from stagnating in local optima. Subsequently, the effectiveness of the proposed PKEBA is rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed for further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
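The PKEBA strategies build on the canonical Bat algorithm. As a hedged sketch of the baseline update rules only (frequency-driven velocity and position updates with a simple greedy acceptance; the loudness and pulse-rate mechanics, and all PKEBA components, are omitted):

```python
import random

def bat_optimize(f, dim=2, n_bats=15, iters=200, fmin=0.0, fmax=2.0, seed=1):
    """Canonical Bat-algorithm core (after Yang, 2010) minimizing f over
    [-5, 5]^dim. A simplified sketch, not the paper's PKEBA variant."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=f)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()   # pulse frequency
            cand = []
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * freq  # velocity update
                cand.append(pos[i][d] + vel[i][d])         # position update
            if f(cand) < f(pos[i]):       # greedy acceptance (loudness omitted)
                pos[i] = cand
                if f(cand) < f(best):
                    best = cand[:]
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, val = bat_optimize(sphere)
print(round(val, 6))
```

Because each bat is pulled toward the global best with a random frequency, the swarm balances exploration and exploitation; the slow convergence the abstract mentions arises when that pull collapses the swarm prematurely.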
This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping conditions in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor to reduce overall scattering values, achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both the separation and inversion processes are conducted, analyzing comparison results across four different scenarios. The results of the comparative analysis show that the proposed method effectively reduces the time and space complexities to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%, respectively.
A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people, mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents, and the outcomes of their actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS science) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
Intrusion detection in Internet of Things (IoT) environments presents challenges due to heterogeneous devices, diverse attack vectors, and highly imbalanced datasets. Existing research on the ToN-IoT dataset has largely emphasized binary classification and single-model pipelines, which often show strong performance but limited generalizability, probabilistic reliability, and operational interpretability. This study proposes a stacked ensemble deep learning framework that integrates random forest, extreme gradient boosting, and a deep neural network as base learners, with CatBoost as the meta-learner. On the ToN-IoT Linux process dataset, the model achieved near-perfect discrimination (macro area under the curve = 0.998), robust calibration, and superior F1-scores compared with standalone classifiers. Interpretability was achieved through SHapley Additive exPlanations (SHAP)-based feature attribution, which highlights actionable drivers of malicious behavior, such as command-line patterns, process scheduling anomalies, and CPU usage spikes, and aligns these indicators with MITRE ATT&CK tactics and techniques. Complementary analyses, including cumulative lift and sensitivity-specificity trade-offs, revealed the framework's suitability for deployment in security operations centers, where calibrated risk scores, transparent explanations, and resource-aware triage are essential. These contributions bridge methodological rigor in artificial intelligence/machine learning with operational priorities in cybersecurity, delivering a scalable and explainable intrusion detection system suitable for real-world deployment in IoT environments.
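The two-level flow of such a stacked ensemble can be sketched as follows. The base learners and meta-learner below are trivial stand-ins (the study's actual models are RF, XGBoost, a DNN, and CatBoost, trained rather than hand-coded), and the feature names are hypothetical:

```python
def stack_predict(base_models, meta_model, x):
    """Level-0 models emit probabilities; the level-1 meta-model combines them."""
    level0 = [m(x) for m in base_models]
    return meta_model(level0)

# Hypothetical base learners scoring a process-event feature vector
# x = (cmdline_suspicion, sched_anomaly, cpu_spike), each in [0, 1].
rf_like  = lambda x: 0.7 * x[0] + 0.3 * x[1]
xgb_like = lambda x: 0.5 * x[0] + 0.5 * x[2]
dnn_like = lambda x: max(x)
meta     = lambda p: sum(p) / len(p)          # averaging meta-learner stand-in

score = stack_predict([rf_like, xgb_like, dnn_like], meta, (0.9, 0.4, 0.8))
print(round(score, 3))
```

In a real pipeline the meta-learner is trained on out-of-fold base predictions, which is what lets the stack exceed any single base model without overfitting to them.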
The Nelder-Mead simplex method is a well-known algorithm enabling the minimization of functions that are not available in closed form and that need not be differentiable or convex. Furthermore, it is particularly parsimonious in the number of function evaluations, making it preferable to convex optimization paradigms in the case, common when dealing with control design problems, that the objective function is non-differentiable, non-convex, and its closed form is unavailable or difficult to compute analytically. The main goal of this paper is to show how the joint use of the Nelder-Mead simplex method and the Morrison algorithm can successfully solve relevant and challenging control problems that cannot easily be solved using analytic methods. In particular, it is shown how the problems of strong stabilization, static output feedback stabilization, and design of robust fixed-structure controllers can be framed as optimization problems, which, in turn, can be efficiently solved by coupling the two above-mentioned algorithms. The performance of this procedure is compared with state-of-the-art techniques on dozens of static output feedback benchmark case studies, and its effectiveness is demonstrated by several examples.
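As context for the optimization engine, here is a compact sketch of the Nelder-Mead simplex iteration with the standard reflection/expansion/contraction/shrink coefficients (1, 2, 0.5, 0.5). It is an illustrative minimal variant, not the paper's implementation, and the Morrison algorithm is not shown:

```python
def nelder_mead(f, x0, step=1.0, iters=200):
    """Minimal Nelder-Mead simplex minimizer. A sketch for illustration;
    production code should add convergence tolerances and caching of f."""
    n = len(x0)
    simplex = [list(x0)]
    for d in range(n):                        # initial simplex around x0
        pt = list(x0)
        pt[d] += step
        simplex.append(pt)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[d] for p in simplex[:-1]) / n for d in range(n)]
        xr = [c + (c - w) for c, w in zip(centroid, worst)]          # reflect
        if f(xr) < f(best):
            xe = [c + 2 * (c - w) for c, w in zip(centroid, worst)]  # expand
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]  # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                             # shrink toward the best vertex
                simplex = [best] + [[b + 0.5 * (p - b)
                                     for b, p in zip(best, pt)]
                                    for pt in simplex[1:]]
    return min(simplex, key=f)

# Rosenbrock function: non-convex, minimum f = 0 at (1, 1).
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
xmin = nelder_mead(rosen, [-1.0, 1.0])
print(round(rosen(xmin), 6))
```

No derivative of `rosen` is ever required, which is precisely why the method suits control objectives, such as spectral abscissa-type costs, whose closed form is unavailable.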
Providing safe, quality food is crucial for every household and is of extreme significance to the growth of any society. It is a complex procedure that deals with all issues in the development of food processing, from seed to harvest, storage, preparation, and consumption. This paper seeks to demystify the roles of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems, giving assurance over food safety. CV is particularly valuable today because it improves food processing quality and positively impacts firms and researchers; at the present production stage, image processing and computer vision are incorporated into all facets of food production. In this field, DL and ML are implemented to identify the type of food in addition to its quality. Concerning data- and result-oriented perceptions, similarities have been found across various approaches. As a result, the findings of this study will be helpful for scholars looking for a proper approach to identifying the quality of food. It indicates which food products have been discussed by other scholars and points readers to papers inclined toward further research. Also, DL is accurately integrated with identifying the quality and safety of foods in the market. This paper describes the current practices and concerns of ML, DL, and CV, and probable trends for their future development.
The integration of machine learning (ML) technology with Internet of Things (IoT) systems produces essential changes in healthcare operations. Healthcare personnel can track patients around the clock thanks to healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML might support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. The study looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications. It also examines fundamental problems, such as data security and cyberthreats, as well as the high processing demands that these systems face. Alongside this, the paper discusses the advantages of all the technologies involved, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
The healthcare field is fraught with challenges associated with severe class imbalance, wherein critical conditions such as sepsis, cardiac arrest, and adverse drug reactions are rare but have dire clinical consequences. This paper presents a new framework, the Deep Reinforcement Adaptive Gradient Optimization Network for Mining Rare Events (DRAGON-MINE), to demonstrate how deep reinforcement learning can be used synergistically with adaptive gradient optimization to address the inherent weaknesses of current methods in the prediction of rare health events. The suggested architecture uses a dual pathway consisting of a reinforcement learning agent that dynamically reweights samples and an adaptive gradient optimizer that follows novel learning rates. In extensive experiments on the MIMIC-IV and eICU-CRD datasets, DRAGON-MINE consistently outperforms recent state-of-the-art methods for sepsis, cardiac arrest, and adverse drug reaction prediction, achieving AUROC values of 92.3% and 91.6% for sepsis prediction on MIMIC-IV and eICU-CRD, respectively, while consistently outperforming Transformer-, CNN-RNN-, and Fed-Ensemble-based methods across all evaluated tasks and datasets, with particularly strong gains observed in precision-recall performance under severe class imbalance. With its high sensitivity (88.4%) and specificity (90.2%), DRAGON-MINE enables reliable early warning of rare clinical events in critical care settings while minimizing false alarms, supporting safer clinical decision support systems, and demonstrating strong potential for scalable deployment across multi-institutional intensive care environments through federated learning.
Internet of Things(IoT)interconnects devices via network protocols to enable intelligent sensing and control.Resource-constrained IoT devices rely on cloud servers for data storage and processing.However,this cloudass...Internet of Things(IoT)interconnects devices via network protocols to enable intelligent sensing and control.Resource-constrained IoT devices rely on cloud servers for data storage and processing.However,this cloudassisted architecture faces two critical challenges:the untrusted cloud services and the separation of data ownership from control.Although Attribute-based Searchable Encryption(ABSE)provides fine-grained access control and keyword search over encrypted data,existing schemes lack of error tolerance in exact multi-keyword matching.In this paper,we proposed an attribute-based multi-keyword fuzzy searchable encryption with forward ciphertext search(FCS-ABMSE)scheme that avoids computationally expensive bilinear pairing operations on the IoT device side.The scheme supportsmulti-keyword fuzzy search without requiring explicit keyword fields,thereby significantly enhancing error tolerance in search operations.It further incorporates forward-secure ciphertext search to mitigate trapdoor abuse,as well as offline encryption and verifiable outsourced decryption to minimize user-side computational costs.Formal security analysis proved that the FCS-ABMSE scheme meets both indistinguishability of ciphertext under the chosen keyword attacks(IND-CKA)and the indistinguishability of ciphertext under the chosen plaintext attacks(IND-CPA).In addition,we constructed an enhanced variant based on type-3 pairings.Results demonstrated that the proposed scheme outperforms existing ABSE approaches in terms of functionalities,computational cost,and communication cost.展开更多
Funding: Funded and supported by the Ongoing Research Funding program (ORF-2025-314), King Saud University, Riyadh, Saudi Arabia.
Abstract: The rapid digitalization of urban infrastructure has made smart cities increasingly vulnerable to sophisticated cyber threats. In the evolving landscape of cybersecurity, the efficacy of Intrusion Detection Systems (IDS) is increasingly measured by technical performance, operational usability, and adaptability. This study introduces and rigorously evaluates a Human-Computer Interaction (HCI)-Integrated IDS built on a Convolutional Neural Network (CNN), a CNN-Long Short-Term Memory (LSTM) hybrid, and a Random Forest (RF), against both a Baseline Machine Learning (ML) model and a Traditional IDS model, through an extensive experimental framework encompassing many performance metrics, including detection latency, accuracy, alert prioritization, classification errors, system throughput, usability, ROC-AUC, precision-recall, confusion matrix analysis, and statistical accuracy measures. Our findings consistently demonstrate the superiority of the HCI-Integrated approach on three major datasets (CICIDS 2017, KDD Cup 1999, and UNSW-NB15). Experimental results indicate that the HCI-Integrated model outperforms its counterparts, achieving an AUC-ROC of 0.99, a precision of 0.93, and a recall of 0.96, while maintaining the lowest false positive rate (0.03) and the fastest detection time (~1.5 s). These findings validate the efficacy of incorporating HCI to enhance anomaly detection capabilities, improve responsiveness, and reduce alert fatigue in critical smart city applications. The model achieves markedly lower detection times, higher accuracy across all threat categories, reduced false positive and false negative rates, and enhanced system throughput under concurrent load conditions. The HCI-Integrated IDS excels in alert contextualization and prioritization, offering more actionable insights while minimizing analyst fatigue. Usability feedback underscores increased analyst confidence and operational clarity, reinforcing the importance of user-centered design. These results collectively position the HCI-Integrated IDS as a highly effective, scalable, and human-aligned solution for modern threat detection environments.
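The metrics this abstract reports (precision, recall, false positive rate, ROC-AUC) can all be derived from labeled scores. As a minimal illustration of how such figures are computed, not the study's own evaluation harness, a pure-Python sketch:

```python
def confusion_counts(labels, preds):
    """Count TP, FP, TN, FN for binary ground-truth labels and predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp, fp, tn, fn

def roc_auc(labels, scores):
    """AUC as the probability a positive outranks a negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
preds = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5
tp, fp, tn, fn = confusion_counts(labels, preds)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
fpr = fp / (fp + tn)
print(precision, recall, fpr, roc_auc(labels, scores))  # 1.0 0.5 0.0 0.75
```

The rank-based AUC formula above is equivalent to integrating the ROC curve, which is why a single number can summarize a classifier across all thresholds.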
Funding: Supported by the National Natural Science Foundation of China (62202022, 92582204, and 62572030), the Fundamental Research Funds for the Central Universities, and the exploratory elective projects of the State Key Laboratory of Complex and Critical Software Environments.
Abstract: In the context of large language models (LLMs) reshaping software engineering education, this paper presents OSSerCopilot, an LLM-based tutoring system designed to address the critical challenges faced by newcomers (especially student contributors) in open source software (OSS) communities. Leveraging natural language processing, code semantic understanding, and learner profiling, the system functions as an intelligent tutor to scaffold three core competency domains: contribution guideline interpretation, project architecture comprehension, and personalized task matching. By transforming traditional onboarding barriers, such as complex contribution documentation and opaque project structures, into interactive learning journeys, OSSerCopilot enables newcomers to complete their first OSS contribution more easily and confidently. This paper highlights how LLM technologies can redefine software engineering education by bridging the gap between theoretical knowledge and practical OSS participation, offering implications for curriculum design, competency assessment, and sustainable OSS ecosystem cultivation. A demonstration video of the system is available at https://figshare.com/articles/media/OSSerCopilot_Introduction_mp4/29510276.
Funding: Supported by the Competitive Research Fund of the University of Aizu, Japan (Grant No. P-13).
Abstract: Heart disease encompasses a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Numerous researchers have made progress in detecting and predicting early heart disease, but more remains to be accomplished. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease using traditional approaches. By fusing data from several regions of the country, we intend to increase the accuracy of heart disease prediction. We adopt a statistical approach that draws on feature interactions to reveal intricate patterns in the data that cannot be adequately captured by any single feature. We processed the data using techniques including feature scaling, outlier detection and replacement, and null and missing value imputation to improve data quality. Furthermore, the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to construct feature interactions. To reduce dimensionality, we subsequently applied PCA retaining 95% of the variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are utilized, along with ensemble models. The models' overall prediction performance ranges from 88% to 92%. To attain state-of-the-art results, we then used a 1D CNN model, which significantly enhanced the prediction with an accuracy of 96.36%, precision of 96.45%, recall of 96.36%, specificity of 99.51%, and F1 score of 96.34%. Among all the classifiers evaluated without feature interaction, the RF model produces the best results, with accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. Our proposed 1D CNN model outperforms its counterpart without feature engineering by roughly 7%. This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
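The chi-square test mentioned for screening categorical features reduces to Pearson's statistic on a contingency table. A minimal pure-Python sketch (the example table is hypothetical, not taken from the study's data):

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # count under independence
            stat += (observed - expected) ** 2 / expected
    return stat

# hypothetical 2x2 table: binary categorical feature vs. disease outcome
table = [[10, 20],
         [30, 40]]
print(round(chi_square(table), 4))  # 0.7937
```

A larger statistic indicates stronger dependence between the feature and the outcome, which is the basis for keeping or discarding a categorical feature.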
Abstract: Quantum software development utilizes quantum phenomena such as superposition and entanglement to address problems that are challenging for classical systems. However, it must also adhere to critical quantum constraints, notably the no-cloning theorem, which prohibits the exact duplication of unknown quantum states and has profound implications for cryptography, secure communication, and error correction. While existing quantum circuit representations implicitly honor such constraints, they lack formal mechanisms for early-stage verification in software design. Addressing this constraint at the design phase is essential to ensure the correctness and reliability of quantum software. This paper presents a formal metamodeling framework using UML-style notation and the Object Constraint Language (OCL) to systematically capture and enforce the no-cloning theorem within quantum software models. The proposed metamodel formalizes key quantum concepts, such as entanglement and teleportation, and encodes enforceable invariants that reflect core quantum mechanical laws. The framework's effectiveness is validated by analyzing two critical edge cases, conditional copying with CNOT gates and quantum teleportation, through instance model evaluations. These cases demonstrate that the metamodel can capture nuanced scenarios that are often mistaken as violations of the no-cloning theorem but are proven compliant under formal analysis, serving as constructive validations of the metamodel's expressiveness and correctness. The approach supports early detection of conceptual design errors, promoting correctness prior to implementation. The framework's extensibility is also demonstrated by modeling projective measurement, further reinforcing its applicability to broader quantum software engineering tasks. By integrating the rigor of metamodeling with fundamental quantum mechanical principles, this work provides a structured, model-driven approach that enables traditional software engineers to address quantum computing challenges. It offers practical insights into embedding quantum correctness at the modeling level and advances the development of reliable, error-resilient quantum software systems.
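The no-cloning theorem that the metamodel enforces has a short standard proof from unitarity, which also clarifies why the CNOT edge case is compliant:

```latex
% Assume a unitary U clones two arbitrary states:
U(\lvert\psi\rangle \otimes \lvert 0\rangle) = \lvert\psi\rangle \otimes \lvert\psi\rangle,
\qquad
U(\lvert\varphi\rangle \otimes \lvert 0\rangle) = \lvert\varphi\rangle \otimes \lvert\varphi\rangle.
% Taking the inner product of the two equations and using U^\dagger U = I:
\langle\psi\vert\varphi\rangle = \langle\psi\vert\varphi\rangle^{2}
\;\Longrightarrow\; \langle\psi\vert\varphi\rangle \in \{0, 1\}.
```

Hence only identical or mutually orthogonal states (exactly the computational-basis states a CNOT copies) can be duplicated by a single unitary, so conditional CNOT copying is compliant rather than a violation.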
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into a computer-modeling-based blockchain framework for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of the process, we surveyed the research on blockchain-based supply chain models, smart contracts, Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions using smart contracts, increases security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
Abstract: Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
Funding: Supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2024-00156287, 50%); by the IITP under the Artificial Intelligence Convergence Innovation Human Resources Development grant funded by the Korea government (MSIT) (IITP-2023-RS-2023-00256629, 25%); and by the Korea Internet & Security Agency (KISA) Information Security College Support Project (25%).
Abstract: Employee turnover presents considerable challenges for organizations, leading to increased recruitment costs and disruptions in ongoing operations. High voluntary attrition rates can result in substantial financial losses, making it essential for Human Resource (HR) departments to prioritize turnover reduction. In this context, Artificial Intelligence (AI) has emerged as a vital tool in strengthening business strategies and people management. This paper introduces two new representative features and three types of feature engineering to enhance the analysis of employee turnover in the IBM HR Analytics dataset. Key Machine Learning (ML) techniques were subsequently employed, such as Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Extreme Gradient Boosting (XGBoost), and especially Categorical Boosting (CatBoost), a gradient boosting algorithm optimized for categorical data. Adopting this feature engineering process enables CatBoost to enhance model accuracy and robustness while effectively analyzing complex patterns within employee data. Experimental results demonstrate the effectiveness of our proposed methodology, achieving the highest accuracy of 90.14% and an F1-score of 0.88 on the IBM dataset. To assess the capability of our detection system, we also used an extended dataset, achieving an optimal accuracy of 98.10% and an F1-score of 0.98. These results strongly indicate the efficiency of our proposed methodology and highlight the impact of feature engineering on predictive performance. Moreover, by pinpointing the top ten factors influencing attrition, including "Monthly Income", "Over Time", "Total Satisfaction", and others, this research equips HR departments with insights to implement targeted retention strategies, such as enhancing compensation or job satisfaction, to retain key talent before they consider leaving.
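"Total Satisfaction" reads as a composite representative feature built from per-facet satisfaction ratings. A hedged sketch of that kind of feature construction follows; the facet column names come from the public IBM HR Analytics dataset, but the simple averaging rule is an illustrative assumption, not the paper's exact definition:

```python
def total_satisfaction(record):
    """Average several 1-4 satisfaction ratings into one representative feature.
    The averaging rule is illustrative; the paper's formula may differ."""
    facets = ["EnvironmentSatisfaction", "JobSatisfaction", "RelationshipSatisfaction"]
    return sum(record[f] for f in facets) / len(facets)

employee = {"EnvironmentSatisfaction": 3,
            "JobSatisfaction": 4,
            "RelationshipSatisfaction": 2}
print(total_satisfaction(employee))  # 3.0
```

Collapsing correlated facets into one feature is a common way to give tree ensembles such as CatBoost a single strong split variable instead of several weak ones.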
Funding: Funded by Umm Al-Qura University, Saudi Arabia, under grant number 25UQU4300346GSSR08.
Abstract: This study presents an advanced method for post-mortem person identification using the segmentation of skeletal structures from chest X-ray images. The proposed approach employs the Attention U-Net architecture, enhanced with gated attention mechanisms, to refine segmentation by emphasizing spatially relevant anatomical features while suppressing irrelevant details. By isolating skeletal structures, which remain stable over time compared to soft tissues, this method leverages bones as reliable biometric markers for identity verification. The model integrates custom-designed encoder and decoder blocks with attention gates, achieving high segmentation precision. To evaluate the impact of architectural choices, we conducted an ablation study comparing Attention U-Net with and without attention mechanisms, alongside an analysis of data augmentation effects. Training and evaluation were performed on a curated chest X-ray dataset, with segmentation performance measured using Dice score, precision, and loss functions, achieving over 98% precision and a 94% Dice score. The extracted bone structures were further processed to derive unique biometric patterns, enabling robust and privacy-preserving person identification. Our findings highlight the effectiveness of attention mechanisms in improving segmentation accuracy and underscore the potential of chest bone-based biometrics in forensic and medical imaging. This work paves the way for integrating artificial intelligence into real-world forensic workflows, offering a non-invasive and reliable solution for post-mortem identification.
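The Dice score used above to judge segmentation quality has a direct definition over binary masks; a minimal sketch on flattened masks (the toy masks are illustrative):

```python
def dice_score(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) over flattened binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

pred  = [1, 1, 0, 0, 1]   # predicted bone mask (flattened)
truth = [1, 0, 0, 1, 1]   # ground-truth mask
print(dice_score(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

Unlike plain pixel accuracy, Dice is insensitive to the large background class, which is why it is the standard headline metric for anatomical segmentation.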
Funding: Funded by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through project number GR-24009.
Abstract: Hepatitis is an infection that affects the liver through contaminated food or blood transfusions, and it has many types, ranging from mild to serious. Hepatitis is diagnosed through many blood tests and factors; Artificial Intelligence (AI) techniques have played an important role in early diagnosis and help physicians make decisions. This study evaluated the performance of Machine Learning (ML) algorithms on a hepatitis dataset. The dataset contains missing values, which were processed, and outliers were removed. The dataset was balanced using the Synthetic Minority Over-sampling Technique (SMOTE). The features of the dataset were processed in two ways: first, the Recursive Feature Elimination (RFE) algorithm was applied to rank the percentage contribution of each feature to the diagnosis of hepatitis, followed by selection of important features using the t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) algorithms. Second, the SelectKBest function was applied to score each attribute, followed by the t-SNE and PCA algorithms. Finally, the classification algorithms K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF) were fed the dataset after processing the features with the different methods (RFE with t-SNE and PCA, and SelectKBest with t-SNE and PCA). All algorithms yielded promising results for diagnosing hepatitis. The RF with RFE and PCA achieved accuracy, precision, recall, and AUC of 97.18%, 96.72%, 97.29%, and 94.2%, respectively, during the training phase. During the testing phase, it reached accuracy, precision, recall, and AUC of 96.31%, 95.23%, 97.11%, and 92.67%, respectively.
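SMOTE balances classes by interpolating synthetic minority samples between a minority point and one of its minority neighbors. A simplified single-nearest-neighbor sketch of that idea (the full algorithm uses k nearest neighbors; the toy points are illustrative):

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate n_new synthetic points on segments between each chosen minority
    sample and its nearest other minority sample (simplified SMOTE, k=1)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest other minority sample by squared Euclidean distance
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0)]
new_points = smote_like(minority, 4)
print(len(new_points))  # 4
```

Because each synthetic point lies on a segment between two real minority samples, it stays inside the minority region rather than duplicating existing rows, which is what distinguishes SMOTE from naive oversampling.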
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-DDRSP2501).
Abstract: This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and root mean square error (RMSE) of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration", "Groundwater Contaminant Measurements", and "Dominica COVID-19 Mortality", further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
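The Monte Carlo protocol described (bias and RMSE of an estimator shrinking as sample size grows) can be illustrated with a simple stand-in model. The sketch below evaluates the MLE of an exponential rate rather than the TIHTBXII parameters, purely to show the shape of such an evaluation loop:

```python
import math
import random

def mc_bias_rmse(true_rate, n, reps=1000, seed=1):
    """Monte Carlo bias and RMSE of the exponential-rate MLE (1 / sample mean)."""
    rng = random.Random(seed)
    errors = []
    for _ in range(reps):
        sample = [rng.expovariate(true_rate) for _ in range(n)]
        mle = n / sum(sample)          # MLE of the rate parameter
        errors.append(mle - true_rate)
    bias = sum(errors) / reps
    rmse = math.sqrt(sum(e * e for e in errors) / reps)
    return bias, rmse

# bias and RMSE should both shrink as the sample size n grows
for n in (20, 80, 320):
    bias, rmse = mc_bias_rmse(2.0, n)
    print(n, round(bias, 3), round(rmse, 3))
```

The same loop structure applies to any estimator (MLE, MPS, LS, WLS): simulate, estimate, accumulate errors, and report bias and RMSE per sample size.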
Funding: Partially supported by MRC (MC_PC_17171), Royal Society (RP202G0230), BHF (AA/18/3/34220), Hope Foundation for Cancer Research (RM60G0680), GCRF (20P2PF11), Sino-UK Industrial Fund (RP202G0289), LIAS (20P2ED10, 20P2RE969), Data Science Enhancement Fund (20P2RE237), Fight for Sight (24NN201), Sino-UK Education Fund (OP202006), and BBSRC (RM32G0178B8).
Abstract: The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, this algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, proposing a variant called PKEBA. A projection screening strategy mitigates sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction to keep the algorithm from stagnating in local optima. Subsequently, the effectiveness of the proposed PKEBA is rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems serve as further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
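For readers unfamiliar with the baseline being improved, a compact version of the standard bat algorithm (minimizing a sphere function; the parameter values are typical textbook defaults, not PKEBA's) looks like this:

```python
import math
import random

def bat_algorithm(obj, dim=2, n_bats=15, iters=200, seed=3,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9):
    """Minimal bat algorithm: frequency-tuned velocities, random local walks
    gated by pulse rate, and acceptance gated by loudness."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats    # loudness A_i (decays on acceptance)
    pulse = [0.5] * n_bats   # pulse emission rate r_i (grows on acceptance)
    best = min(x, key=obj)[:]
    for t in range(1, iters + 1):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()   # random frequency
            v[i] = [vi + (xi - bi) * f for vi, xi, bi in zip(v[i], x[i], best)]
            cand = [xi + vi for xi, vi in zip(x[i], v[i])]
            if rng.random() > pulse[i]:               # local walk around best
                cand = [bi + 0.1 * rng.gauss(0, 1) for bi in best]
            if rng.random() < loud[i] and obj(cand) < obj(x[i]):
                x[i] = cand
                loud[i] *= alpha
                pulse[i] = 0.5 * (1 - math.exp(-gamma * t))
            if obj(x[i]) < obj(best):
                best = x[i][:]
    return best

sphere = lambda p: sum(c * c for c in p)
best = bat_algorithm(sphere)
print(sphere(best))
```

The slow-convergence and initialization-sensitivity weaknesses the abstract targets are visible here: progress hinges on the initial random population and on small random walks around the current best, which is what PKEBA's projection screening and elite communication strategies are designed to improve.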
Funding: Financially supported by the Ongoing Research Funding Program (ORF-2025-846), King Saud University, Riyadh, Saudi Arabia.
Abstract: This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor to reduce overall scattering values, achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photos, both separation and inversion processes are conducted, analyzing comparison results across four different scenarios. The results of the comparative analysis show that the proposed method effectively reduces time and space complexity to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%.
Funding: The National Science Foundation funded this research under the Dynamics of Coupled Natural and Human Systems program (Grants No. DEB-1212183 and BCS-1826839), with additional support from San Diego State University and Auburn University.
Abstract: A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents, and the outcomes of their actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS science) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science combined with agent-based modeling promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
Abstract: Intrusion detection in Internet of Things (IoT) environments presents challenges due to heterogeneous devices, diverse attack vectors, and highly imbalanced datasets. Existing research on the ToN-IoT dataset has largely emphasized binary classification and single-model pipelines, which often show strong performance but limited generalizability, probabilistic reliability, and operational interpretability. This study proposes a stacked ensemble deep learning framework that integrates random forest, extreme gradient boosting, and a deep neural network as base learners, with CatBoost as the meta-learner. On the ToN-IoT Linux process dataset, the model achieved near-perfect discrimination (macro area under the curve = 0.998), robust calibration, and superior F1-scores compared with standalone classifiers. Interpretability was achieved through SHapley Additive exPlanations (SHAP)-based feature attribution, which highlights actionable drivers of malicious behavior, such as command-line patterns, process scheduling anomalies, and CPU usage spikes, and aligns these indicators with MITRE ATT&CK tactics and techniques. Complementary analyses, including cumulative lift and sensitivity-specificity trade-offs, revealed the framework's suitability for deployment in security operations centers, where calibrated risk scores, transparent explanations, and resource-aware triage are essential. These contributions bridge methodological rigor in artificial intelligence/machine learning with operational priorities in cybersecurity, delivering a scalable and explainable intrusion detection system suitable for real-world deployment in IoT environments.
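The stacking idea (base learners' scores become the input features of a meta-learner) can be pictured with toy stand-ins. The sketch below uses two trivial threshold "learners" and a fixed weighted-average meta-rule in place of the paper's RF/XGBoost/DNN bases and CatBoost meta-learner; the feature names and weights are purely illustrative:

```python
def make_threshold_learner(feature_idx, threshold):
    """Toy base learner: 0/1 probability-like score from one feature."""
    def predict_proba(x):
        return 1.0 if x[feature_idx] > threshold else 0.0
    return predict_proba

def stack_predict(base_learners, meta_weights, x):
    """Meta-learner consumes the base scores as its features.
    Here the 'meta-learner' is a fixed weighted average; a real stack would
    fit a model (e.g., CatBoost) on out-of-fold base predictions."""
    scores = [learner(x) for learner in base_learners]
    z = sum(w * s for w, s in zip(meta_weights, scores))
    return 1 if z >= 0.5 else 0

# features: [cpu_usage, suspicious_arg_count] -- names are illustrative
bases = [make_threshold_learner(0, 0.8), make_threshold_learner(1, 5)]
weights = [0.6, 0.4]
print(stack_predict(bases, weights, [0.95, 2]))  # 1: the CPU indicator alone crosses 0.5
print(stack_predict(bases, weights, [0.30, 2]))  # 0: neither indicator fires
```

The essential discipline in real stacking is that the meta-learner must be trained on out-of-fold base predictions, so that it learns how reliable each base model is rather than memorizing their training-set outputs.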
Funding: Partially supported by the Italian Ministry for Research in the framework of the 2020 Program for Research Projects of National Interest (2020RTWES4).
Abstract: The Nelder-Mead simplex method is a well-known algorithm enabling the minimization of functions that are not available in closed form and that need not be differentiable or convex. Furthermore, it is particularly parsimonious in the number of function evaluations, making it preferable to convex optimization paradigms in the case, common in control design problems, that the objective function is non-differentiable, non-convex, and its closed form is unavailable or difficult to compute analytically. The main goal of this paper is to show how the joint use of the Nelder-Mead simplex method and the Morrison algorithm can successfully solve relevant and challenging control problems that cannot be easily solved using analytic methods. In particular, it is shown how the problems of strong stabilization, static output feedback stabilization, and the design of robust controllers with fixed structure can be framed as optimization problems, which, in turn, can be efficiently solved by coupling the two above-mentioned algorithms. The performance of this procedure is compared with state-of-the-art techniques on dozens of static output feedback benchmark case studies, and its effectiveness is demonstrated by several examples.
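A compact derivative-free Nelder-Mead implementation makes the abstract's claims concrete; this is the textbook algorithm with standard coefficients, not the paper's exact variant, shown on a non-differentiable objective:

```python
def nelder_mead(f, x0, step=0.5, iters=300, tol=1e-10,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Nelder-Mead: reflection, expansion, contraction, shrink."""
    n = len(x0)
    # initial simplex: x0 plus one step along each coordinate axis
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        if abs(f(simplex[-1]) - f(simplex[0])) < tol:
            break
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(simplex[0]) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl                          # accept reflection
        elif f(refl) < f(simplex[0]):                   # try expanding further
            exp = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        else:                                           # contract toward worst
            con = [c + rho * (w - c) for c, w in zip(centroid, worst)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:                                       # shrink toward best
                best = simplex[0]
                simplex = [best] + [
                    [b + sigma * (p - b) for b, p in zip(best, pt)]
                    for pt in simplex[1:]
                ]
    return min(simplex, key=f)

# non-differentiable, gradient-free objective: |x - 1| + |y + 2|
objective = lambda p: abs(p[0] - 1) + abs(p[1] + 2)
x = nelder_mead(objective, [0.0, 0.0])
print(round(x[0], 3), round(x[1], 3))  # close to (1, -2)
```

Only function values are used, never gradients, which is exactly why the method suits objectives such as closed-loop stability margins that are evaluated by simulation rather than given in closed form.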
Abstract: Providing safe, high-quality food is crucial for every household and is of extreme significance to the growth of any society. It is a complex procedure that deals with all issues in the development of food processing, from seed to harvest, storage, preparation, and consumption. This paper seeks to demystify the role of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. By stressing the importance of these technologies, the paper aims to give readers confidence in their potential; they are well suited to such problems, offering assurance over food safety. CV is especially valuable today because it improves food processing quality and positively impacts firms and researchers; at the present production stage, image processing and computer vision are incorporated into all facets of food production. In this field, DL and ML are implemented to identify the type of food as well as its quality. Comparing data- and result-oriented perspectives, we find similarities across the various approaches. As a result, the findings of this study will be helpful for scholars looking for a proper approach to identifying food quality: they indicate which food products have been discussed by other scholars and point readers to papers inclined toward further research. DL is also accurately integrated with identifying the quality and safety of foods in the market. This paper describes the current practices and concerns of ML and DL and probable trends for their future development.
Abstract: The integration of machine learning (ML) technology with Internet of Things (IoT) systems produces essential changes in healthcare operations. Healthcare personnel can track patients around the clock thanks to healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML might support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. The study looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications. It also examines fundamental problems, such as data security and cyberthreats, as well as the high processing demands that these systems face. Alongside this, the paper discusses the advantages of all the technologies involved, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
Abstract: The healthcare field is fraught with challenges associated with severe class imbalance, wherein critical conditions such as sepsis, cardiac arrest, and adverse drug reactions are rare but have dire clinical consequences. This paper presents a new framework, the Deep Reinforcement Adaptive Gradient Optimization Network for Mining Rare Events (DRAGON-MINE), to demonstrate how deep reinforcement learning can be used synergistically with adaptive gradient optimization to address the inherent weaknesses of current methods in predicting rare health events. The proposed architecture uses a dual pathway consisting of a reinforcement learning agent that dynamically reweights samples and an adaptive gradient optimizer that adjusts learning rates. In extensive experiments on the MIMIC-IV and eICU-CRD datasets, DRAGON-MINE consistently outperforms recent state-of-the-art methods for sepsis, cardiac arrest, and adverse drug reaction prediction, achieving AUROC values of 92.3% and 91.6% for sepsis prediction on MIMIC-IV and eICU-CRD, respectively, while outperforming Transformer-, CNN-RNN-, and Fed-Ensemble-based methods across all evaluated tasks and datasets, with particularly strong gains in precision-recall performance under severe class imbalance. With its high sensitivity (88.4%) and specificity (90.2%), DRAGON-MINE enables reliable early warning of rare clinical events in critical care settings while minimizing false alarms, supporting safer clinical decision support systems, and demonstrating strong potential for scalable deployment across multi-institutional intensive care environments through federated learning.
Abstract: The Internet of Things (IoT) interconnects devices via network protocols to enable intelligent sensing and control. Resource-constrained IoT devices rely on cloud servers for data storage and processing. However, this cloud-assisted architecture faces two critical challenges: untrusted cloud services and the separation of data ownership from control. Although Attribute-based Searchable Encryption (ABSE) provides fine-grained access control and keyword search over encrypted data, existing schemes lack error tolerance in exact multi-keyword matching. In this paper, we propose an attribute-based multi-keyword fuzzy searchable encryption with forward ciphertext search (FCS-ABMSE) scheme that avoids computationally expensive bilinear pairing operations on the IoT device side. The scheme supports multi-keyword fuzzy search without requiring explicit keyword fields, thereby significantly enhancing error tolerance in search operations. It further incorporates forward-secure ciphertext search to mitigate trapdoor abuse, as well as offline encryption and verifiable outsourced decryption to minimize user-side computational costs. Formal security analysis proves that the FCS-ABMSE scheme achieves both indistinguishability of ciphertext under chosen keyword attacks (IND-CKA) and indistinguishability of ciphertext under chosen plaintext attacks (IND-CPA). In addition, we construct an enhanced variant based on type-3 pairings. Results demonstrate that the proposed scheme outperforms existing ABSE approaches in terms of functionality, computational cost, and communication cost.
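The error tolerance that fuzzy multi-keyword search provides can be pictured in plaintext, ignoring the encryption layer entirely: a misspelled query keyword should still match index keywords within a small edit distance. The following sketch is only that plaintext illustration, not the FCS-ABMSE construction:

```python
def edit_distance(a, b):
    """Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_multi_search(index, queries, max_dist=1):
    """Return ids of documents whose keyword sets fuzzily cover every query keyword."""
    hits = []
    for doc_id, keywords in index.items():
        if all(any(edit_distance(q, k) <= max_dist for k in keywords)
               for q in queries):
            hits.append(doc_id)
    return hits

index = {"doc1": {"sensor", "temperature"}, "doc2": {"sensor", "humidity"}}
print(fuzzy_multi_search(index, ["senor", "temprature"]))  # ['doc1']
```

In the actual scheme this matching happens over encrypted trapdoors and ciphertext indexes rather than plaintext strings, but the tolerance property being bought, typo-robust multi-keyword conjunctive search, is the one shown here.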