Grasping is one of the most fundamental operations in modern robotics applications. While deep reinforcement learning (DRL) has demonstrated strong potential in robotics, existing work places heavy emphasis on maximizing the cumulative reward of the task, and the potential safety risks are often ignored. In this paper, an optimization method based on safe reinforcement learning (Safe RL) is proposed to address the robotic grasping problem under safety constraints. Specifically, considering the obstacle-avoidance constraints of the system, the grasping problem of the manipulator is modeled as a Constrained Markov Decision Process (CMDP). A Lagrange multiplier and a dynamic weighting mechanism are introduced into the Proximal Policy Optimization (PPO) framework, leading to the dynamic weighted Lagrange PPO (DWL-PPO) algorithm, in which violations of safety constraints are penalized while the policy is optimized. In addition, orientation control of the end-effector is included in the reward function, and a compound reward function adapted to changes in pose is designed. Ultimately, the efficacy and advantages of the proposed method are demonstrated through extensive training and testing in the PyBullet simulator. The grasping experiments show that the proposed approach provides superior safety and efficiency compared with other advanced RL methods and achieves a good trade-off between model learning and risk aversion.
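The Lagrangian mechanism described above can be sketched in a few lines: the constrained objective is relaxed into an unconstrained one, and the multiplier rises by gradient ascent while the safety constraint is violated. The numbers below (cost limit, learning rate, episode costs) are illustrative assumptions, not values from the paper.

```python
def lagrangian_step(reward_obj, ep_cost, cost_limit, lam, lam_lr=0.1):
    """One dual update for a constrained policy objective.

    The policy minimizes  -reward_obj + lam * (ep_cost - cost_limit);
    the multiplier lam grows while the safety constraint is violated
    and decays (never below zero) once the policy becomes safe.
    """
    loss = -reward_obj + lam * (ep_cost - cost_limit)
    lam = max(0.0, lam + lam_lr * (ep_cost - cost_limit))
    return loss, lam

# Illustrative rollout: episode safety cost starts above the limit and falls.
lam, history = 0.0, []
for ep_cost in [3.0, 2.5, 1.5, 0.8, 0.4]:
    _, lam = lagrangian_step(reward_obj=1.0, ep_cost=ep_cost,
                             cost_limit=1.0, lam=lam)
    history.append(lam)
```

The multiplier first climbs (penalizing the unsafe policy harder each episode), then relaxes once episode cost drops below the limit, which is the trade-off behaviour the dynamic weighting builds on.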
Batteries play a crucial role in the storage and application of sustainable energy, yet their inherent safety risks are non-negligible. Traditional monitoring methods often suffer from high costs, time consumption, and limited scalability, making it increasingly difficult to meet the evolving demands of modern society. In this context, recent advances in machine learning have emerged as a promising solution for predicting and monitoring battery states, offering innovative approaches to battery management systems (BMS). By transforming raw operational data into actionable insights, machine learning has shifted the paradigm from reactive to predictive battery safety management, significantly enhancing system reliability and risk mitigation capabilities. This review delves into the implementation of machine learning in battery state prediction, including dataset selection, feature extraction, and model training. It also highlights the latest progress of these models in key applications such as state of health (SOH), state of charge (SOC), thermal runaway warning, fault detection, and remaining useful life (RUL). Finally, we critically examine the challenges and opportunities associated with leveraging machine learning to improve battery safety and performance, providing a comprehensive perspective for future research in this rapidly advancing field.
Low visibility conditions, particularly those caused by fog, significantly affect road safety and reduce drivers' ability to see ahead clearly. Conventional approaches to this problem primarily rely on instrument-based and fixed-threshold theoretical frameworks, which adapt poorly and perform worse under varying environmental conditions. To overcome these challenges, we propose a real-time visibility estimation model that leverages roadside CCTV cameras to monitor and identify visibility levels under different weather conditions. The proposed method begins by identifying specific regions of interest (ROI) in the CCTV images and extracts features such as the number of lines and contours detected within these regions. These features are then provided as input to the proposed hierarchical clustering model, which classifies them into different visibility levels without the need for predefined rules and threshold values. We paired the proposed hierarchical clustering model with two distance similarity metrics, dynamic time warping (DTW) and Euclidean distance, and evaluated its performance across numerous measures. The proposed model achieved an average accuracy of 97.81%, precision of 91.31%, recall of 91.25%, and F1-score of 91.27% using the DTW distance metric. We also conducted experiments with other deep learning (DL)-based models used in the literature and compared their performance with the proposed model. The experimental results demonstrate that the proposed model is more adaptable and consistent than the methods in the literature, providing drivers with real-time, accurate visibility information and enhancing road safety during low visibility conditions.
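The DTW similarity used in the clustering step above can be sketched with the classic dynamic-programming recurrence. This is a minimal, unoptimized version; the toy sequences stand in for the per-frame line/contour feature series, which are not given in the abstract.

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]
```

Unlike Euclidean distance, DTW tolerates sequences of different lengths and local time shifts, e.g. `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 because the warping path absorbs the repeated sample, which is why it suits frame-rate and timing jitter in CCTV feature streams.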
This study presents an interpretable surrogate framework for predicting pedestrian-leg injury severity that integrates high-fidelity finite-element (FE) simulations with a TabNet-based deep-learning model. We generated a parametric dataset of 3000 impact scenarios, covering ten vehicle types and various legform impactors, using automated FE runs configured via Latin hypercube sampling. After preprocessing and one-hot encoding of categorical features, we trained TabNet alongside Support-Vector Regression, Random Forest, and Decision-Tree ensembles. All models underwent hyperparameter tuning via Optuna's Bayesian optimization coupled with repeated four-fold cross-validation (20 trials per model). TabNet achieved the best balance of explanatory power and predictive accuracy, with an average R^2 = 0.94 ± 0.01 and RMSE = 0.14 ± 0.02. On an independent test set, 85%, 88%, and 90% of predictions for tibial acceleration, knee-flexion angle, and shear displacement, respectively, fell within ±20% of true peaks. SHAP-based analyses confirm that collision-point location and bumper geometry dominate injury outcomes. These results demonstrate TabNet's capacity to deliver rapid, robust, and explainable injury predictions, offering actionable design insights for vehicle front-end optimization and regulatory assessment in early development stages.
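The Latin hypercube sampling used to configure the FE runs can be sketched in pure Python: each dimension is split into equal strata, one point is drawn per stratum, and the stratum order is shuffled independently per dimension. The two parameter ranges below (impact speed, impact height) are invented for illustration; the paper's actual design variables are not listed in the abstract.

```python
import random

def latin_hypercube(n_samples, bounds, rng=random.Random(0)):
    """Latin hypercube sample: each dimension is split into n_samples
    equal strata and every stratum is hit exactly once."""
    cols = []
    for lo, hi in bounds:
        # one point per stratum, then shuffle the stratum order
        pts = [lo + (hi - lo) * (k + rng.random()) / n_samples
               for k in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

# hypothetical ranges: impact speed [20, 60] km/h, impact height [0.4, 0.6] m
samples = latin_hypercube(10, [(20.0, 60.0), (0.4, 0.6)])
```

Compared with plain uniform sampling, this guarantees that even a small budget of FE runs spreads evenly along every parameter axis.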
Background: Efficient disaster victim detection (DVD) in urban areas after natural disasters is crucial for minimizing losses. However, conventional search and rescue (SAR) methods often experience delays, which can hinder the timely detection of victims. SAR teams face various challenges, including limited access to debris and collapsed structures, safety risks due to unstable conditions, and disrupted communication networks. Methods: In this paper, we present DeepSafe, a novel two-level deep learning approach for multilevel classification and object detection using a simulated disaster victim dataset. DeepSafe first employs YOLOv8 to classify images into victim and non-victim categories. Subsequently, Detectron2 is used to precisely locate and outline the victims. Results: Experimental results demonstrate the promising performance of DeepSafe in both victim classification and detection. The model effectively identified and located victims under the challenging conditions presented in the dataset. Conclusion: DeepSafe offers a practical tool for real-time disaster management and SAR operations, significantly improving on conventional methods by reducing delays and enhancing victim detection accuracy in disaster-stricken urban areas.
Cyclohexene is an important raw material in the production of nylon, and selective hydrogenation of benzene is a key method for preparing it. However, the Ru catalysts used in current industrial processes still face challenges, including high metal usage, high process costs, and low cyclohexene yield. This study combines existing literature data with machine learning methods to analyze the factors influencing benzene conversion, cyclohexene selectivity, and yield in the benzene-to-cyclohexene hydrogenation reaction, constructing predictive models based on the XGBoost and Random Forest algorithms. The analysis found that reaction time, Ru content, and space velocity are the key factors influencing cyclohexene yield, selectivity, and benzene conversion. Shapley Additive Explanations (SHAP) and feature importance analyses further revealed the contribution of each variable to the reaction outcomes. Additionally, we randomly generated one million variable combinations using the Dirichlet distribution to predict high-yield catalyst formulations. This paper provides new insights into the application of machine learning in heterogeneous catalysis and offers a reference for further research.
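The Dirichlet sampling used to generate candidate variable combinations can be reproduced with the standard Gamma-variate construction: normalize independent Gamma draws so each sample is a non-negative vector summing to one. The four-component split and uniform concentration parameters below are illustrative assumptions, not the paper's actual variables.

```python
import random

def dirichlet_sample(alphas, rng=random.Random(42)):
    """Draw one composition from Dirichlet(alphas) by normalizing
    independent Gamma(alpha, 1) variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

# hypothetical fractions: Ru loading, promoter, support, additive
combos = [dirichlet_sample([1.0, 1.0, 1.0, 1.0]) for _ in range(1000)]
```

With all concentration parameters equal to 1 this samples uniformly over the simplex, which is a natural way to explore compositional recipes that must sum to a whole.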
The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability, yet research on the uplift resistance of special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning techniques to explore this issue. It summarizes special-shaped tunnel geometries and introduces a shape coefficient. Using the finite element software Plaxis3D, the study simulates how six key parameters (shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density) affect uplift resistance across different conditions. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed based on the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters exhibit the contrary effect. Furthermore, the feature importance for uplift resistance decreases in the order of burial depth ratio, internal friction angle, the tunnel's longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient.
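Importance rankings like the one above can be illustrated with model-agnostic permutation importance: shuffle one feature column at a time and measure how much a fitted model's error grows. This is a generic sketch, not the XGBoost/ANN importance scores the paper computes; the toy "model" is a known function standing in for the trained surrogate, and the feature names are assumptions.

```python
import random

def permutation_importance(model, X, y, rng=random.Random(0)):
    """MSE increase when each feature column is shuffled in turn."""
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(mse(Xp) - base)
    return importances

# toy surrogate: "uplift resistance" depends strongly on feature 0
# (burial depth ratio) and only weakly on feature 1 (shape coefficient)
data_rng = random.Random(1)
X = [[data_rng.random(), data_rng.random()] for _ in range(200)]
y = [5.0 * x0 + 0.2 * x1 for x0, x1 in X]
imp = permutation_importance(lambda r: 5.0 * r[0] + 0.2 * r[1], X, y)
```

The dominant feature's importance comes out far larger than the weak one's, mirroring the kind of ordering reported in the study.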
Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, the research landscape has become more complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques addressing system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, as well as defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in personalized models, scalability, and real-world adoption.
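The aggregation step at the heart of horizontal FL is FedAvg: the server averages client model updates weighted by local sample counts. A minimal sketch, with flat parameter lists standing in for real model weights:

```python
def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: sample-count-weighted mean of client models."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i, wi in enumerate(w):
            agg[i] += (n / total) * wi
    return agg

# two clients, one holding three times the data of the other
global_w = fedavg([[1.0, 0.0], [5.0, 4.0]], client_sizes=[30, 10])
```

Straggler mitigation and staleness management, discussed in the survey, are largely about when and with what weights this same averaging is applied; privacy-preserving aggregation replaces the plain sum with a secure one.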
Ever since research in machine learning gained traction in recent years, it has been employed to address challenges in a wide variety of domains, including mechanical devices. Most machine learning models are built on the assumption of a static learning environment, but in practical situations the data generated by a process is dynamic; this evolution of the data is termed concept drift. This paper presents an approach for predicting mechanical failure in real time using incremental learning based on statistically calculated parameters of mechanical equipment. The method is applicable to all mechanical devices that are susceptible to failure or operational degradation, and it can detect drift in the data-generation process and adapt to it. The proposed approach evaluates machine learning and deep learning models for their efficacy in handling errors in industrial machines arising from their dynamic nature. We observed that, in settings without concept drift in the data, methods like SVM and Random Forest performed better than deep neural networks. However, these models showed poor sensitivity, reporting even the smallest fluctuation in the machine data as drift. In contrast, the DNN yielded a stable drift detector, with an accuracy of 84% and an AUC of 0.87 while detecting only a single drift point, indicating that it can better detect and adapt to new data in drifting environments under industrial measurement settings.
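The drift-detection behaviour discussed above can be illustrated with the Page-Hinkley test, a standard lightweight detector for mean shifts in a data stream. This is a generic textbook detector, not the paper's method, and the `delta`/`lam` thresholds are illustrative.

```python
class PageHinkley:
    """Flags concept drift when the cumulative deviation of the stream
    from its running mean exceeds a threshold lam; delta tolerates noise."""
    def __init__(self, delta=0.05, lam=5.0):
        self.delta, self.lam = delta, lam
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0       # cumulative deviation m_t
        self.cum_min = 0.0   # running minimum of m_t

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam  # True -> drift alarm

detector = PageHinkley()
stream = [0.0] * 50 + [3.0] * 20   # sensor mean shifts at t = 50
alarms = [t for t, x in enumerate(stream) if detector.update(x)]
```

The `delta`/`lam` pair directly encodes the sensitivity trade-off the paper reports: small values flag every fluctuation as drift, larger values detect only sustained shifts.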
Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server subnetworks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches: the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive attributes. Moreover, balancing model utility against data privacy remains a challenging problem. In this article, we propose a novel defense approach that combines (i) adversarial learning and (ii) network channel pruning. The adversarial learning component is specifically designed to reduce the risk of private data exposure while maintaining high performance on the utility task, while the channel pruning component enables the model to adaptively adjust and reactivate pruned channels during adversarial training. Together, these two techniques reduce the informativeness of the intermediate data transmitted by the client sub-model, enhancing its robustness against attribute inference attacks without adding significant computational overhead and making it well suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense was evaluated using EfficientNet-B0, a widely adopted compact model, on three benchmark datasets. The results show superior defense capability against attribute inference attacks compared with existing state-of-the-art methods, demonstrating that the proposed channel-pruning-based adversarial training achieves the intended compromise between utility and privacy within SL frameworks: the classification accuracy attained by attackers dropped drastically, by 70%.
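The channel-pruning half of the defence can be sketched as magnitude-based selection: channels whose filter weights have the smallest L1 norm are masked out. This is a generic sketch of magnitude pruning, not the paper's adaptive reactivation scheme, and the toy weights are invented.

```python
def prune_channels(channels, prune_ratio):
    """Return a keep-mask over channels, dropping the fraction with the
    smallest L1 weight norm (ties broken by channel index)."""
    norms = [sum(abs(w) for w in ch) for ch in channels]
    n_prune = int(len(channels) * prune_ratio)
    order = sorted(range(len(channels)), key=lambda i: (norms[i], i))
    dropped = set(order[:n_prune])
    return [i not in dropped for i in range(len(channels))]

# four channels of a toy conv layer; 50% pruning keeps the two largest
mask = prune_channels(
    [[0.1, -0.1], [1.0, 2.0], [0.0, 0.05], [-3.0, 0.5]],
    prune_ratio=0.5)
```

Fewer active channels in the client sub-model means a lower-capacity intermediate representation, which is exactly the lever the defence uses to make attribute inference harder without a heavy compute cost.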
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it uniformly, which may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design is a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7 to 2.6 points in accuracy and 0.8 to 1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation: by separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
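The three-phase schedule above can be expressed as a simple weighting function over training progress. The phase boundaries (30% / 80%) and the 50/50 joint split are illustrative assumptions, not the paper's tuned values.

```python
def tscl_weights(step, total_steps, b1=0.3, b2=0.8):
    """Loss weights (w_pred, w_expl) for a three-phase curriculum:
    prediction-only, joint prediction-explanation, explanation-only."""
    progress = step / total_steps
    if progress < b1:       # phase (i): stabilize task prediction
        return 1.0, 0.0
    if progress < b2:       # phase (ii): joint prediction + rationale
        return 0.5, 0.5
    return 0.0, 1.0         # phase (iii): refine explanations

schedule = [tscl_weights(s, 100) for s in range(100)]
```

The total loss at each step is then `w_pred * L_pred + w_expl * L_expl`, so the curriculum slots into a DSbS-style trainer without any architectural change.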
The latest digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, innovative, gamified, and personalized Learning Ecosystem that integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and inference-driven decision-making. The system uses immersive, analytically assessed virtual learning spaces, facilitating real-time monitoring not only of learning performance but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, while dedicated dashboards translate complex educational data into actionable insights for all use cases of the system (educational institutions, educators, and learners). Additionally, the system introduces a structured Mentoring and Consulting Subsystem, reinforcing human-centered guidance alongside automated intelligence. The platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive Learning Ecosystem that combines immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.
Automated grading of dandruff severity is a clinically significant but challenging task due to the inherent ordinal nature of severity levels and the high prevalence of label noise from subjective expert annotations. Standard classification methods fail to address these dual challenges, limiting their real-world performance. In this paper, a novel three-phase training framework is proposed that learns a robust ordinal classifier directly from noisy labels. The approach synergistically combines a rank-based ordinal regression backbone with a cooperative, semi-supervised learning strategy to dynamically partition the data into clean and noisy subsets. A hybrid training objective is then employed, applying a supervised ordinal loss to the clean set, while the noisy set is trained with a dual objective that combines a semi-supervised ordinal loss with a parallel, label-agnostic contrastive loss. This design allows the model to learn from the entire noisy subset while using contrastive learning to mitigate the risk of error propagation from potentially corrupt supervision. Extensive experiments on a new, large-scale, multi-site clinical dataset validate our approach. The method achieves state-of-the-art performance with 80.71% accuracy and a 76.86% F1-score, significantly outperforming existing approaches, including a 2.26% improvement over the strongest baseline method. This work provides not only a robust solution for a practical medical imaging problem but also a generalizable framework for other tasks plagued by noisy ordinal labels.
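Rank-based ordinal regression of the kind referenced above typically reduces a K-level grading task to K−1 cumulative binary problems ("is severity greater than threshold t?"). A minimal encode/decode sketch; the four severity levels are an assumption for illustration, and the exact backbone losses are not specified in the abstract.

```python
def ordinal_encode(label, num_levels):
    """Level k -> K-1 binary targets: 'is severity > t?' per threshold t."""
    return [1 if label > t else 0 for t in range(num_levels - 1)]

def ordinal_decode(probs, threshold=0.5):
    """Predicted level = number of thresholds the sample exceeds.
    Counting (rather than scanning for the first zero) tolerates
    non-monotonic classifier outputs."""
    return sum(1 for p in probs if p > threshold)

# 4 severity levels -> 3 thresholds
codes = [ordinal_encode(k, 4) for k in range(4)]
preds = [ordinal_decode(c) for c in codes]
```

Because adjacent levels share most of their binary targets, a one-level labeling error corrupts only a single threshold task, which is part of why this encoding pairs well with noisy-label training.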
Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to adversarial examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity; the accuracy and timeliness of the IDS are crucial to the network's availability and reliability. This paper analyzes ARL applications in NIDS, reviewing state-of-the-art (SoTA) methodology, open issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, which enables an agent to interact with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. Additionally, this survey addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined in the ARL-NIDS literature. This comprehensive study evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes cybersecurity defense research and innovation.
As urbanization continues to accelerate, the challenges associated with managing transportation in metropolitan areas become increasingly complex. The surge in population density contributes to traffic congestion, degrading travel experiences and posing safety risks. Smart urban transportation management emerges as a strategic solution, conceptualized here as a multidimensional big data problem. The success of this strategy hinges on the effective collection of information from diverse, extensive, and heterogeneous data sources, necessitating the implementation of full-stack Information and Communication Technology (ICT) solutions. The main idea of this work is to investigate current Intelligent Transportation Systems (ITS) technologies and enhance the safety of urban transportation systems. Machine learning models, trained on historical data, can predict traffic congestion, allowing for the implementation of preventive measures. Deep learning architectures, with their ability to handle complex data representations, further refine traffic predictions, contributing to more accurate and dynamic transportation management. The background of this research underscores the challenges posed by traffic congestion in metropolitan areas and emphasizes the need for advanced technological solutions. By integrating GPS and GIS technologies with machine learning algorithms, this work aims to support the development of intelligent transportation systems that not only address current challenges but also pave the way for future advances in urban transportation management.
As carrier aircraft sortie frequency and flight deck operational density increase, autonomous dispatch trajectory planning for carrier-based vehicles demands efficient, safe, and kinematically feasible solutions. This paper presents an Iterative Safe Dispatch Corridor (iSDC) framework that addresses the suboptimality of the traditional SDC method caused by static corridor construction and redundant obstacle exploration. First, a Kinodynamic-Informed Bidirectional Rapidly-exploring Random Tree Star (KIBRRT*) algorithm is proposed for front-end coarse planning. By integrating bidirectional tree expansion, goal-biased elliptical sampling, and artificial potential field guidance, it reduces unnecessary exploration near concave obstacles and generates kinematically admissible paths. Second, the traditional SDC is applied iteratively: the trajectory obtained in the current iteration is fed into the next iteration for corridor generation, progressively improving the quality of the within-corridor constraints. For tractors, a reverse-motion penalty function is incorporated into the back-end optimizer to prioritize forward driving, in line with mechanical constraints and human operational preferences. Numerical validations on data for the Gerald R. Ford-class carrier demonstrate that KIBRRT* reduces average computational time by 75% and expansion nodes by 25% compared with conventional RRT* algorithms. Meanwhile, the iSDC framework yields more time-efficient trajectories for both carrier aircraft and tractors, with dispatch time reduced by 31.3% and the tractor reverse-motion proportion decreased by 23.4% relative to traditional SDC. The presented framework offers a scalable solution for autonomous dispatch in confined, safety-critical environments, and an illustrative animation is available at bilibili.com/video/BV1tZ7Zz6Eyz. Moreover, the framework extends readily to three-dimensional scenarios and is thus applicable to trajectory planning for aerial and underwater vehicles.
BACKGROUND: The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. AIM: To develop and validate a case-level multiple-instance learning (MIL) framework that mimics a pathologist's comprehensive review and improves T3/T4 CRC LNM prediction. METHODS: Whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). RESULTS: The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (± SD) of 0.899 ± 0.033 vs 0.814 ± 0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904 ± 0.047, in sharp contrast to a clinical-only model (mean AUC 0.584 ± 0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. CONCLUSION: A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions, pending further validation.
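The "high-attention regions" in the results come from attention-based MIL pooling: each instance (a slide or patch embedding) receives a learned score, and the bag-level embedding is the softmax-weighted mean of the instances. A minimal sketch with fixed, untrained scores and made-up two-dimensional features; the paper's actual pooling internals are not given in the abstract.

```python
import math

def attention_mil_pool(instances, scores):
    """Softmax the per-instance scores, then form the bag embedding as
    the attention-weighted mean of the instance feature vectors."""
    m = max(scores)                       # stabilize the softmax
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    attn = [e / z for e in exp]
    dim = len(instances[0])
    bag = [sum(a * inst[d] for a, inst in zip(attn, instances))
           for d in range(dim)]
    return bag, attn

# three patch embeddings; the second patch draws most of the attention
bag, attn = attention_mil_pool(
    [[0.0, 1.0], [4.0, 0.0], [0.0, 1.0]],
    scores=[0.0, 3.0, 0.0])
```

The attention weights are exactly what a pathologist can inspect: high-weight patches are the regions the model claims drove the case-level prediction.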
Providing safe, high-quality food is crucial for every household and is of great significance for the growth of any society. It is a complex undertaking that spans the entire food-processing chain from seed to harvest, storage, preparation, and consumption. This paper examines the role of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems and provide assurance over food safety. CV is particularly valuable today because it improves food-processing quality and benefits both firms and researchers; at the present production stage, image processing and computer vision are incorporated into all facets of food production. In this field, DL and ML are applied to identify both the type and the quality of food. Comparing the surveyed approaches in terms of their data and results reveals notable similarities. The findings of this study will therefore be helpful for scholars seeking a suitable approach to assessing food quality: the survey indicates which food products have been examined by other scholars and points readers to related work for further research. DL in particular has been accurately applied to identifying the quality and safety of foods on the market. The paper concludes by describing current practices and concerns in ML and DL and probable trends for their future development.
Abstract: Grasping is one of the most fundamental operations in modern robotics applications. While deep reinforcement learning (DRL) has demonstrated strong potential in robotics, it places heavy emphasis on maximizing the cumulative reward of a task, and potential safety risks are often ignored. In this paper, an optimization method based on safe reinforcement learning (Safe RL) is proposed to address the robotic grasping problem under safety constraints. Specifically, considering the obstacle-avoidance constraints of the system, the grasping problem of the manipulator is modeled as a Constrained Markov Decision Process (CMDP). A Lagrange multiplier and a dynamic weighting mechanism are introduced into the Proximal Policy Optimization (PPO) framework, leading to the dynamic weighted Lagrange PPO (DWL-PPO) algorithm, which penalizes violations of safety constraints while optimizing the policy. In addition, orientation control of the end-effector is included in the reward function, and a compound reward function adapted to changes in pose is designed. The efficacy and advantages of the proposed method are demonstrated through extensive training and testing in the PyBullet simulator. Grasping experiments reveal that the approach provides superior safety and efficiency compared with other advanced RL methods and achieves a good trade-off between model learning and risk aversion.
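The constraint-handling idea above can be illustrated with a minimal sketch of a Lagrangian dual update for a CMDP. This is a toy, not the paper's DWL-PPO: the learning rate, cost limit, and episode costs are assumptions, and the dynamic weighting mechanism is omitted.

```python
# Illustrative Lagrangian update for a constrained MDP (not the paper's
# exact DWL-PPO; values below are assumed for demonstration).

def lagrangian_objective(reward_return, cost_return, lam):
    """Penalized objective: maximize reward while penalizing constraint cost."""
    return reward_return - lam * cost_return

def update_multiplier(lam, cost_return, cost_limit, lr=0.1):
    """Dual ascent: grow lambda while the cost constraint is violated."""
    return max(0.0, lam + lr * (cost_return - cost_limit))

lam = 0.0
cost_limit = 1.0
# Simulated per-episode safety costs: constraint violated early, satisfied later.
for cost in [3.0, 2.0, 1.5, 0.5, 0.5]:
    lam = update_multiplier(lam, cost, cost_limit)
print(round(lam, 2))  # multiplier rose during violations, then decayed
```

The multiplier grows while episodes violate the cost limit and shrinks (toward zero, never below) once the policy becomes safe, which is what lets the penalty weight adapt during training.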
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFF0500600), the Natural Science Foundation of Henan Province (No. 252300421176), the National Natural Science Foundation of China (Nos. 22478361 and 22108256), and the Frontier Exploration Projects of Longmen Laboratory (No. LMQYTSKT021).
Abstract: Batteries play a crucial role in the storage and application of sustainable energy, yet their inherent safety risks are non-negligible. Traditional monitoring methods often suffer from high costs, time consumption, and limited scalability, making it increasingly difficult to meet the evolving demands of modern society. In this context, recent advancements in machine learning have emerged as a promising solution for predicting and monitoring battery states, offering innovative approaches to battery management systems (BMS). By transforming raw operational data into actionable insights, machine learning has shifted the paradigm from reactive to predictive battery safety management, significantly enhancing system reliability and risk mitigation. This review examines the implementation of machine learning in battery state prediction, including dataset selection, feature extraction, and model training. It also highlights the latest progress of these models in key applications such as state of health (SOH), state of charge (SOC), thermal runaway warning, fault detection, and remaining useful life (RUL). Finally, we critically examine the challenges and opportunities of leveraging machine learning to improve battery safety and performance, providing a comprehensive perspective for future research in this rapidly advancing field.
Abstract: Low-visibility conditions, particularly those caused by fog, significantly affect road safety and reduce drivers' ability to see ahead clearly. Conventional approaches to this problem rely primarily on instrument-based and fixed-threshold theoretical frameworks, which adapt poorly and perform worse under varying environmental conditions. To overcome these challenges, we propose a real-time visibility estimation model that leverages roadside CCTV cameras to monitor and identify visibility levels under different weather conditions. The proposed method begins by identifying specific regions of interest (ROI) in the CCTV images and extracts features such as the number of lines and contours detected within these regions. These features are then fed to the proposed hierarchical clustering model, which classifies them into different visibility levels without predefined rules or threshold values. We pair the clustering model with two distance metrics, dynamic time warping (DTW) and Euclidean distance, and evaluate its performance on numerous measures. The proposed model achieved an average accuracy of 97.81%, precision of 91.31%, recall of 91.25%, and F1-score of 91.27% using the DTW distance metric. We also ran experiments with deep learning (DL)-based models from the literature and compared their performance with the proposed model. The experimental results demonstrate that the proposed model is more adaptable and consistent than existing methods. It provides drivers with real-time, accurate visibility information and enhances road safety during low-visibility conditions.
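The DTW distance the paper pairs with hierarchical clustering can be sketched as a plain dynamic-programming table over two feature sequences (e.g., per-frame line/contour counts). This is the textbook algorithm, not the paper's code; the clustering step is omitted.

```python
# Plain dynamic-time-warping distance between two 1-D feature sequences.
# Textbook formulation; sequences below are illustrative.

def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

print(dtw([1, 2, 3], [1, 2, 3]))  # identical sequences: distance 0.0
print(dtw([1, 2, 3], [2, 2, 4]))  # shifted sequence: distance 2.0
```

Unlike Euclidean distance, DTW tolerates local time shifts, which is why it suits frame-by-frame CCTV features whose patterns drift in time.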
Funding: Sponsored by the National Natural Science Foundation of China (Nos. U21A20165 and 52072057).
Abstract: This study presents an interpretable surrogate framework for predicting pedestrian-leg injury severity that integrates high-fidelity finite-element (FE) simulations with a TabNet-based deep-learning model. We generated a parametric dataset of 3000 impact scenarios, covering ten vehicle types and various legform impactors, using automated FE runs configured via Latin hypercube sampling. After preprocessing and one-hot encoding of categorical features, we trained TabNet alongside Support Vector Regression, Random Forest, and Decision-Tree ensembles. All models underwent hyperparameter tuning via Optuna's Bayesian optimization coupled with repeated four-fold cross-validation (20 trials per model). TabNet achieved the best balance of explanatory power and predictive accuracy, with an average R^2 = 0.94 ± 0.01 and RMSE = 0.14 ± 0.02. On an independent test set, 85%, 88%, and 90% of predictions for tibial acceleration, knee-flexion angle, and shear displacement, respectively, fell within ±20% of true peaks. SHAP-based analyses confirm that collision-point location and bumper geometry dominate injury outcomes. These results demonstrate TabNet's capacity to deliver rapid, robust, and explainable injury predictions, offering actionable design insights for vehicle front-end optimization and regulatory assessment in early development stages.
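The repeated k-fold protocol used for tuning can be sketched as a plain index generator. This is a generic stand-in, not the paper's Optuna pipeline; sample count, seed, and fold counts are assumptions.

```python
# Repeated k-fold index splitting (generic sketch; the paper used four folds
# with 20 tuning trials per model via Optuna, which is not reproduced here).
import random

def repeated_kfold(n_samples, k=4, repeats=2, seed=0):
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)  # fresh shuffle per repeat
        fold_size = n_samples // k
        for f in range(k):
            val = idx[f * fold_size:(f + 1) * fold_size]
            val_set = set(val)
            train = [i for i in idx if i not in val_set]
            yield train, val

splits = list(repeated_kfold(8, k=4, repeats=2))
print(len(splits))        # 4 folds x 2 repeats = 8 splits
print(len(splits[0][1]))  # each validation fold holds 8 // 4 = 2 samples
```

Each repeat reshuffles before folding, so validation folds differ across repeats and the averaged score is less sensitive to one unlucky partition.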
Funding: Supported by the European Union's Horizon 2020 Research and Innovation Program (Grant No. 739578) and the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation, and Digital Policy.
Abstract: Background: Efficient disaster victim detection (DVD) in urban areas after natural disasters is crucial for minimizing losses. However, conventional search and rescue (SAR) methods often experience delays, which can hinder the timely detection of victims. SAR teams face various challenges, including limited access to debris and collapsed structures, safety risks due to unstable conditions, and disrupted communication networks. Methods: In this paper, we present DeepSafe, a novel two-level deep learning approach for multilevel classification and object detection using a simulated disaster victim dataset. DeepSafe first employs YOLOv8 to classify images into victim and non-victim categories. Subsequently, Detectron2 is used to precisely locate and outline the victims. Results: Experimental results demonstrate the promising performance of DeepSafe in both victim classification and detection. The model effectively identified and located victims under the challenging conditions presented in the dataset. Conclusion: DeepSafe offers a practical tool for real-time disaster management and SAR operations, significantly improving on conventional methods by reducing delays and enhancing victim detection accuracy in disaster-stricken urban areas.
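The two-level flow, classify first, then detect only on positive frames, can be shown as simple control flow. The stubs below are placeholders for the real YOLOv8 and Detectron2 models; the frame dictionaries and field names are invented for illustration.

```python
# Control-flow sketch of a two-stage detection pipeline (stage names from the
# paper; the stub predicates and frame format below are assumptions).

def classify_victim(frame):   # stage 1 stand-in: victim / non-victim gate
    return frame.get("has_person", False)

def detect_victims(frame):    # stage 2 stand-in: locate and outline victims
    return frame.get("boxes", [])

def two_stage_pipeline(frames):
    detections = []
    for frame in frames:
        if classify_victim(frame):   # skip the costly detector on negatives
            detections.extend(detect_victims(frame))
    return detections

frames = [{"has_person": True, "boxes": [(10, 20, 50, 80)]},
          {"has_person": False, "boxes": []}]
print(two_stage_pipeline(frames))
```

Gating the detector behind a cheap classifier is what makes such a cascade viable for real-time SAR use: most frames never reach the expensive second stage.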
Funding: Supported by the CAS Basic and Interdisciplinary Frontier Scientific Research Pilot Project (XDB1190300, XDB1190302), the Youth Innovation Promotion Association CAS (Y2021056), the Joint Fund of Yulin University and the Dalian National Laboratory for Clean Energy (YLU-DNL Fund 2022007), and the Special Fund for Science and Technology Innovation Teams of Shanxi Province (202304051001007).
Abstract: Cyclohexene is an important raw material in the production of nylon, and selective hydrogenation of benzene is a key method for preparing it. However, the Ru catalysts used in current industrial processes still face challenges, including high metal usage, high process costs, and low cyclohexene yield. This study combines existing literature data with machine learning methods to analyze the factors influencing benzene conversion, cyclohexene selectivity, and yield in the hydrogenation of benzene to cyclohexene, constructing predictive models based on the XGBoost and Random Forest algorithms. The analysis found that reaction time, Ru content, and space velocity are key factors influencing cyclohexene yield, selectivity, and benzene conversion. Shapley Additive Explanations (SHAP) and feature-importance analysis further revealed the contribution of each variable to the reaction outcomes. Additionally, we randomly generated one million variable combinations using the Dirichlet distribution to predict high-yield catalyst formulations. This paper provides new insights into the application of machine learning in heterogeneous catalysis and offers a reference for further research.
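The Dirichlet-sampling step, drawing random composition vectors that sum to one, can be sketched with the standard gamma-normalization construction. The component names and concentration parameters below are assumptions, not the paper's variables.

```python
# Dirichlet sampling of candidate variable combinations via normalized gamma
# draws (standard construction; component names and alphas are assumed).
import random

def dirichlet_sample(alphas, rng):
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(42)
components = ["Ru", "promoter", "support", "additive"]  # hypothetical labels
sample = dirichlet_sample([1.0] * len(components), rng)
print(round(sum(sample), 6))  # compositions always sum to 1
```

Drawing a million such vectors gives a uniform sweep over the composition simplex, each of which can then be scored by the trained yield model.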
Funding: Guangzhou Metro Scientific Research Project (No. JT204-100111-23001), the Chongqing Municipal Special Project for Technological Innovation and Application Development (No. CSTB2022TIAD-KPX0101), and the Science and Technology Research and Development Program of China State Railway Group Co., Ltd. (No. N2023G045).
Abstract: The uplift resistance of the soil overlying shield tunnels significantly impacts their anti-floating stability, yet research on uplift resistance for special-shaped shield tunnels is limited. This study combines numerical simulation with machine learning to explore the issue. It summarizes special-shaped tunnel geometries and introduces a shape coefficient. Using the finite-element software Plaxis3D, the study simulates how six key parameters, namely the shape coefficient, burial depth ratio, the tunnel's longest horizontal length, internal friction angle, cohesion, and soil submerged bulk density, affect uplift resistance across different conditions. Employing XGBoost and ANN methods, the feature importance of each parameter was analyzed on the basis of the numerical simulation results. The findings demonstrate that a tunnel shape more closely resembling a circle leads to reduced uplift resistance in the overlying soil, whereas the other parameters exhibit the opposite effect. Furthermore, the study reveals a diminishing trend in the feature importance of burial depth ratio, internal friction angle, the tunnel's longest horizontal length, cohesion, soil submerged bulk density, and shape coefficient in influencing uplift resistance.
Abstract: Federated Learning (FL) has become a leading decentralized solution that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field evolves, its research landscape has grown complex and scattered, covering different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how models are synchronized, and how to defend against attacks. It provides a structured, up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques addressing system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, along with defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining unresolved issues and highlighting challenges in handling personalized models, scalability, and real-world adoption.
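The synchronous Horizontal-FL baseline that most of the surveyed work builds on is FedAvg-style aggregation: a weighted average of client parameters. A minimal sketch, with toy two-dimensional parameter vectors standing in for model weights:

```python
# FedAvg-style weighted aggregation of client parameter vectors
# (the classic synchronous Horizontal-FL update; toy values below).

def fedavg(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg

clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]  # the second client holds three times more data
print(fedavg(clients, sizes))  # pulled toward the larger client: [2.5, 3.5]
```

Weighting by dataset size is what makes the aggregate an unbiased estimate of the pooled-data update; many of the surveyed attacks and defenses (poisoning, privacy-preserving aggregation) target exactly this averaging step.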
Abstract: Ever since research in machine learning gained traction in recent years, it has been employed to address challenges in a wide variety of domains, including mechanical devices. Most machine learning models are built on the assumption of a static learning environment, but in practical situations the data generated by a process is dynamic; this evolution of the data is termed concept drift. This paper presents an approach for predicting mechanical failure in real time using incremental learning based on statistically calculated parameters of mechanical equipment. The method applies to all mechanical devices that are susceptible to failure or operational degradation, and is equipped to detect drift in the generated data and adapt to it. The proposed approach evaluates machine learning and deep learning models for their efficacy in handling errors related to industrial machines, given their dynamic nature. In settings without concept drift in the data, methods like SVM and Random Forest performed better than deep neural networks, but at the cost of poor sensitivity, with even the smallest variation in the machine data reported as a drift. In contrast, the DNN yielded a stable drift-detection method: it reported an accuracy of 84% and an AUC of 0.87 while detecting only a single drift point, indicating the stability to detect and adapt to new data in drifting environments under industrial measurement settings.
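The drift-detection idea can be illustrated with a minimal window-based mean-shift detector, a simplified stand-in for the detectors the paper evaluates; the window size and threshold are assumptions.

```python
# Window-based mean-shift drift detector (a simplified illustration, not the
# paper's detector; window size and threshold are assumed).

def detect_drift(stream, window=5, threshold=2.0):
    """Flag indices where the recent window mean jumps versus the previous one."""
    drifts = []
    for t in range(2 * window, len(stream) + 1):
        ref = stream[t - 2 * window:t - window]   # reference window
        cur = stream[t - window:t]                # current window
        if abs(sum(cur) / window - sum(ref) / window) > threshold:
            drifts.append(t - 1)
    return drifts

# Stable sensor readings, then an abrupt shift starting at index 10.
stream = [1.0] * 10 + [5.0] * 10
print(detect_drift(stream)[0])  # first index at which the shift is flagged
```

The detector fires a few samples after the true change point because the sliding window must fill with post-shift data first; real detectors (DDM, ADWIN, and the DNN-based method above) trade off exactly this delay against false alarms.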
基金supported by a grant(No.CRPG-25-2054)under the Cybersecurity Research and Innovation Pioneers Initiative,provided by the National Cybersecurity Authority(NCA)in the Kingdom of Saudi Arabia.
Abstract: Split Learning (SL) is a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Neural networks are divided into client and server sub-networks to mitigate the exposure of sensitive data and reduce the overhead on client devices, making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches: the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover, balancing model utility against data privacy remains a challenging problem. In this article, we propose a novel defense approach that combines (i) adversarial learning and (ii) network channel pruning. The adversarial learning component is designed to reduce the risk of private data exposure while maintaining high performance on the utility task, while the channel pruning component enables the model to adaptively adjust and reactivate pruned channels during adversarial training. Together, these techniques reduce the informativeness of the intermediate data transmitted by the client sub-model, enhancing robustness against attribute inference attacks without significant computational overhead and making the approach well suited for IoT devices, mobile platforms, and Internet of Vehicles (IoV) scenarios. The proposed defense was evaluated using EfficientNet-B0, a widely adopted compact model, on three benchmark datasets, and showed superior defense capability against attribute inference attacks compared to existing state-of-the-art methods. These findings demonstrate the effectiveness of the proposed channel-pruning-based adversarial training in achieving the intended compromise between utility and privacy within SL frameworks; the classification accuracy attained by attackers dropped by 70%.
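The generic mechanism behind channel pruning can be shown with a magnitude-based sketch: zero out the channels with the smallest L1 norms. This is the standard heuristic, not the paper's adaptive scheme, and the adversarial reactivation step is omitted.

```python
# Magnitude-based channel pruning (standard L1 heuristic; the paper's
# adaptive reactivation during adversarial training is not reproduced).

def l1_norm(channel):
    return sum(abs(w) for w in channel)

def prune_channels(channels, keep_ratio=0.5):
    """Zero out the channels with the smallest L1 norms."""
    k = int(len(channels) * keep_ratio)
    ranked = sorted(range(len(channels)),
                    key=lambda i: l1_norm(channels[i]), reverse=True)
    keep = set(ranked[:k])
    return [ch if i in keep else [0.0] * len(ch)
            for i, ch in enumerate(channels)]

weights = [[0.9, -0.8], [0.1, 0.05], [0.5, 0.4], [0.02, -0.01]]
pruned = prune_channels(weights, keep_ratio=0.5)
print(sum(1 for ch in pruned if any(ch)))  # 2 of 4 channels survive
```

Pruning low-magnitude channels shrinks the information carried by the client sub-model's intermediate activations, which is the lever the defense uses against attribute inference.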
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it uniformly, which may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design is a simple but effective modification of DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
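The three-phase schedule implied above can be sketched as a loss-weight function of training progress. The phase boundaries (equal thirds) and weight values are assumptions, not the paper's exact schedule.

```python
# Phase-scheduled loss weighting for a three-phase curriculum
# (boundaries and weights are illustrative assumptions).

def tscl_weights(epoch, total_epochs):
    """Return (prediction_weight, explanation_weight) for the current phase."""
    frac = epoch / total_epochs
    if frac < 1 / 3:           # phase (i): prediction-only
        return 1.0, 0.0
    if frac < 2 / 3:           # phase (ii): joint prediction-explanation
        return 0.5, 0.5
    return 0.0, 1.0            # phase (iii): explanation-only

print(tscl_weights(0, 9))   # early training: prediction only
print(tscl_weights(4, 9))   # mid training: balanced joint objective
print(tscl_weights(8, 9))   # late training: explanation refinement
```

In a training loop, the total loss would be `w_pred * pred_loss + w_expl * expl_loss` with the weights read off this schedule each epoch, so the change needs no architectural modification.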
Abstract: The latest digital advancements have intensified the need for adaptive, data-driven, and socially centered learning ecosystems. This paper presents the formulation of a cross-platform, gamified, and personalized learning ecosystem that integrates 3D/VR environments, machine learning algorithms, and business intelligence frameworks to enhance learner-centered education and inference-driven decision-making. The system makes use of immersive, analytically assessed virtual learning spaces, facilitating real-time monitoring not only of learning performance but also of overall engagement and behavioral patterns, via a comprehensive set of sustainability-oriented, ESG-aligned Key Performance Indicators (KPIs). Machine learning models support predictive analysis, personalized feedback, and hybrid recommendation mechanisms, while dedicated dashboards translate complex educational data into actionable insights for all use cases of the system (educational institutions, educators, and learners). Additionally, the system introduces a structured mentoring and consulting subsystem, reinforcing human-centered guidance alongside automated intelligence. The platform's modular architecture and simulation-centered evaluation approach actively support personalized and continuously optimized learning pathways. It thus exemplifies a mature, adaptive learning ecosystem combining immersive technologies, analytics, and pedagogical support, contributing to contemporary digital learning innovation and sociotechnical transformation in education.
Abstract: Automated grading of dandruff severity is a clinically significant but challenging task due to the inherent ordinal nature of severity levels and the high prevalence of label noise from subjective expert annotations. Standard classification methods fail to address these dual challenges, limiting their real-world performance. In this paper, a novel three-phase training framework is proposed that learns a robust ordinal classifier directly from noisy labels. The approach synergistically combines a rank-based ordinal regression backbone with a cooperative, semi-supervised learning strategy to dynamically partition the data into clean and noisy subsets. A hybrid training objective is then employed: a supervised ordinal loss is applied to the clean set, while the noisy set is trained with a dual objective that combines a semi-supervised ordinal loss with a parallel, label-agnostic contrastive loss. This design allows the model to learn from the entire noisy subset while using contrastive learning to mitigate the risk of error propagation from potentially corrupt supervision. Extensive experiments on a new, large-scale, multi-site clinical dataset validate the approach. The method achieves state-of-the-art performance with 80.71% accuracy and a 76.86% F1-score, significantly outperforming existing approaches, including a 2.26% improvement over the strongest baseline. This work provides not only a robust solution for a practical medical imaging problem but also a generalizable framework for other tasks plagued by noisy ordinal labels.
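The rank-based ordinal backbone rests on a standard encoding: K severity grades become K-1 cumulative binary targets, each asking "is the grade above threshold t?". A minimal sketch of that encoding and its decoding (the paper's loss and network are not reproduced):

```python
# Cumulative-threshold encoding used by rank-based ordinal regression
# (standard construction; threshold value below is an assumption).

def encode_ordinal(label, num_classes):
    """Grade k of K -> K-1 binary targets, e.g. grade 2 of 4 -> [1, 1, 0]."""
    return [1 if label > t else 0 for t in range(num_classes - 1)]

def decode_ordinal(probs, threshold=0.5):
    """Predicted grade = number of binary thresholds the model passes."""
    return sum(1 for p in probs if p > threshold)

print(encode_ordinal(2, 4))             # grade 2 of 4 severity levels
print(decode_ordinal([0.9, 0.7, 0.2]))  # two thresholds passed -> grade 2
```

Because the targets are cumulative, predictions respect the ordering of severity levels, and a one-grade annotation error flips only a single binary target, which is part of why this encoding pairs well with noisy labels.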
Abstract: Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to adversarial examples, thereby increasing detection accuracy, reducing false positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity; the accuracy and timeliness of the IDS are crucial to the network's availability and reliability. This paper analyzes ARL applications in NIDS, reviewing state-of-the-art (SoTA) methodology, open issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, in which an agent interacts with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. Additionally, this survey addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined across ARL-NIDS studies. This comprehensive study evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes cybersecurity defense research and innovation.
Abstract: As urbanization accelerates, the challenges of managing transportation in metropolitan areas become increasingly complex. The surge in population density contributes to traffic congestion, degrading travel experiences and posing safety risks. Smart urban transportation management emerges as a strategic solution, conceptualized here as a multidimensional big-data problem. The success of this strategy hinges on effectively collecting information from diverse, extensive, and heterogeneous data sources, necessitating full-stack Information and Communication Technology (ICT) solutions. The main idea of this work is to investigate current Intelligent Transportation Systems (ITS) technologies and enhance the safety of urban transportation systems. Machine learning models trained on historical data can predict traffic congestion, allowing preventive measures to be implemented. Deep learning architectures, with their ability to handle complex data representations, further refine traffic predictions, contributing to more accurate and dynamic transportation management. The background of this research underscores the challenges posed by traffic congestion in metropolitan areas and emphasizes the need for advanced technological solutions. By integrating GPS and GIS technologies with machine learning algorithms, this work supports the development of intelligent transportation systems that not only address current challenges but also pave the way for future advancements in urban transportation management.
基金support of the National Key Research and Development Plan(Grant No.2021YFB3302501)the financial support of the National Science Foundation of China(Grant No.12161076)the financial support of the Fundamental Research Funds for the Central Universities(Grant No.DUT24LAB129).
Abstract: As carrier aircraft sortie frequency and flight-deck operational density increase, autonomous dispatch trajectory planning for carrier-based vehicles demands efficient, safe, and kinematically feasible solutions. This paper presents an Iterative Safe Dispatch Corridor (iSDC) framework, addressing the suboptimality of the traditional SDC method caused by static corridor construction and redundant obstacle exploration. First, a Kinodynamic-Informed Bidirectional Rapidly-exploring Random Tree Star (KIBRRT^*) algorithm is proposed for front-end coarse planning. By integrating bidirectional tree expansion, goal-biased elliptical sampling, and artificial potential field guidance, it reduces unnecessary exploration near concave obstacles and generates kinematically admissible paths. Second, the traditional SDC is implemented iteratively: the trajectory obtained in the current iteration is fed into the next iteration for corridor generation, progressively improving the quality of the within-corridor constraints. For tractors, a reverse-motion penalty function is incorporated into the back-end optimizer to prioritize forward driving, aligning with mechanical constraints and human operational preferences. Numerical validations on data from a Gerald R. Ford-class carrier demonstrate that KIBRRT^* reduces average computational time by 75% and expansion nodes by 25% compared to conventional RRT^* algorithms. Meanwhile, the iSDC framework yields more time-efficient trajectories for both carrier aircraft and tractors, with dispatch time reduced by 31.3% and the tractor reverse-motion proportion decreased by 23.4% relative to traditional SDC. The presented framework offers a scalable solution for autonomous dispatch in confined, safety-critical environments; an illustrative animation is available at bilibili.com/video/BV1tZ7Zz6Eyz. Moreover, the framework extends readily to three-dimensional scenarios and is thus applicable to trajectory planning for aerial and underwater vehicles.
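The goal-biased elliptical sampling above relies on the standard informed-sampling test: a sample is worth keeping only if a path through it could beat the current best cost c_best, i.e., only if it lies inside the ellipse with the start and goal as foci. A minimal membership check (illustrative coordinates; not the paper's planner):

```python
# Informed-sampling ellipse test used by Informed-RRT*-style planners
# (generic construction; coordinates and c_best below are illustrative).
import math

def in_informed_ellipse(sample, start, goal, c_best):
    """True if a path through `sample` could still cost less than c_best."""
    d = math.dist  # Euclidean distance (Python 3.8+)
    return d(start, sample) + d(sample, goal) <= c_best

start, goal = (0.0, 0.0), (10.0, 0.0)
c_best = 12.0  # cost of the best trajectory found so far
print(in_informed_ellipse((5.0, 1.0), start, goal, c_best))  # inside: True
print(in_informed_ellipse((5.0, 9.0), start, goal, c_best))  # outside: False
```

Rejecting samples outside this ellipse is what prunes useless exploration as the incumbent solution improves: the ellipse shrinks every time c_best decreases.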
Funding: Supported by the Chongqing Medical Scientific Research Project (Joint Project of Chongqing Health Commission and Science and Technology Bureau), No. 2023MSXM060.
Abstract: Background: The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. Aim: To develop and validate a case-level multiple-instance learning (MIL) framework that mimics a pathologist's comprehensive review and improves T3/T4 CRC LNM prediction. Methods: Whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). Results: The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (± SD) of 0.899 ± 0.033 vs 0.814 ± 0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904 ± 0.047, in sharp contrast to a clinical-only model (mean AUC 0.584 ± 0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. Conclusion: A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions, pending further validation.
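The case-level aggregation in attention-based MIL can be sketched as an attention-weighted mean over per-slide feature vectors. This is the generic pooling mechanism, with a toy scoring function standing in for the learned attention network; the feature values are invented for illustration.

```python
# Attention-weighted MIL pooling of per-slide features into one case-level
# embedding (generic mechanism; scores and features below are toy values).
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mil_pool(instance_feats, scores):
    """Case embedding = attention-weighted mean of instance (slide) features."""
    attn = softmax(scores)
    dim = len(instance_feats[0])
    return [sum(a * f[i] for a, f in zip(attn, instance_feats))
            for i in range(dim)]

slides = [[1.0, 0.0], [0.0, 1.0]]          # two slides, toy 2-D features
case_vec = mil_pool(slides, scores=[2.0, 0.0])  # first slide scores higher
print([round(v, 3) for v in case_vec])     # embedding dominated by slide 1
```

Because the attention weights are learned, the same mechanism that pools the slides also exposes which regions the model attended to, which is what made the pathologist's review of high-attention regions possible.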
Abstract: Providing safe, high-quality food is crucial for every household and is of great significance to the growth of any society. It is a complex process spanning the development of food from seed to harvest, storage, preparation, and consumption. This paper examines the roles of artificial intelligence, machine learning (ML), deep learning (DL), and computer vision (CV) in ensuring food safety and quality. These technologies are well suited to such problems and provide assurance over food safety. CV is particularly valuable today because it improves food-processing quality and benefits both firms and researchers; at the present production stage, image processing and computer vision are incorporated into all facets of food production. In this field, DL and ML are used to identify both the type and the quality of food. Comparing the data- and result-oriented perspectives of various approaches reveals notable similarities. The findings of this study will therefore be helpful for scholars seeking a suitable approach to assess food quality: the review indicates which food products have been studied, points readers to related work for further research, and shows how DL is integrated into identifying the quality and safety of foods on the market. The paper concludes by describing current practices and concerns for ML and DL and probable trends for their future development.