The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks. To address these challenges, privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data. This paper introduces DPIL-Traj, an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. First, the framework incorporates Differential Privacy Clustering, which anonymizes trajectory data by applying differential privacy techniques that add noise, ensuring the protection of sensitive user information. Second, Imitation Learning is used to replicate decision-making behaviors observed in real-world trajectories. By learning from expert trajectories, this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories. Finally, Markov-based Trajectory Generation is employed to capture and maintain the inherent temporal dynamics of movement patterns. Extensive experiments conducted on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85% and privacy performance by an average of 12.51% compared to state-of-the-art approaches. Ablation studies further reveal that DP clustering effectively safeguards privacy, imitation learning enhances utility under noise, and the Markov module strengthens temporal coherence.
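To make the Markov component concrete, the sketch below is a minimal illustration of first-order Markov trajectory generation, not the DPIL-Traj implementation itself: it estimates a transition matrix over discretized grid cells from a few hypothetical cell sequences and then samples a synthetic trajectory from it.

```python
import numpy as np

# Hypothetical training data: each trajectory is a sequence of grid-cell ids.
trajectories = [[0, 1, 1, 2, 3], [0, 1, 2, 2, 3], [1, 2, 3, 3, 0]]
n_cells = 4

# Estimate a first-order Markov transition matrix with Laplace smoothing.
counts = np.ones((n_cells, n_cells))          # smoothing avoids all-zero rows
for traj in trajectories:
    for a, b in zip(traj[:-1], traj[1:]):
        counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

# Sample a synthetic trajectory that follows the learned temporal dynamics.
rng = np.random.default_rng(42)
state = int(rng.integers(n_cells))
synthetic = [state]
for _ in range(9):
    state = int(rng.choice(n_cells, p=transition[state]))
    synthetic.append(state)

print("synthetic grid-cell sequence:", synthetic)
```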
Landslides pose a formidable natural hazard across the Qinghai-Tibet Plateau (QTP), endangering both ecosystems and human life. Identifying the driving factors behind landslides and accurately assessing susceptibility are key to mitigating disaster risk. This study integrated multi-source historical landslide data with 15 predictive factors and used several machine learning models—Random Forest (RF), Gradient Boosting Regression Trees (GBRT), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost)—to generate susceptibility maps. The Shapley additive explanation (SHAP) method was applied to quantify factor importance and explore their nonlinear effects. The results showed that: (1) CatBoost was the best-performing model (CA = 0.938, AUC = 0.980) in assessing landslide susceptibility, with altitude emerging as the most significant factor, followed by distance to roads and earthquake sites, precipitation, and slope; (2) the SHAP method revealed critical nonlinear thresholds, demonstrating that historical landslides were concentrated at mid-altitudes (1400-4000 m) and decreased markedly above 4000 m, with a parallel reduction in probability beyond 700 m from roads; and (3) landslide-prone areas, comprising 13% of the QTP, were concentrated in the southeastern and northeastern parts of the plateau. By integrating machine learning and SHAP analysis, this study revealed landslide hazard-prone areas and their driving factors, providing insights to support disaster management strategies and sustainable regional planning.
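The modeling workflow described above (gradient-boosted classifier plus SHAP attribution) can be sketched as follows. The feature table, model settings, and AUC printed here are placeholders, assuming a tabular layout with one row per mapping unit and one column per predictive factor; this is not the study's actual data or configuration.

```python
import numpy as np
from catboost import CatBoostClassifier          # pip install catboost shap
from sklearn.metrics import roc_auc_score
import shap

rng = np.random.default_rng(0)

# Hypothetical stand-in for the factor table: rows = mapping units, columns =
# factors such as altitude, distance to roads, precipitation, slope, etc.
X = rng.normal(size=(500, 15))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = CatBoostClassifier(iterations=200, depth=4, verbose=False, random_seed=0)
model.fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC on the toy data: {auc:.3f}")

# SHAP values quantify each factor's contribution per sample; in the study this
# is how nonlinear thresholds (e.g., altitude bands) are read off.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
print("top factors by mean |SHAP|:", np.argsort(mean_abs)[::-1][:5])
```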
The advancement of wearable sensing technologies demands multifunctional materials that integrate high sensitivity, environmental resilience, and intelligent signal processing. In this work, a flexible hydrophobic conductive yarn (FCB@SY) featuring a controllable microcrack structure is developed via a synergistic approach combining ultrasonic swelling and non-solvent induced phase separation (NIPS). By embedding a robust conductive network and engineering the microcrack morphology, the resulting sensor achieves an ultrahigh gauge factor (GF≈12,670), an ultrabroad working range (0%-547%), a low detection limit (0.5%), rapid response/recovery times (140 ms/140 ms), and outstanding durability over 10,000 cycles. Furthermore, the hydrophobic surface endowed by the conductive coatings imparts exceptional chemical stability against acidic and alkaline environments, as well as reliable waterproof performance. This enables consistent functionality under harsh conditions, including underwater operation. Integrated with machine learning algorithms, the FCB@SY-based intelligent sensing system demonstrates dual-mode capabilities in human motion tracking and gesture recognition, offering significant potential for applications in wearable electronics, human-machine interfaces, and soft robotics.
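For context on the sensitivity figure, the gauge factor is the relative resistance change per unit strain, GF = (ΔR/R0)/ε. The snippet below only evaluates that definition on made-up resistance readings chosen to land near the reported GF; the values are illustrative, not measurements from the paper.

```python
# Gauge factor from a hypothetical measurement: GF = (delta_R / R0) / strain
R0 = 1.0e3        # unstrained resistance, ohms (illustrative)
R = 64.35e3       # resistance at the applied strain, ohms (illustrative)
strain = 0.005    # 0.5% strain, matching the reported detection limit

gauge_factor = ((R - R0) / R0) / strain
print(f"GF = {gauge_factor:.0f}")   # ~12,670 in this contrived example
```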
Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-Learning improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse sequence encoding method is introduced to generate operation sequences, while a time decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby achieving a balance between exploration capability and convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and enhances the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
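The core idea of QLPSO, a Q-learning agent tuning PSO hyperparameters on the fly, can be pictured with the toy sketch below: a small Q-table picks the inertia weight each iteration according to whether the swarm improved. It optimizes a sphere function rather than the APALSP model, omits the reverse scheduling and encoding schemes, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective standing in for the makespan
    return float(np.sum(x ** 2))

dim, n_particles, n_iters = 10, 20, 100
w_choices = [0.4, 0.6, 0.9]          # discrete inertia weights the agent may pick
q_table = np.zeros((2, len(w_choices)))  # states: 0 = improved last step, 1 = stagnated
alpha, gamma, eps = 0.1, 0.9, 0.2

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
gbest_val = float(pbest_val.min())

state = 1
for _ in range(n_iters):
    # Epsilon-greedy choice of the inertia weight for this iteration.
    action = int(rng.integers(len(w_choices))) if rng.random() < eps else int(q_table[state].argmax())
    w = w_choices[action]
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    new_best = float(pbest_val.min())
    # Reward the chosen weight when the global best improves; update the Q-table.
    reward = 1.0 if new_best < gbest_val else -0.1
    next_state = 0 if new_best < gbest_val else 1
    q_table[state, action] += alpha * (reward + gamma * q_table[next_state].max() - q_table[state, action])
    state = next_state
    if new_best < gbest_val:
        gbest_val = new_best
        gbest = pbest[pbest_val.argmin()].copy()

print(f"best toy objective after {n_iters} iterations: {gbest_val:.4f}")
```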
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of a multi-head attention mechanism allows the model to dynamically focus on energy-critical state features—such as slope gradients and obstacle density—thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
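As a rough sketch of the network side of AD-Dueling DQN, the PyTorch module below combines multi-head self-attention over per-feature tokens with the standard dueling aggregation Q(s, a) = V(s) + A(s, a) − mean_a A(s, a). Layer sizes, the number of actions, and the tokenization of the state are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionDuelingQNet(nn.Module):
    """Illustrative dueling Q-network with multi-head self-attention over
    state-feature tokens (e.g., slope, obstacle density, heading)."""

    def __init__(self, n_features: int, embed_dim: int = 32, n_actions: int = 8):
        super().__init__()
        self.embed = nn.Linear(1, embed_dim)                 # one token per scalar feature
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.value = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.advantage = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        tokens = self.embed(state.unsqueeze(-1))             # (B, n_features, embed_dim)
        attended, _ = self.attn(tokens, tokens, tokens)      # attend to energy-critical features
        pooled = attended.mean(dim=1)                        # (B, embed_dim)
        v = self.value(pooled)                               # state value V(s)
        a = self.advantage(pooled)                           # action advantages A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)           # dueling aggregation Q(s, a)

q_net = AttentionDuelingQNet(n_features=6)
q_values = q_net(torch.randn(4, 6))                          # batch of 4 toy states
print(q_values.shape)                                        # torch.Size([4, 8])
```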
Arsenic (As) pollution in soils is a pervasive environmental issue. Biochar immobilization offers a promising solution for addressing soil As contamination. The efficiency of biochar in immobilizing As in soils primarily hinges on the characteristics of both the soil and the biochar. However, the influence of a specific property on As immobilization varies among different studies, and the development and application of arsenic passivation materials based on biochar often rely on empirical knowledge. To enhance immobilization efficiency and reduce labor and time costs, a machine learning (ML) model was employed to predict As immobilization efficiency before biochar application. In this study, we collected a dataset comprising 182 data points on As immobilization efficiency from 17 publications to construct three ML models. The results demonstrated that the random forest (RF) model outperformed gradient boost regression tree and support vector regression models in predictive performance. Relative importance analysis and partial dependence plots based on the RF model were conducted to identify the most crucial factors influencing As immobilization. These findings highlighted the significant roles of biochar application time and biochar pH in As immobilization efficiency in soils. Furthermore, the study revealed that Fe-modified biochar exhibited a substantial improvement in As immobilization. These insights can facilitate targeted biochar property design and optimization of biochar application conditions to enhance As immobilization efficiency.
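A minimal version of the described workflow (random forest regression followed by importance and partial dependence analysis) might look like the sketch below, assuming a small tabular dataset; the synthetic features, target, and column meanings are placeholders rather than the collected literature data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)

# Hypothetical stand-in for the 182-point dataset: columns could be soil pH,
# biochar pH, application rate, application time, an Fe-modification flag, etc.
X = rng.uniform(size=(182, 6))
y = 40 + 30 * X[:, 3] + 10 * X[:, 1] + rng.normal(scale=5, size=182)  # immobilization %

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
print("feature importances:", np.round(rf.feature_importances_, 3))

# Partial dependence shows how predicted immobilization efficiency changes with
# one factor (here column 3, the stand-in for application time).
pd_result = partial_dependence(rf, X, features=[3], grid_resolution=20)
print(pd_result["average"].shape)   # (1, 20): mean prediction across the grid
```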
Artificial neural networks are capable of machine learning by simulating the hierarchical structure of the human brain. To enable learning by brain and machine, it is essential to accurately identify and correct prediction errors, a problem referred to as credit assignment (Lillicrap et al., 2020). Understanding how the brain handles credit assignment is therefore critical for the development of artificial intelligence informed by neuroscience.
Numerous intermediate to felsic igneous rocks are present in both subduction and collisional orogens. However, porphyry copper deposits (PCDs) are comparatively rare. The underlying factors that differentiate fertile magmas, which give rise to PCDs, from barren magmas in a specific geological setting are not well understood. In this study, three supervised machine learning algorithms, namely random forest (RF), logistic regression (LR), and support vector machine (SVM), were employed to classify metallogenic fertility in southeastern Tibet and the Sanjiang orogenic belt, based on whole-rock trace element and Sr-Nd isotopic ratios. The RF model performs better than the LR and SVM models. Feature importance analysis of the models reveals that the concentrations of Y, Eu, and Th, along with Sr-Nd isotope compositions, are crucial variables for distinguishing fertile and barren samples. However, when the optimized models were applied to the datasets of the Miocene Gangdese porphyry copper belt and the Jurassic Gangdese arc, representing collision and subduction settings respectively, a marked decline in metrics occurred in all three models, particularly on the subduction dataset. This substantial decrease indicates that the compositional characteristics of intrusions across different tectonic settings can be diverse in a multidimensional space, highlighting the complex interplay of geological factors influencing PCD formation.
Interlayers are an important factor affecting the distribution of remaining oil. Accurate identification of interlayer distribution is of great significance in guiding oilfield production and development. However, traditional methods of identifying interlayers have several limitations: (1) because the cross plots of different interlayer categories overlap, it is difficult to establish a deterministic model to classify interlayer types; (2) traditional identification methods use only two or three logging curves to identify interlayer types, making it difficult to fully utilize the information in the logging curves and greatly reducing recognition accuracy; and (3) for large volumes of complex logging data, interlayer identification is time-consuming and labor-intensive. Based on existing well-area data such as logging data and core data, this paper uses a machine learning method to quantitatively identify the interlayers in the single-well layers of the CIII sandstone group in the M oilfield. Through a comparison of various classifiers, the decision tree method is found to have the best applicability and the highest accuracy in the study area. Based on single-well identification of interlayers, the continuity of well-interval interlayers in the study area is analyzed using horizontal wells. Finally, the influence of interlayer continuity on the distribution of remaining oil is verified by the spatial distribution characteristics of interlayers combined with the production situation of the M oilfield.
To address the limited mechanical properties of silicon-based materials, this study designed 12 B-site mixed-valence perovskites with s^(0)+s^(2) electronic configurations. Five machine learning models were used to predict the bandgap values of the candidate materials, and Cs_(2)AgSbCl_(6) was selected as the optimal light-absorbing material. First-principles calculations under stress and strain showed that micro-strains can reduce material strength, enhance flexibility, directionally adjust the anisotropy of stress concentration areas, improve thermodynamic properties, and enhance sound insulation ability without significantly affecting photoelectric properties. According to device simulations, tensile strain can effectively increase the theoretical efficiency of solar cells. This work elucidates the mechanism of mechanical property changes under stress and strain, offering insights into new materials for solar energy conversion and accelerating the development of high-performance photovoltaic devices.
Integrating exhaled breath analysis into the diagnosis of cardiovascular diseases holds significant promise as a valuable tool for future clinical use, particularly for ischemic heart disease (IHD). However, current research on the volatilome (exhaled breath composition) in heart disease remains underexplored and lacks sufficient evidence to confirm its clinical validity. Key challenges hindering the application of breath analysis in diagnosing IHD include the scarcity of studies (only three published papers to date), substantial methodological bias in two of these studies, and the absence of standardized protocols for clinical implementation. Additionally, methodologies—such as sample collection, analytical techniques, machine learning (ML) approaches, and result interpretation—vary widely across studies, further complicating their reproducibility and comparability. To address these gaps, there is an urgent need to establish unified guidelines that define best practices for breath sample collection, data analysis, ML integration, and biomarker annotation. Until these challenges are systematically resolved, the widespread adoption of exhaled breath analysis as a reliable diagnostic tool for IHD remains a distant goal rather than an imminent reality.
The application of machine learning in alloy design is increasingly widespread, yet traditional models still face challenges when dealing with limited datasets and complex nonlinear relationships. This work proposes an interpretable machine learning method based on data augmentation and reconstruction to excavate high-performance low-alloyed magnesium (Mg) alloys. The data augmentation technique expands the original dataset through Gaussian noise. The data reconstruction method reorganizes and transforms the original data to extract more representative features, significantly improving the model's generalization ability and prediction accuracy, with a coefficient of determination (R^(2)) of 95.9% for the ultimate tensile strength (UTS) model and an R^(2) of 95.3% for the elongation-to-failure (EL) model. The correlation coefficient assisted screening (CCAS) method is proposed to filter low-alloyed target alloys. A new Mg-2.2Mn-0.4Zn-0.2Al-0.2Ca (MZAX2000, wt%) alloy is designed and extruded into bars at given processing parameters, achieving room-temperature strength-ductility synergy with an excellent UTS of 395 MPa and a high EL of 17.9%. This is closely related to the hetero-structured characteristic of the as-extruded MZAX2000 alloy, which consists of coarse grains (16%), fine grains (75%), and fiber regions (9%). Therefore, this work offers new insights into optimizing alloy compositions and processing parameters for attaining new strong and ductile low-alloyed Mg alloys.
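A bare-bones illustration of the Gaussian-noise augmentation step is given below: each training row is jittered several times with small feature-wise noise before fitting a regressor and scoring R^(2). The synthetic composition table, noise scale, and regressor choice are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Hypothetical composition/processing table (wt% Mn, Zn, Al, Ca, extrusion temp, ...)
X = rng.uniform(size=(60, 5))
y = 250 + 120 * X[:, 0] + 60 * X[:, 1] + rng.normal(scale=10, size=60)  # e.g. UTS, MPa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Gaussian-noise augmentation: jitter each training row a few times with small
# feature-wise noise so the model sees a denser neighborhood around each alloy.
noise_scale = 0.02 * X_tr.std(axis=0)
X_aug = np.vstack([X_tr] + [X_tr + rng.normal(scale=noise_scale, size=X_tr.shape) for _ in range(4)])
y_aug = np.tile(y_tr, 5)

model = GradientBoostingRegressor(random_state=0).fit(X_aug, y_aug)
print(f"R^2 on held-out toy data: {r2_score(y_te, model.predict(X_te)):.3f}")
```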
Achieving the simultaneous enhancement of strength and ductility in laser powder bed fused (LPBF-ed) titanium (Ti) is challenging due to the complex, high-dimensional parameter space and the interactions between parameters and powders. Herein, a hybrid intelligent framework for process parameter optimization of LPBF-ed Ti with improved ultimate tensile strength (UTS) and elongation (EL) was proposed. It combines a data augmentation method (AVG ± EC × SD), a multi-model fusion stacking ensemble learning model (GBDT-BPNN-XGBoost), an interpretable machine learning method, and the non-dominated sorting genetic algorithm (NSGA-II). The GBDT-BPNN-XGBoost model outperforms single models in predicting UTS and EL in terms of accuracy, generalization ability, and stability. The SHAP analysis reveals that laser power (P) is the most important feature affecting both UTS and EL, and it has a positive impact on them when P < 220 W. The UTS and EL of samples fabricated with the optimal process parameters were 718 ± 5 MPa and 27.9% ± 0.1%, respectively. The outstanding strength-ductility balance is attributable to the forward stresses in hard α'-martensite and the back stresses in soft αm'-martensite induced by the strain gradients of the hetero-microstructure. The back stresses strengthen the soft αm'-martensite, improving the overall UTS. The forward stresses stimulate the activation of dislocations in hard α'-martensite and the generation of 〈c + a〉 dislocations, allowing plastic strain to occur in hard regions and enhancing the overall ductility. This work provides a feasible strategy for multi-objective optimization and valuable insights into tailoring the microstructure to improve mechanical properties.
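The stacking idea can be sketched with scikit-learn's StackingRegressor fusing a GBDT, a small feed-forward network standing in for the BPNN, and XGBoost under a linear meta-learner. The toy process-parameter table and all hyperparameters below are assumptions; the data reconstruction step and the NSGA-II search are omitted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import RidgeCV
from xgboost import XGBRegressor            # pip install xgboost

rng = np.random.default_rng(3)

# Hypothetical LPBF process-parameter table: laser power, scan speed, hatch
# spacing, layer thickness; the target is a toy stand-in for UTS.
X = rng.uniform(size=(120, 4))
y = 600 + 150 * X[:, 0] - 80 * (X[:, 1] - 0.5) ** 2 + rng.normal(scale=8, size=120)

stack = StackingRegressor(
    estimators=[
        ("gbdt", GradientBoostingRegressor(random_state=0)),
        ("bpnn", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
        ("xgb", XGBRegressor(n_estimators=200, random_state=0)),
    ],
    final_estimator=RidgeCV(),               # meta-learner fusing the base predictions
    cv=5,
)
stack.fit(X, y)
print(f"stacked in-sample R^2 on the toy data: {stack.score(X, y):.3f}")
```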
The negative logarithm of the acid dissociation constant (pK_(a)) significantly influences the absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of molecules and is a crucial indicator in drug research. Given the speed and accuracy of computational methods, their role in predicting drug properties is increasingly important. Although many pK_(a) prediction models currently exist, they often focus on enhancing model precision while neglecting interpretability. In this study, we present GraFpKa, a pK_(a) prediction model using graph neural networks (GNNs) and molecular fingerprints. The results show that our acidic and basic models achieved mean absolute errors (MAEs) of 0.621 and 0.402, respectively, on the test set, demonstrating good predictive performance. Notably, to improve interpretability, GraFpKa also incorporates Integrated Gradients (IGs), providing a clearer visual description of the atoms that significantly affect the pK_(a) values. The high reliability and interpretability of GraFpKa ensure accurate pK_(a) predictions while also facilitating a deeper understanding of the relationship between molecular structure and pK_(a) values, making it a valuable tool in the field of pK_(a) prediction.
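GraFpKa itself is a graph neural network, which is too involved to reproduce here; as a loose illustration of the fingerprint branch and the MAE metric only, the sketch below featurizes a handful of molecules with RDKit Morgan fingerprints and fits a plain regressor. The pK_(a) values are approximate textbook numbers used purely as placeholders, and this model is not GraFpKa.

```python
import numpy as np
from rdkit import Chem                       # pip install rdkit
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Tiny illustrative set: approximate textbook acidic pKa values, not the
# curated training data behind GraFpKa.
data = {
    "CC(=O)O": 4.76,                    # acetic acid
    "O=C(O)c1ccccc1": 4.20,             # benzoic acid
    "Oc1ccccc1": 9.95,                  # phenol
    "OC(=O)CO": 3.83,                   # glycolic acid
    "Oc1ccc([N+](=O)[O-])cc1": 7.15,    # 4-nitrophenol
    "CCO": 15.9,                        # ethanol
}

def fingerprint(smiles: str) -> np.ndarray:
    """1024-bit Morgan (ECFP-like) fingerprint as a numpy vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)
    return np.array(list(fp), dtype=np.int8)

X = np.vstack([fingerprint(s) for s in data])
y = np.array(list(data.values()))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(f"in-sample MAE on the toy set: {mean_absolute_error(y, model.predict(X)):.3f}")
```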
Artificial intelligence (AI) and machine learning (ML) are transforming spine care by addressing challenges in diagnostics, treatment planning, and rehabilitation. This study highlights advancements in precision medicine for spinal pathologies, leveraging AI and ML to enhance diagnostic accuracy through deep learning algorithms, enabling faster and more accurate detection of abnormalities. AI-powered robotics and surgical navigation systems improve implant placement precision and reduce complications in complex spine surgeries. Wearable devices and virtual platforms, designed with AI, offer personalized, adaptive therapies that improve treatment adherence and recovery outcomes. AI also enables preventive interventions by assessing spine condition risks early. Despite progress, challenges remain, including limited healthcare datasets, algorithmic biases, ethical concerns, and integration into existing systems. Interdisciplinary collaboration and explainable AI frameworks are essential to unlock AI's full potential in spine care. Future developments include multimodal AI systems integrating imaging, clinical, and genetic data for holistic treatment approaches. AI and ML promise significant improvements in diagnostic accuracy, treatment personalization, service accessibility, and cost efficiency, paving the way for more streamlined and effective spine care and ultimately enhancing patient outcomes.
App reviews are crucial in influencing user decisions and providing essential feedback for developers to improve their products. Automating the analysis of these reviews is vital for efficient review management. While traditional machine learning (ML) models rely on basic word-based feature extraction, deep learning (DL) methods, enhanced with advanced word embeddings, have shown superior performance. This research introduces a novel aspect-based sentiment analysis (ABSA) framework to classify app reviews based on key non-functional requirements, focusing on the usability factors of effectiveness, efficiency, and satisfaction. We propose a hybrid DL model, combining BERT (Bidirectional Encoder Representations from Transformers) with BiLSTM (Bidirectional Long Short-Term Memory) and CNN (Convolutional Neural Network) layers, to enhance classification accuracy. Comparative analysis against state-of-the-art models demonstrates that our BERT-BiLSTM-CNN model achieves exceptional performance, with precision, recall, F1-score, and accuracy of 96%, 87%, 91%, and 94%, respectively. The significant contributions of this work include a refined ABSA-based relabeling framework, the development of a high-performance classifier, and the comprehensive relabeling of the Instagram App Reviews dataset. These advancements provide valuable insights for software developers to enhance usability and drive user-centric application development.
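A compact PyTorch sketch of the hybrid architecture described above is given below: a pretrained BERT encoder feeding a bidirectional LSTM, a 1-D convolution, global max pooling, and a linear classification head. The checkpoint name, layer sizes, and the three-way output (effectiveness, efficiency, satisfaction) are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer   # pip install transformers

class BertBiLstmCnn(nn.Module):
    """Illustrative BERT + BiLSTM + CNN head in the spirit of the described model."""

    def __init__(self, n_classes: int = 3, hidden: int = 128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, 64, kernel_size=3, padding=1)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.lstm(tokens)                           # (B, L, 2*hidden)
        feats = torch.relu(self.conv(seq.transpose(1, 2)))   # (B, 64, L)
        pooled = feats.max(dim=2).values                     # global max pooling over tokens
        return self.classifier(pooled)                       # logits per usability aspect

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Login keeps failing after the update"], return_tensors="pt",
                  padding=True, truncation=True)
logits = BertBiLstmCnn()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)   # torch.Size([1, 3])
```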
In the big data era, the surge in network traffic volume poses challenges for network management and cybersecurity. Network Traffic Classification (NTC) employs deep learning to categorize traffic data, aiding security and analysis systems such as Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). However, current NTC methods, based on isolated network simulations, usually fail to adapt to new protocols and applications and ignore the effects of network conditions and user behavior on traffic patterns. To improve network traffic management insights, federated learning frameworks have been proposed to aggregate diverse traffic data for collaborative model training. This approach faces challenges such as data integrity, label noise, packet loss, and skewed data distributions. While label noise can be mitigated through the use of sophisticated traffic labeling tools, other issues such as packet loss and skewed data distributions encountered in Network Packet Brokers (NPB) can severely impede the efficacy of federated learning algorithms. In this paper, we introduce the Robust Traffic Classifier with Federated Contrastive Learning (FC-RTC), which combines federated and contrastive learning methods. Using the SupCon loss function from contrastive learning, FC-RTC distinguishes between similar and dissimilar samples. By training on sample pairs, FC-RTC updates effectively even when it receives corrupted traffic data with packet loss or disorder. In cases of sample imbalance, the contrastive loss over similar samples reduces model bias toward higher-proportion data. By addressing uneven data distribution and packet loss, our system enhances its capability to adapt and perform accurately in real-world network traffic analysis, meeting the specific demands of this complex field.
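The supervised contrastive objective at the heart of FC-RTC can be written compactly; the function below is a simplified SupCon-style loss over one batch of flow embeddings, pulling same-class samples together and pushing other classes apart. The temperature, embedding size, and toy batch are assumptions, and this is not the FC-RTC training code.

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Minimal supervised contrastive (SupCon-style) loss over one batch."""
    z = F.normalize(embeddings, dim=1)                   # unit-norm embeddings
    sim = z @ z.t() / tau                                # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude the anchor from the denominator

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)      # the diagonal is never a positive
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)         # avoid division by zero
    loss_per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss_per_anchor[pos_mask.any(dim=1)].mean()   # anchors with no positive are skipped

# Toy batch: 8 flow embeddings from 3 traffic classes.
emb = torch.randn(8, 16)
lab = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supcon_loss(emb, lab))
```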
Climate downscaling is used to transform large-scale meteorological data into small-scale data with enhanced detail, which finds wide applications in climate modeling, numerical weather forecasting, and renewable energy. Although deep-learning-based downscaling methods effectively capture the complex nonlinear mapping between meteorological data of varying scales, supervised deep-learning-based downscaling methods suffer from insufficient high-resolution data in practice, and unsupervised methods struggle to accurately infer small-scale specifics from limited large-scale inputs due to small-scale uncertainty. This article presents DualDS, a dual-learning framework utilizing a Generative Adversarial Network-based neural network and subgrid-scale auxiliary information for climate downscaling. The learning method is unified in a two-stream framework through up- and downsamplers, where the downsampler is used to simulate the information-loss process during upscaling, and the upsampler is used to reconstruct lost details and correct errors incurred during upscaling. This dual-learning strategy can eliminate the dependence on high-resolution ground-truth data in the training process and refine the downscaling results by constraining the mapping process. Experimental findings demonstrate that DualDS is comparable to several state-of-the-art deep learning downscaling approaches, both qualitatively and quantitatively. Specifically, for a single surface-temperature data downscaling task, our method is comparable with other unsupervised algorithms on the same dataset, achieving a 0.469 dB higher peak signal-to-noise ratio, 0.017 higher structural similarity, 0.08 lower RMSE, and the best correlation coefficient. In summary, this paper presents a novel approach to addressing small-scale uncertainty issues in unsupervised downscaling processes.
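For reference, the image-quality metrics quoted above can be computed as in the short sketch below; the toy temperature fields are placeholders, and the PSNR peak convention shown (the reference field's dynamic range) is one common choice rather than necessarily the one used in the paper.

```python
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred: np.ndarray, ref: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using the reference dynamic range as the peak."""
    peak = ref.max() - ref.min()
    return float(20 * np.log10(peak / rmse(pred, ref)))

# Toy 2-D surface-temperature fields standing in for downscaled output vs. truth.
rng = np.random.default_rng(0)
truth = 280 + 5 * rng.random((64, 64))                      # Kelvin, hypothetical
downscaled = truth + rng.normal(scale=0.3, size=truth.shape)

print(f"RMSE: {rmse(downscaled, truth):.3f} K, PSNR: {psnr(downscaled, truth):.2f} dB")
```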