In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
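The core idea of sensitivity regularization can be illustrated with a deliberately tiny sketch. Everything here (the scalar linear "embedding", the penalty weight, the target value) is an invented toy, not SensFL's actual formulation: for a linear map e = w·x, the sensitivity de/dx is simply w, so adding lam·w² to the task loss shrinks the embedding's sensitivity to the input.

```python
# Toy illustration of sensitivity-regularized training (invented example,
# not SensFL's actual formulation). The "embedding" is a scalar linear map
# e = w * x, so its sensitivity to the input x is |de/dx| = |w|; adding
# lam * w**2 to the task loss therefore shrinks that sensitivity.
def train(lam, steps=2000, lr=0.01):
    w = 0.0
    x, target = 1.0, 2.0               # one toy sample: embedding should be near 2
    for _ in range(steps):
        e = w * x
        grad = 2 * (e - target) * x + 2 * lam * w   # d/dw [(e - target)^2 + lam * w^2]
        w -= lr * grad
    return w

w_plain = train(lam=0.0)   # no sensitivity penalty -> w converges to 2.0
w_reg   = train(lam=1.0)   # penalized map -> w converges to 2 / (1 + lam) = 1.0
```

The regularized weight is strictly smaller in magnitude, i.e. the embedding leaks less about the input, at the cost of a larger task error: exactly the privacy/accuracy trade-off the abstract describes.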
Liposomes serve as critical carriers for drugs and vaccines, with their biological effects influenced by their size. The microfluidic method, renowned for its precise control, reproducibility, and scalability, has been widely employed for liposome preparation. Although some studies have explored factors affecting liposomal size in microfluidic processes, most focus on small-sized liposomes, predominantly through experimental data analysis. However, the production of larger liposomes, which are equally significant, remains underexplored. In this work, we thoroughly investigate multiple variables influencing liposome size during microfluidic preparation and develop a machine learning (ML) model capable of accurately predicting liposomal size. Experimental validation was conducted using a staggered herringbone micromixer (SHM) chip. Our findings reveal that most investigated variables significantly influence liposomal size, often interrelating in complex ways. We evaluated the predictive performance of several widely used ML algorithms, including ensemble methods, through cross-validation (CV) for both liposome size and polydispersity index (PDI). A standalone dataset was experimentally validated to assess the accuracy of the ML predictions, with results indicating that ensemble algorithms provided the most reliable predictions. Specifically, gradient boosting was selected for size prediction, while random forest was employed for PDI prediction. We successfully produced uniform large (600 nm) and small (100 nm) liposomes using the optimised experimental conditions derived from the ML models. In conclusion, this study presents a robust methodology that enables precise control over liposome size distribution, offering valuable insights for medicinal research applications.
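The cross-validation procedure used to compare candidate models can be sketched generically. The paper's actual features, models, and data are not reproduced here; the "model" below is a mean predictor used purely as a stand-in to keep the skeleton self-contained.

```python
# Generic k-fold cross-validation skeleton (illustrative stand-in; the
# paper's real pipeline fits gradient boosting / random forest models on
# microfluidic process variables).
def kfold_indices(n, k):
    fold = n // k
    folds = [list(range(i * fold, (i + 1) * fold)) for i in range(k)]
    folds[-1].extend(range(k * fold, n))          # remainder joins the last fold
    return folds

def cross_validate(xs, ys, k, fit, predict):
    errs = []
    for test_idx in kfold_indices(len(xs), k):
        held_out = set(test_idx)
        tr_x = [x for i, x in enumerate(xs) if i not in held_out]
        tr_y = [y for i, y in enumerate(ys) if i not in held_out]
        model = fit(tr_x, tr_y)
        errs.append(sum((predict(model, xs[i]) - ys[i]) ** 2
                        for i in test_idx) / len(test_idx))
    return sum(errs) / k                          # mean held-out MSE

mse = cross_validate(list(range(10)), [2.0] * 10, 5,
                     fit=lambda X, Y: sum(Y) / len(Y),   # "train": store the mean
                     predict=lambda m, x: m)             # "predict": return it
```

Swapping the `fit`/`predict` lambdas for real estimators turns this into the model-comparison loop the abstract describes.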
The presence of aluminum (Al^(3+)) and fluoride (F^(−)) ions in the environment can be harmful to ecosystems and human health, highlighting the need for accurate and efficient monitoring. In this paper, an innovative approach is presented that leverages the power of machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of aluminum (Al^(3+)) and fluoride (F^(−)) ions in aqueous solutions. The proposed method involves the synthesis of sulfur-functionalized carbon dots (C-dots) as fluorescence probes, with fluorescence enhancement upon interaction with Al^(3+) ions, achieving a detection limit of 4.2 nmol/L. Subsequently, in the presence of F^(−) ions, fluorescence is quenched, with a detection limit of 47.6 nmol/L. The fingerprints of the fluorescence images are extracted using a cross-platform computer vision library in Python, followed by data preprocessing. Subsequently, the fingerprint data is subjected to cluster analysis using the K-means model from machine learning, and the average Silhouette Coefficient indicates excellent model performance. Finally, a regression analysis based on the principal component analysis method is employed to achieve more precise quantitative analysis of aluminum and fluoride ions. The results demonstrate that the developed model excels in terms of accuracy and sensitivity. The model not only showcases exceptional performance but also addresses the urgent need for effective environmental monitoring and risk assessment, making it a valuable tool for safeguarding ecosystems and public health.
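The clustering-plus-silhouette step can be shown in miniature. This is purely illustrative: the paper clusters image-derived fingerprint vectors, whereas the sketch below runs 1-D K-means on made-up scalar readings and scores the result with the average silhouette coefficient.

```python
# Minimal 1-D K-means plus average silhouette score (illustrative only).
def kmeans_1d(data, k, iters=20):
    srt = sorted(data)
    cents = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]  # spread init (k >= 2)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: abs(x - cents[c]))      # nearest centroid
            clusters[j].append(x)
        cents = [sum(c) / len(c) if c else cents[i] for i, c in enumerate(clusters)]
    return cents, clusters

def silhouette(clusters):
    scores = []
    for ci, c in enumerate(clusters):
        for x in c:
            a = sum(abs(x - y) for y in c) / max(len(c) - 1, 1)     # cohesion
            b = min(sum(abs(x - y) for y in o) / len(o)             # separation
                    for oi, o in enumerate(clusters) if oi != ci and o)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

cents, clusters = kmeans_1d([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], k=2)
score = silhouette(clusters)          # well-separated data -> close to 1
```

A silhouette close to 1 is what "excellent model performance" means in the abstract: points sit far closer to their own cluster than to any other.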
In order to study the characteristics of pure fly ash-based geopolymer concrete (PFGC) conveniently, we used a machine learning method that can quantify the perception of characteristics to predict its compressive strength. In this study, 505 groups of data were collected, and a new database of the compressive strength of PFGC was constructed. To establish an accurate prediction model of compressive strength, five different types of machine learning networks were used for comparative analysis. All five machine learning models showed good compressive strength prediction performance on PFGC. Among them, the R^(2), MSE, RMSE, and MAE of the decision tree model (DT) are 0.99, 1.58, 1.25, and 0.25, respectively, while the R^(2), MSE, RMSE, and MAE of the random forest model (RF) are 0.97, 5.17, 2.27, and 1.38, respectively. These two models have high prediction accuracy and outstanding generalization ability. To enhance the interpretability of model decision-making, we used importance ranking to obtain each machine learning model's perception of 13 variables. These variables include the chemical composition of fly ash (SiO_(2)/Al_(2)O_(3), Si/Al), the ratio of alkaline liquid to binder, curing temperature, curing duration inside the oven, fly ash dosage, fine aggregate dosage, coarse aggregate dosage, extra water dosage, and sodium hydroxide dosage. Curing temperature, specimen age, and curing duration inside the oven have the greatest influence on the prediction results, indicating that curing conditions have a more prominent influence on the compressive strength of PFGC than on that of ordinary Portland cement concrete. The importance of the curing conditions of PFGC even exceeds that of the concrete mix proportion, due to the low reactivity of pure fly ash.
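The four metrics quoted for the DT and RF models (R^(2), MSE, RMSE, MAE) have standard definitions, shown below on made-up numbers rather than the PFGC database. Note that RMSE is just the square root of MSE, which is consistent with the reported pairs (√1.58 ≈ 1.25, √5.17 ≈ 2.27).

```python
import math

# Reference implementations of the four quoted error metrics, applied to
# invented numbers (not the PFGC data).
def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot           # 1 - SS_res / SS_tot
    return {"R2": r2, "MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae}

m = regression_metrics([10.0, 20.0, 30.0, 40.0], [11.0, 19.0, 31.0, 39.0])
```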
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search, by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments are used to verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we find that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
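The plain iterated greedy skeleton that MLIGA builds on, namely destruction of a few jobs followed by greedy best-position reinsertion, can be sketched on a single-factory permutation flow shop. The 3-job, 2-machine instance is invented for illustration; the memory and learning mechanisms of MLIGA are not reproduced here.

```python
import random

# Classic iterated greedy skeleton for the permutation flow shop
# (destruction + greedy best-position reinsertion). Not MLIGA's
# memory/learning variant; tiny invented instance.
def makespan(seq, proc):                       # proc[job][machine]
    m = len(proc[0])
    comp = [0.0] * m                           # completion time per machine
    for j in seq:
        comp[0] += proc[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[j][k]
    return comp[-1]

def iterated_greedy(proc, d=2, iters=200, seed=0):
    rng = random.Random(seed)
    cur = list(range(len(proc)))
    best, best_c = cur[:], makespan(cur, proc)
    for _ in range(iters):
        partial = cur[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for j in removed:                      # greedy best-position reinsertion
            pos = min(range(len(partial) + 1),
                      key=lambda p: makespan(partial[:p] + [j] + partial[p:], proc))
            partial.insert(pos, j)
        if makespan(partial, proc) <= makespan(cur, proc):   # accept ties
            cur = partial
            if makespan(cur, proc) < best_c:
                best, best_c = cur[:], makespan(cur, proc)
    return best, best_c

proc = [[3, 1], [1, 3], [2, 2]]                # jobs 0, 1, 2 on 2 machines
best, best_c = iterated_greedy(proc)
```

For this instance the optimum is sequence [1, 2, 0] with makespan 7, while the naive order [0, 1, 2] takes 9; the destruction/reinsertion loop closes that gap.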
Attempts have been made to modulate motor sequence learning (MSL) through repetitive transcranial magnetic stimulation, targeting different sites within the sensorimotor network. However, the target with the optimal modulatory effect on the neural plasticity associated with MSL remains unclear. This study was therefore designed to compare the roles of the left primary motor cortex and the left supplementary motor area proper (SMAp) in modulating MSL across different complexity levels and for both hands, as well as the associated neuroplasticity, by applying intermittent theta burst stimulation together with electroencephalography and concurrent transcranial magnetic stimulation. Our data demonstrate the role of SMAp stimulation in modulating neural communication to support MSL, which is achieved by facilitating regional activation and orchestrating neural coupling across distributed brain regions, particularly in interhemispheric connections. These findings may have important clinical implications, particularly for motor rehabilitation in populations such as post-stroke patients.
While artificial intelligence (AI) shows promise in education, its real-world effectiveness in specific settings like blended English as a Foreign Language (EFL) learning needs closer examination. This study investigated the impact of a blended teaching model incorporating AI tools on the Superstar Learning Platform for Chinese university EFL students. Using a mixed-methods approach, 60 first-year students were randomized into an experimental group (using the AI-enhanced model) and a control group (traditional instruction) for 16 weeks. Data included test scores, learning behaviors (duration, task completion), satisfaction surveys, and interviews. Results showed the experimental group significantly outperformed the control group on post-tests and achieved larger learning gains. These students also demonstrated greater engagement through longer study times and higher task completion rates, and reported significantly higher satisfaction. Interviews confirmed these findings, with students attributing the benefits to the model's personalized guidance, structured content presentation (knowledge graphs), immediate responses, flexibility, and varied interaction methods. However, limitations were noted, including areas where the platform's AI could be improved (e.g., for assessing speaking/translation) and ongoing challenges with student self-discipline. The study concludes that this AI-enhanced blended model significantly improved student performance, engagement, and satisfaction in this EFL context. The findings offer practical insights for educators and platform developers, suggesting AI integration holds significant potential while highlighting areas for refinement.
Online interactive learning plays a crucial role in improving online education quality. This grounded theory study examines: (1) what key factors shape EFL learners' online interactive learning, (2) how these factors form an empirically validated model, and (3) how they interact within this model, through systematic analysis of 9,207 discussion forum posts from the Chinese University MOOC platform. Results demonstrate that learning drive, course structure, teaching competence, interaction behavior, expected outcomes, and online learning context significantly influence EFL online interactive learning. The analysis reveals two key mechanisms: expected outcomes mediate the effects of learning drive (β=0.45), course structure, teaching competence, and interaction behavior (β=0.35) on learning outcomes, while online learning context moderates these relationships (β=0.25). Specifically, learning drive provides intrinsic/extrinsic motivation, whereas course structure, teaching competence, interaction behavior, and expected outcomes collectively enhance interaction quality and sustainability. These findings, derived through rigorous grounded theory methodology involving open, axial, and selective coding of large-scale interaction data, yield three key contributions: (1) a comprehensive theoretical model of EFL online learning dynamics, (2) empirical validation of mediation/moderation mechanisms, and (3) practical strategies for designing scaffolded interaction protocols and adaptive feedback systems. The study establishes that its theoretically saturated model (achieved after analyzing 7,366 posts, with 1,841 verification cases) offers educators evidence-based approaches to optimize collaborative interaction in digital EFL environments.
Biometric characteristics have played a vital role in security over the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on their top-5 accuracy and smaller number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset has been employed for the experimental process. On this dataset, the authors selected six angles, namely 0°, 18°, 90°, 108°, 162°, and 180°, and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate a comparable improvement in accuracy and a significant reduction in computational time relative to recent state-of-the-art techniques.
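The moth-flame optimizer at the heart of the feature-selection step can be shown in its bare continuous form. This is only the underlying spiral-toward-flame search on a toy 1-D objective f(x) = x², not the posterior probability-based feature-selection variant the paper develops; population size, spiral shape, and bounds are all illustrative choices.

```python
import math
import random

# Bare-bones moth-flame optimization on a toy 1-D objective (illustrative;
# not the paper's posterior probability-based feature-selection MFO).
def mfo_minimize(f, lo, hi, n_moths=20, iters=100, seed=1):
    rng = random.Random(seed)
    moths = [rng.uniform(lo, hi) for _ in range(n_moths)]
    best = min(moths, key=f)
    for _ in range(iters):
        flames = sorted(moths, key=f)              # elite positions, best first
        for i in range(n_moths):
            d = abs(flames[i] - moths[i])          # distance moth -> its flame
            t = rng.uniform(-1.0, 1.0)
            pos = d * math.exp(t) * math.cos(2 * math.pi * t) + flames[i]
            moths[i] = max(lo, min(hi, pos))       # log-spiral step, clamped
        best = min([best] + moths, key=f)          # keep the incumbent
    return best

best_x = mfo_minimize(lambda x: x * x, -5.0, 5.0)
```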
Associate Professor Gu Qingyang of the Lee Kuan Yew School of Public Policy (LKYSPP) at the National University of Singapore (NUS) arrived in Singapore in 1994, after living in China for 33 years. Over the past 31 years, he has remained dedicated to building bridges: initially by systematically introducing Singapore's development experience to China, and later by fostering mutual learning between the two countries.
Flying Ad Hoc Networks (FANETs), which use Unmanned Aerial Vehicles (UAVs), are developing into a critical mechanism for numerous applications, such as military operations and civilian services. The dynamic nature of FANETs, with high mobility, quick node migration, and frequent topology changes, presents substantial hurdles for routing protocol development. Over the preceding few years, researchers have found that machine learning provides productive routing solutions while preserving the defining characteristics of FANETs, namely topology change and high mobility. This paper reviews current research on routing protocols and Machine Learning (ML) approaches applied to FANETs, emphasizing developments between 2021 and 2023. The review uses the PRISMA approach to sift through the literature, filtering results from the SCOPUS database to identify 82 relevant publications. The surveyed studies use machine learning-based routing algorithms to overcome the issues of high mobility, dynamic topologies, and intermittent connectivity in FANETs. Compared with conventional routing, ML-based routing offers an energy-efficient and fast decision-making solution in real-time environments, with greater fault tolerance. These protocols aim to increase routing efficiency, flexibility, and network stability using ML's predictive and adaptive capabilities. This comprehensive review seeks to integrate existing knowledge, offer novel integration approaches, and recommend future research topics for improving routing efficiency and flexibility in FANETs. Moreover, the study highlights emerging trends in ML integration, discusses challenges faced during the review, and considers how these hurdles can be overcome in future research.
To predict stall and surge in advance so that an aero-engine compressor can operate safely, a stall prediction model based on deep learning theory is established in the current study. The Long Short-Term Memory (LSTM) network, originating from the recurrent neural network, is used, and a set of measured dynamic pressure datasets including the stall process is used to learn the weights of the neural network nodes. Subsequently, the structural and functional hyperparameters of the model are deeply optimized, and a set of measured pressure data is used to verify the prediction performance of the model. On the basis of this good predictive capability, stall in low- and high-speed compressors is predicted using the established model. When a period of non-stall pressure data is used as input to the model, the model can quickly complete the prediction of subsequent time series data through its self-learning and prediction mechanism. Comparison with the real-time measured pressure data demonstrates that the starting point of the predicted stall is basically the same as that of the measured stall, and the stall can be predicted more than 1 s in advance so that its occurrence can be avoided. The stall prediction model in the current study can make up for the uncertainty of threshold selection in existing stall warning methods based on measured data signal processing. It has great application potential for predicting the stall occurrence of aero-engine compressors in advance and avoiding accidents.
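The predict-then-compare logic of a stall warning can be shown with a deliberately crude stand-in for the LSTM: a trailing-window mean predictor that flags the first sample deviating from its one-step prediction by more than a margin. The pressure trace, window, and margin below are all synthetic and illustrative, not measured compressor data or the paper's model.

```python
# Trailing-window one-step predictor as a crude stand-in for the LSTM:
# flag the first sample that deviates from the prediction by more than
# `margin`. Synthetic trace, invented parameters.
def detect_onset(series, window=5, margin=0.5):
    for i in range(window, len(series)):
        pred = sum(series[i - window:i]) / window   # predict next = recent mean
        if abs(series[i] - pred) > margin:
            return i                                # index of first flagged sample
    return None

pressure = [1.0] * 20 + [1.1, 1.3, 2.5, 4.0, 6.0]   # stable, then stall-like rise
onset = detect_onset(pressure)
```

The LSTM in the paper plays the role of a far better `pred`, which is precisely what lets it flag the divergence earlier than threshold-based signal processing.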
Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT) due to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it usually needs to perform substantial mathematical calculations to prevent malicious attacks, which results in stricter requirements for computation resources on the participating devices. Offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, can effectively address the limitations of computation and energy resources in IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms such as Proof-of-Work, and optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud to maximize the revenue of devices. Experimental results show that RLSIOA can obtain higher-quality offloading decision-making schemes at lower latency costs than representative benchmark algorithms.
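The RL side of such an offloading decision can be reduced to its simplest form: tabular Q-learning over a handful of discrete offload fractions, with a toy revenue table standing in for the paper's revenue model. Everything here (actions, rewards, hyperparameters) is invented for illustration; RLSIOA itself couples RL with a swarm optimizer over a much richer decision space.

```python
import random

# Single-state Q-learning over three discrete offload fractions
# (toy stand-in for RLSIOA's RL-assisted search; revenue table invented).
def learn_offload_policy(rewards, episodes=500, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(rewards)
    for _ in range(episodes):
        if rng.random() < eps:                     # explore
            a = rng.randrange(len(q))
        else:                                      # exploit current estimate
            a = max(range(len(q)), key=q.__getitem__)
        q[a] += alpha * (rewards[a] - q[a])        # tabular value update
    return q

revenue = [1.0, 3.0, 2.0]          # toy revenue for offloading 0%, 50%, 100%
q = learn_offload_policy(revenue)
best_action = max(range(len(q)), key=q.__getitem__)   # learned: offload 50%
```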
High-entropy alloys (HEAs) have attracted considerable attention because of their excellent properties and broad compositional design space. However, traditional trial-and-error methods for screening HEAs are costly and inefficient, thereby limiting the development of new materials. Although density functional theory (DFT), molecular dynamics (MD), and thermodynamic modeling have improved design efficiency, their indirect connection to properties has led to limitations in calculation and prediction. With the awarding of Nobel Prizes in Physics and Chemistry to researchers in artificial intelligence (AI), there has been renewed enthusiasm for the application of machine learning (ML) in the field of alloy materials. In this study, common and advanced ML models and strategies in HEA design are introduced, and the mechanism by which ML can play a role in composition optimization and performance prediction is investigated through case studies. The general workflow of applying ML to material design is also introduced from the programmer's point of view, including data preprocessing, feature engineering, model training, evaluation, optimization, and interpretability. Furthermore, data scarcity, multi-model coupling, and other challenges and opportunities at the current stage are analyzed, and an outlook on future research directions is provided.
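The preprocessing-to-evaluation workflow mentioned above can be shrunk to its smallest runnable form: standardize a feature, fit a closed-form linear model, and check the residuals. Real HEA feature sets are multi-dimensional and the models far richer; the numbers here are invented so the pipeline is exactly solvable.

```python
# Minimal preprocessing -> training -> evaluation pipeline (illustrative;
# invented data with an exact linear relationship y = 2x).
def standardize(xs):
    mu = sum(xs) / len(xs)
    sd = (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mu) / sd for x in xs], mu, sd

def fit_line(xs, ys):                      # least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
zs, mu, sd = standardize(xs)               # preprocessing
slope, intercept = fit_line(zs, ys)        # training
residual = max(abs(slope * z + intercept - y) for z, y in zip(zs, ys))  # evaluation
```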
Accurate acquisition and prediction of the acoustic parameters of seabed sediments are crucial in marine sound propagation research. While the relationship between sound velocity and the physical properties of sediment has been extensively studied, there is still no consensus on the correlation between the acoustic attenuation coefficient and sediment physical properties. Predicting the acoustic attenuation coefficient remains a challenging issue in sedimentary acoustic research. In this study, we propose a prediction method for the acoustic attenuation coefficient using machine learning algorithms, specifically the random forest (RF), support vector regression (SVR), and convolutional neural network (CNN) algorithms. We utilized the acoustic attenuation coefficient and sediment particle size data from 52 stations as training parameters, with the particle size parameters as the input feature matrix and the measured acoustic attenuation as the training label, to validate the attenuation prediction model. Our results indicate that the error of the attenuation prediction model is small. Among the three models, the RF model exhibited the lowest prediction error, with a mean squared error of 0.8232, mean absolute error of 0.6613, and root mean squared error of 0.9073. Additionally, when we applied the models to predict data collected at different times in the same region, we found that the models developed in this study also demonstrated a certain level of reliability in real prediction scenarios. Our approach demonstrates that constructing a sediment acoustic characteristics model based on machine learning is feasible to a certain extent and offers a novel perspective for studying sediment acoustic properties.
As an effective strategy to address urban traffic congestion, traffic flow prediction has gained attention from Federated Learning (FL) researchers due to FL's ability to preserve data privacy. However, existing methods face challenges: some are too simplistic to capture complex traffic patterns effectively, while others are overly complex, leading to excessive communication overhead between cloud and edge devices. Moreover, the single-point-of-failure problem limits their robustness and reliability in real-world applications. To tackle these challenges, this paper proposes a new method, CMBA-FL, a Communication-Mitigated and Blockchain-Assisted Federated Learning model. First, CMBA-FL improves the client model's ability to capture temporal traffic patterns by employing an encoder-decoder framework on each edge device. Second, to reduce the communication overhead during federated learning, we introduce a verification method based on parameter update consistency, avoiding unnecessary parameter updates. Third, to mitigate the risk of a single point of failure, we integrate consensus mechanisms from blockchain technology. To validate the effectiveness of CMBA-FL, we assess its performance on two widely used traffic datasets. Our experimental results show that CMBA-FL reduces prediction error by 11.46%, significantly lowers communication overhead, and improves security.
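The "skip unnecessary parameter updates" idea can be illustrated with a hypothetical consistency rule; the exact CMBA-FL criterion is not specified in the abstract, so the relative-drift test and tolerance below are invented: upload only when the parameters have moved by more than a small fraction of what the server already holds.

```python
import math

# Hypothetical update-consistency rule (invented, in the spirit of the
# method described above): upload only if the parameters drifted by more
# than `tol` relative to the server's last copy.
def l2(v):
    return math.sqrt(sum(x * x for x in v))

def worth_uploading(new_params, last_sent, tol=0.01):
    delta = [a - b for a, b in zip(new_params, last_sent)]
    return l2(delta) > tol * l2(last_sent)

skip = worth_uploading([1.000, 2.000], [1.0005, 2.0005])   # negligible drift
send = worth_uploading([1.2, 2.3], [1.0, 2.0])             # real change
```

Every skipped round is one less client-to-server transfer, which is where the communication savings come from.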
Federated learning (FL) is a distributed machine learning paradigm that excels at preserving data privacy when using data from multiple parties. When combined with fog computing, FL offers enhanced capabilities for machine learning applications in the Internet of Things (IoT). However, implementing FL across large-scale distributed fog networks presents significant challenges in maintaining privacy, preventing collusion attacks, and ensuring robust data aggregation. To address these challenges, we propose an Efficient Privacy-preserving and Robust Federated Learning (EPRFL) scheme for fog computing scenarios. Specifically, we first propose an efficient secure aggregation strategy based on an improved threshold homomorphic encryption algorithm, which is not only resistant to model inference and collusion attacks but also robust to fog node dropout. Then, we design a dynamic gradient filtering method based on cosine similarity to further reduce the communication overhead. To minimize training delays, we develop a dynamic task scheduling strategy based on a comprehensive score. Theoretical analysis demonstrates that EPRFL offers robust security and low latency. Extensive experimental results indicate that EPRFL outperforms similar strategies in terms of privacy preservation, model performance, and resource efficiency.
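The cosine-similarity gradient filter can be sketched directly: skip transmission when the new gradient points in nearly the same direction as the one the server already has. The threshold and vectors below are illustrative choices, not EPRFL's actual settings, and the threshold-homomorphic-encryption aggregation is not reproduced here.

```python
import math

# Sketch of cosine-similarity gradient filtering (illustrative threshold).
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def should_upload(new_grad, last_grad, sim_thresh=0.95):
    return cosine(new_grad, last_grad) < sim_thresh     # dissimilar -> upload

redundant = should_upload([1.0, 0.0], [0.99, 0.01])     # near-duplicate: skip
novel     = should_upload([1.0, 0.0], [0.0, 1.0])       # orthogonal: send
```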
Electrolyte engineering with fluoroethers as solvents offers promising potential for high-performance lithium metal batteries. Despite recent progress in designing and synthesizing novel fluoroether solvents, a systematic understanding of how fluorination patterns impact electrolyte performance is still lacking. We investigate the effects of fluorination patterns on electrolyte properties using fluorinated 1,2-diethoxyethanes (FDEEs) as single solvents. By employing quantum calculations, molecular dynamics simulations, and interpretable machine learning, we establish significant correlations between fluorination patterns and electrolyte properties. Higher fluorination levels enhance FDEE stability but decrease conductivity. The symmetry of the fluorination sites is critical for stability and viscosity, while exerting minimal influence on ionic conductivity. FDEEs with highly symmetric fluorination sites exhibit favorable viscosity, stability, and overall electrolyte performance. Conductivity primarily depends on lithium-anion dissociation or association. These findings provide design principles for rational fluoroether electrolyte design, emphasizing the trade-offs between stability, viscosity, and conductivity. Our work underscores the significance of considering fluorination patterns and molecular symmetry in the development of fluoroether-based electrolytes for advanced lithium batteries.
This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things (IoT) ecosystems, such as high demand variability, resource allocation uncertainty, and data privacy concerns, through practical solutions. First, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Second, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, thereby markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and an event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and stronger data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for advances in edge computing and IoT technologies, driving smart edge computing toward higher levels of intelligence and automation.
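Grover's amplitude amplification, the search primitive named above, can be simulated classically for tiny sizes (a real deployment would use quantum hardware or a circuit simulator, and the paper's oracle over offloading constraints is not reproduced here). Each iteration is an oracle phase flip on the marked item followed by inversion about the mean, and about (π/4)√N iterations concentrate nearly all probability on the marked item.

```python
import math

# Classical statevector simulation of Grover's search over N items
# (feasible only for tiny N; purely illustrative).
def grover(n_items, marked):
    amp = [1.0 / math.sqrt(n_items)] * n_items          # uniform superposition
    iters = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iters):
        amp[marked] = -amp[marked]                      # oracle: flip marked phase
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]               # diffusion: invert about mean
    return amp

probs = [a * a for a in grover(16, marked=3)]           # N=16 -> 3 iterations
```

For N = 16 the marked item's probability after 3 iterations is about 0.96, versus 1/16 for unstructured sampling, which is the quadratic speedup the framework exploits.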
As the information sensing and processing capabilities of IoT devices increase, a large amount of data is being generated at the edge of the Industrial IoT (IIoT), which has become a strong foundation for distributed Artificial Intelligence (AI) applications. However, most users are reluctant to disclose their data due to network bandwidth limitations, device energy consumption, and privacy requirements. To address this issue, this paper introduces an Edge-assisted Federated Learning (EFL) framework, along with an incentive mechanism for lightweight industrial data sharing. In order to reduce the information asymmetry between data owners and users, an EFL model-sharing incentive mechanism based on contract theory is designed. In addition, a weight dispersion evaluation scheme based on the Wasserstein distance is proposed. This study models an optimization problem of node selection and sharing incentives to maximize the EFL model consumers' profit and ensure the quality of training services. An incentive-based EFL algorithm with individual rationality and incentive compatibility constraints is proposed. Finally, the experimental results verify the effectiveness of the proposed scheme in terms of positive incentives for contract design and performance analysis of EFL systems.
Funding: Supported by the Systematic Major Project of Shuohuang Railway Development Co., Ltd., National Energy Group (Grant No. SHTL-23-31), and the Beijing Natural Science Foundation (U22B2027).
Abstract: In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL’s susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client’s raw data from the embeddings uploaded by that client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates a regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in the shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
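The core idea, embedding-sensitivity regularization, can be sketched on a toy scalar split model. This is an illustrative stand-in, not the paper's actual SensFL formulation: the linear model, the synthetic data, and the λ value are all invented for the demo. For a linear embedding e = w1·x, the sensitivity ∂e/∂x is just w1, so adding a penalty λ·w1² to the training loss shrinks how much information about x the uploaded embedding carries:

```python
import random

def train(lam, steps=3000, lr=0.02, seed=0):
    """Toy split model: client embedding e = w1*x, server head p = w2*e.
    The embedding's sensitivity to the input is de/dx = w1, so the term
    lam * w1**2 plays the role of the sensitivity regularizer."""
    rng = random.Random(seed)
    data = [(x / 10.0, 2.0 * x / 10.0) for x in range(-10, 11)]  # y = 2x
    w1, w2 = 0.5, 0.5
    for _ in range(steps):
        x, y = rng.choice(data)
        e = w1 * x                                  # embedding the client uploads
        err = w2 * e - y
        g1 = 2.0 * err * w2 * x + 2.0 * lam * w1    # task gradient + penalty gradient
        g2 = 2.0 * err * e
        w1, w2 = w1 - lr * g1, w2 - lr * g2
    return w1, w2

plain = train(lam=0.0)        # no sensitivity control
regularized = train(lam=0.3)  # embeddings made less sensitive to x
```

With λ = 0.3 the learned |w1| ends up visibly smaller than with λ = 0, while the composed model can still fit the task (w2 compensates), mirroring at toy scale the accuracy/privacy trade-off the paper reports.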
Funding: Supported by the National Key Research and Development Plan of the Ministry of Science and Technology, China (Grant No. 2022YFE0125300); the National Natural Science Foundation of China (Grant No. 81690262); the National Science and Technology Major Project, China (Grant No. 2017ZX09201004-021); the Open Project of the National Facility for Translational Medicine (Shanghai), China (Grant No. TMSK-2021-104); and the Shanghai Jiao Tong University STAR Grant, China (Grant Nos. YG2022ZD024 and YG2022QN111).
Abstract: Liposomes serve as critical carriers for drugs and vaccines, with their biological effects influenced by their size. The microfluidic method, renowned for its precise control, reproducibility, and scalability, has been widely employed for liposome preparation. Although some studies have explored factors affecting liposomal size in microfluidic processes, most focus on small-sized liposomes, predominantly through experimental data analysis. However, the production of larger liposomes, which are equally significant, remains underexplored. In this work, we thoroughly investigate multiple variables influencing liposome size during microfluidic preparation and develop a machine learning (ML) model capable of accurately predicting liposomal size. Experimental validation was conducted using a staggered herringbone micromixer (SHM) chip. Our findings reveal that most investigated variables significantly influence liposomal size, often interrelating in complex ways. We evaluated the predictive performance of several widely used ML algorithms, including ensemble methods, through cross-validation (CV) for both liposome size and polydispersity index (PDI). A standalone dataset was experimentally validated to assess the accuracy of the ML predictions, with results indicating that ensemble algorithms provided the most reliable predictions. Specifically, gradient boosting was selected for size prediction, while random forest was employed for PDI prediction. We successfully produced uniform large (600 nm) and small (100 nm) liposomes using the optimised experimental conditions derived from the ML models. In conclusion, this study presents a robust methodology that enables precise control over liposome size distribution, offering valuable insights for medicinal research applications.
Funding: Supported by the National Natural Science Foundation of China (No. U21A20290); the Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515011656); the Projects of Talents Recruitment of GDUPT (No. 2023rcyj1003); the 2022 “Sail Plan” Project of the Maoming Green Chemical Industry Research Institute (No. MMGCIRI2022YFJH-Y-024); and the Maoming Science and Technology Project (No. 2023382).
Abstract: The presence of aluminum (Al³⁺) and fluoride (F⁻) ions in the environment can be harmful to ecosystems and human health, highlighting the need for accurate and efficient monitoring. In this paper, an innovative approach is presented that leverages the power of machine learning to enhance the accuracy and efficiency of fluorescence-based detection for the sequential quantitative analysis of aluminum (Al³⁺) and fluoride (F⁻) ions in aqueous solutions. The proposed method involves the synthesis of sulfur-functionalized carbon dots (C-dots) as fluorescence probes, with fluorescence enhancement upon interaction with Al³⁺ ions, achieving a detection limit of 4.2 nmol/L. Subsequently, in the presence of F⁻ ions, fluorescence is quenched, with a detection limit of 47.6 nmol/L. The fingerprints of the fluorescence images are extracted using a cross-platform computer vision library in Python, followed by data preprocessing. The fingerprint data is then subjected to cluster analysis using the K-means model from machine learning, and the average Silhouette Coefficient indicates excellent model performance. Finally, a regression analysis based on the principal component analysis method is employed to achieve more precise quantitative analysis of aluminum and fluoride ions. The results demonstrate that the developed model excels in terms of accuracy and sensitivity. This model not only shows exceptional performance but also addresses the urgent need for effective environmental monitoring and risk assessment, making it a valuable tool for safeguarding ecosystems and public health.
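The clustering-plus-silhouette step can be illustrated with a dependency-free sketch. The 1-D "fingerprint" values below are invented for the demo; the paper extracts its fingerprints from fluorescence images with a computer vision library, and its K-means/silhouette computation is presumably done with an ML library rather than by hand:

```python
def kmeans_1d(xs, k=2, iters=50):
    """Tiny 1-D k-means (seeds chosen for k=2): one label per point."""
    cents = [min(xs), max(xs)]        # spread the two initial centroids
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(x - cents[j])) for x in xs]
        for j in range(k):
            members = [x for x, l in zip(xs, labels) if l == j]
            if members:
                cents[j] = sum(members) / len(members)
    return labels

def mean_silhouette(xs, labels):
    """Average silhouette coefficient s = (b - a) / max(a, b)."""
    scores = []
    for i, x in enumerate(xs):
        same = [abs(x - y) for j, y in enumerate(xs) if j != i and labels[j] == labels[i]]
        other = [abs(x - y) for j, y in enumerate(xs) if labels[j] != labels[i]]
        if same and other:
            a = sum(same) / len(same)
            b = sum(other) / len(other)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

fingerprints = [0.9, 1.0, 1.1, 4.8, 5.0, 5.2]   # two obvious groups
labels = kmeans_1d(fingerprints)
score = mean_silhouette(fingerprints, labels)    # close to 1 for tight, separated clusters
```

A mean silhouette near 1 is what "excellent model performance" refers to: points sit far closer to their own cluster than to the other one.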
Funding: Funded by the Natural Science Foundation of China (No. 52109168).
Abstract: In order to study the characteristics of pure fly ash-based geopolymer concrete (PFGC) conveniently, we used machine learning methods, whose perception of input characteristics can be quantified, to predict its compressive strength. In this study, 505 groups of data were collected, and a new database of the compressive strength of PFGC was constructed. To establish an accurate prediction model of compressive strength, five different types of machine learning networks were used for comparative analysis. All five machine learning models showed good compressive strength prediction performance on PFGC. Among them, the R², MSE, RMSE, and MAE of the decision tree model (DT) are 0.99, 1.58, 1.25, and 0.25, respectively, while the R², MSE, RMSE, and MAE of the random forest model (RF) are 0.97, 5.17, 2.27, and 1.38, respectively. The two models have high prediction accuracy and outstanding generalization ability. To enhance the interpretability of model decision-making, we used importance ranking to obtain the models' perception of 13 variables. These variables include the chemical composition of fly ash (SiO₂/Al₂O₃, Si/Al), the ratio of alkaline liquid to binder, curing temperature, curing duration inside the oven, fly ash dosage, fine aggregate dosage, coarse aggregate dosage, extra water dosage, and sodium hydroxide dosage. Curing temperature, specimen age, and curing duration inside the oven have the greatest influence on the prediction results, indicating that curing conditions have a more prominent influence on the compressive strength of PFGC than on that of ordinary Portland cement concrete. The importance of the curing conditions of PFGC even exceeds that of the concrete mix proportion, due to the low reactivity of pure fly ash.
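The four reported metrics are standard, and it is worth seeing how they relate. A minimal sketch (the y values below are made up, not the paper's data; note that RMSE is just the square root of MSE, which is why the abstract's pairs 1.58/1.25 and 5.17/2.27 are mutually consistent):

```python
def regression_metrics(y_true, y_pred):
    """R^2, MSE, RMSE, and MAE for a list of targets and predictions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    sse = sum(e * e for e in errs)            # sum of squared errors
    mse = sse / n
    mae = sum(abs(e) for e in errs) / n
    mean_t = sum(y_true) / n
    sst = sum((t - mean_t) ** 2 for t in y_true)
    return {"R2": 1.0 - sse / sst, "MSE": mse, "RMSE": mse ** 0.5, "MAE": mae}

m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

For these toy values the function gives R² = 0.98, MSE = 0.025, RMSE ≈ 0.158, MAE = 0.15.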
Funding: Supported in part by the National Key Research and Development Program of China under Grant No. 2021YFF0901300, and in part by the National Natural Science Foundation of China under Grant Nos. 62173076 and 72271048.
Abstract: The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which could lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP, targeting the minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube root function with custom rules. At last, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments are utilized to verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we have discovered that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
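The destruction-construction core that every IGA variant shares can be sketched for the single-factory permutation flow shop. This is a generic textbook skeleton, not MLIGA: it uses plain greedy acceptance instead of the paper's probability-curve criterion, has no memory or learning mechanism, and the processing-time instance is invented:

```python
import random

def makespan(seq, p):
    """Completion time of a job permutation on an m-machine flow shop.
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, iters=200, d=2, seed=1):
    """Destruction (remove d random jobs) + greedy best-position reinsertion."""
    rng = random.Random(seed)
    cur = list(range(len(p)))
    best, best_val = cur[:], makespan(cur, p)
    for _ in range(iters):
        s = cur[:]
        removed = [s.pop(rng.randrange(len(s))) for _ in range(d)]   # destruction
        for j in removed:                                            # construction
            pos = min(range(len(s) + 1),
                      key=lambda i: makespan(s[:i] + [j] + s[i:], p))
            s.insert(pos, j)
        if makespan(s, p) <= makespan(cur, p):    # greedy acceptance (simplified)
            cur = s
        if makespan(cur, p) < best_val:
            best, best_val = cur[:], makespan(cur, p)
    return best, best_val
```

MLIGA's contributions slot into this skeleton at two points: the acceptance test (replaced by the probability-curve criterion) and the choice of starting solution and parameters (driven by the memory and reinforcement learning mechanisms).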
Funding: Supported by grants from the Zhejiang Provincial Natural Science Foundation (LGJ22H180001), the Zhejiang Medical and Health Science and Technology Project (2021KY249), and the National Key R&D Program of China (2017YFC1310000).
Abstract: Attempts have been made to modulate motor sequence learning (MSL) through repetitive transcranial magnetic stimulation, targeting different sites within the sensorimotor network. However, the target with the optimum modulatory effect on the neural plasticity associated with MSL remains unclarified. This study was therefore designed to compare the roles of the left primary motor cortex and the left supplementary motor area proper (SMAp) in modulating MSL across different complexity levels and for both hands, as well as the associated neuroplasticity, by applying intermittent theta burst stimulation together with electroencephalography and concurrent transcranial magnetic stimulation. Our data demonstrate the role of SMAp stimulation in modulating neural communication to support MSL, which is achieved by facilitating regional activation and orchestrating neural coupling across distributed brain regions, particularly in interhemispheric connections. These findings may have important clinical implications, particularly for motor rehabilitation in populations such as post-stroke patients.
Funding: Supported by the 2024 “Special Research Project on the Application of Artificial Intelligence in Empowering Teaching and Education” of the Zhejiang Province Association of Higher Education (KT2024165).
Abstract: While artificial intelligence (AI) shows promise in education, its real-world effectiveness in specific settings, such as blended English as a Foreign Language (EFL) learning, needs closer examination. This study investigated the impact of a blended teaching model incorporating AI tools on the Superstar Learning Platform for Chinese university EFL students. Using a mixed-methods approach, 60 first-year students were randomized into an experimental group (using the AI-enhanced model) and a control group (traditional instruction) for 16 weeks. Data included test scores, learning behaviors (duration, task completion), satisfaction surveys, and interviews. Results showed that the experimental group significantly outperformed the control group on post-tests and achieved larger learning gains. These students also demonstrated greater engagement through longer study times and higher task completion rates, and reported significantly higher satisfaction. Interviews confirmed these findings, with students attributing the benefits to the model’s personalized guidance, structured content presentation (knowledge graphs), immediate responses, flexibility, and varied interaction methods. However, limitations were noted, including areas where the platform’s AI could be improved (e.g., for assessing speaking/translation) and ongoing challenges with student self-discipline. The study concludes that this AI-enhanced blended model significantly improved student performance, engagement, and satisfaction in this EFL context. The findings offer practical insights for educators and platform developers, suggesting that AI integration holds significant potential while highlighting areas for refinement.
Abstract: Online interactive learning plays a crucial role in improving online education quality. This grounded theory study examines: (1) what key factors shape EFL learners’ online interactive learning, (2) how these factors form an empirically validated model, and (3) how they interact within this model, through systematic analysis of 9,207 discussion forum posts from a Chinese University MOOC platform. Results demonstrate that learning drive, course structure, teaching competence, interaction behavior, expected outcomes, and the online learning context significantly influence EFL online interactive learning. The analysis reveals two key mechanisms: expected outcomes mediate the effects of learning drive (β = 0.45) and of course structure, teaching competence, and interaction behavior (β = 0.35) on learning outcomes, while the online learning context moderates these relationships (β = 0.25). Specifically, learning drive provides intrinsic/extrinsic motivation, whereas course structure, teaching competence, interaction behavior, and expected outcomes collectively enhance interaction quality and sustainability. These findings, derived through rigorous grounded theory methodology involving open, axial, and selective coding of large-scale interaction data, yield three key contributions: (1) a comprehensive theoretical model of EFL online learning dynamics, (2) empirical validation of mediation/moderation mechanisms, and (3) practical strategies for designing scaffolded interaction protocols and adaptive feedback systems. The study establishes that its theoretically saturated model (achieved after analyzing 7,366 posts, with 1,841 verification cases) offers educators evidence-based approaches to optimize collaborative interaction in digital EFL environments.
Funding: King Saud University, Grant/Award Number: RSP2024R157.
Abstract: Biometric characteristics have played a vital role in security over the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior-probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on their top-5 accuracy and smaller number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior-probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset has been employed for the experimental process. On this dataset, the authors selected six angles, 0°, 18°, 90°, 108°, 162°, and 180°, and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. The results demonstrate a comparable improvement in accuracy and a significant reduction in computational time with respect to recent state-of-the-art techniques.
Abstract: Associate Professor Gu Qingyang of the Lee Kuan Yew School of Public Policy (LKYSPP) at the National University of Singapore (NUS) arrived in Singapore in 1994, after living in China for 33 years. Over the past 31 years, he has remained dedicated to building bridges: initially by systematically introducing Singapore’s development experience to China, and later by fostering mutual learning between the two countries.
Data availability: The data that support the findings of this study are openly available in the Scopus database at www.scopus.com (accessed on 07 January 2025).
Abstract: Flying Ad Hoc Networks (FANETs), which use Unmanned Aerial Vehicles (UAVs), are emerging as a critical mechanism for numerous applications, such as military operations and civilian services. The dynamic nature of FANETs, with high mobility, quick node migration, and frequent topology changes, presents substantial hurdles for routing protocol development. Over the preceding few years, researchers have found that machine learning provides productive routing solutions while accommodating the defining characteristics of FANETs, namely topology change and high mobility. This paper reviews current research on routing protocols and Machine Learning (ML) approaches applied to FANETs, emphasizing developments between 2021 and 2023. The research uses the PRISMA approach to sift through the literature, filtering results from the SCOPUS database to find 82 relevant publications. The reviewed studies use machine-learning-based routing algorithms to overcome the issues of high mobility, dynamic topologies, and intermittent connectivity in FANETs. Compared with conventional routing, these approaches offer energy-efficient, fast decision-making in real-time environments, with greater fault tolerance. The protocols aim to increase routing efficiency, flexibility, and network stability using ML’s predictive and adaptive capabilities. This comprehensive review seeks to integrate existing information, offer novel integration approaches, and recommend future research topics for improving routing efficiency and flexibility in FANETs. Moreover, the study highlights emerging trends in ML integration, discusses challenges faced during the review, and discusses how to overcome these hurdles in future research.
Funding: Funded by the National Natural Science Foundation of China (Nos. 52376039 and U24A20138); the Beijing Natural Science Foundation of China (No. JQ24017); the National Science and Technology Major Project of China (Nos. J2019-II-0005-0025 and Y2022-II-0002-0005); and the Special Fund for Members of the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. 2018173).
Abstract: To predict stall and surge in advance so that the aero-engine compressor operates safely, a stall prediction model based on deep learning theory is established in the current study. The Long Short-Term Memory (LSTM) network, originating from the recurrent neural network, is used, and a set of measured dynamic pressure datasets including the stall process is used to learn the weights of the neural network nodes. Subsequently, the structural and functional hyperparameters of the model are deeply optimized, and a set of measured pressure data is used to verify the prediction performance of the model. On the basis of this good predictive capability, stall in low- and high-speed compressors is predicted using the established model. When a period of non-stall pressure data is used as input, the model can quickly complete the prediction of subsequent time-series data through its self-learning and prediction mechanism. Comparison with the real-time measured pressure data demonstrates that the starting point of the predicted stall is basically the same as that of the measured stall, and the stall can be predicted more than 1 s in advance so that its occurrence can be avoided. The stall prediction model in the current study can make up for the uncertainty of threshold selection in existing stall warning methods based on measured-signal processing. It has great application potential for predicting stall in aero-engine compressors in advance and avoiding accidents.
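The paper's LSTM cannot be reproduced from the abstract alone. As a rough, dependency-free illustration of the precursor-detection framing, the sketch below builds a synthetic pressure trace whose oscillation amplitude starts growing at a known point and raises an alarm when a rolling dispersion measure exceeds a calm-flow baseline. The signal shape, window length, and 3x threshold are all assumptions for the demo, and this simple threshold rule is exactly the kind of method whose threshold-selection uncertainty the paper's learned model is meant to avoid:

```python
import math

def rolling_std(xs, w):
    """Standard deviation over a sliding window of length w."""
    out = []
    for t in range(w, len(xs)):
        win = xs[t - w:t]
        m = sum(win) / w
        out.append((sum((v - m) ** 2 for v in win) / w) ** 0.5)
    return out

# Synthetic casing pressure: steady small oscillation, then growing amplitude
# (the growth starting at t = 200 stands in for the approach to stall).
signal = [(0.1 if t < 200 else 0.1 + 0.01 * (t - 200)) * math.sin(0.3 * t)
          for t in range(400)]

w = 30
stds = rolling_std(signal, w)
baseline = sum(stds[:100]) / 100                 # calm-flow reference level
alarm = next(t + w for t, s in enumerate(stds)   # first sample exceeding 3x baseline
             if s > 3 * baseline)
```

On this trace the alarm fires well after the growth begins at t = 200 but long before the oscillation saturates, which is the "warn before stall fully develops" behaviour the LSTM achieves on real compressor data.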
Funding: Supported by the Project of the Science and Technology Research Program of the Chongqing Education Commission of China (No. KJZD-K202401105); the High-Quality Development Action Plan for Graduate Education at Chongqing University of Technology (Nos. gzljg2023308 and gzljd2024204); the Graduate Innovation Program of Chongqing University of Technology (No. gzlcx20233197); and the Yunnan Provincial Key R&D Program (202203AA080006).
Abstract: Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT) due to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it mostly needs to perform the necessary mathematical calculations to prevent malicious attacks, which results in stricter requirements for computation resources on the participating devices. By offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, the limitations of computation and energy resources in IoT devices can be effectively addressed. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Due to the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work); it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud to maximize the revenue of devices. Experimental results show that RLSIOA can obtain higher-quality offloading decision-making schemes at lower latency costs compared to representative benchmark algorithms.
Funding: The authors acknowledge the National Natural Science Foundation of China (52161011); the Central Guiding Local Science and Technology Development Fund Project (Guike ZY23055005, Guike ZY24212036, and Guike AB25069457); the Guangxi Science and Technology Project (2023GXNSFDA026046 and Guike AB24010247); the Scientific Research and Technology Development Program of Guilin (20220110-3 and 20230110-3); the Scientific Research and Technology Development Program of Nanning Jiangnan District (20230715-02); the Guangxi Key Laboratory of Superhard Material (2022-K-001); the Guangxi Key Laboratory of Information Materials (231003-Z, 231033-K, and 231013-Z); and the Innovation Project of GUET Graduate Education (2025YCXS177) for the financial support given to this work.
Abstract: High-entropy alloys (HEAs) have attracted considerable attention because of their excellent properties and broad compositional design space. However, traditional trial-and-error methods for screening HEAs are costly and inefficient, thereby limiting the development of new materials. Although density functional theory (DFT), molecular dynamics (MD), and thermodynamic modeling have improved design efficiency, their indirect connection to properties has led to limitations in calculation and prediction. With the awarding of the Nobel Prizes in Physics and Chemistry to artificial intelligence (AI) related researchers, there has been renewed enthusiasm for the application of machine learning (ML) in the field of alloy materials. In this study, common and advanced ML models and strategies in HEA design are introduced, and the mechanisms by which ML can play a role in composition optimization and performance prediction are investigated through case studies. The general workflow of ML application in material design is also introduced from the programmer’s point of view, including data preprocessing, feature engineering, model training, evaluation, optimization, and interpretability. Furthermore, data scarcity, multi-model coupling, and other challenges and opportunities at the current stage are analyzed, and an outlook on future research directions is provided.
Funding: Funded by the Basic Scientific Fund for National Public Research Institutes of China (No. 2022S01); the National Natural Science Foundation of China (Nos. 42176191, 42049902, and U22A2012); the Shandong Provincial Natural Science Foundation, China (No. ZR2022YQ40); the National Key R&D Program of China (No. 2021YFF0501202); the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (No. SML2023SP232); and the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (No. 241gqb006). Data acquisition and sample collection were supported by the National Natural Science Foundation of China Open Research Cruise (Cruise No. NORC2021-02+NORC2021301), funded by the Shiptime Sharing Project of the National Natural Science Foundation of China.
Abstract: Accurate acquisition and prediction of the acoustic parameters of seabed sediments are crucial in marine sound propagation research. While the relationship between sound velocity and the physical properties of sediment has been extensively studied, there is still no consensus on the correlation between the acoustic attenuation coefficient and sediment physical properties. Predicting the acoustic attenuation coefficient remains a challenging issue in sedimentary acoustic research. In this study, we propose a prediction method for the acoustic attenuation coefficient using machine learning algorithms, specifically the random forest (RF), support vector machine (SVR), and convolutional neural network (CNN) algorithms. We utilized the acoustic attenuation coefficient and sediment particle size data from 52 stations as training parameters, with the particle size parameters as the input feature matrix and the measured acoustic attenuation as the training label, to validate the attenuation prediction model. Our results indicate that the error of the attenuation prediction model is small. Among the three models, the RF model exhibited the lowest prediction error, with a mean squared error of 0.8232, a mean absolute error of 0.6613, and a root mean squared error of 0.9073. Additionally, when we applied the models to predict data collected at different times in the same region, we found that the models developed in this study also demonstrated a certain level of reliability in real prediction scenarios. Our approach demonstrates that constructing a sediment acoustic characteristics model based on machine learning is feasible to a certain extent and offers a novel perspective for studying sediment acoustic properties.
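The feature-matrix-to-attenuation mapping can be illustrated with a tiny k-nearest-neighbour regressor as a dependency-free stand-in for the paper's RF/SVR/CNN models (which require ML libraries). The grain-size/attenuation pairs below are synthetic illustrative values, not the paper's 52-station dataset, and the linear relation used to generate them is an assumption for the demo:

```python
def knn_regress(train_X, train_y, x, k=3):
    """Predict by averaging the targets of the k nearest training samples."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return sum(train_y[i] for i in order[:k]) / k

# Hypothetical training set: mean grain size (phi units) -> attenuation (dB/m).
train_X = [[phi] for phi in range(1, 11)]
train_y = [0.5 * phi + 0.1 for phi in range(1, 11)]

pred = knn_regress(train_X, train_y, [5.5], k=2)   # interpolates between phi = 5 and 6
```

Swapping this toy regressor for RF, SVR, or a CNN changes only the model-fitting step; the workflow of feature matrix in, measured attenuation as label, prediction error as the yardstick is the same one the paper follows.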
Funding: Supported by the National Natural Science Foundation of China under Grant No. U20A20182.
Abstract: As an effective strategy for addressing urban traffic congestion, traffic flow prediction has gained attention from Federated Learning (FL) researchers due to FL’s ability to preserve data privacy. However, existing methods face challenges: some are too simplistic to capture complex traffic patterns effectively, and others are overly complex, leading to excessive communication overhead between cloud and edge devices. Moreover, the single-point-of-failure problem limits their robustness and reliability in real-world applications. To tackle these challenges, this paper proposes a new method, CMBA-FL, a Communication-Mitigated and Blockchain-Assisted Federated Learning model. First, CMBA-FL improves the client model’s ability to capture temporal traffic patterns by employing the Encoder-Decoder framework for each edge device. Second, to reduce the communication overhead during federated learning, we introduce a verification method based on parameter update consistency, avoiding unnecessary parameter updates. Third, to mitigate the risk of a single point of failure, we integrate consensus mechanisms from blockchain technology. To validate the effectiveness of CMBA-FL, we assess its performance on two widely used traffic datasets. Our experimental results show that CMBA-FL reduces prediction error by 11.46%, significantly lowers communication overhead, and improves security.
Funding: Supported in part by the National Natural Science Foundation of China (62462053); the Science and Technology Foundation of Qinghai Province (2023-ZJ-731); the Open Project of the Qinghai Provincial Key Laboratory of Restoration Ecology in Cold Area (2023-KF-12); and the Open Research Fund of the Guangdong Key Laboratory of Blockchain Security, Guangzhou University.
Abstract: Federated learning (FL) is a distributed machine learning paradigm that excels at preserving data privacy when using data from multiple parties. When combined with fog computing, FL offers enhanced capabilities for machine learning applications in the Internet of Things (IoT). However, implementing FL across large-scale distributed fog networks presents significant challenges in maintaining privacy, preventing collusion attacks, and ensuring robust data aggregation. To address these challenges, we propose an Efficient Privacy-preserving and Robust Federated Learning (EPRFL) scheme for fog computing scenarios. Specifically, we first propose an efficient secure aggregation strategy based on an improved threshold homomorphic encryption algorithm, which is not only resistant to model inference and collusion attacks but also robust to fog node dropout. Then, we design a dynamic gradient filtering method based on cosine similarity to further reduce the communication overhead. To minimize training delays, we develop a dynamic task scheduling strategy based on a comprehensive score. Theoretical analysis demonstrates that EPRFL offers robust security and low latency. Extensive experimental results indicate that EPRFL outperforms similar strategies in terms of privacy preservation, model performance, and resource efficiency.
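The cosine-similarity gradient filter can be sketched in a few lines. The skip rule and the threshold τ = 0.95 are assumptions for the demo (the abstract does not give EPRFL's exact filtering rule): the client compares its new update with the last one it actually sent, and skips the upload when the two point in nearly the same direction, since the server's aggregate would barely change:

```python
def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def should_upload(update, last_uploaded, tau=0.95):
    """Skip the round-trip when the new update is nearly parallel
    to the one the server has already aggregated."""
    return cosine(update, last_uploaded) < tau
```

For example, an update orthogonal to the last one (`[1, 0]` vs. `[0, 1]`) is worth sending, while a near-duplicate direction (`[1, 0.01]` vs. `[1, 0]`) is filtered out, which is how this kind of rule trims communication without touching the encrypted aggregation path.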
Funding: Supported by the Major Research Plan of the National Natural Science Foundation of China (92372104); the Guangdong Basic and Applied Basic Research Foundation (2022A1515110016); the Recruitment Program of Guangdong (2016ZT06C322); the R&D Program of Guangzhou (2023A04J1364); the Fundamental Research Funds for the Central Universities (2024ZYGXZR043); and the TCL Science and Technology Innovation Fund.
Abstract: Electrolyte engineering with fluoroethers as solvents offers promising potential for high-performance lithium metal batteries. Despite recent progress in designing and synthesizing novel fluoroether solvents, a systematic understanding of how fluorination patterns impact electrolyte performance is still lacking. We investigate the effects of fluorination patterns on the properties of electrolytes using fluorinated 1,2-diethoxyethane (FDEE) as the single solvent. By employing quantum calculations, molecular dynamics simulations, and interpretable machine learning, we establish significant correlations between fluorination patterns and electrolyte properties. Higher fluorination levels enhance FDEE stability but decrease conductivity. The symmetry of the fluorination sites is critical for stability and viscosity, while exerting minimal influence on ionic conductivity. FDEEs with highly symmetric fluorination sites exhibit favorable viscosity, stability, and overall electrolyte performance. Conductivity primarily depends on lithium-anion dissociation or association. These findings provide design principles for rational fluoroether electrolyte design, emphasizing the trade-offs between stability, viscosity, and conductivity. Our work underscores the significance of considering fluorination patterns and molecular symmetry in the development of fluoroether-based electrolytes for advanced lithium batteries.
Funding: supported by the National Natural Science Foundation of China (Nos. 62071481 and 61501471).
Abstract: This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things (IoT) ecosystems, such as high demand variability, resource allocation uncertainties, and data privacy concerns, through practical solutions. First, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Second, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and superior data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for transformative advancements in edge computing and IoT technologies, driving smart edge computing towards unprecedented levels of intelligence and automation.
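Grover's search can be illustrated with a small classical state-vector simulation: an oracle phase-flips the marked states and a diffusion step inverts amplitudes about their mean, amplifying the marked state in roughly O(sqrt(N)) iterations. The toy 8-way "assignment" search and its oracle predicate below are hypothetical stand-ins for the paper's multi-controlled-Toffoli constraint evaluation.

```python
import math

def grover_search(n_items, oracle, iterations=None):
    """Classical simulation of Grover's search over n_items basis
    states; oracle(i) -> True marks the target state(s). Returns
    the measurement probability of each state."""
    amp = [1.0 / math.sqrt(n_items)] * n_items  # uniform superposition
    if iterations is None:
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        # Oracle: phase-flip the marked states.
        amp = [-a if oracle(i) else a for i, a in enumerate(amp)]
        # Diffusion: inversion about the mean amplitude.
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    return [a * a for a in amp]

# Toy offloading decision space: 8 candidate task-to-node assignments,
# where assignment 5 is the (hypothetical) one satisfying all constraints.
probs = grover_search(8, lambda i: i == 5)
```

With N = 8 and the standard iteration count, the marked assignment's measurement probability exceeds 0.9, versus 1/8 for unstructured guessing.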
Funding: supported by the National Natural Science Foundation of China (No. 62071070), the Major Science and Technology Special Project of the Science and Technology Department of Yunnan Province (202002AB080001-8), and the BUPT Innovation & Entrepreneurship Support Program (2023-YC-T031).
Abstract: As the information sensing and processing capabilities of IoT devices increase, a large amount of data is being generated at the edge of the Industrial IoT (IIoT), which has become a strong foundation for distributed Artificial Intelligence (AI) applications. However, most users are reluctant to disclose their data due to network bandwidth limitations, device energy consumption, and privacy requirements. To address this issue, this paper introduces an Edge-assisted Federated Learning (EFL) framework, along with an incentive mechanism for lightweight industrial data sharing. To reduce the information asymmetry between data owners and users, an EFL model-sharing incentive mechanism based on contract theory is designed. In addition, a weight dispersion evaluation scheme based on the Wasserstein distance is proposed. This study models node selection and sharing incentives as an optimization problem that maximizes the EFL model consumers' profit while ensuring the quality of training services. An incentive-based EFL algorithm with individual rationality and incentive compatibility constraints is proposed. Finally, the experimental results verify the effectiveness of the proposed scheme in terms of positive incentives for contract design and performance analysis of EFL systems.
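A minimal sketch of a Wasserstein-based weight dispersion check, assuming 1-D empirical samples of equal size (where the closed form is the mean absolute difference of sorted values). The `dispersion_score` mapping from distance to quality is hypothetical and not the paper's formula.

```python
def wasserstein_1d(u, v):
    """1-D Wasserstein (earth mover's) distance between two
    equal-size empirical samples: for sorted samples this reduces
    to the mean absolute difference of paired order statistics."""
    assert len(u) == len(v), "sketch assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

def dispersion_score(local_weights, global_weights):
    """Hypothetical quality score for a node's update: the closer a
    local model's weight distribution is to the global model's, the
    higher the score (used here to inform node selection)."""
    d = wasserstein_1d(local_weights, global_weights)
    return 1.0 / (1.0 + d)
```

A server could rank candidate nodes by `dispersion_score` before contract assignment, filtering out updates whose weight distributions diverge sharply from the global model.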