Deuterium (D₂) is one of the important fuel sources that power nuclear fusion reactors, and the existing D₂/H₂ separation technologies for obtaining high-purity D₂ are cost-intensive. Recent research has shown that metal-organic frameworks (MOFs) hold good potential for D₂/H₂ separation. In this work, a high-throughput computational screening of 12020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation, and the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D₂/H₂ adsorption selectivity calculated from Henry coefficients is strongly correlated with the 1/ΔAD feature descriptor, that is, the inverse of the adsorbability difference of the two adsorbates. The machine learning (ML) results show that the prediction accuracy of all four ML methods improves significantly after this feature descriptor is added. In addition, the ML results based on the extreme gradient boosting model reveal that the 1/ΔAD descriptor has the highest relative importance among the commonly used descriptors. To further explore hydrogen isotope separation in binary mixtures, the 1548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated under equimolar conditions. The structure-performance relationship shows that MOFs with high adsorption selectivity generally have smaller pore sizes (0.3-0.5 nm) and lower surface areas.
Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies. Finally, three MOFs with high D₂/H₂ selectivity and good D₂ uptake are identified as the best candidates, all of which have one-dimensional channel pores. The findings obtained in this work may help identify promising candidates for hydrogen isotope separation.
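In the Henry (infinite-dilution) regime, the ideal selectivity reduces to a ratio of Henry coefficients, so the selectivity > 1.5 screening step described above can be sketched as follows. The MOF names and coefficient values here are illustrative, not taken from the paper's dataset.

```python
# Hypothetical Henry coefficients (K_H, mol kg^-1 Pa^-1) for D2 and H2; in a
# real screening these would come from Widom-insertion or GCMC simulations.
henry = {
    "MOF-A": {"D2": 3.2e-5, "H2": 1.6e-5},
    "MOF-B": {"D2": 2.1e-5, "H2": 1.9e-5},
    "MOF-C": {"D2": 5.0e-5, "H2": 2.0e-5},
}

def ideal_selectivity(k):
    # Ideal D2/H2 adsorption selectivity in the infinite-dilution limit.
    return k["D2"] / k["H2"]

# Keep only materials passing the selectivity > 1.5 screen.
shortlist = sorted(m for m, k in henry.items() if ideal_selectivity(k) > 1.5)
print(shortlist)  # ['MOF-A', 'MOF-C']
```

The same ratio-based screen scales directly to the full database, after which the surviving structures are passed to mixture simulations.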
This paper proposes a non-intrusive computational method for mechanical dynamic systems involving a large number of interval uncertain parameters, aiming to reduce the computational cost and improve the accuracy of determining the bounds of the system response. A screening method is first used to reduce the number of active uncertain parameters. Sequential high-order polynomial surrogate models are then used to approximate the dynamic system's response at each time step. To reduce the sampling cost of constructing the surrogate model, the interaction effects among uncertain parameters are gradually added to the surrogate model by sequentially incorporating samples from a candidate set composed of vertices and inner grid points. Finally, the points that may produce the bounds of the system response at each time step are searched using the surrogate models. An optimization algorithm is used to locate extreme points, which contribute to determining the inner points that produce the response bounds, and all vertices are also checked using the surrogate models. A vehicle nonlinear dynamic model with 72 uncertain parameters is presented to demonstrate the accuracy and efficiency of the proposed method.
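The vertex-checking step can be illustrated with a toy surrogate. The quadratic model and the parameter intervals below are hypothetical stand-ins for the fitted polynomial surrogate, and the optimizer-driven search for inner extreme points is omitted.

```python
import itertools

def surrogate(p1, p2):
    # Stand-in polynomial surrogate of the response at one time step.
    return 3.0 + 2.0 * p1 - p2 + 0.5 * p1 * p2

# Interval bounds [lo, hi] for each uncertain parameter.
intervals = [(-1.0, 1.0), (0.0, 2.0)]

# Evaluate the surrogate at every vertex of the interval box; for this
# model (monotone in each variable) the response bounds occur at vertices.
values = [surrogate(*v) for v in itertools.product(*intervals)]
lower, upper = min(values), max(values)
print(lower, upper)  # -2.0 5.0
```

With many parameters the number of vertices grows exponentially, which is why the paper combines vertex checks with screening and an optimizer rather than brute-force enumeration.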
Among the various architectures of polymers, end-group-free rings have attracted growing interest due to their distinct physicochemical performance relative to their linear counterparts, exemplified by reduced hydrodynamic size and slower degradation. Developing facile methods for the large-scale synthesis of polymer rings with tunable compositions and microstructures is therefore key. Recent progress in the large-scale synthesis of polymer rings against single-chain dynamic nanoparticles, and example applications in simultaneously enhancing the toughness and strength of polymer nanocomposites, are summarized. Once breakthroughs are achieved in the rational design and effective large-scale synthesis of polymer rings and their functional derivatives, a family of cyclic functional hybrids would become available, providing a new paradigm for polymer science and engineering.
A numerical technique, the target-region locating (TRL) solver in conjunction with the wave-front method, is presented for applying the finite element method (FEM) to 3-D electromagnetic computation. First, the principle of the TRL technique is described. Then, the availability of the TRL solver for nonlinear applications is discussed, demonstrating that the solver is easy to use while retaining high efficiency. The implementation of this technique in FEM based on the magnetic vector potential (MVP) is also introduced. Finally, a numerical example of 3-D magnetostatic modeling using the TRL solver and FEMLAB is given, showing that substantial computing resources can be saved by employing the new solver.
Large-scale complex systems are integral to the functioning of various organizations within the national economy. Despite their significance, the lengthy construction cycles and the involvement of multiple entities often result in the deprioritization of standardized management practices, as they do not yield immediate benefits. The implementation of such systems typically encompasses the integrated phases of "development, construction, utilization, and operation and maintenance". To enhance the overall delivery quality of these systems, it is imperative to dismantle the management barriers among these phases and adopt a holistic approach to standardized management. This paper takes a specific system project as its research object, identifies common challenges, and proposes improvement strategies for the implementation of standardized management. Empirical results indicate a substantial reduction in the system's full-lifecycle costs.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment, so determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at a fog node reduces transmission delay but increases energy consumption, while routing the task to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency; for instance, executing lower-priority tasks before higher-priority ones can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed, so MoECO increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy, which prevents the algorithm from getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., the local search capability. Furthermore, MoECO employs a Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared with baseline methods.
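The Pareto-optimal front that MoECO reports can be extracted from a candidate set with a simple non-dominance check; the (delay, energy) values below are illustrative, not simulation results from the paper.

```python
def pareto_front(points):
    # Keep points not dominated by any other point (minimizing all
    # objectives): q dominates p if q <= p in every objective and q != p.
    return [p for p in points
            if not any(all(qi <= pi for qi, pi in zip(q, p)) and q != p
                       for q in points)]

# Hypothetical (communication delay, energy consumption) pairs for candidate
# offloading/scheduling decisions.
candidates = [(5.0, 2.0), (3.0, 4.0), (4.0, 3.0), (6.0, 5.0), (3.0, 4.5)]
print(pareto_front(candidates))  # [(5.0, 2.0), (3.0, 4.0), (4.0, 3.0)]
```

The surviving points are the mutually incomparable trade-offs; a decision maker then picks one according to operational priorities.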
Summer rainfall in the Yangtze River basin (YRB) is favored by two key factors in the lower troposphere: the tropical anticyclonic anomaly over the western North Pacific and the extratropical northeasterly anomalies to the north of the YRB. This study, however, found that approximately 46% of heavy rainfall events in the YRB occur when only one factor appears and the other is of the opposite sign. Accordingly, these heavy rainfall events can be categorized into two types: extratropical northeasterly anomalies with a tropical cyclonic anomaly (first unconventional type), and a tropical anticyclonic anomaly with extratropical southwesterly anomalies (second unconventional type). Anomalous water vapor convergence and upward motion exist for both types, but through different mechanisms. For the first type, the moisture convergence and upward motion are induced by a cyclonic anomaly over the YRB, which appears in the mid and lower troposphere and originates from the upstream region. For the second type, a mid-tropospheric cyclonic anomaly over Lake Baikal extends southward and produces southwesterly anomalies over the YRB, in conjunction with the tropical anticyclonic anomaly. The southwesterly anomalies transport water vapor to the YRB and lead to upward motion through warm advection. This study emphasizes the role of mid-tropospheric circulations in inducing heavy rainfall in the YRB.
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency. To address these challenges, we propose an efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a latency minimization optimization problem and solve it with an algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
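The PER mechanism replays transitions with probability proportional to a power of their TD-error priority. A minimal proportional-sampling sketch follows; the priorities, exponent, and batch size are illustrative, and the annealed importance-sampling weights used in full PER are omitted.

```python
import random

def sample_indices(priorities, batch_size, alpha=0.6, rng=None):
    # Proportional prioritized sampling: P(i) is proportional to
    # priority_i ** alpha, so high-error transitions are replayed more often.
    rng = rng or random.Random(0)
    weights = [p ** alpha for p in priorities]
    return rng.choices(range(len(priorities)), weights=weights, k=batch_size)

# Hypothetical TD-error magnitudes of stored transitions.
priorities = [0.1, 2.0, 0.5, 3.0]
batch = sample_indices(priorities, batch_size=32)
print(len(batch), all(0 <= i < len(priorities) for i in batch))
```

Production implementations replace the list scan with a sum-tree so that sampling stays logarithmic in the buffer size.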
The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing in different fields, together with proposed solutions. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
This study develops an event-triggered control strategy that utilizes the fully actuated system approach for nonlinear interconnected large-scale systems subject to actuator failures. First, to reduce the complexity of the design process, we transform the studied system into a fully actuated form through a state transformation. Then, to address the unknown nonlinear functions and actuator fault parameters, we employ neural networks and adaptive estimation techniques, respectively. Moreover, to reduce the control cost and improve control efficiency, we introduce event-triggered inputs into the control strategy. Lyapunov stability analysis proves that all signals of the closed-loop system are bounded and that the system output eventually converges to a bounded region. The efficacy of the control approach is demonstrated through the simulation of an actual machine feeding system.
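The event-triggered idea (update the control input only when the state has drifted sufficiently from the last transmitted value) can be sketched as below. The relative threshold and the sample trace are illustrative, not the paper's triggering law.

```python
def count_trigger_events(states, delta=0.2):
    # Transmit a new measurement to the controller only when the state
    # deviates from the last transmitted value by more than a relative
    # threshold delta; otherwise the previous control input is held.
    last_sent = states[0]
    events = 0
    for x in states:
        if abs(x - last_sent) > delta * abs(last_sent):
            last_sent = x
            events += 1
    return events

trace = [1.0, 1.05, 1.1, 1.5, 1.55, 0.9]  # hypothetical sampled states
print(count_trigger_events(trace))  # 2 transmissions instead of 6
```

Small deviations are ignored, which is exactly how event triggering cuts communication and control-update cost relative to periodic sampling.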
Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computation. In this study, we propose an approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects through fully GPU-resident computation.
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture for overcoming the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework that integrates an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) with a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Physics-informed neural networks (PINNs) have emerged as a promising class of scientific machine learning techniques that integrate governing physical laws into neural network training. Their ability to enforce differential equations, constitutive relations, and boundary conditions within the loss function provides a physically grounded alternative to traditional data-driven models, particularly for solid and structural mechanics, where data are often limited or noisy. This review offers a comprehensive assessment of recent developments in PINNs, combining bibliometric analysis, theoretical foundations, application-oriented insights, and methodological innovations. A bibliometric survey indicates a rapid increase in publications on PINNs since 2018, with prominent research clusters focused on numerical methods, structural analysis, and forecasting. Building on this trend, the review consolidates advancements across five principal application domains: forward structural analysis, inverse modeling and parameter identification, structural and topology optimization, assessment of structural integrity, and manufacturing processes. These applications are propelled by substantial methodological advancements, encompassing rigorous enforcement of boundary conditions, modified loss functions, adaptive training, domain decomposition strategies, multi-fidelity and transfer learning approaches, and hybrid finite element-PINN integration. These advances address recurring challenges in solid mechanics, such as high-order governing equations, material heterogeneity, complex geometries, localized phenomena, and limited experimental data. Despite remaining challenges in computational cost, scalability, and experimental validation, PINNs are increasingly evolving into specialized, physics-aware tools for practical solid and structural mechanics applications.
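The core PINN construction, a physics-residual term plus a boundary penalty in one loss, can be sketched on the toy ODE u' = u with u(0) = 1. Here a one-parameter trial function and finite-difference derivatives stand in for the neural network and the automatic differentiation used in practice.

```python
import math

def pinn_loss(a, xs, h=1e-4):
    # Trial solution u(x) = exp(a*x); the physics residual u'(x) - u(x) is
    # evaluated with a central finite difference at collocation points xs,
    # and the boundary condition u(0) = 1 is added as a penalty term.
    u = lambda x: math.exp(a * x)
    residual = sum(((u(x + h) - u(x - h)) / (2 * h) - u(x)) ** 2 for x in xs)
    boundary = (u(0.0) - 1.0) ** 2
    return residual / len(xs) + boundary

xs = [0.1 * i for i in range(1, 10)]   # collocation points in (0, 1)
# The exact parameter a = 1 makes the physics loss (near-)zero.
print(pinn_loss(1.0, xs) < pinn_loss(0.5, xs))  # True
```

Training a PINN amounts to minimizing this composite loss over the network weights; the same structure carries over to the PDEs of solid mechanics, with the residual built from the governing equations and constitutive relations.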
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling, but general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results demonstrate that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
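The random-state principle behind these algorithms can be checked classically: averaging the expectation value of a function of the Hamiltonian over random vectors estimates its trace without diagonalization. A small numpy sketch for a tight-binding ring follows; the lattice, sample count, and the choice of the second moment as the target quantity are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
H = np.zeros((n, n))
for i in range(n):                       # nearest-neighbour hopping, t = -1
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0

# Stochastic estimate of tr(H^2)/n, the second moment of the density of
# states, via random sign vectors r: E[r.H^2.r] = tr(H^2).
estimates = []
for _ in range(200):
    r = rng.choice([-1.0, 1.0], size=n)  # random-phase (here, sign) vector
    estimates.append(r @ H @ H @ r / n)
print(np.mean(estimates), np.trace(H @ H) / n)  # both close to 2
```

On quantum hardware the role of the random vector is played by a shallow random-state circuit, and the moments of H are accessed through Trotterized time evolution instead of explicit matrix products.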
Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) deteriorate severely as the number of decision variables increases. Motivated by the emergence of real-world large-scale MOPs, investigation of MOEAs in this respect has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. From the perspective of the key difficulties of large-scale MOPs, scalability is analyzed by focusing on the performance of existing MOEAs and the challenges induced by the growing number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced in turn: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.
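The divide-and-conquer class typically decomposes the decision vector into groups that are optimized cooperatively. A minimal random-grouping sketch is below; the variable and group counts are illustrative.

```python
import random

def random_groups(n_vars, n_groups, seed=0):
    # Shuffle the variable indices, then deal them round-robin into groups;
    # each group defines a lower-dimensional subproblem that a cooperative
    # co-evolutionary MOEA optimizes while holding the other groups fixed.
    idx = list(range(n_vars))
    random.Random(seed).shuffle(idx)
    return [sorted(idx[i::n_groups]) for i in range(n_groups)]

groups = random_groups(1000, 10)
print(len(groups), len(groups[0]))  # 10 groups of 100 variables each
```

Random grouping is only the simplest option; interaction-aware grouping (e.g. differential grouping) tries to place interacting variables in the same subproblem so the decomposition does not break variable dependencies.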
In this paper, based on the parallel environment of the ELXSI computer, a parallel solution process for the substructure method in static and dynamic analyses of large-scale complex structures is put forward, and the corresponding parallel computational program has been developed.
The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
1. Introduction Climate change mitigation pathways aimed at limiting global anthropogenic carbon dioxide (CO₂) emissions while striving to constrain the global temperature increase to below 2 °C, as outlined by the Intergovernmental Panel on Climate Change (IPCC), consistently predict the widespread implementation of CO₂ geological storage on a global scale.
Funding: supported by the National Natural Science Foundation of China (22078004), the Research Development Fund from Xi'an Jiaotong-Liverpool University (RDF-16-02-03 and RDF15-01-23), and the key program special fund (KSF-E-03).
Funding: supported by the National Natural Science Foundation of China (Grant No. 12272142) and the Fundamental Research Funds for the Central Universities (Grant No. 2172021XXJS048).
Funding: supported by the National Natural Science Foundation of China (Nos. 52293472, 22473096 and 22471164).
Funding: Open Funds of the State Key Laboratory of Millimeter Waves, China (No. K200401), and the Outstanding Teaching and Research Awards for Young Teachers of Nanjing Normal University (No. 1320BL51).
Abstract: A numerical technique, the target-region locating (TRL) solver in conjunction with the wave-front method, is presented for applying the finite element method (FEM) to 3-D electromagnetic computation. First, the principle of the TRL technique is described. Then, the suitability of the TRL solver for nonlinear applications is discussed in detail, demonstrating that this solver is easy to use while retaining high efficiency. The implementation of this technique in FEM based on the magnetic vector potential (MVP) is also introduced. Finally, a numerical example of 3-D magnetostatic modeling using the TRL solver and FEMLAB is given. It shows that substantial computer resources can be saved by employing the new solver.
Abstract: Large-scale complex systems are integral to the functioning of various organizations within the national economy. Despite their significance, lengthy construction cycles and the involvement of multiple entities often result in the deprioritization of standardized management practices, as they do not yield immediate benefits. The implementation of such systems typically encompasses the integrated phases of "development, construction, utilization, and operation and maintenance". To enhance the overall delivery quality of these systems, it is imperative to dismantle the management barriers among these phases and adopt a holistic approach to standardized management. This paper takes a specific system project as its research object to identify common challenges, and proposes improvement strategies for the implementation of standardized management. Empirical results indicate a substantial reduction in the system's full-lifecycle costs.
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Funding: The authors express their appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project (No. PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency; for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy, preventing the algorithm from getting trapped in locally optimal solutions. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., the local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
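The Pareto-optimal front mentioned above rests on non-dominated sorting: a solution survives only if no other solution is at least as good in every objective and strictly better in at least one. A minimal sketch, with toy (energy, delay) values invented for illustration:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy (energy, delay) trade-off: (3,1), (1,3), and (2,2) all trade off
# against one another; (4,4) is dominated and is discarded.
sols = [(3, 1), (1, 3), (2, 2), (4, 4)]
front = pareto_front(sols)
```

A decision maker then picks one point from the front according to the preferred balance between the two objectives.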
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42275041) and the Hainan Province Science and Technology Special Fund (Grant No. SOLZSKY2025006).
Abstract: Summer rainfall in the Yangtze River basin (YRB) is favored by two key factors in the lower troposphere: the tropical anticyclonic anomaly over the western North Pacific and the extratropical northeasterly anomalies to the north of the YRB. This study, however, found that approximately 46% of heavy rainfall events in the YRB occur when only one factor appears and the other is oppositely signed. Accordingly, these heavy rainfall events can be categorized into two types: extratropical northeasterly anomalies with a tropical cyclonic anomaly (first unconventional type), and a tropical anticyclonic anomaly with extratropical southwesterly anomalies (second unconventional type). Anomalous water vapor convergence and upward motion exist for both types, but through different mechanisms. For the first type, the moisture convergence and upward motion are induced by a cyclonic anomaly over the YRB, which appears in the mid and lower troposphere and originates from the upstream region. For the second type, a mid-tropospheric cyclonic anomaly over Lake Baikal extends southward and results in southwesterly anomalies over the YRB, in conjunction with the tropical anticyclonic anomaly. The southwesterly anomalies transport water vapor to the YRB and lead to upward motion through warm advection. This study emphasizes the role of mid-tropospheric circulations in inducing heavy rainfall in the YRB.
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency-minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
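The Prioritized Experience Replay mechanism cited above samples transitions in proportion to their temporal-difference error rather than uniformly, so surprising experiences are revisited more often. A minimal proportional-PER sketch with hypothetical offloading transitions; the paper's D3QN network and latency model are not reproduced:

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized replay buffer (illustrative sketch)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:   # evict the oldest entry
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        # Priority grows with |TD error|; epsilon keeps it strictly positive.
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        total = sum(self.prios)
        weights = [p / total for p in self.prios]
        idx = random.choices(range(len(self.data)), weights=weights, k=k)
        return [self.data[i] for i in idx]

# Hypothetical (state, action, reward, next_state) offloading transitions.
buf = PrioritizedReplay(capacity=100)
buf.add(("s0", "offload_local", -1.0, "s1"), td_error=0.1)
buf.add(("s1", "offload_edge", -0.2, "s2"), td_error=5.0)
batch = buf.sample(4)
```

In a full implementation, importance-sampling weights would also correct for the non-uniform sampling during the gradient update.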
Abstract: The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases such as MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, along with their solutions, in different fields. The review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
Funding: Supported by the Science Center Program of the National Natural Science Foundation of China under Grant 62188101, and the National Natural Science Foundation of China under Grant 62573265.
Abstract: This study develops an event-triggered control strategy utilizing the fully actuated system approach for nonlinear interconnected large-scale systems subject to actuator failures. First, to reduce the complexity of the design process, we transform the studied system into a fully actuated form through a state transformation. Then, to address the unknown nonlinear functions and actuator fault parameters, we employ neural networks and adaptive estimation techniques, respectively. Moreover, to reduce the control cost and improve control efficiency, we introduce event-triggered inputs into the control strategy. It is proved by Lyapunov stability analysis that all signals of the closed-loop system are bounded and that the output of the system eventually converges to a bounded region. The efficacy of the control approach is ultimately demonstrated via the simulation of an actual machine feeding system.
Funding: Supported by the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (MSIT) (No. RS-2022-00143178) and the Ministry of Education (MOE) (Nos. 2022R1A6A3A13053896 and 2022R1F1A1074616), Republic of Korea.
Abstract: Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
Funding: Supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809, and the Doctoral Scientific Research Foundation of the Hubei University of Automotive Technology under Grant 202404.
Abstract: As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) with a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
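Decomposition-based algorithms such as MOCC/D turn a many-objective problem into a set of single-objective subproblems, commonly via the Tchebycheff scalarization: each weight vector defines one subproblem measuring the worst weighted deviation from the ideal point. A minimal sketch with invented (delay, energy, cost) scores; the exact scalarization used by MOCC/D may differ:

```python
def tchebycheff(f_values, weights, ideal):
    """Tchebycheff scalarization used by decomposition-based MOEAs:
    the worst weighted distance from the ideal point, to be minimized."""
    return max(w * abs(f - z) for f, w, z in zip(f_values, weights, ideal))

# Two candidate schedules scored on (delay, energy, cost); ideal point (0,0,0).
ideal = (0.0, 0.0, 0.0)
w = (0.5, 0.3, 0.2)                           # one subproblem's weight vector
g_a = tchebycheff((2.0, 1.0, 1.0), w, ideal)  # max(1.0, 0.3, 0.2) = 1.0
g_b = tchebycheff((1.0, 2.0, 1.0), w, ideal)  # max(0.5, 0.6, 0.2) = 0.6
```

Under this weight vector, candidate B is preferred; sweeping many weight vectors traces out an approximation of the Pareto front.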
Funding: Funded by the National Research Council of Thailand (Contract No. N42A671047).
Abstract: Physics-informed neural networks (PINNs) have emerged as a promising class of scientific machine learning techniques that integrate governing physical laws into neural network training. Their ability to enforce differential equations, constitutive relations, and boundary conditions within the loss function provides a physically grounded alternative to traditional data-driven models, particularly for solid and structural mechanics, where data are often limited or noisy. This review offers a comprehensive assessment of recent developments in PINNs, combining bibliometric analysis, theoretical foundations, application-oriented insights, and methodological innovations. A bibliometric survey indicates a rapid increase in publications on PINNs since 2018, with prominent research clusters focused on numerical methods, structural analysis, and forecasting. Building upon this trend, the review consolidates advancements across five principal application domains: forward structural analysis, inverse modeling and parameter identification, structural and topology optimization, assessment of structural integrity, and manufacturing processes. These applications are propelled by substantial methodological advancements, encompassing rigorous enforcement of boundary conditions, modified loss functions, adaptive training, domain decomposition strategies, multi-fidelity and transfer learning approaches, as well as hybrid finite element-PINN integration. These advances address recurring challenges in solid mechanics, such as high-order governing equations, material heterogeneity, complex geometries, localized phenomena, and limited experimental data. Despite remaining challenges in computational cost, scalability, and experimental validation, PINNs are increasingly evolving into specialized, physics-aware tools for practical solid and structural mechanics applications.
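The core PINN idea, penalizing the residual of the governing equation inside the loss function, can be shown on a toy ODE. Real PINNs use a neural network and automatic differentiation; here a one-parameter trial solution with an analytic derivative keeps the sketch tiny, and the ODE u'(t) = -u(t), u(0) = 1 is invented for illustration:

```python
import math

def pinn_style_loss(a, ts):
    """Physics-informed loss for u'(t) = -u(t), u(0) = 1, with the
    one-parameter trial solution u(t) = exp(a*t). The loss is the mean
    squared ODE residual plus a boundary-condition penalty."""
    # Residual of u' + u at collocation points; u'(t) = a*exp(a*t) analytically.
    residual = sum((a * math.exp(a * t) + math.exp(a * t)) ** 2 for t in ts)
    # Boundary penalty: u(0) must equal 1 (satisfied exactly by this ansatz).
    boundary = (math.exp(a * 0.0) - 1.0) ** 2
    return residual / len(ts) + boundary

ts = [0.1 * k for k in range(1, 11)]          # collocation points in (0, 1]
loss_exact = pinn_style_loss(-1.0, ts)        # true solution: loss is zero
loss_wrong = pinn_style_loss(-0.5, ts)        # violates the physics: loss > 0
```

Training a PINN amounts to minimizing such a loss over the model parameters, which drives the trial solution toward satisfying the governing equation everywhere it is sampled.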
Funding: Supported by the Major Project for the Integration of Science, Education and Industry (Grant No. 2025ZDZX02).
Abstract: Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate the electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results demonstrate that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
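The random-state trick underlying these algorithms is that an expectation value in a random state approximates a normalized trace, so one state stands in for an average over the whole spectrum. A minimal classical sketch of the time signal Re<psi|e^{-iHt}|psi> for a Hamiltonian given directly by its eigenvalues (a stand-in for the Trotterized evolution and Hadamard test on hardware); the toy spectrum below is invented:

```python
import cmath
import random

def random_state(n, seed=0):
    """Normalized random complex state: for large n, <psi|A|psi>
    approximates Tr(A)/n for an operator A diagonal in this basis."""
    rng = random.Random(seed)
    amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    norm = sum(abs(a) ** 2 for a in amps) ** 0.5
    return [a / norm for a in amps]

def time_signal(energies, psi, t):
    """Re<psi|e^{-iHt}|psi> in the eigenbasis of H; Fourier-transforming
    this signal over t yields a density-of-states estimate."""
    return sum(abs(c) ** 2 * cmath.exp(-1j * e * t)
               for c, e in zip(psi, energies)).real

# Toy spectrum of 200 levels on [-1, 1]; compare against the exact trace.
energies = [-1.0 + 2.0 * k / 199 for k in range(200)]
psi = random_state(200)
approx = time_signal(energies, psi, t=1.0)
exact = sum(cmath.exp(-1j * e * 1.0) for e in energies).real / len(energies)
```

The statistical error of the random-state estimate shrinks as the Hilbert-space dimension grows, which is what makes the approach qubit-efficient for large materials.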
Funding: This work was supported by the Natural Science Foundation of China (Nos. 61672478 and 61806090), the National Key Research and Development Program of China (No. 2017YFB1003102), the Guangdong Provincial Key Laboratory (No. 2020B121201001), the Shenzhen Peacock Plan (No. KQTD2016112514355531), the Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-inspired Intelligence Fund (No. 2019028), the Fellowship of the China Postdoctoral Science Foundation (No. 2020M671900), and the National Leading Youth Talent Support Program of China.
Abstract: Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe performance deterioration as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, investigation of MOEAs in this respect has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. From the perspective of the key difficulties of large-scale MOPs, a scalability analysis is presented, focusing on the performance of existing MOEAs and the challenges induced by the increase in the number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced respectively: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.
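The divide-and-conquer class mentioned above typically starts by partitioning the decision variables into groups that are then optimized cooperatively, one group at a time with the others held fixed. A minimal random-grouping sketch (one common choice; specific large-scale MOEAs use more elaborate, interaction-aware grouping):

```python
import random

def random_groups(n_vars, group_size, seed=0):
    """Randomly partition decision-variable indices into fixed-size groups,
    the basic divide-and-conquer step of cooperative-coevolutionary
    large-scale optimizers."""
    idx = list(range(n_vars))
    random.Random(seed).shuffle(idx)   # random grouping decorrelates variables
    return [idx[i:i + group_size] for i in range(0, n_vars, group_size)]

# 1000 decision variables split into ten 100-variable subproblems; each
# subproblem is then optimized while the remaining variables stay fixed.
groups = random_groups(n_vars=1000, group_size=100)
```

Re-randomizing the grouping every few generations raises the chance that interacting variables eventually land in the same subproblem.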
Abstract: In this paper, a parallel solution process for the substructure method in the static and dynamic analysis of large-scale, complex structures is put forward for the parallel environment of the ELXSI computer, and the corresponding parallel computational program has been developed.
Abstract: The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
Funding: Supported by the National Key Research and Development Program of China (2022YFE0206700).
Abstract: 1. Introduction: Climate change mitigation pathways aimed at limiting global anthropogenic carbon dioxide (CO_(2)) emissions while striving to constrain the global temperature increase to below 2°C, as outlined by the Intergovernmental Panel on Climate Change (IPCC), consistently predict the widespread implementation of CO_(2) geological storage on a global scale.