Noncohesive particle clusters are identified and tracked in turbulent flows to determine the breakdown and time evolution of cluster statistics and their implications for interscale mass transfer, which has connections to the classical turbulent energy cascade and its mass cascade counterpart running in parallel. In particular, the formation and dynamics of sediment and larvae clusters are of interest to coral larvae settlement in coastal regions and particularly the resilience of green-gray coastal protection solutions. Analogous cluster behavior is relevant to cloud microphysics and precipitation initiation, radiation transport and light transmission through colloids and suspensions, heat and mass transfer in particle-laden flows, and viral and pollutant transmission. Following a comparison between various clustering techniques, we adopt a density-based cluster identification algorithm, chosen for its simplicity and efficiency, where particles are clustered based on the number of neighboring particles in their individual spheres of influence. We establish parallels with lattice-based percolation theory, as evident in the power-law scaling of the cluster size distribution near the percolation threshold. The degree of discontinuity of the phase transition associated with this percolation threshold is observed to broaden with larger Stokes numbers and thereby large-scale clustering. The sensitivity of our findings to the employed clustering algorithm is discussed. A novel cluster tracking algorithm is deployed to determine the interscale transfer rate along the particle-number phase-space dimension via accounting of cluster breakup and merger events, extending previous work on the bubble breakup cascade beneath surface breaking waves. Our findings shed light on the interaction between particle clusters and their carrier turbulent flows, with an eye toward transport models incorporating cluster characteristics and dynamics.
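The neighbor-counting rule described above can be sketched in a few lines. This is a minimal illustration only: the function names, the brute-force O(n²) distance check, and the thresholds are ours, not the paper's, which applies an unspecified density-based algorithm to far larger particle sets.

```python
from itertools import combinations

def identify_clusters(positions, radius, min_neighbors):
    """Mark a particle as clustered when its sphere of influence (of the
    given radius) contains at least min_neighbors other particles, then
    group mutually in-range clustered particles into clusters."""
    n = len(positions)
    r2 = radius * radius

    def close(i, j):
        return sum((a - b) ** 2 for a, b in zip(positions[i], positions[j])) <= r2

    # Count neighbors inside each particle's sphere of influence.
    neighbor_count = [sum(close(i, j) for j in range(n) if j != i) for i in range(n)]
    dense = [i for i in range(n) if neighbor_count[i] >= min_neighbors]

    # Union-find to merge dense particles that lie in each other's spheres.
    parent = {i: i for i in dense}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(dense, 2):
        if close(i, j):
            parent[find(i)] = find(j)

    clusters = {}
    for i in dense:
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

With two tight triplets of particles and one isolated particle, the function returns two clusters of three particles each and leaves the isolated particle unlabeled, mirroring how sparse particles fall outside all clusters.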
Two Co(Ⅱ) and Ni(Ⅱ) complexes were synthesized by synergistic coordination of 3,3-diphenylpropionic acid (HDPA) and 2,2′-bipyridylamine (PAm). The structures of complexes [Co(DPA)_(2)(PAm)]·2H_(2)O (1) and [Ni(DPA)_(2)(PAm)]·2H_(2)O (2) were determined by single-crystal X-ray diffraction, IR spectroscopy, and powder X-ray diffraction. Hirshfeld surface analysis provided quantitative insights into the intermolecular interactions within the complexes, while molecular docking studies elucidated their binding modes and affinities toward urease. Furthermore, the biological activities of both complexes were systematically evaluated through a range of assays, including DNA binding, urease inhibition, antibacterial activity, and in vitro cytotoxicity against cancer cells. Both complexes exhibited binding affinity for DNA and displayed notable urease inhibitory activity. Under in vitro conditions, both complexes showed appreciable cytotoxicity toward HepG2 cells with efficacy comparable to clinically used platinum-based anticancer agents. CCDC: 2479943, 1; 2479944, 2.
Biomass-based hydrocarbon fuels, as one of the alternatives to traditional fossil fuels, have attracted considerable attention in the energy field due to their renewability and environmental benefits. This article provides a systematic review of recent research progress in the chemical synthesis of biomass-based hydrocarbon fuels. It outlines the conversion pathways using feedstocks such as lipids, terpenoids, cellulose/hemicellulose, and lignin. Depending on the feedstock, various products with distinct structural characteristics can be prepared through reactions such as cyclization, condensation, and catalytic hydrogenation. Throughout the synthesis process, three key factors play a critical role: efficient catalyst development, production process optimization, and computational-chemistry-based molecular design. Finally, the article discusses future perspectives for biomass-based hydrocarbon fuel synthesis research.
The capture of atmospheric carbon dioxide by adsorbents is an important strategy to deal with the greenhouse effect. Compared with traditional CO_(2) adsorption materials like activated carbon, silica gel, and zeolite molecular sieves, covalent organic frameworks (COFs) have excellent thermal and chemical stabilities and can be produced in many different forms. Using their different possible construction units, ordered structures for specific applications can be produced, giving them broad prospects in fields such as gas storage. This review analyzes the different types of COFs that have been synthesized and their different methods of CO_(2) capture. It then discusses different ways to increase CO_(2) adsorption by changing the internal structure of COFs and modifying their surfaces. The limitations of COF-derived carbon materials in CO_(2) capture are reviewed. Finally, the key role of machine learning and computational simulation in improving CO_(2) adsorption is discussed, and the current status and possible future uses of COFs are summarized.
Practical applications of smart cities and the Internet of Things (IoT) have multiplied, posing many difficulties in network performance, dependability, and security. Concerns of accessibility, reliability, sustainability, and security have arisen correspondingly because of the decentralized character of smart city and IoT systems. Fog computing offers a foundation for various applications, including cognitive support, health and social services, intelligent transportation systems, and pervasive computing and communications. Fog computing can help enhance these applications' productivity and lower the end-to-end delay experienced by such time-sensitive applications. In this research, we propose a reliable and secure service delivery strategy at the network edge for smart cities. To improve the availability and dependability, along with the security of smart city applications, the approach employs a combined method uniting distributed fog servers and mist servers with the help of an intrusion detection system. Simulation findings suggest a reduction of 40.3% in the delay incurred by each service request for highly dense areas and 60.6% for moderately dense environments. Furthermore, the system has low false-negative rates and high detection and accuracy rates, while decreasing service requests by 2%.
Organic electrochemical transistor (OECT) devices demonstrate promising potential for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal scale tasks. In this study, we report an OECT-based neuromorphic device with tunable relaxation time (τ) by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behaviors can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy analysis. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in the RC system for intelligent driving trajectory prediction, achieving a significant improvement in prediction accuracy from below 69% to 99%. The results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal scale neuromorphic computing applications.
The deceleration of Moore's law and the energy–latency drawbacks of the von Neumann bottleneck have heightened the pursuit of beyond-CMOS designs that integrate memory and compute. Self-rectifying memristors (SRMs) have emerged as promising building blocks for high-performance, low-power systems by combining resistive switching with intrinsic diode-like behavior. Their unidirectional conduction inhibits sneak-path currents in crossbar arrays devoid of external selectors, while nonlinear I–V characteristics, adjustable conductance states, low operating voltages, and rapid switching facilitate efficient vector–matrix operations, neuromorphic plasticity, and hardware security primitives. This review synthesizes the working mechanisms of SRMs, surveys material and structural strategies, and compares device metrics relevant to array-scale deployment (rectification ratio, nonlinearity, endurance, retention, variability, and operating voltage). We assess SRM-enabled in-memory computing and neuromorphic applications, as well as security functions such as physical unclonable functions and reconfigurable cryptographic primitives. Integration pathways toward CMOS compatibility are analyzed, including back-end-of-line thermal budgets, uniformity, write disturb mitigation, and reliability. Finally, we outline key challenges and opportunities: materials/architecture co-design, precision analog training, stochasticity control/exploitation, 3D stacking, and standardized benchmarking that can accelerate large-scale SRM adoption. Through the use of specialized materials and structural optimization, SRMs are set to provide selector-free, densely integrated, and energy-efficient hardware for future information processing.
The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading and then executing efficiently is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, a task processed at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy of optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks to minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed; therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step length operator is adjusted to diversify the solutions and thus improves the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimal solution. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah. This increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.
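The Pareto-optimal front used by MoECO rests on the standard dominance relation between candidate (energy, delay) pairs: one solution dominates another if it is no worse in both objectives and strictly better in at least one. A minimal sketch of non-dominated filtering (the function name and the toy solution set are ours; MoECO builds its front inside the metaheuristic loop):

```python
def pareto_front(solutions):
    """Return the non-dominated subset of candidate offloading solutions.
    Each solution is an (energy, delay) pair; both objectives are minimized."""
    front = []
    for s in solutions:
        # s is dominated if some other solution is <= in both objectives
        # and differs in at least one of them.
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and o != s
            for o in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

The returned set gives the decision-maker the efficient trade-off curve between energy and delay; any solution off the front is strictly worse than some on-front alternative.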
The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, along with their solutions, in different fields. This review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
The integration of large-scale foundation models (e.g., the GPT series and AlphaFold) into oncology is fundamentally transforming both research methodologies and clinical practices, driven by unprecedented advancements in computational power. This review synthesizes recent progress in the application of large language models to core oncological tasks, including medical imaging analysis, genomic interpretation, and personalized treatment planning. Underpinned by advanced computational infrastructures, such as graphics processing unit/tensor processing unit clusters, heterogeneous computing, and cloud platforms, these models enable superior representation learning and generalization across multimodal data sources. This review examines how these infrastructures overcome key bottlenecks in intelligent oncology through scalable optimization strategies, including mixed-precision training, memory optimization, and heterogeneous computing. Alongside these technical advancements, the review explores pressing challenges, such as data heterogeneity, limited model interpretability, regulatory uncertainties, and the environmental impact of artificial intelligence (AI) systems. Special emphasis is placed on emerging solutions, encompassing green AI and edge computing, which offer promising approaches for low-resource deployment scenarios. Additionally, the review highlights the critical role of interdisciplinary collaboration among oncology, computer science, ethics, and policy to ensure that AI systems are not only powerful but also transparent, safe, and clinically relevant. Finally, the review outlines potential avenues for future research aimed at developing robust, scalable, and human-centered frameworks for intelligent oncology.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
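One of the four objectives above is privacy entropy. Assuming it follows the usual Shannon-entropy formulation over how a vehicle's tasks are spread across caching/computing nodes (the paper's exact definition is not given here, so this is an illustrative guess), a sketch looks like:

```python
import math

def privacy_entropy(task_counts):
    """Shannon entropy (in bits) of how one vehicle's tasks are distributed
    across computing nodes. A more even spread yields higher entropy, i.e.,
    an observer at any single node learns less about the vehicle."""
    total = sum(task_counts)
    probs = [c / total for c in task_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Under this reading, concentrating all tasks on one node gives zero entropy (worst privacy), while spreading them evenly over 2^k nodes gives k bits, which is why the scheme treats entropy as an objective to maximize alongside delay and energy.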
The global surge in Artificial Intelligence (AI) has been triggered by the impressive performance of deep-learning models based on the Transformer architecture. However, the efficacy of such models is increasingly dependent on the volume and quality of data. Data are often distributed across institutions and companies, making cross-organizational data transfer vulnerable to privacy breaches and subject to privacy laws and trade secret regulations. These privacy and security concerns continue to pose major challenges to collaborative training and inference in multi-source data environments. These challenges are particularly significant for Transformer models, where the complex internal encryption computations drastically reduce computational efficiency, ultimately threatening the model's practical applicability. We hence introduce Secformer, an innovative architecture specifically designed to protect the privacy of Transformer-like models. Secformer separates the encoder and decoder modules, enabling the decomposition of computation flows in Transformer-like models and their efficient mapping to Multi-Party Computation (MPC) protocols. This design effectively addresses privacy leakage issues during the collaborative computation process of Transformer models. To prevent performance degradation caused by encrypted attention modules, we propose a modular design strategy that optimizes high-level components by reconstructing low-level operators. We further analyze the security of Secformer's core components, presenting security definitions and formal proofs. We construct a library of fundamental operators and core modules using atomic-level component designs as the basic building blocks for encoders and decoders. Moreover, these components can serve as foundational operators for other Transformer-like models. Extensive experimental evaluations demonstrate Secformer's excellent performance while preserving privacy and offering universal adaptability for Transformer-like models.
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
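The Prioritized Experience Replay mechanism mentioned above samples transitions in proportion to their temporal-difference (TD) error rather than uniformly, so surprising transitions are replayed more often. A minimal proportional-PER sketch (the class name and hyperparameters are ours, and importance-sampling weight correction is omitted for brevity):

```python
import random

class PrioritizedReplay:
    """Minimal proportional Prioritized Experience Replay buffer:
    transition i is drawn with probability proportional to
    (|TD error_i| + eps) ** alpha."""

    def __init__(self, alpha=0.6, eps=1e-3):
        self.alpha, self.eps = alpha, eps
        self.items, self.priorities = [], []

    def add(self, transition, td_error):
        self.items.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k):
        # random.choices draws with replacement, weighted by priority.
        return random.choices(self.items, weights=self.priorities, k=k)
```

In a full D3QN training loop, priorities would also be refreshed after each update as TD errors shrink; production implementations typically use a sum-tree for O(log n) sampling instead of this O(n) list.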
With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing technology deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services with low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuously time-varying features of edge server load fluctuations, leading to insufficient adaptability of the model in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to be aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in the central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization of high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
Rotational computed laminography (CL) has broad application potential in three-dimensional imaging of plate-like objects because it only requires X-rays to pass through the tested object in the thickness direction during the imaging process. In this study, a rectangular cross-section field-of-view rotational CL (RC-CL) is proposed for circuit board imaging. Compared to other rotational CL systems, its field of view is the largest and the most suitable for rectangular circuit boards. Meanwhile, because the imaging geometry of RC-CL differs significantly from that of cone-beam CT (CBCT), the Feldkamp-Davis-Kress (FDK) reconstruction algorithm cannot be used directly. However, transferring the projection data to fit the CBCT geometry using two-dimensional interpolation introduces interpolation errors. Therefore, an FDK-type analytical reconstruction algorithm applicable to RC-CL was developed. The effectiveness of the method was validated through numerical experiments, and the influence of the tilt angle on the reconstruction results was analyzed. Finally, the RC-CL technique was applied to real defect detection research on circuit boards.
Density functional theory (DFT) has helped propel the advance of electrocatalysis in the past two decades. In view of its massive use, it is worth asking how reliable DFT is for the prediction of adsorption energies, which are paramount in computational electrocatalysis models. Here, we provide an experimental-computational approach to break down overall adsorption-energy errors into separate gas-phase and adsorbed-phase contributions. The method is evaluated using experimental data and various exchange-correlation functionals and materials for C- and O-containing species. Our main conclusion is that no functional is simultaneously accurate for adsorbates and molecules, as adsorbed-phase errors are visibly different from gas-phase errors. Importantly, total, gas-phase, and adsorbed-phase errors are correlated, revealing intrinsic DFT limitations and enabling the elaboration of swift correction routines. To illustrate the benefits of our approach, we deconvolute and correct all errors in CO_(2) electroreduction to CO and find an agreement with experiments close to chemical accuracy for numerous transition-metal electrodes and all scrutinized functionals.
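At heart, the decomposition described above is additive bookkeeping: the overall adsorption-energy error splits into a gas-phase reference error plus the remaining adsorbed-phase error. A toy numeric sketch with made-up energies in eV (the paper's actual scheme involves experimental formation energies, multiple functionals, and per-species references, none of which are reproduced here):

```python
def decompose_error(dft_total, exp_total, dft_gas_ref, exp_gas_ref):
    """Split the overall adsorption-energy error (DFT minus experiment)
    into a gas-phase contribution (error of the functional for the
    gas-phase reference molecule) and the residual adsorbed-phase part."""
    total_error = dft_total - exp_total
    gas_error = dft_gas_ref - exp_gas_ref
    adsorbed_error = total_error - gas_error  # by construction, they sum back
    return total_error, gas_error, adsorbed_error
```

A swift correction routine in this spirit would subtract the (tabulated) gas-phase error from every adsorption energy computed with that functional, leaving only the adsorbed-phase error to be addressed separately.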
The increasing popularity of quantum computing has resulted in a considerable rise in demand for cloud quantum computing usage in recent years. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity. In order to meet the needs of an increasing number of researchers, it is imperative to facilitate efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovative concepts: (1) the integration of virtualization technology into the field of quantum computing to enhance quantum cloud throughput; (2) the introduction of an asynchronous circuit-execution methodology to improve quantum computing flexibility; (3) the development of a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. The concepts have been validated through the utilization of a self-built simulated quantum cloud platform.
In recent years, fog computing has become an important environment for dealing with the Internet of Things (IoT). Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO) combined with operators from the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the optimization problem that combines cost and energy in multi-objective task scheduling. The backpack problem is also added to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
Peridynamics (PD) demonstrates unique advantages in addressing fracture problems; however, its nonlocality and meshfree discretization result in high computational and storage costs. Moreover, in its engineering applications, the computational scale of classical GPU parallel schemes is often limited by the finite graphics memory of GPU devices. In the present study, we develop an efficient particle information management strategy based on the cell-linked list method and, on this basis, propose a subdomain-based GPU parallel scheme, which exhibits outstanding acceleration performance in specific compute kernels while significantly reducing graphics memory usage. Compared to the classical parallel scheme, the cell-linked list method facilitates efficient management of particle information within subdomains, enabling the proposed parallel scheme to effectively reduce graphics memory usage by optimizing the size and number of subdomains while significantly improving the speed of neighbor search. As demonstrated in PD examples, the proposed parallel scheme enhances the neighbor search efficiency dramatically and achieves a significant speedup relative to serial programs. For instance, without considering the time of data transmission, the proposed scheme achieves a remarkable speedup of nearly 1076.8× in one test case, owing to its excellent computational efficiency in the neighbor search. Additionally, for 2D and 3D PD models with tens of millions of particles, the graphics memory usage can be reduced by up to 83.6% and 85.9%, respectively. Therefore, this subdomain-based GPU parallel scheme effectively avoids graphics memory shortages while significantly improving computational efficiency, providing new insights into studying more complex large-scale problems.
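The cell-linked list idea above can be illustrated in 2D: hashing particles into square cells of side at least the horizon confines every neighbor search to the 3×3 block of cells around a particle, instead of all particles. A minimal sketch (function names and the pure-Python layout are ours; the paper's version runs per-subdomain on the GPU):

```python
from math import floor

def build_cell_list(positions, cell_size):
    """Hash each 2D particle into a square cell of side cell_size, chosen
    to be >= the horizon so all neighbors lie in adjacent cells."""
    cells = {}
    for idx, (x, y) in enumerate(positions):
        key = (floor(x / cell_size), floor(y / cell_size))
        cells.setdefault(key, []).append(idx)
    return cells

def neighbors_within(positions, cells, cell_size, horizon, i):
    """Neighbors of particle i within the horizon, scanning only the
    3x3 block of cells around particle i's own cell."""
    xi, yi = positions[i]
    ci, cj = floor(xi / cell_size), floor(yi / cell_size)
    found = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for j in cells.get((ci + di, cj + dj), []):
                if j != i and (xi - positions[j][0]) ** 2 \
                        + (yi - positions[j][1]) ** 2 <= horizon ** 2:
                    found.append(j)
    return sorted(found)
```

Because each query touches at most nine cells, the cost per particle depends on local density rather than total particle count, which is the source of the neighbor-search speedups reported above.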
Abstract: Noncohesive particle clusters are identified and tracked in turbulent flows to determine the breakdown and time evolution of cluster statistics and their implications for interscale mass transfer, which has connections to the classical turbulent energy cascade and its mass cascade counterpart running in parallel. In particular, the formation and dynamics of sediment and larvae clusters are of interest to coral larvae settlement in coastal regions and particularly the resilience of green-gray coastal protection solutions. Analogous cluster behavior is relevant to cloud microphysics and precipitation initiation, radiation transport and light transmission through colloids and suspensions, heat and mass transfer in particle-laden flows, and viral and pollutant transmission. Following a comparison between various clustering techniques, we adopt a density-based cluster identification algorithm for its simplicity and efficiency, where particles are clustered based on the number of neighboring particles in their individual spheres of influence. We establish parallels with lattice-based percolation theory, as evident in the power-law scaling of the cluster size distribution near the percolation threshold. The degree of discontinuity of the phase transition associated with this percolation threshold is observed to broaden with larger Stokes numbers and thereby large-scale clustering. The sensitivity of our findings to the employed clustering algorithm is discussed. A novel cluster tracking algorithm is deployed to determine the interscale transfer rate along the particle-number phase-space dimension via accounting of cluster breakup and merger events, extending previous work on the bubble breakup cascade beneath surface breaking waves. Our findings shed light on the interaction between particle clusters and their carrier turbulent flows, with an eye toward transport models incorporating cluster characteristics and dynamics.
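The density-based identification described above can be sketched as a DBSCAN-style procedure: a particle with at least a threshold number of neighbors inside its sphere of influence seeds a cluster, which then grows through those neighbors. The radius and neighbor threshold below are illustrative assumptions, not the values used in the study.

```python
from collections import deque

def cluster_particles(points, radius, min_neighbors):
    """Group particles into clusters: a particle seeds or extends a cluster
    when at least `min_neighbors` others lie inside its sphere of influence."""
    n = len(points)
    r2 = radius * radius
    # Brute-force neighbor lists (O(n^2); a cell list would speed this up).
    neighbors = [[j for j in range(n) if j != i
                  and sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= r2]
                 for i in range(n)]
    labels = [-1] * n  # -1 = unclustered
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_neighbors:
            continue
        # Breadth-first expansion of a new cluster from a dense particle.
        labels[i] = cid
        queue = deque(neighbors[i])
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cid
                if len(neighbors[j]) >= min_neighbors:
                    queue.extend(neighbors[j])
        cid += 1
    return labels
```

Particles that never meet the density criterion keep the label -1, i.e., they remain unclustered, which is what makes the cluster size distribution (and its percolation-like scaling) well defined.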
Abstract: Two Co(Ⅱ) and Ni(Ⅱ) complexes were synthesized by synergistic coordination of 3,3-diphenylpropionic acid (HDPA) and 2,2′-bipyridylamine (PAm). The structures of complexes [Co(DPA)_(2)(PAm)]·2H_(2)O (1) and [Ni(DPA)_(2)(PAm)]·2H_(2)O (2) were determined by single-crystal X-ray diffraction, IR spectroscopy, and powder X-ray diffraction. Hirshfeld surface analysis provided quantitative insights into the intermolecular interactions within the complexes, while molecular docking studies elucidated their binding modes and affinities toward urease. Furthermore, the biological activities of both complexes were systematically evaluated through a range of assays, including DNA binding, urease inhibition, antibacterial activity, and in vitro cytotoxicity against cancer cells. Both complexes exhibited binding affinity for DNA and displayed notable urease inhibitory activity. Under in vitro conditions, both complexes showed appreciable cytotoxicity toward HepG2 cells, with efficacy comparable to clinically used platinum-based anticancer agents. CCDC: 2479943, 1; 2479944, 2.
Funding: Supported by the National Natural Science Foundation of China (22127802, 22573091) and the HY Action (62402010305).
Abstract: Biomass-based hydrocarbon fuels, as one of the alternatives to traditional fossil fuels, have attracted considerable attention in the energy field due to their renewability and environmental benefits. This article provides a systematic review of recent research progress in the chemical synthesis of biomass-based hydrocarbon fuels. It outlines the conversion pathways using feedstocks such as lipids, terpenoids, cellulose/hemicellulose, and lignin. Depending on the feedstock, various products with distinct structural characteristics can be prepared through reactions such as cyclization, condensation, and catalytic hydrogenation. Throughout the synthesis process, three key factors play a critical role: efficient catalyst development, production process optimization, and computational-chemistry-based molecular design. Finally, the article discusses future perspectives for biomass-based hydrocarbon fuel synthesis research.
Abstract: The capture of atmospheric carbon dioxide by adsorbents is an important strategy to deal with the greenhouse effect. Compared with traditional CO_(2) adsorption materials like activated carbon, silica gel, and zeolite molecular sieves, covalent organic frameworks (COFs) have excellent thermal and chemical stabilities and can be produced in many different forms. Using their different possible construction units, ordered structures for specific applications can be produced, giving them broad prospects in fields such as gas storage. This review analyzes the different types of COFs that have been synthesized and their different methods of CO_(2) capture. It then discusses different ways to increase CO_(2) adsorption by changing the internal structure of COFs and modifying their surfaces. The limitations of COF-derived carbon materials in CO_(2) capture are reviewed and, finally, the key role of machine learning and computational simulation in improving CO_(2) adsorption is mentioned, and the current status and future possible uses of COFs are summarized.
Funding: Co-funded by the European Union under the REFRESH-Research Excellence For REgion Sustainability and High-tech Industries project, number CZ.10.03.01/00/22_003/0000048, via the Operational Programme Just Transition; supported by the Ministry of Education, Youth and Sports of the Czech Republic, conducted by VSB-Technical University of Ostrava, Czechia, under Grants SP2025/021 and SP2025/039.
Abstract: Practical applications of smart cities and the Internet of Things (IoT) have multiplied, posing many difficulties in network performance, dependability, and security. Concerns of accessibility, reliability, sustainability, and security have arisen correspondingly because of the decentralized character of smart city and IoT systems. Fog computing offers a foundation for various applications, including cognitive support, health and social services, intelligent transportation systems, and pervasive computing and communications. Fog computing can help enhance these applications' productivity and lower the end-to-end delay experienced by time-sensitive applications. In this research, we propose a reliable and secure service delivery strategy at the network edge for smart cities. To improve the availability and dependability, along with the security, of smart city applications, the approach employs a combined method uniting distributed fog servers and mist servers with the help of an intrusion detection system. Simulation findings suggest a reduction of 40.3% in the delay incurred by each service request for highly dense areas and 60.6% for moderately dense environments. Furthermore, the system has low false-negative rates and high detection and accuracy rates, while decreasing service requests by 2%.
Funding: Supported by the National Key Research and Development Program of China under Grant 2022YFB3608300, and in part by the National Natural Science Foundation of China (NSFC) under Grants 62404050, U2341218, 62574056, and 62204052.
Abstract: Organic electrochemical transistor (OECT) devices demonstrate great potential for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with tunable relaxation time (τ), realized by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behaviors can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy analysis. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in the RC system for intelligent driving trajectory prediction, achieving a significant improvement in prediction accuracy from below 69% to 99%. The results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
Funding: Supported by the National Natural Science Foundation of China (Grants No. 92364204 and 62204219), the open research fund of Suzhou Laboratory (Grant No. SZLAB-1208-2024-TS012), the Major Program of the Natural Science Foundation of Zhejiang Province (Grant No. LDT23F0401), and Zhejiang Province Introduces and Cultivates Leading Innovation and Entrepreneurship Teams (Grant No. 2023R01011).
Abstract: The deceleration of Moore's law and the energy-latency drawbacks of the von Neumann bottleneck have heightened the pursuit of beyond-CMOS designs that integrate memory and compute. Self-rectifying memristors (SRMs) have emerged as promising building blocks for high-performance, low-power systems by combining resistive switching with intrinsic diode-like behavior. Their unidirectional conduction inhibits sneak-path currents in crossbar arrays devoid of external selectors, while nonlinear I-V characteristics, adjustable conductance states, low operating voltages, and rapid switching facilitate efficient vector-matrix operations, neuromorphic plasticity, and hardware security primitives. This review synthesizes the working mechanisms of SRMs, surveys material and structural strategies, and compares device metrics relevant to array-scale deployment (rectification ratio, nonlinearity, endurance, retention, variability, and operating voltage). We assess SRM-enabled in-memory computing and neuromorphic applications, as well as security functions such as physical unclonable functions and reconfigurable cryptographic primitives. Integration pathways toward CMOS compatibility are analyzed, including back-end-of-line thermal budgets, uniformity, write-disturb mitigation, and reliability. Finally, we outline key challenges and opportunities: materials/architecture co-design, precision analog training, stochasticity control/exploitation, 3D stacking, and standardized benchmarking that can accelerate large-scale SRM adoption. Through the use of specialized materials and structural optimization, SRMs are set to provide selector-free, densely integrated, and energy-efficient hardware for future information processing.
Funding: Supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number (PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, a task processed at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy of optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solution and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimal solution. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah. This increases the exploitation capability of agents, i.e., the local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
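A Pareto-optimal front, as used above to report trade-off solutions, is simply the non-dominated subset of candidate objective vectors. A minimal sketch for minimization objectives such as (energy, delay) pairs; this is a generic illustration, not the paper's MATLAB implementation:

```python
def pareto_front(solutions):
    """Return the non-dominated subset for minimization objectives.
    A solution is dominated if some other solution is no worse in every
    objective and strictly better in at least one."""
    front = []
    for i, s in enumerate(solutions):
        dominated = any(
            all(o <= v for o, v in zip(other, s)) and
            any(o < v for o, v in zip(other, s))
            for j, other in enumerate(solutions) if j != i)
        if not dominated:
            front.append(s)
    return front
```

Keeping the whole front, rather than a single scalarized optimum, is what lets the operator pick the preferred energy-delay compromise after the fact.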
Abstract: The advent of quantum computing poses a significant challenge to traditional cryptographic protocols, particularly those used in Secure Multiparty Computation (MPC), a fundamental cryptographic primitive for privacy-preserving computation. Classical MPC relies on cryptographic techniques such as homomorphic encryption, secret sharing, and oblivious transfer, which may become vulnerable in the post-quantum era due to the computational power of quantum adversaries. This study presents a review of 140 peer-reviewed articles published between 2000 and 2025, drawn from databases including MDPI, IEEE Xplore, Springer, and Elsevier, examining the applications, types, and security issues of quantum computing, together with proposed solutions, across different fields. This review explores the impact of quantum computing on MPC security, assesses emerging quantum-resistant MPC protocols, and examines hybrid classical-quantum approaches aimed at mitigating quantum threats. We analyze the role of Quantum Key Distribution (QKD), post-quantum cryptography (PQC), and quantum homomorphic encryption in securing multiparty computations. Additionally, we discuss the challenges of scalability, computational efficiency, and practical deployment of quantum-secure MPC frameworks in real-world applications such as privacy-preserving AI, secure blockchain transactions, and confidential data analysis. This review provides insights into future research directions and open challenges in ensuring secure, scalable, and quantum-resistant multiparty computation.
Abstract: The integration of large-scale foundation models (e.g., the GPT series and AlphaFold) into oncology is fundamentally transforming both research methodologies and clinical practices, driven by unprecedented advancements in computational power. This review synthesizes recent progress in the application of large language models to core oncological tasks, including medical imaging analysis, genomic interpretation, and personalized treatment planning. Underpinned by advanced computational infrastructures, such as graphics processing unit/tensor processing unit clusters, heterogeneous computing, and cloud platforms, these models enable superior representation learning and generalization across multimodal data sources. This review examines how these infrastructures overcome key bottlenecks in intelligent oncology through scalable optimization strategies, including mixed-precision training, memory optimization, and heterogeneous computing. Alongside these technical advancements, the review explores pressing challenges, such as data heterogeneity, limited model interpretability, regulatory uncertainties, and the environmental impact of artificial intelligence (AI) systems. Special emphasis is placed on emerging solutions, encompassing green AI and edge computing, which offer promising approaches for low-resource deployment scenarios. Additionally, the review highlights the critical role of interdisciplinary collaboration among oncology, computer science, ethics, and policy to ensure that AI systems are not only powerful but also transparent, safe, and clinically relevant. Finally, the review outlines potential avenues for future research aimed at developing robust, scalable, and human-centered frameworks for intelligent oncology.
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
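The multi-objective setup above gives each agent its own reward function and re-weights objectives over time. The scalarization step alone can be sketched as below; the static weight vector stands in for the dynamically learned (RBFN-derived) weights, which in the paper are an output of training:

```python
def scalarize(rewards, weights):
    """Collapse a per-objective reward vector (e.g., negated delay, negated
    energy, load balance, privacy entropy) into one scalar for a DQN-style
    update. Weights are renormalized to sum to 1, mirroring the idea of
    re-weighting objectives as their relative value changes."""
    total = sum(weights)
    w = [x / total for x in weights]
    return sum(wi * ri for wi, ri in zip(w, rewards))
```

With equal weights, for example, a reward vector of (-1.0 delay cost, -0.5 energy cost) collapses to -0.75; shifting weight toward energy would pull the scalar toward -0.5.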
Funding: Supported by the National Natural Science Foundation of China under Grant 62471205, in part by the Yunnan Fundamental Research Projects under Grant 202301AV070003, and in part by the Major Science and Technology Projects in Yunnan Province under Grant 202302AG050009.
Abstract: The global surge in Artificial Intelligence (AI) has been triggered by the impressive performance of deep-learning models based on the Transformer architecture. However, the efficacy of such models is increasingly dependent on the volume and quality of data. Data are often distributed across institutions and companies, making cross-organizational data transfer vulnerable to privacy breaches and subject to privacy laws and trade-secret regulations. These privacy and security concerns continue to pose major challenges to collaborative training and inference in multi-source data environments. These challenges are particularly significant for Transformer models, where complex internal encryption computations drastically reduce computational efficiency, ultimately threatening the model's practical applicability. We hence introduce Secformer, an innovative architecture specifically designed to protect the privacy of Transformer-like models. Secformer separates the encoder and decoder modules, enabling the decomposition of computation flows in Transformer-like models and their efficient mapping to Multi-Party Computation (MPC) protocols. This design effectively addresses privacy leakage during the collaborative computation process of Transformer models. To prevent performance degradation caused by encrypted attention modules, we propose a modular design strategy that optimizes high-level components by reconstructing low-level operators. We further analyze the security of Secformer's core components, presenting security definitions and formal proofs. We construct a library of fundamental operators and core modules using atomic-level component designs as the basic building blocks for encoders and decoders. Moreover, these components can serve as foundational operators for other Transformer-like models. Extensive experimental evaluations demonstrate Secformer's excellent performance while preserving privacy and offering universal adaptability for Transformer-like models.
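Secformer maps Transformer computation flows onto MPC protocols. The core MPC idea can be illustrated with additive secret sharing over a prime field: each party holds one share, no single share reveals anything, and linear operations are performed share-by-share without reconstruction. This is a generic textbook sketch, not Secformer's actual protocol:

```python
import random

P = 2**61 - 1  # Mersenne prime modulus for the illustrative field

def share(secret, n_parties=3):
    """Split `secret` into additive shares: each is individually uniform,
    and they sum to the secret mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % P

def add_shared(a_shares, b_shares):
    """Parties add two shared secrets locally, share by share,
    without ever seeing either plaintext value."""
    return [(a + b) % P for a, b in zip(a_shares, b_shares)]
```

Multiplication of shared values, by contrast, requires interaction between parties, which is why nonlinear modules such as attention are the expensive part that Secformer's operator reconstruction targets.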
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
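Prioritized Experience Replay, mentioned above, samples transitions in proportion to a power of their (TD-error-derived) priority and corrects the induced bias with importance-sampling weights. A minimal sketch; the alpha value and the beta=1 correction are illustrative defaults, not this paper's settings:

```python
import random

def sample_prioritized(priorities, batch_size, alpha=0.6):
    """Sample transition indices with probability proportional to
    priority**alpha, and return importance weights (beta = 1 for
    simplicity), normalized by their maximum as in the PER scheme."""
    scaled = [p ** alpha for p in priorities]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    idx = random.choices(range(len(priorities)), weights=probs, k=batch_size)
    n = len(priorities)
    weights = [1.0 / (n * probs[i]) for i in idx]
    m = max(weights)
    return idx, [w / m for w in weights]
```

High-priority (surprising) transitions are replayed far more often, while their down-scaled weights keep the Q-learning update unbiased on average.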
Funding: Funded by the State Grid Corporation Science and Technology Project “Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid” (Grant No. 5700-202318596A-3-2-ZN).
Abstract: With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing technology deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services with low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuously time-varying features of edge server load fluctuations, leading to insufficient adaptability of the model in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to be aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in the central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization of high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages compared with existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
Funding: Supported by the National Key Research and Development Program of China (No. 2022YFF0607802).
Abstract: Rotational computed laminography (CL) has broad application potential in three-dimensional imaging of plate-like objects because it only requires X-rays to pass through the tested object in the thickness direction during the imaging process. In this study, a rectangular cross-section field-of-view rotational CL (RC-CL) is proposed for circuit board imaging. Compared to other rotational CL systems, its field of view is the largest and most suitable for rectangular circuit boards. Meanwhile, as the imaging geometry of RC-CL is significantly different from that of cone-beam CT (CBCT), the Feldkamp-Davis-Kress (FDK) reconstruction algorithm cannot be used directly. However, transferring the projection data to fit the CBCT geometry using two-dimensional interpolation introduces interpolation errors. Therefore, an FDK-type analytical reconstruction algorithm applicable to RC-CL was developed. The effectiveness of the method was validated through numerical experiments, and the influence of the tilt angle on the reconstruction results was analyzed. Finally, the RC-CL technique was applied to real defect detection research on circuit boards.
Funding: Financial support from MICIU/AEI/10.13039/501100011033 and by the European Union, and grant MOE-T2EP10222-0007 from the Ministry of Education, Singapore.
Abstract: Density functional theory (DFT) has helped propel the advance of electrocatalysis in the past two decades. In view of its massive use, it is worth asking how reliable DFT is for the prediction of adsorption energies, which are paramount in computational electrocatalysis models. Here, we provide an experimental-computational approach to break down overall adsorption-energy errors into separate gas-phase and adsorbed-phase contributions. The method is evaluated using experimental data and various exchange-correlation functionals and materials for C- and O-containing species. Our main conclusion is that no functional is simultaneously accurate for adsorbates and molecules, as adsorbed-phase errors are visibly different from gas-phase errors. Importantly, total, gas-phase, and adsorbed-phase errors are correlated, revealing intrinsic DFT limitations and enabling the elaboration of swift correction routines. To illustrate the benefits of our approach, we deconvolute and correct all errors in CO_(2) electroreduction to CO and find an agreement with experiments close to chemical accuracy for numerous transition-metal electrodes and all scrutinized functionals.
Abstract: The increasing popularity of quantum computing has resulted in a considerable rise in demand for cloud quantum computing usage in recent years. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity. In order to meet the needs of an increasing number of researchers, it is imperative to facilitate efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovative concepts: (1) the integration of virtualization technology into the field of quantum computing to enhance quantum cloud throughput; (2) the introduction of an asynchronous circuit-execution methodology to improve quantum computing flexibility; (3) the development of a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. The concepts have been validated through the utilization of a self-built simulated quantum cloud platform.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified Artificial Ecosystem Optimization (AEO) combined with Salp Swarm Algorithm (SSA) operators, so that the two competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), thus attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. The knapsack problem formulation is also incorporated to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
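A multi-objective task-scheduling fitness of the kind such a metaheuristic searches over can be sketched as follows. The linear time/energy/cost model and all parameter names here are illustrative assumptions, not the paper's formulation:

```python
def schedule_fitness(assignment, task_load, node_power, node_cost_rate, node_speed):
    """Evaluate one candidate schedule (task -> node mapping):
    makespan-style completion time per node, total energy, and total
    monetary cost, all under a simple linear model."""
    time_per_node = {}
    energy = cost = 0.0
    for task, node in enumerate(assignment):
        t = task_load[task] / node_speed[node]          # execution time
        time_per_node[node] = time_per_node.get(node, 0.0) + t
        energy += node_power[node] * t                  # energy = power x time
        cost += node_cost_rate[node] * t                # cost = rate x time
    makespan = max(time_per_node.values())
    return makespan, energy, cost
```

An optimizer such as EAEOSSA would mutate the `assignment` vector and keep candidates that trade off these three values, e.g., via a Pareto comparison or a weighted sum.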
Funding: Jun Li was supported by the National Natural Science Foundation of China (No. U2441215); Lisheng Liu and Xin Lai were supported by the National Natural Science Foundation of China (No. 52494933).
Abstract: Peridynamics (PD) demonstrates unique advantages in addressing fracture problems; however, its nonlocality and meshfree discretization result in high computational and storage costs. Moreover, in its engineering applications, the computational scale of classical GPU parallel schemes is often limited by the finite graphics memory of GPU devices. In the present study, we develop an efficient particle information management strategy based on the cell-linked list method and, on this basis, propose a subdomain-based GPU parallel scheme, which exhibits outstanding acceleration performance in specific compute kernels while significantly reducing graphics memory usage. Compared to the classical parallel scheme, the cell-linked list method facilitates efficient management of particle information within subdomains, enabling the proposed parallel scheme to effectively reduce graphics memory usage by optimizing the size and number of subdomains while significantly improving the speed of neighbor search. As demonstrated in PD examples, the proposed parallel scheme enhances neighbor search efficiency dramatically and achieves a significant speedup relative to serial programs. For instance, without considering the time of data transmission, the proposed scheme achieves a remarkable speedup of nearly 1076.8× in one test case, owing to its excellent computational efficiency in the neighbor search. Additionally, for 2D and 3D PD models with tens of millions of particles, the graphics memory usage can be reduced by up to 83.6% and 85.9%, respectively. Therefore, this subdomain-based GPU parallel scheme effectively avoids graphics memory shortages while significantly improving computational efficiency, providing new insights into studying more complex large-scale problems.
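The cell-linked list idea underlying the scheme above is that, if the grid cell edge is at least the PD horizon, every neighbor of a particle lies in its own or an adjacent cell, so the neighbor search scans 27 cells (in 3D) instead of all particles. A serial CPU sketch of that principle, not the paper's GPU implementation:

```python
from collections import defaultdict
from itertools import product

def build_cells(points, cell_size):
    """Hash each particle into a grid cell of edge length >= the horizon."""
    cells = defaultdict(list)
    for i, p in enumerate(points):
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(i)
    return cells

def neighbors(points, i, horizon, cells, cell_size):
    """Find neighbors of particle i within `horizon`, scanning only the
    particle's own cell and its adjacent cells."""
    key = tuple(int(c // cell_size) for c in points[i])
    h2 = horizon * horizon
    out = []
    for off in product((-1, 0, 1), repeat=len(key)):
        cell = tuple(k + o for k, o in zip(key, off))
        for j in cells.get(cell, ()):
            if j != i and sum((a - b) ** 2
                              for a, b in zip(points[i], points[j])) <= h2:
                out.append(j)
    return out
```

For roughly uniform particle densities this turns the O(n²) all-pairs search into O(n), which is the source of the neighbor-search speedups the abstract reports; the subdomain decomposition then bounds how many cells (and particles) must be resident in GPU memory at once.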
Funding: Supported by the NSFC (12474071), the Natural Science Foundation of Shandong Province (ZR2024YQ051, ZR2025QB50), the Guangdong Basic and Applied Basic Research Foundation (2025A1515011191), the Shanghai Sailing Program (23YF1402200, 23YF1402400), the Basic Research Program of Jiangsu (BK20240424), the Open Research Fund of the State Key Laboratory of Crystal Materials (KF2406), the Taishan Scholar Foundation of Shandong Province (tsqn202408006, tsqn202507058), the Young Talent of Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002), and the Qilu Young Scholar Program of Shandong University.
Abstract: As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can be solid solutions or organized structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic computing electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of the research progress regarding MXenes in the neuromorphic sensing domain and discusses the critical challenges that need to be resolved. Key bottlenecks such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration hinder their practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and promote further integration into neuromorphic computing applications through convergence with various emerging technologies.