Funding: Supported by the National Science and Technology Major Project (2021YFF1201200), the National Natural Science Foundation of China (62372316), and the Sichuan Science and Technology Program key project (2024YFHZ0091).
Abstract: As a common foodborne pathogen, Salmonella poses risks to public health, particularly given the emergence of antimicrobial-resistant strains. However, there is currently a lack of systematic platforms based on large language models (LLMs) for Salmonella resistance prediction, data presentation, and data sharing. To overcome this issue, we first propose a two-step feature-selection process based on the chi-square test and conditional mutual information maximization to identify key Salmonella resistance genes in a pan-genomics analysis, and we develop an LLM-based Salmonella antimicrobial-resistance predictive (SARPLLM) algorithm, built on the Qwen2 LLM with low-rank adaptation, to achieve accurate antimicrobial-resistance prediction. Second, we reduce the time complexity of computing sample distances from linear to logarithmic by constructing a quantum data augmentation algorithm denoted QSMOTEN. Third, we build a user-friendly online platform for Salmonella antimicrobial-resistance prediction based on knowledge graphs, which not only facilitates online resistance prediction for users but also visualizes the pan-genomics analysis results of the Salmonella datasets.
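To make the two-step selection concrete, here is a minimal sketch of a chi-square filter followed by greedy conditional mutual information maximization (CMIM) over binary gene presence/absence features; the function names, thresholds, and toy data are ours, not the paper's.

```python
# Sketch: chi-square filtering, then greedy CMIM over the survivors.
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.metrics import mutual_info_score

def cond_mi(x, y, z):
    # I(x; y | z) for discrete arrays, averaging I(x; y) over the strata of z.
    return sum((z == v).mean() * mutual_info_score(x[z == v], y[z == v])
               for v in np.unique(z))

def chi2_cmim(X, y, n_filter=200, n_final=20):
    # Step 1: keep the n_filter features with the smallest chi-square p-values.
    _, pvals = chi2(X, y)
    kept = np.argsort(pvals)[:n_filter]
    # Step 2: greedy CMIM -- pick the feature whose worst-case conditional
    # relevance given any already-selected feature is largest.
    selected = []
    for _ in range(n_final):
        def score(j):
            if not selected:
                return mutual_info_score(X[:, j], y)
            return min(cond_mi(X[:, j], y, X[:, s]) for s in selected)
        candidates = [j for j in kept if j not in selected]
        selected.append(max(candidates, key=score))
    return selected

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 500))   # toy presence/absence matrix
y = rng.integers(0, 2, size=300)
print(chi2_cmim(X, y, n_filter=50, n_final=5))
```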
Abstract: The rapid advancement of deep learning and the emergence of large-scale neural models, such as bidirectional encoder representations from transformers (BERT), generative pre-trained transformer (GPT), and large language model Meta AI (LLaMa), have brought significant computational and energy challenges. Neuromorphic computing presents a biologically inspired approach to addressing these issues, leveraging event-driven processing and in-memory computation for enhanced energy efficiency. This survey explores the intersection of neuromorphic computing and large-scale deep learning models, focusing on neuromorphic models, learning methods, and hardware. We highlight transferable techniques from deep learning to neuromorphic computing and examine the memory-related scalability limitations of current neuromorphic systems. Furthermore, we identify potential directions to enable neuromorphic systems to meet the growing demands of modern AI workloads.
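To make the event-driven idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron of our own; it illustrates event-driven processing in general and is not a model taken from the survey.

```python
# A minimal LIF neuron: downstream computation happens only on the sparse
# spike events, not on every dense activation (our illustration).
import numpy as np

def lif_run(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v = leak * v + i_t          # leaky membrane integration
        if v >= v_thresh:           # event: emit a spike, reset the membrane
            spikes.append(t)
            v = v_reset
    return spikes

rng = np.random.default_rng(0)
print(lif_run(rng.uniform(0.0, 0.4, size=50)))   # spike times only
```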
Abstract: The rapid advance of artificial intelligence and big data has transformed the dynamic demands placed on computing resources for executing tasks in the cloud. Achieving autonomic resource management is a formidable task owing to the cloud's large, distributed, and heterogeneous environment. Moreover, the cloud network must provide autonomic resource management and deliver services to clients that comply with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed to handle the dynamic requirements of cloud resources by estimating the workload that the cloud environment must police. CBBM-WARMS first adopts an adaptive density peak clustering algorithm to cluster cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources, and it uses the CBBM for Virtual Machine (VM) deployment that provisions optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, outperforming the compared autonomic cloud resource management frameworks.
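As a rough illustration of the workload-clustering stage, the sketch below computes the two quantities at the heart of density peak clustering, local density rho and separation delta; the adaptive cutoff selection of the paper's variant is omitted, and all parameters are ours.

```python
# Bare-bones density peak clustering step (rho and delta only).
import numpy as np

def density_peaks(X, d_c):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < d_c).sum(axis=1) - 1          # neighbors within cutoff d_c
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]   # nearest point of higher density
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho, delta

X = np.random.default_rng(1).normal(size=(200, 2))   # toy workload features
rho, delta = density_peaks(X, d_c=0.5)
centers = np.argsort(rho * delta)[-3:]   # centers: high rho AND high delta
print(centers)
```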
Funding: Supported by the ONR Vannevar Bush Faculty Fellowship (Grant No. N00014-22-1-2795).
Abstract: Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs, DeepSeek, ChatGPT, and Claude, along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
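To illustrate the kind of nontrivial decision mentioned above, the sketch below samples input functions for operator learning from a Gaussian random field with an RBF covariance; the grid size, length scale, and jitter are our placeholder choices.

```python
# Sampling operator-learning input functions from a Gaussian random field.
import numpy as np

def sample_grf(n_funcs, n_grid=128, length_scale=0.1, seed=0):
    x = np.linspace(0.0, 1.0, n_grid)
    cov = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / length_scale**2)
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(n_grid))  # jitter for stability
    z = np.random.default_rng(seed).normal(size=(n_grid, n_funcs))
    return x, (L @ z).T    # each row: one random input function on the grid

x, funcs = sample_grf(n_funcs=100)   # e.g., forcing terms for a 1D PDE
print(funcs.shape)
```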
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Abstract: Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL work exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and they still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
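The conditional-computing split can be pictured as a shared encoder with two heads, as in the sketch below; the layer sizes and the contrastive term are our assumptions for illustration, not FedCLCC's exact architecture or loss.

```python
# Schematic two-head split: shared encoder, global head (aggregated by the
# server) and local head (kept on the client). Sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.global_head = nn.Linear(feat_dim, n_classes)  # shared globally
        self.local_head = nn.Linear(feat_dim, n_classes)   # personalized

    def forward(self, x):
        z = self.encoder(x)
        return z, self.global_head(z), self.local_head(z)

def contrastive_term(z_local, z_ref, tau=0.5):
    # Pull local features toward reference (e.g., global-model) features.
    sim = F.cosine_similarity(z_local, z_ref, dim=-1)
    return -torch.log(torch.sigmoid(sim / tau)).mean()

x = torch.randn(8, 32)
z, g_logits, l_logits = TwoHeadNet()(x)
loss = (F.cross_entropy(g_logits, torch.randint(0, 10, (8,)))
        + contrastive_term(z, z.detach()))
```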
Funding: Supported by the Major Research Instrument Development Project of the National Natural Science Foundation of China (82327810), the Foundation of the President of Hebei University (XZJJ202202), and the Hebei Province "333 Talent Project" (A202101058).
Abstract: Within the prefrontal-cingulate cortex, abnormalities in coupling between neuronal networks can disturb emotion-cognition interactions, contributing to the development of mental disorders such as depression. Despite this understanding, the neural circuit mechanisms underlying this phenomenon remain elusive. In this study, we present a biophysical computational model encompassing three crucial regions: the dorsolateral prefrontal cortex, the subgenual anterior cingulate cortex, and the ventromedial prefrontal cortex. The objective is to investigate the role of coupling relationships within prefrontal-cingulate cortex networks in balancing emotional and cognitive processes. The numerical results confirm that coupling weights play a crucial role in the balance of emotional-cognitive networks. Furthermore, our model predicts the pathogenic mechanism of depression resulting from abnormalities in the subgenual cortex, and network functionality was restored through intervention in the dorsolateral prefrontal cortex. This study uses computational modeling techniques to provide insight for the diagnosis and treatment of depression.
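As a toy picture of how coupling weights shape such a network, the sketch below integrates three mutually coupled firing-rate nodes standing in for the dlPFC, sgACC, and vmPFC; the equations and weights are our illustration, not the paper's biophysical model.

```python
# Three coupled firing-rate nodes; W entries play the role of the coupling
# weights whose balance the study investigates (all values illustrative).
import numpy as np

def simulate(W, T=2000, dt=0.001, tau=0.02, I_ext=(0.5, 0.3, 0.4)):
    r = np.zeros(3)                  # rates: dlPFC, sgACC, vmPFC
    trace = np.empty((T, 3))
    for t in range(T):
        drive = W @ r + np.asarray(I_ext)
        r += dt / tau * (-r + np.tanh(drive))   # leaky rate dynamics
        trace[t] = r
    return trace

W = np.array([[0.0, -0.6,  0.4],     # sign encodes excitation vs. inhibition
              [0.5,  0.0, -0.3],
              [0.3, -0.5,  0.0]])
print(simulate(W)[-1])   # steady-state rates under this coupling
```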
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52306126, 22350710788, 12432010, 11988102, and 92270203) and the Xplore Prize.
Abstract: Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. We fine-tune Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enabling direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
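To picture the data format, here is what one NL2FOAM-style record and its fine-tuning prompt might look like; the field names and example content are our guesses for illustration, and the actual schema is defined in the authors' repository.

```python
# Hypothetical NL2FOAM-style record assembled into a supervised prompt.
import json

record = {
    "instruction": "Simulate incompressible laminar flow in a 2D lid-driven "
                   "cavity at Re=100 using OpenFOAM.",
    "cot": "Steady incompressible flow -> simpleFoam; Re=100 with unit lid "
           "velocity and unit cavity size -> nu = 0.01; no-slip walls.",
    "output": {
        "constant/transportProperties": "nu [0 2 -1 0 0 0 0] 0.01;",
        "system/controlDict": "application simpleFoam;\nendTime 1000;",
    },
}

def to_prompt(rec):
    return (f"### Task\n{rec['instruction']}\n"
            f"### Reasoning\n{rec['cot']}\n"
            f"### Configuration\n{json.dumps(rec['output'], indent=2)}")

print(to_prompt(record))
```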
Funding: Supported in part by NSFC under Grant 62422407, in part by RGC under Grant 26204424, and in part by ACCESS (AI Chip Center for Emerging Smart Systems), sponsored by the InnoHK initiative of the Innovation and Technology Commission of the Hong Kong Special Administrative Region Government.
Abstract: Robotic computing systems play an important role in enabling intelligent robotic tasks through intelligent algorithms and supporting hardware. In recent years, the evolution of robotic algorithms has traced a roadmap from traditional robotics to hierarchical and end-to-end models. This algorithmic advancement poses a critical challenge in achieving balanced system-wide performance. Algorithm-hardware co-design has therefore emerged as the primary methodology: it analyzes algorithm behaviors on hardware to identify common computational properties, which in turn motivate algorithm optimizations that reduce computational complexity, and hardware innovations from architecture to circuit for high performance and high energy efficiency. We then review recent work on robotic and embodied AI algorithms and computing hardware to demonstrate this algorithm-hardware co-design methodology. Finally, we discuss future research opportunities by answering two questions: (1) how to adapt computing platforms to the rapid evolution of embodied AI algorithms, and (2) how to transform the potential of emerging hardware innovations into end-to-end inference improvements.
Abstract: BACKGROUND: The computed tomography (CT)-based preoperative risk score was developed to predict recurrence after upfront surgery in patients with resectable pancreatic ductal adenocarcinoma (PDAC) in South Korea. However, whether it performs well in other countries remains unknown. AIM: To externally validate the CT-based preoperative risk score for PDAC in a country outside South Korea. METHODS: Consecutive patients with PDAC who underwent upfront surgery from January 2016 to December 2019 at our institute in a country outside South Korea were retrospectively included. The study utilized the CT-based risk scoring system, which incorporates tumor size, portal venous phase density, tumor necrosis, peripancreatic infiltration, and suspicious metastatic lymph nodes. Patients were categorized into prognosis groups based on their risk score: good (risk score < 2), moderate (risk score 2-4), and poor (risk score ≥ 5). RESULTS: A total of 283 patients were evaluated, comprising 170 males and 113 females, with an average age of 63.52 ± 8.71 years. Follow-up was conducted until May 2023; 76% of patients experienced tumor recurrence, with a median recurrence-free survival (RFS) of 29.1 ± 1.9 months. According to the evaluation of Reader 1, the recurrence rates were 39.0% in the good prognosis group, 82.1% in the moderate group, and 84.5% in the poor group. Reader 2 reported recurrence rates of 50.0%, 79.5%, and 88.9%, respectively, across the same prognostic categories. The study validated the effectiveness of the risk scoring system, demonstrating better RFS in the good prognosis group. CONCLUSION: This research validated that the CT-based preoperative risk scoring system can effectively predict RFS in patients with PDAC, suggesting that it may be valuable in diverse populations.
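The abstract's score-to-prognosis mapping can be written out directly; the cut-offs below are from the abstract, while the per-feature point weights are not given there and are left as placeholders.

```python
# Risk-to-prognosis mapping as stated in the abstract. How many points each
# CT feature contributes is not specified there, so the feature list is
# informational only.
FEATURES = ["tumor_size", "portal_venous_density", "tumor_necrosis",
            "peripancreatic_infiltration", "suspicious_lymph_nodes"]

def prognosis_group(risk_score: int) -> str:
    if risk_score < 2:
        return "good"       # 39.0-50.0% recurrence across the two readers
    if risk_score <= 4:
        return "moderate"   # 79.5-82.1% recurrence
    return "poor"           # 84.5-88.9% recurrence

print(prognosis_group(3))   # -> "moderate"
```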
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12172149 and 12172151).
Abstract: Electric vehicles, powered by electricity stored in a battery pack, are developing rapidly, driven by advances in energy storage and by the environmental friendliness of the associated motor systems. However, thermal runaway, which can cause fire and even battery explosion under impact loading, is the key scientific problem in battery safety research. In this work, a detailed computational model that simulates the mechanical deformation and predicts the short-circuit onset of the 18650 cylindrical battery is established. The detailed model, including the anode, cathode, separator, winding, and battery casing, is then developed under the indentation condition. Failure criteria are subsequently established based on the force-displacement curve and separator failure, and two methods for improving resistance to short circuit are proposed. Results show three causes of the short circuit and the failure sequence of the components, and they reveal why fire is more severe under dynamic loading than under quasi-static loading.
Funding: Supported by the National Natural Science Foundation of China Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (Grant No. 11988102) and the National Natural Science Foundation of China (Grant No. 12202451).
Abstract: This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using numerical methods already known to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available in LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using numerical methods known to LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models in four test cases. Reasoning LLMs show excellent performance on CFD problems under the tailored prompts, but their current capability in autonomous knowledge exploration and creation needs to be enhanced.
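The third category has a compact demonstration: Hilbert systems become numerically hopeless at modest sizes, as the short check below shows using SciPy's built-in Hilbert matrix.

```python
# Condition number of the Hilbert matrix explodes with n, so even a
# consistent right-hand side yields large errors in a naive solve.
import numpy as np
from scipy.linalg import hilbert

for n in (5, 10, 15):
    H = hilbert(n)                       # H[i, j] = 1 / (i + j + 1)
    x_true = np.ones(n)
    x_hat = np.linalg.solve(H, H @ x_true)
    print(n, np.linalg.cond(H), np.abs(x_hat - x_true).max())
```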
Funding: Supported by the R&D cooperation agreement between Petrobras and CBPF (Contract No. 0050.0121790.22.9) and by the Brazilian Research Council (CNPq) through student scholarships.
Abstract: Convolutional neural networks have been widely used for analyzing image data in industry, especially in the oil and gas area. Brazil has an extensive hydrocarbon reserve on its coast and has also benefited from these neural network models. Image data from petrographic thin sections can provide essential information about reservoir quality, highlighting important features such as carbonate lithology. However, the automatic identification of lithology in reservoir rocks is still a significant challenge, mainly due to the heterogeneity of the lithologies of the Brazilian pre-salt. Within this context, this work presents an approach using one-class, or specialist, models to identify four lithology classes present in reservoir rocks of the Brazilian pre-salt. The proposed methodology faced the challenges of a small number of training images and the complexity of the analyzed data. An automated machine learning tool, AutoKeras, was used to define the hyperparameters of the implemented models. The results were satisfactory, with accuracy greater than 70% on image samples from wells not seen during model building, which increases the applicability of the implemented models. Finally, a comparison between the proposed methodology and multiple-class models demonstrated the superiority of the one-class models.
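A minimal AutoKeras setup in the spirit of the specialist (one-model-per-lithology) approach might look like the sketch below; the data loading, label convention, and trial budget are our placeholders, not the paper's configuration.

```python
# One binary "specialist" classifier per lithology class, with AutoKeras
# searching the architecture and hyperparameters.
import autokeras as ak

def train_specialist(x_train, y_is_class, max_trials=10):
    # y_is_class: 1 if the thin-section image belongs to the target
    # lithology, 0 otherwise -- train one such model per class.
    clf = ak.ImageClassifier(max_trials=max_trials, overwrite=True)
    clf.fit(x_train, y_is_class, epochs=20)
    return clf

# x = np.load("thin_sections.npy"); y = np.load("is_carbonate.npy")
# model = train_specialist(x, y)          # repeat for each lithology class
```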
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 62172450), the Key R&D Plan of Hunan Province (Grant No. 2022GK2008), and the Natural Science Foundation of Hunan Province (Grant No. 2020JJ4756).
Abstract: In task offloading, the movement of vehicles causes switching of the connected roadside units (RSUs) and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movement on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. Then, a Bi-LSTM-based model is proposed to predict vehicle trajectories. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model fall in the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a delay-optimizing scheduling strategy based on the vehicle trajectory prediction. To account for the inevitable prediction error, we take edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidates, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into delay reduction. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
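The grid-relaxed correctness criterion described above is simple to state in code; the cell size below is a placeholder.

```python
# A prediction counts as correct when it lands in the same fixed-size grid
# cell as the true position.
import numpy as np

def grid_cell(pos, cell_size=100.0):
    # Map (x, y) coordinates in meters to integer grid indices.
    return tuple((np.asarray(pos) // cell_size).astype(int))

def grid_accuracy(true_pos, pred_pos, cell_size=100.0):
    hits = [grid_cell(t, cell_size) == grid_cell(p, cell_size)
            for t, p in zip(true_pos, pred_pos)]
    return float(np.mean(hits))

print(grid_accuracy([(120, 380), (510, 90)], [(140, 395), (610, 95)]))  # 0.5
```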
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (No. 820RC625) and the National Natural Science Foundation of China (No. 82060332).
Abstract: The underlying electrophysiological mechanisms and clinical treatments of cardiovascular diseases, the most common cause of morbidity and mortality worldwide, have received considerable attention and been widely explored in recent decades. Along the way, techniques such as medical imaging, computational modeling, and artificial intelligence (AI) have played significant roles in these studies. In this article, we illustrate the applications of AI in cardiac electrophysiological research and disease prediction. We summarize the general principles of AI and then focus on its roles in basic and clinical cardiac studies that incorporate magnetic resonance imaging and computational modeling techniques. The main challenges and perspectives are also analyzed.
Funding: Supported by the Natural Science Foundation of Hunan Province (Grant No. 2023JJ40353) and the National Key Research and Development Program of China (No. 2019YFE03120001).
Abstract: The titanium-silicon (Ti-Si) alloy system shows significant potential for aerospace and automotive applications due to its superior specific strength, creep resistance, and oxidation resistance. For Si-containing Ti alloys, a sufficient Si content is critical for achieving these favorable properties, while excessive Si addition results in mechanical brittleness. Herein, both physical experiments and finite element (FE) simulations are employed to investigate the micro-mechanisms by which Si alloying tailors the mechanical properties of Ti alloys. Four typical states of Si-containing Ti alloys (solid solution, hypoeutectoid, near-eutectoid, and hypereutectoid) with varying Si content (0.3-1.2 wt.%) were fabricated via in-situ alloying spark plasma sintering. Experimental results indicate that in-situ alloying with 0.6 wt.% Si enhances the alloy's strength and ductility simultaneously, owing to the formation of fine and uniformly dispersed Ti₅Si₃ particles, while higher Si contents (0.9 and 1.2 wt.%) produce coarser primary Ti₅Si₃ agglomerations that deteriorate the ductility. FE simulations support these findings, highlighting that finer, more uniformly distributed Ti₅Si₃ particles reduce stress concentration and promote uniform deformation across the matrix, whereas agglomerated Ti₅Si₃ particles increase local stress concentrations, raising the likelihood of particle fracture and reducing ductility. This study not only elucidates the micro-mechanisms of in-situ Si alloying for tailoring the mechanical properties of Ti alloys but also aids the design of high-performance Si-containing Ti alloys.
Funding: Supported by the 2023 Youth Fund for Humanities and Social Sciences Research of the Ministry of Education of the People's Republic of China (Grant No. 23YJC740004).
Abstract: Based on the BERTopic model, this paper combines qualitative and quantitative methods to explore the reception of Can Xue's translated works by analyzing readers' book reviews posted on Goodreads and Lovereading. We first collected book reviews from these two well-known websites using Python. Through topic analysis of these reviews, we identified recurring topics, including details of her translated works and appreciation of their translation quality. Then, employing sentiment and content analysis methods, the paper explores readers' emotional attitudes toward, and specific thoughts on, Can Xue and her translated works. The findings reveal that, across the 408 reviews, the reception of Can Xue's translated works is relatively positive, though the current level of attention and recognition remains insufficient. Based on these results, the paper derives insights for translation and dissemination, such as adjusting translation and dissemination strategies, so that the global reach of Chinese literature and culture can be better facilitated.
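A minimal BERTopic pass mirroring the pipeline above might look like the following sketch; the tiny review list is a stand-in for the 408 scraped reviews, repeated only so the dimensionality-reduction step has enough samples.

```python
# Topic extraction over (stand-in) review texts with BERTopic.
from bertopic import BERTopic

reviews = [
    "The translation preserves Can Xue's dreamlike voice remarkably well.",
    "Hard to follow at times, but the imagery stays with you for days.",
    "I wish more of her novels were available in English.",
] * 50   # placeholder corpus; replace with the 408 scraped reviews

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(reviews)
print(topic_model.get_topic_info())   # one row per recurring topic
```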
Funding: Financial support from the Centro de Matemática da Universidade do Minho (CMAT/UM), through project UID/00013.
Abstract: Metaverse technologies are increasingly promoted as game-changers in transport planning, connected-autonomous mobility, and immersive traveler services. However, the field lacks a systematic review of what has been achieved, where critical technical gaps remain, and where future deployments should be integrated. Using a transparent, protocol-driven screening process, we reviewed 1589 records and retained 101 peer-reviewed journal and conference articles (2021-2025) that explicitly frame their contributions within a transport-oriented metaverse. Our review reveals a predominantly exploratory evidence base. Among the 101 studies reviewed, 17 (16.8%) apply fuzzy multi-criteria decision-making, 36 (35.6%) feature digital-twin visualizations or simulation-based testbeds, 9 (8.9%) present hardware-in-the-loop or field pilots, and only 4 (4.0%) report performance metrics such as latency, throughput, or safety under realistic network conditions. Over time, the literature evolves from early conceptual sketches (2021-2022) through simulation-centered frameworks (2023) to nascent engineering prototypes (2024-2025). To clarify persistent gaps, we synthesize findings into four foundational layers: geometry and rendering, distributed synchronization, cryptographic integrity, and human factors, enumerating essential algorithms (homogeneous 4×4 transforms, Lamport clocks, Raft consensus, Merkle proofs, sweep-and-prune collision culling, Q-learning, and real-time ergonomic feedback loops). A worked bus-fleet prototype, supported by a three-phase rollout strategy, illustrates how blockchain-based ticketing, reinforcement-learning-optimized traffic signals, and extended-reality dispatch can be integrated into a live digital twin. Advancing the transport metaverse from blueprint to operation requires open data schemas, reproducible edge-cloud performance benchmarks, cross-disciplinary cyber-physical threat models, and city-scale sandboxes that apply their mathematical foundations in real-world settings.
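One of the enumerated building blocks, the Lamport clock, is small enough to sketch in full; this is the textbook algorithm for ordering events across distributed nodes, not code from any reviewed paper.

```python
# A Lamport logical clock: local events tick, and receives merge with the
# incoming timestamp so causally later events get larger timestamps.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):                  # local event
        self.time += 1
        return self.time

    def send(self):                  # stamp an outgoing message
        return self.tick()

    def receive(self, msg_time):     # merge on message arrival
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()            # a -> b
print(b.receive(t))     # b's clock jumps past a's timestamp: 2
```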
Funding: Supported financially by the National Research Council of Thailand (NRCT) under Contract No. N42A670894.
Abstract: This study presents an innovative development of the exponentially weighted moving average (EWMA) control chart, explicitly adapted to time series data exhibiting seasonal autoregressive moving average behavior, SARMA(1,1)L, under exponential white noise. Unlike previous works that rely on simplified models such as AR(1) or assume independence, this research derives for the first time an exact two-sided Average Run Length (ARL) formula for the Modified EWMA chart under SARMA(1,1)L conditions, using a mathematically rigorous Fredholm integral approach. The derived formulas are validated against numerical integral equation (NIE) solutions, showing strong agreement and significantly reduced computational burden. Additionally, a performance comparison index (PCI) is introduced to assess the chart's detection capability. Results demonstrate that the proposed method exhibits superior sensitivity to mean shifts in autocorrelated environments, outperforming existing approaches. The findings offer a new, efficient framework for real-time quality control in complex seasonal processes, with potential applications in environmental monitoring and intelligent manufacturing systems.
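The chart being analyzed follows the standard EWMA recursion Z_t = λX_t + (1-λ)Z_{t-1}, and its ARL can be estimated by Monte Carlo as a cross-check on closed-form results. The sketch below uses a plain ARMA(1,1) stand-in with centered exponential noise (the paper's model adds a seasonal lag L), and λ, the control limit h, and all series parameters are placeholders.

```python
# Monte Carlo ARL for an EWMA chart over an autocorrelated series.
import numpy as np

def run_length(x, lam=0.1, h=0.6, target=0.0):
    z = target
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z       # EWMA recursion
        if abs(z - target) > h:
            return t                        # first out-of-control signal
    return len(x)                           # censored: no signal

def simulate_arl(shift=0.0, phi=0.5, theta=0.3, n=5000, reps=200, seed=0):
    rng = np.random.default_rng(seed)
    lengths = []
    for _ in range(reps):
        e = rng.exponential(1.0, n) - 1.0   # centered exponential noise
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):               # ARMA(1,1) recursion
            x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
        lengths.append(run_length(x + shift))
    return float(np.mean(lengths))

print(simulate_arl(shift=0.0), simulate_arl(shift=0.5))  # in- vs out-of-control
```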
Funding: Supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [KFU250259].
Abstract: Streptococcus suis (S. suis) is a major pathogen impacting pig farming globally, and it can also be transmitted to humans through the consumption of raw pork. A comprehensive study was recently carried out to determine the relevant indices across multiple geographic regions in China. Methods: Well-posedness theorems were employed to conduct a thorough analysis of the model's feasible features, including positivity, boundedness, equilibria, the reproduction number, and parameter sensitivity. Stochastic Euler, Runge-Kutta, and Euler-Maruyama methods are among the numerical techniques used to replicate the behavior of S. suis infection in the pig population; however, these techniques cannot reproduce the dynamic qualities of the proposed model. Results: For the stochastic delay differential equations of the model, a nonstandard finite difference (NSFD) approach in the stochastic sense is developed to avoid problems such as negativity, unboundedness, inconsistency, and instability in the findings. Results from traditional stochastic methods either converge conditionally or diverge over time, whereas the stochastic NSFD method converges unconditionally to the model's true states for any non-negative step size. Conclusions: This study improves our understanding of the dynamics of S. suis infection using stochastic approaches with delay and opens up new avenues for the study of cognitive processes and neuronal analysis. Interaction behaviours and solution comparison profiles are plotted.
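To illustrate the positivity-preserving construction behind NSFD schemes, the sketch below applies it to a generic SI-type transmission model: loss terms are evaluated at the new time level and so move to the denominator. The model and parameters are a stand-in, not the paper's stochastic delayed S. suis model.

```python
# Minimal NSFD step for an SI-type model; the implicit placement of loss
# terms keeps both compartments non-negative for any step size dt > 0.
import numpy as np

def nsfd_si(S0, I0, beta=0.6, gamma=0.2, mu=0.1, Lam=0.1, dt=0.5, steps=200):
    S, I = S0, I0
    traj = []
    for _ in range(steps):
        S = (S + dt * (Lam + gamma * I)) / (1 + dt * (beta * I + mu))
        I = (I + dt * beta * S * I) / (1 + dt * (gamma + mu))
        traj.append((S, I))
    return np.array(traj)

traj = nsfd_si(S0=0.9, I0=0.1)
print(traj.min())   # stays non-negative even with a large step size
```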
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62172192, U20A20228, and 62171203, and in part by the Science and Technology Demonstration Project of Social Development of Jiangsu Province under Grant BE2019631.
Abstract: Currently, applications mainly access remote computing resources through cloud data centers, but this mode of operation greatly increases communication latency and reduces the overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to where tasks execute, and can effectively mitigate the communication latency problem. However, the massive number and heterogeneity of servers in edge computing systems bring new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time series forecasting model incorporating Conv1D, LSTM, and GRU for edge computing device resource scheduling; we trained and tested the forecasting model on a small self-built dataset and achieved competitive experimental results.
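A compact Keras stack matching the named Conv1D + LSTM + GRU combination is sketched below; the layer widths, window length, and feature count are our placeholders, since the paper's exact topology is not given here.

```python
# Conv1D front-end for local patterns, then LSTM and GRU layers for longer
# temporal dependencies, regressing the next-step resource usage.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_forecaster(window=48, n_features=4):
    model = models.Sequential([
        layers.Input(shape=(window, n_features)),
        layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
        layers.LSTM(64, return_sequences=True),
        layers.GRU(32),
        layers.Dense(1),                  # next-step resource usage
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_forecaster()
model.summary()
```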