Journal Articles
16,620 articles found
1. Developing a Predictive Platform for Salmonella Antimicrobial Resistance Based on a Large Language Model and Quantum Computing
Authors: Yujie You, Kan Tan, Zekun Jiang, Le Zhang. Engineering, 2025, Issue 5, pp. 174-184 (11 pages)
As a common foodborne pathogen, Salmonella poses risks to public health safety, particularly given the emergence of antimicrobial-resistant strains. However, there is currently a lack of systematic platforms based on large language models (LLMs) for Salmonella resistance prediction, data presentation, and data sharing. To overcome this issue, we first propose a two-step feature-selection process, based on the chi-square test and conditional mutual information maximization, to find the key Salmonella resistance genes in a pan-genomics analysis, and we develop an LLM-based Salmonella antimicrobial-resistance predictive (SARPLLM) algorithm, built on the Qwen2 LLM and low-rank adaptation, to achieve accurate antimicrobial-resistance prediction. Second, we reduce the time complexity of computing sample distances from linear to logarithmic by constructing a quantum data augmentation algorithm denoted QSMOTEN. Third, we build a user-friendly online platform for Salmonella antimicrobial-resistance prediction based on knowledge graphs, which not only enables online resistance prediction for users but also visualizes the pan-genomics analysis results of the Salmonella datasets.
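To make the two-step selection concrete, the first (chi-square) screening step can be sketched with scikit-learn; the presence/absence matrix, labels, and the k=100 cutoff below are illustrative assumptions, and the conditional-mutual-information re-ranking is only indicated in a comment:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Hypothetical pan-genome data: rows = isolates, columns = gene presence (0/1)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5000))   # 200 isolates, 5000 candidate genes
y = rng.integers(0, 2, size=200)           # 0 = susceptible, 1 = resistant

# Step 1: chi-square screening keeps the genes most associated with resistance
selector = SelectKBest(score_func=chi2, k=100).fit(X, y)
candidate_genes = selector.get_support(indices=True)
print(candidate_genes[:10])

# Step 2 (not shown) would greedily re-rank these candidates by conditional
# mutual information with the already-selected genes, as in CMIM.
```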
Keywords: Salmonella resistance prediction; Pan-genomics; Large language model; Quantum computing; Bioinformatics
2. Neuromorphic Computing in the Era of Large Models
Authors: Haoxuan SHAN, Chiyue WEI, Nicolas RAMOS, Xiaoxuan YANG, Cong GUO, Hai (Helen) LI, Yiran CHEN. Artificial Intelligence Science and Engineering, 2025, Issue 1, pp. 17-30 (14 pages)
The rapid advancement of deep learning and the emergence of large-scale neural models, such as bidirectional encoder representations from transformers (BERT), generative pre-trained transformer (GPT), and large language model Meta AI (LLaMA), have brought significant computational and energy challenges. Neuromorphic computing presents a biologically inspired approach to addressing these issues, leveraging event-driven processing and in-memory computation for enhanced energy efficiency. This survey explores the intersection of neuromorphic computing and large-scale deep learning models, focusing on neuromorphic models, learning methods, and hardware. We highlight transferable techniques from deep learning to neuromorphic computing and examine the memory-related scalability limitations of current neuromorphic systems. Furthermore, we identify potential directions to enable neuromorphic systems to meet the growing demands of modern AI workloads.
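Event-driven processing, the property the survey highlights for energy efficiency, is easiest to see in a leaky integrate-and-fire (LIF) neuron, the basic unit most spiking hardware implements; all constants in this sketch are assumed:

```python
import numpy as np

# Leaky integrate-and-fire neuron: the membrane potential leaks between
# inputs and emits a discrete spike event on crossing a threshold.
tau, v_th, v_reset, dt = 20.0, 1.0, 0.0, 1.0   # time constant, threshold (a.u.)
v, spike_times = 0.0, []
inputs = np.random.default_rng(1).random(100) * 0.15   # synthetic input current

for t, i_in in enumerate(inputs):
    v += dt * (-v / tau + i_in)   # leak plus synaptic input
    if v >= v_th:                 # threshold crossing -> spike event
        spike_times.append(t)
        v = v_reset
print("spike times:", spike_times)
```

Downstream computation is triggered only by the emitted spikes, which is exactly what event-driven hardware exploits.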
Keywords: neuromorphic computing; spiking neural networks; large deep learning models
3. CBBM-WARM: A Workload-Aware Meta-Heuristic for Resource Management in Cloud Computing (Cited: 1)
Authors: K Nivitha, P Pabitha, R Praveen. China Communications, 2025, Issue 6, pp. 255-275 (21 pages)
The rapid advent of artificial intelligence and big data has revolutionized the dynamic requirements for computing resources to execute specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task owing to the cloud's hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, the Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed to handle the dynamic requirements of cloud resources by estimating the workload that must be policed by the cloud environment. CBBM-WARMS first adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then uses fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for potential Virtual Machine (VM) deployment, which contributes to the provision of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Keywords: autonomic resource management; cloud computing; coot bird behavior model; SLA violation cost; workload
4. DeepSeek vs. ChatGPT vs. Claude: A comparative study for scientific computing and scientific machine learning tasks (Cited: 1)
Authors: Qile Jiang, Zhiwei Gao, George Em Karniadakis. Theoretical & Applied Mechanics Letters, 2025, Issue 3, pp. 194-206 (13 pages)
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
Keywords: Large language models (LLM); Scientific computing; Scientific machine learning; Physics-informed neural network
5. FedCLCC: A personalized federated learning algorithm for edge cloud collaboration based on contrastive learning and conditional computing
Authors: Kangning Yin, Xinhui Ji, Yan Wang, Zhiguo Wang. Defence Technology, 2025, Issue 1, pp. 80-93 (14 pages)
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL work exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving significant shortcomings compared to centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
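The conditional-computing idea, a shared encoder whose features feed separate global and local heads, can be sketched in PyTorch; the layer sizes and the cosine-similarity contrastive term below are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadClient(nn.Module):
    """Shared encoder with a global head (aggregated by the server)
    and a local head (kept on the client). Sizes are illustrative."""
    def __init__(self, in_dim=32, hid=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.global_head = nn.Linear(hid, n_classes)
        self.local_head = nn.Linear(hid, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.global_head(z) + self.local_head(z), z

def contrastive_term(z_local, z_global_ref):
    # Pull local features toward the global model's features (assumed form)
    return 1.0 - F.cosine_similarity(z_local, z_global_ref, dim=1).mean()

model = TwoHeadClient()
x = torch.randn(8, 32)
logits, z = model(x)
z_ref = torch.randn(8, 64)   # stand-in for the received global model's features
loss = F.cross_entropy(logits, torch.randint(0, 10, (8,))) \
       + 0.1 * contrastive_term(z, z_ref)
loss.backward()
```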
Keywords: Federated learning; Statistical heterogeneity; Personalized model; Conditional computing; Contrastive learning
6. Computational Modeling of the Prefrontal-Cingulate Cortex to Investigate the Role of Coupling Relationships for Balancing Emotion and Cognition
Authors: Jinzhao Wei, Licong Li, Jiayi Zhang, Erdong Shi, Jianli Yang, Xiuling Liu. Neuroscience Bulletin, 2025, Issue 1, pp. 33-45 (13 pages)
Within the prefrontal-cingulate cortex,abnormalities in coupling between neuronal networks can disturb the emotion-cognition interactions,contributing to the development of mental disorders such as depression.Despite ... Within the prefrontal-cingulate cortex,abnormalities in coupling between neuronal networks can disturb the emotion-cognition interactions,contributing to the development of mental disorders such as depression.Despite this understanding,the neural circuit mechanisms underlying this phenomenon remain elusive.In this study,we present a biophysical computational model encompassing three crucial regions,including the dorsolateral prefrontal cortex,subgenual anterior cingulate cortex,and ventromedial prefrontal cortex.The objective is to investigate the role of coupling relationships within the prefrontal-cingulate cortex networks in balancing emotions and cognitive processes.The numerical results confirm that coupled weights play a crucial role in the balance of emotional cognitive networks.Furthermore,our model predicts the pathogenic mechanism of depression resulting from abnormalities in the subgenual cortex,and network functionality was restored through intervention in the dorsolateral prefrontal cortex.This study utilizes computational modeling techniques to provide an insight explanation for the diagnosis and treatment of depression. 展开更多
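The role of coupling weights can be illustrated with a toy firing-rate model of the three regions; the connection matrix, gain function, and inputs here are assumptions for illustration, not the paper's fitted biophysical parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-region rate model: r = [dlPFC, sgACC, vmPFC] activity.
# W encodes coupling weights between regions (illustrative values only).
W = np.array([[ 0.0, -0.6,  0.4],
              [-0.5,  0.0,  0.3],
              [ 0.4,  0.3,  0.0]])
tau, I_ext = 10.0, np.array([0.5, 0.3, 0.4])

def f(t, r):
    # Each region relaxes toward a sigmoid of its weighted inputs
    return (-r + np.tanh(W @ r + I_ext)) / tau

sol = solve_ivp(f, (0, 200), y0=[0.1, 0.1, 0.1], max_step=0.5)
print("steady state:", sol.y[:, -1])
# Perturbing the sgACC row of W mimics the subgenual abnormality studied
# in the paper; changing the dlPFC row mimics the restorative intervention.
```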
Keywords: Prefrontal-cingulate cortex; Computational modeling; Coupling relationships; Depression; Emotion and cognition
7. Fine-tuning a large language model for automating computational fluid dynamics simulations
Authors: Zhehao Dong, Zhen Lu, Yue Yang. Theoretical & Applied Mechanics Letters, 2025, Issue 3, pp. 219-225 (7 pages)
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. Fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model are available at https://github.com/YYgroup/AutoCFD.
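A minimal sketch of this kind of LoRA setup with Hugging Face peft, assuming the NL2FOAM pairs have been serialized as prompt/completion text; the target modules, rank, and alpha are typical choices, not the paper's reported hyperparameters:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters on the attention projections (rank/alpha are assumed values)
config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()
# Training would then run a standard causal-LM loop over the
# natural-language -> OpenFOAM-configuration pairs with CoT annotations.
```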
Keywords: Large language models; Fine-tuning; Computational fluid dynamics; Automated CFD; Multi-agent system
8. Robotic computing system and embodied AI evolution: an algorithm-hardware co-design perspective
Authors: Longke Yan, Xin Zhao, Bohan Yang, Yongkun Wu, Guangnan Dai, Jiancong Li, Chi-Ying Tsui, Kwang-Ting Cheng, Yihan Zhang, Fengbin Tu. Journal of Semiconductors, 2025, Issue 10, pp. 6-23 (18 pages)
Robotic computing systems play an important role in enabling intelligent robotic tasks through intelligent algorithms and supporting hardware. In recent years, the evolution of robotic algorithms indicates a roadmap from traditional robotics to hierarchical and end-to-end models. This algorithmic advancement poses a critical challenge in achieving balanced system-wide performance. Therefore, algorithm-hardware co-design has emerged as the primary methodology: it analyzes algorithm behaviors on hardware to identify common computational properties, and these properties can motivate algorithm optimization to reduce computational complexity as well as hardware innovation from architecture to circuit for high performance and high energy efficiency. We then review recent works on robotic and embodied AI algorithms and computing hardware to demonstrate this algorithm-hardware co-design methodology. Finally, we discuss future research opportunities by answering two questions: (1) how to adapt computing platforms to the rapid evolution of embodied AI algorithms, and (2) how to transform the potential of emerging hardware innovations into end-to-end inference improvements.
Keywords: robotic computing system; embodied AI; algorithm-hardware co-design; AI chip; large-scale AI models
9. Preoperative computed tomography-based risk stratification model validation for postoperative pancreatic ductal adenocarcinoma recurrence
Authors: Xiao-Hui Liu, Jing-Hong Xie, Xi-Song Zhu, Li-Heng Liu. World Journal of Gastrointestinal Surgery, 2025, Issue 7, pp. 300-308 (9 pages)
BACKGROUND: The computed tomography (CT)-based preoperative risk score was developed in South Korea to predict recurrence after upfront surgery in patients with resectable pancreatic ductal adenocarcinoma (PDAC). However, whether it performs well in other countries remains unknown. AIM: To externally validate the CT-based preoperative risk score for PDAC in a country outside South Korea. METHODS: Consecutive patients with PDAC who underwent upfront surgery from January 2016 to December 2019 at our institute, in a country outside South Korea, were retrospectively included. The study utilized the CT-based risk scoring system, which incorporates tumor size, portal venous phase density, tumor necrosis, peripancreatic infiltration, and suspicious metastatic lymph nodes. Patients were categorized into prognosis groups based on their risk score: good (risk score < 2), moderate (risk score 2-4), and poor (risk score ≥ 5). RESULTS: A total of 283 patients were evaluated, comprising 170 males and 113 females, with an average age of 63.52 ± 8.71 years. Follow-up was conducted until May 2023; 76% of patients experienced tumor recurrence, with a median recurrence-free survival (RFS) of 29.1 ± 1.9 months. According to Reader 1, recurrence rates were 39.0% in the good prognosis group, 82.1% in the moderate group, and 84.5% in the poor group; Reader 2 reported recurrence rates of 50.0%, 79.5%, and 88.9%, respectively, across the same prognostic categories. The study validated the effectiveness of the risk scoring system, demonstrating better RFS in the good prognosis group. CONCLUSION: This research validated that the CT-based preoperative risk scoring system can effectively predict RFS in patients with PDAC, suggesting that it may be valuable in diverse populations.
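The stratification logic maps a summed CT score to three prognosis groups; a sketch follows, in which the per-feature point values are placeholders, since the abstract gives only the five features and the group cutoffs:

```python
def pdac_risk_group(tumor_size_pts: int, pv_density_pts: int, necrosis_pts: int,
                    peripancreatic_infiltration_pts: int,
                    suspicious_nodes_pts: int) -> str:
    """Sum the five CT feature scores. The points assigned per feature are
    hypothetical; only the cutoffs <2, 2-4, >=5 come from the abstract."""
    score = (tumor_size_pts + pv_density_pts + necrosis_pts
             + peripancreatic_infiltration_pts + suspicious_nodes_pts)
    if score < 2:
        return "good prognosis"
    elif score <= 4:
        return "moderate prognosis"
    return "poor prognosis"

print(pdac_risk_group(1, 0, 0, 1, 1))  # total 3 -> "moderate prognosis"
```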
Keywords: Pancreatic ductal adenocarcinoma; Postoperative recurrence; Risk assessment system; Computed tomography; Model validation
10. Failure Analyses of Cylindrical Lithium-Ion Batteries Under Dynamic Loading Based on Detailed Computational Model
Authors: Huifeng Xi, Guicheng Zhao, Shuo Wang, Junkui Li, Linghui He, Bao Yang. Acta Mechanica Solida Sinica, 2025, Issue 3, pp. 526-538 (13 pages)
Electric vehicles, powered by electricity stored in a battery pack, are developing rapidly thanks to advances in energy storage and environmentally friendly motor systems. However, thermal runaway is the key scientific problem in battery safety research: it can cause fire and even battery explosion under impact loading. In this work, a detailed computational model simulating the mechanical deformation and predicting the short-circuit onset of the 18650 cylindrical battery is established. The detailed model, including the anode, cathode, separator, winding, and battery casing, is developed under the indentation condition. Failure criteria are subsequently established based on the force-displacement curve and separator failure. Two methods for improving resistance to short circuits are proposed. Results show the three causes of short circuit and the failure sequence of the components, and reveal why fire is more serious under dynamic loading than under quasi-static loading.
Keywords: 18650 lithium-ion battery; Detailed computational model; Deformation; Fracture mode; Failure criteria
11. Evaluations of large language models in computational fluid dynamics: Leveraging, learning and creating knowledge
Authors: Long Wang, Lei Zhang, Guowei He. Theoretical & Applied Mechanics Letters, 2025, Issue 3, pp. 207-218 (12 pages)
This paper investigates the capabilities of large language models (LLMs) to leverage, learn, and create knowledge in solving computational fluid dynamics (CFD) problems through three categories of baseline problems. These categories include (1) conventional CFD problems that can be solved using numerical methods already known to LLMs, such as lid-driven cavity flow and the Sod shock tube problem; (2) problems that require new numerical methods beyond those available in LLMs, such as the recently developed Chien-physics-informed neural networks for singularly perturbed convection-diffusion equations; and (3) problems that cannot be solved using the numerical methods known to LLMs, such as ill-conditioned Hilbert linear algebraic systems. The evaluations indicate that reasoning LLMs overall outperform non-reasoning models across four test cases. Reasoning LLMs show excellent performance on CFD problems given tailored prompts, but their current capability for autonomous knowledge exploration and creation needs to be enhanced.
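The third category can be made concrete: Hilbert systems defeat standard solvers because their condition number explodes with size, so even exact-arithmetic algorithms lose all accuracy in floating point. A short SciPy demonstration:

```python
import numpy as np
from scipy.linalg import hilbert

# The Hilbert matrix H[i, j] = 1 / (i + j + 1) is notoriously ill-conditioned:
# its condition number grows roughly exponentially in n.
for n in (5, 10, 13):
    H = hilbert(n)
    x_true = np.ones(n)
    x_num = np.linalg.solve(H, H @ x_true)   # solve a system with known answer
    print(f"n={n:2d}  cond={np.linalg.cond(H):.2e}  "
          f"error={np.linalg.norm(x_num - x_true):.2e}")
```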
Keywords: Large language models; Computational fluid dynamics; Machine learning
12. Automatic classification of carbonate thin sections by computer vision techniques and one-vs-all models
Authors: Elisangela L. Faria, Rayan Barbosa, Juliana M. Coelho, Thais F. Matos, Bernardo C. C. Santos, J. L. Gonzalez, Clécio R. Bom, Márcio P. de Albuquerque, P. J. Russano, Marcelo P. de Albuquerque. Artificial Intelligence in Geosciences, 2025, Issue 1, pp. 271-281 (11 pages)
Convolutional neural networks have been widely used for analyzing image data in industry, especially in the oil and gas area. Brazil has an extensive hydrocarbon reserve on its coast and has also benefited from these neural network models. Image data from petrographic thin sections can be essential for assessing reservoir quality, highlighting important features such as carbonate lithology. However, the automatic identification of lithology in reservoir rocks is still a significant challenge, mainly due to the heterogeneity of the lithologies of the Brazilian pre-salt. Within this context, this work presents an approach using one-class, or specialist, models to identify four lithology classes present in reservoir rocks of the Brazilian pre-salt. The proposed methodology had to contend with a small number of training images, in addition to the complexity of the analyzed data. An automated machine learning tool called AutoKeras was used to define the hyperparameters of the implemented models. The results were satisfactory, with accuracy greater than 70% on image samples from wells not seen during model building, which increases the applicability of the implemented model. Finally, a comparison between the proposed methodology and multiple-class models demonstrated the superiority of the one-class models.
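One-vs-all inference reduces to running each specialist binary model and picking the class with the highest score; a sketch, assuming four trained AutoKeras/Keras classifiers are in hand (the class names and variable names are hypothetical):

```python
import numpy as np

CLASSES = ["lithology_A", "lithology_B", "lithology_C", "lithology_D"]  # placeholders

def one_vs_all_predict(models, images):
    """models: list of binary classifiers, one per lithology, each returning
    P(its class | image); the winning class is the highest specialist score."""
    scores = np.column_stack([m.predict(images).ravel() for m in models])
    return [CLASSES[i] for i in scores.argmax(axis=1)]

# usage (assuming `specialists` holds the four trained one-class models):
# labels = one_vs_all_predict(specialists, thin_section_batch)
```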
Keywords: Carbonate thin section; Convolutional neural network; Computational vision; One-vs-all models
13. Task offloading delay minimization in vehicular edge computing based on vehicle trajectory prediction
Authors: Feng Zeng, Zheng Zhang, Jinsong Wu. Digital Communications and Networks, 2025, Issue 2, pp. 537-546 (10 pages)
In task offloading, the movement of vehicles causes switching of the connected roadside units (RSUs) and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movements on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. A Bi-LSTM-based model is then proposed to predict vehicle trajectories. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model belong to the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the predicted trajectories. Considering the inevitable prediction error, we take edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidates, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into reduced offloading delay. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
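Predicting a grid cell rather than exact coordinates turns trajectory prediction into classification; a PyTorch sketch under assumed grid and feature sizes (not the paper's configuration):

```python
import torch
import torch.nn as nn

N_CELLS = 40 * 40   # service area split into a 40x40 grid (assumed size)

class BiLSTMTrajectory(nn.Module):
    """Predicts the grid cell a vehicle occupies next from its recent track."""
    def __init__(self, feat=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, N_CELLS)

    def forward(self, track):            # track: (batch, steps, feat)
        out, _ = self.lstm(track)
        return self.head(out[:, -1])     # logits over grid cells

model = BiLSTMTrajectory()
logits = model(torch.randn(4, 20, 2))    # 4 vehicles, 20 past (x, y) positions
print(logits.argmax(dim=1))              # predicted grid cell per vehicle
```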
Keywords: Vehicular edge computing; Task offloading; Vehicle trajectory prediction; Delay minimization; Bi-LSTM model
14. Applications of Artificial Intelligence in Cardiac Electrophysiology and Clinical Diagnosis with Magnetic Resonance Imaging and Computational Modeling Techniques
Authors: ZHAN Heqin, HAN Guilail, WEI Chuan'an, LI Zhiqun. Journal of Shanghai Jiaotong University (Science), 2025, Issue 1, pp. 53-65 (13 pages)
The underlying electrophysiological mechanisms and clinical treatments of cardiovascular diseases, the most common cause of morbidity and mortality worldwide, have received considerable attention and been widely explored in recent decades. Along the way, techniques such as medical imaging, computational modeling, and artificial intelligence (AI) have played significant roles in these studies. In this article, we illustrate the applications of AI in cardiac electrophysiological research and disease prediction. We summarize the general principles of AI and then focus on its roles in cardiac basic and clinical studies incorporating magnetic resonance imaging and computational modeling techniques. The main challenges and perspectives are also analyzed.
Keywords: artificial intelligence (AI); magnetic resonance imaging; computational modeling; cardiovascular disease
15. Unveiling micro-scale mechanisms of in-situ silicon alloying for tailoring mechanical properties in titanium alloys: Experiments and computational modeling
Authors: Sisi Tang, Li Li, Jinlong Su, Yuan Yuan, Yong Han, Jinglian Fan. Journal of Materials Science & Technology, 2025, Issue 17, pp. 150-163 (14 pages)
The titanium-silicon (Ti-Si) alloy system shows significant potential for aerospace and automotive applications due to its superior specific strength, creep resistance, and oxidation resistance. For Si-containing Ti alloys, sufficient Si content is critical for achieving these favorable properties, while excessive Si addition results in mechanical brittleness. Here, both physical experiments and finite element (FE) simulations are employed to investigate the micro-mechanisms by which Si alloying tailors the mechanical properties of Ti alloys. Four typical states of Si-containing Ti alloys (solid solution, hypoeutectoid, near-eutectoid, and hypereutectoid) with varying Si content (0.3-1.2 wt.%) were fabricated via in-situ alloying spark plasma sintering. Experimental results indicate that in-situ alloying of 0.6 wt.% Si enhances the alloy's strength and ductility simultaneously, owing to the formation of fine and uniformly dispersed Ti₅Si₃ particles, while higher Si contents (0.9 and 1.2 wt.%) result in coarser primary Ti₅Si₃ agglomerations that deteriorate ductility. FE simulations support these findings, highlighting that finer and more uniformly distributed Ti₅Si₃ particles cause less stress concentration and promote uniform deformation across the matrix, while agglomerated Ti₅Si₃ particles increase local stress concentrations, raising the likelihood of particle fracture and reducing ductility. This study not only elucidates the micro-mechanisms of in-situ Si alloying for tailoring the mechanical properties of Ti alloys but also aids the design of high-performance Si-containing Ti alloys.
Keywords: Titanium alloy; Spark plasma sintering; Micro-scale deformation behavior; Mechanical property tailoring; Computational modeling
16. A Computational Analysis of the Reception of Can Xue's Translated Works in the English-Language World Based on BERTopic Model
Authors: HE Tangxikun, CHEN Xian. 译苑新谭, 2025, Issue 1, pp. 20-36 (17 pages)
Based on the BERTopic model, this paper combines qualitative and quantitative methods to explore the reception of Can Xue's translated works by analyzing readers' book reviews posted on Goodreads and Lovereading. We first collected book reviews from these two well-known websites using Python. Through topic analysis of these reviews, we identified recurring topics, including details of her translated works and appreciation of their translation quality. Then, employing sentiment and content analysis methods, the paper explored readers' emotional attitudes toward, and specific thoughts on, Can Xue and her translated works. The findings revealed that, among the 408 reviews, although the reception of Can Xue's translated works was relatively positive, the current level of attention and recognition remains insufficient. Based on these results, the paper derives valuable insights into translation and dissemination processes, such as adjusting translation and dissemination strategies, so that the global reach of Chinese literature and culture can be better facilitated.
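The core BERTopic step is a one-call fit over the scraped reviews; a minimal sketch with the bertopic package, using a placeholder corpus where the study used the 408 collected review texts:

```python
from bertopic import BERTopic

# Placeholder corpus: in the study, `reviews` holds the 408 Goodreads and
# Lovereading review texts collected via Python scraping.
reviews = ["A haunting, dreamlike novella.",
           "The translation reads smoothly.",
           "Surreal and confusing at times."] * 40

topic_model = BERTopic(language="english", min_topic_size=5)
topics, probs = topic_model.fit_transform(reviews)

print(topic_model.get_topic_info())   # recurring topics with top keywords
```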
Keywords: Can Xue's translated works; Reception; English-language world; BERTopic model; Computational analysis
17. A Data-Driven Systematic Review of the Metaverse in Transportation: Current Research, Computational Modeling, and Future Trends
Authors: Cecilia Castro, Victor Leiva, Franco Basso. Computer Modeling in Engineering & Sciences, 2025, Issue 8, pp. 1481-1543 (63 pages)
Metaverse technologies are increasingly promoted as game-changers in transport planning, connected-autonomous mobility, and immersive traveler services. However, the field lacks a systematic review of what has been achieved, where critical technical gaps remain, and where future deployments should be integrated. Using a transparent protocol-driven screening process, we reviewed 1589 records and retained 101 peer-reviewed journal and conference articles (2021-2025) that explicitly frame their contributions within a transport-oriented metaverse. Our review reveals a predominantly exploratory evidence base. Among the 101 studies reviewed, 17 (16.8%) apply fuzzy multicriteria decision-making, 36 (35.6%) feature digital-twin visualizations or simulation-based testbeds, 9 (8.9%) present hardware-in-the-loop or field pilots, and only 4 (4.0%) report performance metrics such as latency, throughput, or safety under realistic network conditions. Over time, the literature evolves from early conceptual sketches (2021-2022) through simulation-centered frameworks (2023) to nascent engineering prototypes (2024-2025). To clarify persistent gaps, we synthesize findings into four foundational layers (geometry and rendering, distributed synchronization, cryptographic integrity, and human factors), enumerating essential algorithms: homogeneous 4×4 transforms, Lamport clocks, Raft consensus, Merkle proofs, sweep-and-prune collision culling, Q-learning, and real-time ergonomic feedback loops. A worked bus-fleet prototype illustrates how blockchain-based ticketing, reinforcement-learning-optimized traffic signals, and extended-reality dispatch can be integrated into a live digital twin; the prototype is supported by a three-phase rollout strategy. Advancing the transport metaverse from blueprint to operation requires open data schemas, reproducible edge-cloud performance benchmarks, cross-disciplinary cyber-physical threat models, and city-scale sandboxes that apply their mathematical foundations in real-world settings.
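Of the enumerated algorithms, the Lamport clock is the simplest to show: each node keeps a counter that consistently orders events across a distributed digital twin. A compact sketch:

```python
class LamportClock:
    """Logical clock for ordering events across distributed simulation nodes."""
    def __init__(self):
        self.time = 0

    def tick(self):                 # local event (e.g., a vehicle state update)
        self.time += 1
        return self.time

    def send(self):                 # stamp an outgoing message
        return self.tick()

    def receive(self, msg_time):    # merge rule: max(local, remote) + 1
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()                        # node A stamps a position update
print(b.receive(t))                 # node B orders it after its own events -> 2
```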
Keywords: Artificial intelligence; blockchain; computational modeling; digital twins; extended reality; fuzzy MCDM; machine learning; metaverse; reinforcement learning
18. Explicit ARL Computation for a Modified EWMA Control Chart in Autocorrelated Statistical Process Control Models
Authors: Yadpirun Supharakonsakun, Yupaporn Areepong, Korakoch Silpakob. Computer Modeling in Engineering & Sciences, 2025, Issue 10, pp. 699-720 (22 pages)
This study presents an innovative development of the exponentially weighted moving average (EWMA) control chart, explicitly adapted for time series exhibiting seasonal autoregressive moving average behavior, SARMA(1,1)L, under exponential white noise. Unlike previous works that rely on simplified models such as AR(1) or assume independence, this research derives for the first time an exact two-sided average run length (ARL) formula for the modified EWMA chart under SARMA(1,1)L conditions, using a mathematically rigorous Fredholm integral approach. The derived formulas are validated against numerical integral equation (NIE) solutions, showing strong agreement with a significantly reduced computational burden. Additionally, a performance comparison index (PCI) is introduced to assess the chart's detection capability. Results demonstrate that the proposed method exhibits superior sensitivity to mean shifts in autocorrelated environments, outperforming existing approaches. The findings offer a new, efficient framework for real-time quality control in complex seasonal processes, with potential applications in environmental monitoring and intelligent manufacturing systems.
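The quantity being derived, the average run length, can always be cross-checked by Monte Carlo on the plain EWMA recursion Z_t = λ·X_t + (1-λ)·Z_{t-1}; the sketch below uses i.i.d. exponential noise as a stand-in for the paper's full SARMA(1,1)L process, with assumed chart parameters:

```python
import numpy as np

def ewma_run_length(lam=0.1, h=1.35, mu=1.0, z0=1.0, n_runs=2000, seed=0):
    """Average number of samples until Z_t = lam*X_t + (1-lam)*Z_{t-1}
    exits (0, h); X_t ~ Exp(mean=mu) stands in for the SARMA residuals."""
    rng = np.random.default_rng(seed)
    lengths = []
    for _ in range(n_runs):
        z, t = z0, 0                      # start the chart at its target value
        while 0.0 < z < h and t < 100_000:
            t += 1
            z = lam * rng.exponential(mu) + (1 - lam) * z
        lengths.append(t)
    return np.mean(lengths)

print(ewma_run_length())                  # in-control ARL estimate
print(ewma_run_length(mu=1.3))            # shifted mean -> much shorter ARL
```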
Keywords: Statistical process control; average run length; modified EWMA control chart; autocorrelated data; SARMA process; computational modeling; real-time monitoring
19. Computational Modeling of Streptococcus suis Dynamics via Stochastic Delay Differential Equations
Authors: Umar Shafique, Ali Raza, Dumitru Baleanu, Khadija Nasir, Muhammad Naveed, Abu Bakar Siddique, Emad Fadhal. Computer Modeling in Engineering & Sciences, 2025, Issue 4, pp. 449-476 (28 pages)
Streptococcus suis (S. suis) is a major disease impacting pig farming globally; it can also be transferred to humans through eating raw pork. A comprehensive study was recently carried out to determine the relevant indices across multiple geographic regions in China. Methods: Well-posedness theorems were employed to conduct a thorough analysis of the model's feasible features, including positivity, boundedness, equilibria, the reproduction number, and parameter sensitivity. Stochastic Euler, Runge-Kutta, and Euler-Maruyama are among the numerical techniques used to replicate the behavior of S. suis infection in the pig population; however, these techniques cannot preserve the dynamic qualities of the proposed model. Results: For the stochastic delay differential equations of the model, a non-standard finite difference (NSFD) approach in the stochastic sense is developed to avoid problems such as negativity, unboundedness, inconsistency, and instability of the findings. Results from traditional stochastic methods either converge conditionally or diverge over time, whereas the stochastic NSFD method converges unconditionally to the true states of the model for any non-negative step size. Conclusions: This study improves our understanding of the dynamics of S. suis infection using stochastic delay approaches and opens up new avenues for the study of cognitive processes and neuronal analysis; interaction behaviors and solution comparison profiles are plotted to illustrate the findings.
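The baseline the authors compare against, Euler-Maruyama for a delay equation, keeps a history buffer so the drift can read the state τ time units back; a generic one-dimensional sketch (the equation and all constants are assumed, not the paper's S. suis system):

```python
import numpy as np

def euler_maruyama_sdde(T=50.0, dt=0.01, tau=1.0, sigma=0.05, seed=0):
    """dX = X(t - tau) * (1 - X(t)) dt + sigma * X dW  (illustrative SDDE only).
    The delayed state is read from the stored trajectory, lag = tau/dt steps."""
    rng = np.random.default_rng(seed)
    lag = int(round(tau / dt))
    n = int(T / dt)
    x = np.empty(n + 1)
    x[:lag + 1] = 0.1                      # constant history on [-tau, 0]
    for k in range(lag, n):
        drift = x[k - lag] * (1.0 - x[k])
        x[k + 1] = x[k] + drift * dt + sigma * x[k] * rng.normal(0, np.sqrt(dt))
    return x

path = euler_maruyama_sdde()
print(path[-5:])   # note: unlike NSFD, this scheme cannot guarantee positivity
```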
Keywords: Streptococcus suis disease model; stochastic delay differential equations (SDDEs); existence and uniqueness; Lyapunov function; stability results; reproduction number; computational methods
20. A Novel Predictive Model for Edge Computing Resource Scheduling Based on Deep Neural Network
Authors: Ming Gao, Weiwei Cai, Yizhang Jiang, Wenjun Hu, Jian Yao, Pengjiang Qian. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 259-277 (19 pages)
Currently, applications access remote computing resources mainly through cloud data centers, but this mode of operation greatly increases communication latency and reduces overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to the task execution end, and can effectively mitigate the communication latency problem. However, the massive and heterogeneous nature of servers in edge computing systems brings new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time series forecasting model incorporating Conv1D, LSTM, and GRU layers for edge computing resource scheduling; we trained and tested the forecasting model on a small self-built dataset and achieved competitive experimental results.
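The described hybrid can be assembled directly in Keras: a Conv1D front end for local patterns, then LSTM and GRU layers for longer dependencies. A sketch with assumed window and unit sizes, not the paper's reported architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, FEATURES = 24, 3   # e.g., 24 past readings of 3 resource metrics (assumed)

model = keras.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local usage patterns
    layers.LSTM(64, return_sequences=True),               # longer dependencies
    layers.GRU(32),                                       # lighter final recurrence
    layers.Dense(1),                                      # next-step load forecast
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```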
Keywords: Edge computing; resource scheduling; predictive models