Journal Literature
254,433 articles found
Experimental Study on the Interfacial Shear Behavior of ESS-HPC and Existing Concrete (Cited: 1)
1
Authors: 吕昭旭, 张冠军, 杨才千, 杜文平, 李郴. 《混凝土》, PKU Core Journal, 2025, No. 5: pp. 1-6, 11 (7 pages)
To investigate the interfacial bond behavior between early-strength self-compacting shrinkage-compensating high-performance concrete (ESS-HPC) and ordinary concrete (OCS), 27 groups of specimens were designed and fabricated, and direct shear tests were used to analyze the influence of ESS-HPC compressive strength, OCS surface treatment, ESS-HPC curing age, and interface agents on interfacial bond strength. The results show that the interface failure modes of ESS-HPC&OCS specimens fall into three types: interface failure, interfacial shear failure, and combined failure of the interface and part of the OCS substrate. When the ESS-HPC strength grade increased from C60 to C75, the shear bond strength of the specimens increased by nearly 15%. Compared with untreated surfaces, chiseling plus drilling raised the bond strength by 80.91%. Using styrene-butadiene latex as the interface agent yielded higher bond strength than the other interface agents, but all agent-treated groups fell below the cast-in-place group without any agent; adding an interface agent reduced the interfacial bond strength by up to about 26.1%. In addition, interfacial bond strength grew with curing age and leveled off after 28 d. It is therefore recommended to use higher-strength ESS-HPC and to drill and chisel the OCS surface to ensure the strengthening effect of ESS-HPC.
Keywords: early-strength self-compacting shrinkage-compensating high-performance concrete (ESS-HPC); bond behavior; roughness; direct shear test
Analysis of the Technical Status and Development Strategy of Domestic HPC and AI Chip Manufacturing Equipment (Cited: 1)
2
Authors: 高岳, 郭春华, 米雪, 刘容嘉. 《电子工业专用设备》, 2025, No. 1: pp. 1-6, 27 (7 pages)
This paper reviews the development of domestic high-performance computing (HPC) and artificial intelligence (AI) chip manufacturing equipment and summarizes the current technical status and remaining challenges. It analyzes the development trends and technical characteristics of HPC and AI chip manufacturing equipment at home and abroad, and proposes concrete strategies and recommendations for the development of domestic equipment, with the aim of advancing China's independent innovation capability and development level in this field.
Keywords: high-performance computing; artificial intelligence; chip manufacturing equipment; localization; development strategy
Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited: 1)
3
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. 《Computers, Materials & Continua》 (SCIE, EI), 2025, No. 1: pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
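The abstract pairs Dueling-DDQN with prioritized experience replay but gives no implementation detail. Below is a minimal, illustrative sketch of a proportional prioritized replay buffer of the kind such a framework might use; the class and parameter names (`PrioritizedReplay`, `alpha`, `beta`) are our own assumptions, not the authors' code.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized experience replay (hypothetical sketch).

    Transitions with larger TD error are sampled more often; importance
    weights correct the resulting sampling bias.
    """
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.prios, self.pos = [], np.zeros(capacity), 0

    def push(self, transition):
        # New transitions get the current max priority so they are seen at least once.
        max_prio = self.prios.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.prios[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.prios[:len(self.buffer)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        # Importance-sampling weights to de-bias the gradient update.
        w = (len(self.buffer) * p[idx]) ** (-beta)
        w /= w.max()
        return [self.buffer[i] for i in idx], idx, w

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.prios[idx] = np.abs(td_errors) + eps
```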
Study on the Shear Behavior of Hollow Core Beams Strengthened by Combined Web Rebar Planting and ESS-HPC Filling
4
Authors: 杜文平, 杨才千, 张冠军. 《西安建筑科技大学学报(自然科学版)》, PKU Core Journal, 2025, No. 4: pp. 511-519 (9 pages)
To address diagonal web cracking in hollow-slab girder bridges, a combined strengthening method of "web rebar planting + ESS-HPC filling" is proposed. Nine hollow core beam (HCB) specimens were designed, comprising five strengthened beams and four control beams, to study the effects of shear span ratio and opening size on HCB shear behavior, and a shear capacity analysis model is proposed. The test results show that, compared with unstrengthened beams, the shear performance of the strengthened beams improved by about 60% when the shear span ratio was between 1 and 3, and degraded gradually as the shear span ratio increased. The combined method raised the cracking load in the shear-compression zone by about 50% and reduced stirrup stress. A full opening reduced initial stiffness and crack width but had little effect on ultimate load. The control beams failed in web shear in a brittle manner, whereas the strengthened beams failed in flexure-shear in a ductile manner; as the shear span ratio increased, the failure mode of the test beams shifted gradually from shear to flexure-shear. Finally, an assessment model for the combined "web rebar planting + ESS-HPC filling" strengthening method is proposed based on the test results.
Keywords: ESS-HPC; filling; web rebar planting; shear bearing capacity; assessment model
Are Moral Concepts HPC Concepts? On Thick Moral Concepts and the Naturalization of Moral Concepts
5
Authors: 王奕文. 《北京科技大学学报(社会科学版)》, 2025, No. 1: pp. 129-136 (8 pages)
The relation between moral concepts and natural concepts is a central issue in metaethics. Naturalists hold that moral concepts can be explicated through natural concepts, a view Moore criticized as the "naturalistic fallacy"; Boyd proposed the homeostatic property cluster (HPC) theory to evade this criticism. The intrinsic properties defining an HPC kind term should be causally connected, but two thought experiments on the concept of "good" show that it lacks suitability for inductive inference, which means thin moral concepts can hardly meet the necessary conditions for being HPC concepts. Thick moral concepts, with their rich descriptive content, are more amenable to induction. This paper attempts to define thick moral concepts as HPC concepts and to determine their cluster properties, thereby defending the naturalness of some moral concepts and highlighting the historical and social dimensions of moral concepts.
Keywords: HPC concepts; thick concepts; moral naturalistic realism
Optoelectronic memristor based on a-C:Te film for multi-mode reservoir computing (Cited: 2)
6
Authors: Qiaoling Tian, Kuo Xun, Zhuangzhuang Li, Xiaoning Zhao, Ya Lin, Ye Tao, Zhongqiang Wang, Daniele Ielmini, Haiyang Xu, Yichun Liu. 《Journal of Semiconductors》, 2025, No. 2: pp. 144-149 (6 pages)
Optoelectronic memristors are generating growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulations. More importantly, the device exhibits distinguishable response currents when adjusting 4-bit input electrical/optical signals. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, and an accuracy of 98.7% is achieved. The optoelectronic memristor shows potential for developing multi-mode RC systems.
Keywords: optoelectronic memristor; volatile switching; multi-mode reservoir computing
Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning (Cited: 1)
7
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu. 《Computers, Materials & Continua》, 2025, No. 2: pp. 2609-2635 (27 pages)
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
Keywords: edge computing; adaptive; meta-learning; task offloading; joint optimization
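The scheme jointly optimizes delay and energy via NSGA-II. As a rough illustration of the core of that algorithm, the sketch below implements fast non-dominated sorting over candidate offloading plans scored by (delay, energy); the function names and the random candidate data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dominates(a, b):
    """True if plan a is no worse than b on both objectives and better on one."""
    return np.all(a <= b) and np.any(a < b)

def fast_nondominated_sort(costs):
    """Rank candidate solutions into Pareto fronts (NSGA-II style).

    costs: (n, 2) array of (delay, energy) per candidate offloading plan.
    Returns a list of fronts, each a list of candidate indices.
    """
    n = len(costs)
    S = [[] for _ in range(n)]      # solutions dominated by i
    dom_count = np.zeros(n, int)    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(costs[i], costs[j]):
                S[i].append(j)
            elif dominates(costs[j], costs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Example: 50 random (delay, energy) trade-offs; front 0 is the Pareto set.
costs = np.random.rand(50, 2)
pareto_front = fast_nondominated_sort(costs)[0]
```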
Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited: 1)
8
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. 《Nano-Micro Letters》, 2025, No. 11: pp. 1-20 (20 pages)
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
Synaptic devices based on silicon carbide for neuromorphic computing (Cited: 1)
9
Authors: Boyu Ye, Xiao Liu, Chao Wu, Wensheng Yan, Xiaodong Pi. 《Journal of Semiconductors》, 2025, No. 2: pp. 38-51 (14 pages)
To address the increasing demand for massive data storage and processing, brain-inspired neuromorphic computing systems based on artificial synaptic devices have been actively developed in recent years. Among the various materials investigated for the fabrication of synaptic devices, silicon carbide (SiC) has emerged as a preferred choice due to its high electron mobility, superior thermal conductivity, and excellent thermal stability, which exhibits promising potential for neuromorphic applications in harsh environments. In this review, the recent progress in SiC-based synaptic devices is summarized. Firstly, an in-depth discussion is conducted regarding the categories, working mechanisms, and structural designs of these devices. Subsequently, several application scenarios for SiC-based synaptic devices are presented. Finally, a few perspectives and directions for their future development are outlined.
Keywords: silicon carbide; wide bandgap semiconductors; synaptic devices; neuromorphic computing; high temperature
Research on Detecting Malicious Cryptomining in Docker Containers Based on HPC Time Series
10
Authors: 宋志伟. 《自动化与仪器仪表》, 2025, No. 8: pp. 88-91, 96 (5 pages)
To detect malicious cryptomining in Docker containers, this study proposes a detection method based on hardware performance counter (HPC) time series. Container runtime behavior is first analyzed to identify malware inside the container; time-series feature data are then collected, and a random forest algorithm is used to determine the features characteristic of malicious cryptomining; finally, a convolutional neural network recognizes the mining behavior from those features. The results show that applying the detection method affects the scores of the individual benchmark items to varying degrees, but the overall impact is small. The memory and CPU overheads of the proposed method are 0.42% and 1.8%, respectively, compared with 0.61% and 2.2% for data collection in the traditional detection method, indicating that detection based on HPC time series is cheaper and more efficient and can provide a more solid guarantee for network security.
Keywords: HPC; Docker containers; malware; cryptomining detection; random forest
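The pipeline described above (random forest feature selection feeding a CNN classifier) can be sketched as follows. This is a minimal illustration assuming windowed HPC counter readings in an array `X` with labels `y`; the synthetic data and feature layout are our own assumptions, not the paper's dataset or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed data layout: one row per time window, one column per HPC-derived
# feature (e.g., mean/variance of cache misses, branch misses, instructions).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))          # 1000 windows, 24 counter features
y = rng.integers(0, 2, size=1000)        # 1 = cryptomining, 0 = benign

# Step 1: a random forest ranks which counter features characterize mining.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:8]
print("most informative counters:", top)

# Step 2 (not shown): feed X[:, top], reshaped into sequences, to a small
# 1-D CNN that classifies each window as mining or benign.
X_selected = X[:, top]
```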
CBBM-WARM: A Workload-Aware Meta-Heuristic for Resource Management in Cloud Computing (Cited: 1)
11
Authors: K Nivitha, P Pabitha, R Praveen. 《China Communications》, 2025, No. 6: pp. 255-275 (21 pages)
The rapid advent of artificial intelligence and big data has revolutionized the dynamic requirements on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task due to the hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting the Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources with their dynamic requirements. In this paper, the Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS initially adopts adaptive density peak clustering for clustering the cloud's workloads. It then utilizes fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses CBBM for potential Virtual Machine (VM) deployment that contributes to the provision of optimal resources. It is proposed with the capability of achieving optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Keywords: autonomic resource management; cloud computing; coot bird behavior model; SLA violation cost; workload
Providing Robust and Low-Cost Edge Computing in Smart Grid: An Energy Harvesting Based Task Scheduling and Resource Management Framework (Cited: 1)
12
Authors: Xie Zhigang, Song Xin, Xu Siyang, Cao Jing. 《China Communications》, 2025, No. 2: pp. 226-240 (15 pages)
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve the problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
Keywords: edge computing; energy harvesting; energy storage unit; renewable energy; sampling average approximation; task scheduling
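The abstract reduces the energy minimization problem to a typical knapsack problem. For readers unfamiliar with that reduction, here is a minimal 0/1 knapsack dynamic program; treating each task as an item whose "weight" is its resource demand and whose "value" is the energy saved by offloading it is our illustrative framing, not the paper's exact formulation.

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via dynamic programming.

    weights:  per-task resource demand on the edge server
    values:   energy saved by offloading each task
    capacity: total resources available at the edge server
    Returns the maximum total energy saving.
    """
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity downward so each task is offloaded at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Example: 5 tasks competing for 10 resource units.
print(knapsack([2, 3, 4, 5, 9], [3, 4, 5, 8, 10], 10))  # -> 15
```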
DeepSeek vs. ChatGPT vs. Claude: A comparative study for scientific computing and scientific machine learning tasks (Cited: 1)
13
Authors: Qile Jiang, Zhiwei Gao, George Em Karniadakis. 《Theoretical & Applied Mechanics Letters》, 2025, No. 3: pp. 194-206 (13 pages)
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
Keywords: large language models (LLM); scientific computing; scientific machine learning; physics-informed neural network
Structural Analysis and Optimization of HPC Wall Panels in Building Curtain Wall Applications
14
Authors: 陈雪瑞, 赵丽华, 曹百站. 《建筑技艺(中英文)》, 2025, No. S1: pp. 425-427 (3 pages)
This paper examines the advantages and limitations of high-performance concrete (HPC) wall panel systems in curtain wall applications. By analyzing panel behavior under wind, thermal, and seismic loads, it proposes optimization strategies such as fiber-grid reinforcement and improved joint detailing, and includes an engineering case study, providing technical references for applying HPC panels to building facades.
Keywords: high-performance concrete (HPC); curtain wall system; fiber grid reinforcement; load-bearing mechanism; finite element analysis
A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
15
Authors: Sreebha Bhaskaran, Supriya Muthuraman. 《Computers, Materials & Continua》, 2025, No. 6: pp. 5037-5070 (34 pages)
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems. This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.
Keywords: cloud computing; edge computing; fog computing; resource provisioning; resource allocation; computation offloading; optimization techniques; software defined network
Research and Analysis of HPC Server Performance Testing at an Exploration Supercomputing Center
16
Authors: 朱启伟, 李书平, 王西林. 《信息系统工程》, 2025, No. 3: pp. 75-78 (4 pages)
HPC systems have historically been built on imported servers, storage, and network products. With the development of domestic technology and for information security reasons, China now places great emphasis on localization, and HPC is gradually moving toward domestically produced hardware. To determine whether domestic HPC clusters can meet the business needs of the industry, server clusters must undergo conventional deployment and Linpack testing. An HPC cluster system was deployed entirely with well-known domestic storage, server, and network products, and performance tests were conducted and analyzed for various scenarios, leading to the conclusion that domestic HPC performs excellently and fully meets business requirements.
Keywords: HPC; Linpack; performance testing; domestic production
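Linpack results are normally judged against the cluster's theoretical peak. The snippet below shows that standard back-of-the-envelope check; the node count, core specifications, and measured Rmax are invented numbers for illustration, not figures from the paper.

```python
# Theoretical peak: nodes x cores/node x clock (GHz) x FLOPs/cycle.
nodes = 32                     # hypothetical cluster size
cores_per_node = 64
ghz = 2.6
flops_per_cycle = 16           # e.g., dual 512-bit FMA units

rpeak_tflops = nodes * cores_per_node * ghz * flops_per_cycle / 1000
rmax_tflops = 68.2             # hypothetical HPL (Linpack) measurement

# Efficiency = Rmax / Rpeak; well-tuned clusters often land around 70-90%.
print(f"Rpeak = {rpeak_tflops:.1f} TFLOPS")
print(f"HPL efficiency = {rmax_tflops / rpeak_tflops:.1%}")
```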
A comprehensive survey of orbital edge computing: Systems, applications, and algorithms
17
Authors: Zengshan YIN, Changhao WU, Chongbin GUO, Yuanchun LI, Mengwei XU, Weiwei GAO, Chuanxiu CHI. 《Chinese Journal of Aeronautics》, 2025, No. 7: pp. 310-339 (30 pages)
The number of satellites, especially those operating in Low-Earth Orbit (LEO), has been exploding in recent years. Additionally, the burgeoning development of Artificial Intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared to terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous survey papers have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges faced, potential research directions for future OEC research are proposed.
Keywords: orbital edge computing; ubiquitous computing; large-scale satellite constellations; computation offloading
Comparative study of IoT- and AI-based computing disease detection approaches
18
Authors: Wasiur Rhmann, Jalaluddin Khan, Ghufran Ahmad Khan, Zubair Ashraf, Babita Pandey, Mohammad Ahmar Khan, Ashraf Ali, Amaan Ishrat, Abdulrahman Abdullah Alghamdi, Bilal Ahamad, Mohammad Khaja Shaik. 《Data Science and Management》, 2025, No. 1: pp. 94-106 (13 pages)
The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with a large volume of data compared to shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms and their hybrid and optimized algorithms for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches, such as cloud, edge, and fog. Their analysis focused on an IoT deep learning architecture suitable for disease detection. It also recognizes the different factors that require the attention of researchers to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Keywords: deep learning; Internet of Things (IoT); cloud computing; fog computing; edge computing
Efficient rock joint detection from large-scale 3D point clouds using vectorization and parallel computing approaches
19
Authors: Yunfeng Ge, Zihao Li, Huiming Tang, Qian Chen, Zhongxu Wen. 《Geoscience Frontiers》, 2025, No. 5: pp. 1-15 (15 pages)
The application of three-dimensional (3D) point cloud parametric analyses on exposed rock surfaces, enabled by Light Detection and Ranging (LiDAR) technology, has gained significant popularity due to its efficiency and the high quality of data it provides. However, as research extends to address more regional and complex geological challenges, the demand for algorithms that are both robust and highly efficient in processing large datasets continues to grow. This study proposes an advanced rock joint identification algorithm leveraging artificial neural networks (ANNs), incorporating the parallel computing and vectorization techniques of high-performance computing. The algorithm utilizes point cloud attributes, specifically point normals and point curvatures, as input parameters for the ANNs, which classify data into rock joints and non-rock joints. Subsequently, individual rock joints are extracted using the density-based spatial clustering of applications with noise (DBSCAN) technique. Principal component analysis (PCA) is then employed to calculate their orientations. By fully utilizing the computational power of parallel computing and vectorization, the algorithm increases the running speed by 3-4 times, enabling the processing of large-scale datasets within seconds. This breakthrough maximizes computational efficiency while maintaining high accuracy (compared with manual measurement, the deviation of the automatic measurement is within 2°), making it an effective solution for large-scale rock joint detection challenges.
Keywords: rock joints; point clouds; artificial neural network; high-performance computing; parallel computing; vectorization
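The joint-extraction step (DBSCAN clustering followed by PCA for orientation) can be illustrated with a short sketch; the synthetic point cloud and the eps/min_samples values below are our own assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Synthetic "joint" points: a noisy planar patch in 3-D.
u, v = rng.uniform(0, 5, (2, 400))
pts = np.c_[u, v, 0.3 * u + 0.1 * v + rng.normal(0, 0.02, 400)]

# Cluster candidate joint points into individual joints.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(pts)

for k in set(labels) - {-1}:          # label -1 marks noise points
    cluster = pts[labels == k]
    # PCA: the eigenvector with the smallest eigenvalue of the covariance
    # matrix is the plane normal, from which the dip angle follows.
    cov = np.cov((cluster - cluster.mean(axis=0)).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]            # eigh sorts eigenvalues ascending
    dip = np.degrees(np.arccos(abs(normal[2]) / np.linalg.norm(normal)))
    print(f"joint {k}: {len(cluster)} points, dip = {dip:.1f} deg")
```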
Nano device fabrication for in-memory and in-sensor reservoir computing
20
Authors: Yinan Lin, Xi Chen, Qianyu Zhang, Junqi You, Renjing Xu, Zhongrui Wang, Linfeng Sun. 《International Journal of Extreme Manufacturing》, 2025, No. 1: pp. 46-71 (26 pages)
Recurrent neural networks (RNNs) have proven to be indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expenses and slow convergence times, which impinge upon their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review offers a comprehensive explanation of RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive device based RC systems from the viewpoints of in-memory and in-sensor RC function. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge material science developments, providing readers with a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
Keywords: reservoir computing; memristive device fabrication; compute-in-memory; in-sensor computing
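Reservoir computing's appeal (only the linear readout is trained, while the recurrent reservoir stays fixed) is easy to demonstrate in software before mapping it to memristive hardware. Below is a minimal echo state network sketch in NumPy; the reservoir size, spectral radius, and the sine-wave prediction task are illustrative choices, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, n_in = 200, 1

# Fixed random reservoir, rescaled so its spectral radius is < 1
# (the "echo state" condition that gives the dynamics fading memory).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

# Task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 40 * np.pi, 2000))[:, None]
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in @ u[t])   # reservoir update (never trained)
    states[t + 1] = x

# Train only the linear readout by ridge regression: states[t] encodes
# inputs up to u[t-1] and must predict u[t].
washout = 100
X, y = states[washout:], u[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("readout MSE:", np.mean((X @ W_out - y) ** 2))
```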