Journal Articles: 255,256 results found
1. Study on High-Performance Computing for Simulation of End Milling Force
Authors: ZHANG Zhi-hai, ZHENG Li, LI Zhi-zhong, LIU Da-cheng, ZHANG Bo-peng (Department of Industrial Engineering, Tsinghua University, Beijing 100084, China). 《厦门大学学报(自然科学版)》 CAS CSCD PKU Core, 2002, No. S1, pp. 183-184.
Milling process simulation is one of the important research areas in manufacturing science. To improve the precision of simulation and extend its usability, numerical algorithms are increasingly used in milling modeling. But simulation efficiency decreases as model complexity increases, which limits the method's application. Aimed at this problem, a high-efficiency algorithm for milling process simulation is studied; it is important for the application of milling process simulation. Parallel computing is widely used to solve large-scale computation problems. Its advantages include system flexibility, robustness, high-efficiency computing capability, and a high ratio of performance to price. With the development of computer networks, a virtual computing environment with powerful computing capability can be composed of microcomputers utilizing computing resources on the Internet, reducing the difficulty of building the hardware environment needed to support parallel computing. This paper investigates how to use network technology and parallel algorithms to improve the efficiency of milling force simulation. To predict milling forces, a simplified local milling force model is used. The end milling cutter is assumed to be divided into r differential elements along the axial direction of the cutter. For a given time, the total cutting force can be obtained by summing the resultant cutting forces produced by each differential cutter disc.
The whole simulated time is divided into segments, these program segments are sent to microcomputers on the Internet, and their results are composed into the final result. To implement the algorithm, a distributed parallel computing framework is designed. In the framework, a web server plays the role of controller. Using Java RMI (Remote Method Invocation), the computing processes on the computing servers are called by the web server; control processes in the web server manage the computing servers. The simulation code can be dynamically sent to the computing servers, and milling forces at different times are computed using each local computer's resources. The results calculated by the computing servers are sent back to the web server and composed into the final result. The framework can be used by different simulation algorithms. Compared with running on a single machine, the provided algorithm achieves higher efficiency.
Keywords: end-milling force model; simulation; high-performance computing; parallel algorithm; Java RMI
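The force summation and time-segment decomposition described in this entry can be sketched as follows. The per-disc force model and its coefficients are illustrative placeholders (the paper's calibrated model is not given here), and the Java RMI dispatch to remote computing servers is replaced by plain local calls:

```python
import math

# Toy per-disc cutting force model; KT and FEED are illustrative values,
# not the paper's calibrated coefficients.
R_DISCS = 20          # r differential elements along the cutter axis
KT, FEED = 600.0, 0.1

def disc_force(t, i):
    phi = 2 * math.pi * t + 0.05 * i               # disc angular position with helix lag
    return KT * max(0.0, FEED * math.sin(phi))     # engaged only for positive chip load

def total_force(t):
    # Total cutting force at time t = sum over the r differential cutter discs.
    return sum(disc_force(t, i) for i in range(R_DISCS))

def simulate(times):
    return [total_force(t) for t in times]

# Split the whole simulated interval into contiguous segments, hand each to a
# worker (a local call here; a remote computing server in the paper), then
# compose the partial results in their original order.
times = [k / 500.0 for k in range(500)]
n_workers = 4
size = len(times) // n_workers
segments = [times[j * size:(j + 1) * size] for j in range(n_workers)]
partials = [simulate(seg) for seg in segments]
composed = [f for part in partials for f in part]
assert composed == simulate(times)                 # the decomposition is exact
```

Because the force at each instant is independent of every other instant, the segments can be evaluated on any number of machines and concatenated without synchronization, which is what makes this simulation embarrassingly parallel.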
2. High-performance computing of 3D blasting wave propagation in underground rock cavern by using 4D-LSM on TianHe-3 prototype E class supercomputer
Authors: Meng Fu, Gaofeng Zhao. 《Deep Underground Science and Engineering》, 2022, No. 1, pp. 87-100.
Parallel computing assigns the computing model to different processors on different devices and executes it simultaneously. Accordingly, it has broad applications in the numerical simulation of geotechnical and underground engineering, whose models are always large-scale. With parallel computing, the computing time or the memory requirements are reduced by splitting the original domain of the numerical model into many subdomains, which is thus named the domain decomposition method. In this study, a cubic and equal-volume domain decomposition strategy was utilized to realize parallel computing of the four-dimensional lattice spring model (4D-LSM) on a distributed memory system based on the message passing interface. With a more efficient communication strategy introduced, this study aimed at operating a one-billion-particle model on a supercomputer platform. The preprocessing procedure of the parallelized 4D-LSM was restructured, and a particle generation strategy suitable for the supercomputer platform was employed to minimize the time consumption in preprocessing and calculation. On this basis, numerical calculations were performed on the TianHe-3 prototype E class supercomputer at the National Supercomputer Center in Tianjin. Two field-scale three-dimensional blasting wave propagation models were carried out, whose numerical results verify the computing power and the advantage of the parallelized 4D-LSM in the simulation of large-scale three-dimensional models. Subsequently, the time complexity and spatial complexity of 4D-LSM and other particle discrete element methods were analyzed.
Keywords: domain decomposition method; lattice spring model; parallel computing; wave propagation
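A minimal sketch of the cubic, equal-volume decomposition used here: the global box is cut into identical sub-cubes, and each particle is mapped to the rank owning the sub-cube that contains it. The box size and grid are made-up example values; in the paper the ranks are MPI processes:

```python
# Cubic, equal-volume domain decomposition: the global box is split into
# nx*ny*nz identical sub-cubes, one per (MPI) rank.
def owner_rank(pos, box, grid):
    """pos: particle (x, y, z); box: (Lx, Ly, Lz); grid: (nx, ny, nz)."""
    ids = []
    for p, length, n in zip(pos, box, grid):
        i = min(int(p / (length / n)), n - 1)   # clamp particles on the upper face
        ids.append(i)
    ix, iy, iz = ids
    nx, ny, _ = grid
    return (iz * ny + iy) * nx + ix             # linearised rank id

box, grid = (100.0, 100.0, 100.0), (4, 4, 4)    # 64 equal-volume sub-cubes
assert owner_rank((0.0, 0.0, 0.0), box, grid) == 0
assert owner_rank((99.9, 99.9, 99.9), box, grid) == 63
```

Particles near sub-cube faces interact across boundaries, so each rank must also exchange a halo layer with its neighbours; that is the kind of communication the paper's improved strategy targets.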
3. High-Performance Computing
《Bulletin of the Chinese Academy of Sciences》, 2020, No. 1, pp. 38-39.
High-performance computing (HPC) refers to the ability to process data and perform complex calculations at high speeds. It is one of the most essential tools fueling the advancement of science and technology.
Keywords: computing; technology; advancement
4. High-performance CPU-GPU heterogeneous computing method for 9-component ambient noise cross-correlation
Authors: Jingxi Wang, Weitao Wang, Chao Wu, Lei Jiang, Hanwen Zou, Huajian Yao, Ling Chen. 《Earthquake Research Advances》, 2025, No. 3, pp. 81-87.
Ambient noise tomography is an established technique in seismology, where calculating single- or nine-component noise cross-correlation functions (NCFs) is a fundamental first step. In this study, we introduced a novel CPU-GPU heterogeneous computing framework designed to significantly enhance the efficiency of computing 9-component NCFs from seismic ambient noise data. This framework not only accelerated the computational process by leveraging the Compute Unified Device Architecture (CUDA) but also improved the signal-to-noise ratio (SNR) through innovative stacking techniques, such as time-frequency domain phase-weighted stacking (tf-PWS). We validated the program using multiple datasets, confirming its superior computation speed, improved reliability, and higher signal-to-noise ratios for NCFs. Our comprehensive study provides detailed insights into optimizing the computational processes for noise cross-correlation functions, thereby enhancing the precision and efficiency of ambient noise imaging.
Keywords: nine-component NCFs; heterogeneous computing; ambient noise tomography; CUDA; tf-PWS
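The kernel at the heart of NCF computation, cross-correlation via the frequency domain, can be sketched on the CPU with NumPy; the paper executes this step (and the tf-PWS stacking) on the GPU through CUDA. The array length and the 7-sample delay below are illustrative:

```python
import numpy as np

# Frequency-domain noise cross-correlation: FFT both records, multiply one
# spectrum by the conjugate of the other, inverse-FFT, and centre zero lag.
def ncf(a, b):
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()            # zero-pad to a power of two
    cc = np.fft.irfft(np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft)), nfft)
    return np.roll(cc[:n], len(b) - 1)          # zero lag at index len(b) - 1

rng = np.random.default_rng(0)
noise = rng.standard_normal(1024)                      # "station B" record
delayed = np.concatenate([np.zeros(7), noise[:-7]])    # "station A": 7-sample delay
lag = int(np.argmax(ncf(delayed, noise))) - (len(noise) - 1)
assert lag == 7                                  # the peak recovers the true delay
```

For nine-component NCFs, the same kernel is applied to each pairing of the two stations' vertical, radial, and transverse channels.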
5. GCSS: a global collaborative scheduling strategy for wide-area high-performance computing (Cited: 1)
Authors: Yao SONG, Limin XIAO, Liang WANG, Guangjun QIN, Bing WEI, Baicheng YAN, Chenhao ZHANG. 《Frontiers of Computer Science》 SCIE EI CSCD, 2022, No. 5, pp. 1-15.
Wide-area high-performance computing is widely used for large-scale parallel computing applications owing to its high computing and storage resources. However, the geographical distribution of computing and storage resources makes efficient task distribution and data placement more challenging. To achieve higher system performance, this study proposes a two-level global collaborative scheduling strategy for wide-area high-performance computing environments. The collaborative scheduling strategy integrates lightweight solution selection, redundant data placement, and task stealing mechanisms, optimizing task distribution and data placement to achieve efficient computing in wide-area environments. The experimental results indicate that compared with the state-of-the-art collaborative scheduling algorithm HPS+, the proposed scheduling strategy reduces the makespan by 23.24%, improves computing and storage resource utilization by 8.28% and 21.73% respectively, and achieves similar global data migration costs.
Keywords: high-performance computing; scheduling strategy; task scheduling; data placement
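Of the three mechanisms the strategy integrates, task stealing is the easiest to sketch: an idle site takes work from the tail of a loaded site's queue. This is a generic illustration of the technique, not GCSS's actual policy:

```python
from collections import deque

class Site:
    """A computing site with a local double-ended task queue."""
    def __init__(self, tasks=()):
        self.queue = deque(tasks)

    def run_local(self):
        # The owner consumes work from the head of its own queue.
        return self.queue.popleft() if self.queue else None

    def steal_from(self, victim):
        # An idle site steals from the tail, minimising contention with the owner.
        if victim.queue:
            self.queue.append(victim.queue.pop())

busy, idle = Site(range(8)), Site()
while len(idle.queue) < 3:          # the idle site steals until it has enough work
    idle.steal_from(busy)
assert list(idle.queue) == [7, 6, 5] and len(busy.queue) == 5
```

Stealing from the opposite end of the victim's deque is the classic work-stealing trick: owner and thief rarely touch the same element, so synchronization stays cheap.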
6. How Big Data and High-performance Computing Drive Brain Science
Authors: Shanyu Chen, Zhipeng He, Xinyin Han, Xiaoyu He, Ruilin Li, Haidong Zhu, Dan Zhao, Chuangchuang Dai, Yu Zhang, Zhonghua Lu, Xuebin Chi, Beifang Niu. 《Genomics, Proteomics & Bioinformatics》 SCIE CAS CSCD, 2019, No. 4, pp. 381-392.
Brain science accelerates the study of intelligence and behavior, contributes fundamental insights into human cognition, and offers prospective treatments for brain disease. Faced with the challenges posed by imaging technologies and deep learning computational models, big data and high-performance computing (HPC) play essential roles in studying brain function, brain diseases, and large-scale brain models or connectomes. We review the driving forces behind big data and HPC methods applied to brain science, including deep learning, powerful data analysis capabilities, and computational performance solutions, each of which can be used to improve diagnostic accuracy and research output. This work reinforces predictions that big data and HPC will continue to improve brain science by making ultrahigh-performance analysis possible, by improving data standardization and sharing, and by providing new neuromorphic insights.
Keywords: brain science; big data; high-performance computing; brain connectomes; deep learning
7. Call for Papers: Special Issue on High-Performance Computing for the Next Decade
Authors: Yutong Lu, Zizhong Chen, Juan Chen, Chao Li. 《Tsinghua Science and Technology》 SCIE EI CAS CSCD, 2018, No. 3, pp. 367-368.
Tsinghua Science and Technology has been published since 1996 as an international academic journal sponsored by Tsinghua University, appearing bimonthly. The journal aims at presenting state-of-the-art scientific achievements in computer science and other IT fields.
Keywords: call for papers; special issue; high-performance computing; HPC
8. Optimization Techniques for GPU-Based Parallel Programming Models in High-Performance Computing
Authors: Shuntao Tang, Wei Chen. 《信息工程期刊(中英文版)》, 2024, No. 1, pp. 7-11.
This study embarks on a comprehensive examination of optimization techniques within GPU-based parallel programming models, pivotal for advancing high-performance computing (HPC). Emphasizing the transition of GPUs from graphics-centric processors to versatile computing units, it delves into the nuanced optimization of memory access, thread management, algorithmic design, and data structures. These optimizations are critical for exploiting the parallel processing capabilities of GPUs, addressing both the theoretical frameworks and practical implementations. By integrating advanced strategies such as memory coalescing, dynamic scheduling, and parallel algorithmic transformations, this research aims to significantly elevate computational efficiency and throughput. The findings underscore the potential of optimized GPU programming to revolutionize computational tasks across various domains, highlighting a pathway towards achieving unparalleled processing power and efficiency in HPC environments. The paper not only contributes to the academic discourse on GPU optimization but also provides actionable insights for developers, fostering advancements in computational sciences and technology.
Keywords: optimization techniques; GPU-based parallel programming models; high-performance computing
9. Two-Dimensional MXene-Based Advanced Sensors for Neuromorphic Computing Intelligent Application
Authors: Lin Lu, Bo Sun, Zheng Wang, Jialin Meng, Tianyu Wang. 《Nano-Micro Letters》, 2026, No. 2, pp. 664-691.
As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can be solid solutions or organized structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic computing electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of the research progress of MXenes in the neuromorphic sensing domain and discusses the critical challenges that need to be resolved. Key bottlenecks such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration hinder their practical deployment. Furthermore, unresolved issues like interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
Keywords: two-dimensional MXenes; sensor; neuromorphic computing; multimodal intelligent system; wearable electronics
10. Mechanical Properties Analysis of Flexible Memristors for Neuromorphic Computing
Authors: Zhenqian Zhu, Jiheng Shui, Tianyu Wang, Jialin Meng. 《Nano-Micro Letters》, 2026, No. 1, pp. 53-79.
The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors exhibit great application potential in emulating artificial synapses for high-efficiency, low-power-consumption neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structure, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. The recent advances in flexible electronics are summarized, including single devices, device arrays, and integration. The challenges and future perspectives of flexible memristors for neuromorphic computing are discussed in depth, paving the way for constructing wearable smart electronics and applications in large-scale neuromorphic computing and high-order intelligent robotics.
Keywords: flexible memristor; neuromorphic computing; mechanical property; wearable electronics
11. High-Entropy Oxide Memristors for Neuromorphic Computing: From Material Engineering to Functional Integration
Authors: Jia-Li Yang, Xin-Gui Tang, Xuan Gu, Qi-Jun Sun, Zhen-Hua Tang, Wen-Hua Li, Yan-Ping Jiang. 《Nano-Micro Letters》, 2026, No. 2, pp. 138-169.
High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
Keywords: high-entropy oxides; memristors; neuromorphic computing; configurational entropy; resistive switching
12. Ad Hoc File Systems for High-Performance Computing (Cited: 1)
Authors: André Brinkmann, Kathryn Mohror, Weikuan Yu, Philip Carns, Toni Cortes, Scott A. Klasky, Alberto Miranda, Franz-Josef Pfreundt, Robert B. Ross, Marc-André Vef. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2020, No. 1, pp. 4-26.
Storage backends of parallel compute clusters are still based mostly on magnetic disks, while newer and faster storage technologies such as flash-based SSDs or non-volatile random access memory (NVRAM) are deployed within compute nodes. Including these new storage technologies in scientific workflows is unfortunately still a mostly manual task, and most scientists therefore do not take advantage of the faster storage media. One approach to systematically include node-local SSDs or NVRAM in scientific workflows is to deploy ad hoc file systems over a set of compute nodes, which serve as temporary storage systems for single applications or longer-running campaigns. This paper presents results from the Dagstuhl Seminar 17202 "Challenges and Opportunities of User-Level File Systems for HPC" and discusses application scenarios as well as design strategies for ad hoc file systems using node-local storage media. The discussion includes open research questions, such as how to couple ad hoc file systems with the batch scheduling environment and how to schedule stage-in and stage-out processes of data between the storage backend and the ad hoc file systems. Also presented are strategies to build ad hoc file systems using reusable components for networking and how to improve storage device compatibility. Various interfaces and semantics are presented, for example those used by the three ad hoc file systems BeeOND, GekkoFS, and BurstFS. Their presentation covers a range from file systems running in production to cutting-edge research focusing on reaching the performance limits of the underlying devices.
Keywords: parallel architectures; distributed file system; high-performance computing; burst buffer; POSIX (portable operating system interface)
13. Oscillation neuron based on a low-variability threshold switching device for high-performance neuromorphic computing (Cited: 2)
Authors: Yujia Li, Jianshi Tang, Bin Gao, Xinyi Li, Yue Xi, Wanrong Zhang, He Qian, Huaqiang Wu. 《Journal of Semiconductors》 EI CAS CSCD, 2021, No. 6, pp. 64-69.
Low-power and low-variability artificial neuronal devices are highly desired for high-performance neuromorphic computing. In this paper, an oscillation neuron based on a low-variability Ag nanodots (NDs) threshold switching (TS) device with low operation voltage, large on/off ratio, and high uniformity is presented. Measurement results indicate that this neuron demonstrates self-oscillation behavior under applied voltages as low as 1 V. The oscillation frequency increases with the applied voltage pulse amplitude and decreases with the load resistance. It can then be used to evaluate the resistive random-access memory (RRAM) synaptic weights accurately when the oscillation neuron is connected to the output of the RRAM crossbar array for neuromorphic computing. Meanwhile, simulation results show that a large RRAM crossbar array (>128×128) can be supported by our oscillation neuron owing to the high on/off ratio (>10^8) of the Ag NDs TS device. Moreover, the high uniformity of the Ag NDs TS device helps improve the distribution of the output frequency and suppress the degradation of neural network recognition accuracy (<1%). Therefore, the developed oscillation neuron based on the Ag NDs TS device shows great potential for future neuromorphic computing applications.
Keywords: threshold switching; Ag nanodots; oscillation neuron; neuromorphic computing
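The reported trends (oscillation frequency rising with applied voltage amplitude and falling with load resistance) match a simple relaxation-oscillator model in which the load resistor charges a parallel capacitance until the TS device fires. The component values below are illustrative assumptions, not the measured Ag NDs device parameters:

```python
import math

# Relaxation-oscillator view of the oscillation neuron: the capacitor charges
# through the load resistor from the hold voltage toward v_app and fires each
# time it crosses the threshold voltage of the TS device.
def oscillation_freq(v_app, r_load, c=1e-9, v_th=0.4, v_hold=0.1):
    if v_app <= v_th:
        return 0.0                                  # below threshold: no spiking
    period = r_load * c * math.log((v_app - v_hold) / (v_app - v_th))
    return 1.0 / period

f_high = oscillation_freq(1.0, 10e3)
assert f_high > oscillation_freq(0.8, 10e3) > 0.0   # grows with pulse amplitude
assert f_high > oscillation_freq(1.0, 20e3)         # shrinks with load resistance
```

In the crossbar read-out, the RRAM cell under evaluation effectively acts as the load resistance, so the output frequency encodes the stored synaptic weight.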
14. Mochi: Composing Data Services for High-Performance Computing Environments
Authors: Robert B. Ross, George Amvrosiadis, Philip Carns, Charles D. Cranor, Matthieu Dorier, Kevin Harms, Greg Ganger, Garth Gibson, Samuel K. Gutierrez, Robert Latham, Bob Robey, Dana Robinson, Bradley Settlemyer, Galen Shipman, Shane Snyder, Jerome Soumagne, Qing Zheng. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2020, No. 1, pp. 121-144.
Technology enhancements and the growing breadth of application workflows running on high-performance computing (HPC) platforms drive the development of new data services that provide high performance on these new platforms, provide capable and productive interfaces and abstractions for a variety of applications, and are readily adapted when new technologies are deployed. The Mochi framework enables composition of specialized distributed data services from a collection of connectable modules and subservices. Rather than forcing all applications to use a one-size-fits-all data staging and I/O software configuration, Mochi allows each application to use a data service specialized to its needs and access patterns. This paper introduces the Mochi framework and methodology. The Mochi core components and microservices are described. Examples of the application of the Mochi methodology to the development of four specialized services are detailed. Finally, a performance evaluation of a Mochi core component, a Mochi microservice, and a composed service providing an object model is performed. The paper concludes by positioning Mochi relative to related work in the HPC space and indicating directions for future work.
Keywords: storage and I/O; data-intensive computing; distributed services; high-performance computing
15. FTRP: a new fault tolerance framework using process replication and prefetching for high-performance computing
Authors: Wei HU, Guang-ming LIU, Yan-huang JIANG. 《Frontiers of Information Technology & Electronic Engineering》 SCIE EI CSCD, 2018, No. 10, pp. 1273-1290.
As the scale of supercomputers rapidly grows, the reliability problem dominates system availability. Existing fault tolerance mechanisms, such as periodic checkpointing and process redundancy, cannot effectively solve this problem. To address this issue, we present a new fault tolerance framework using process replication and prefetching (FTRP), combining the benefits of proactive and reactive mechanisms. FTRP incorporates a novel cost model and a new proactive fault tolerance mechanism to improve application execution efficiency. The novel cost model, called the "work-most" (WM) model, makes runtime decisions to adaptively choose an action from a set of fault tolerance mechanisms based on failure prediction results and application status. Similar to program locality, we observe the failure locality phenomenon in supercomputers for the first time. In the new proactive fault tolerance mechanism, process replication with process prefetching is proposed based on failure locality, significantly avoiding losses caused by failures regardless of whether they have been predicted. Simulations with real failure traces demonstrate that the FTRP framework outperforms existing fault tolerance mechanisms, with up to 10% improvement in application efficiency for common failure prediction accuracy, and is effective for petascale systems and beyond.
Keywords: high-performance computing; proactive fault tolerance; failure locality; process replication; process prefetching
16. The Paradigm of Power Bounded High-Performance Computing
Authors: Rong Ge, Xizhou Feng, Pengfei Zou, Tyler Allen. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2023, No. 1, pp. 87-102.
Modern computer systems are increasingly bounded by the available or permissible power at multiple layers, from individual components to data centers. To cope with this reality, it is necessary to understand how power bounds impact performance, especially for systems built from high-end nodes, each consisting of multiple power-hungry components. Because placing an inappropriate power bound on a node or a component can lead to severe performance loss, coordinating power allocation among nodes and components is mandatory to achieve desired performance given a total power budget. In this article, we describe the paradigm of power bounded high-performance computing, which considers coordinated power bound assignment to be a key factor in computer system performance analysis and optimization. We apply this paradigm to the problem of power coordination across multiple layers for both CPU and GPU computing. Using several case studies, we demonstrate how the principles of balanced power coordination can be applied and adapted to the interplay of workloads, hardware technology, and the available total power for performance improvement.
Keywords: power bounded computing; cross-component power coordination; hierarchical power allocation
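A toy version of coordinated power allocation under a node bound: components receive power in proportion to workload-derived weights, and power that a capped component cannot absorb is re-shared among the rest. The weights and caps are invented for illustration and are not the article's models:

```python
def allocate_power(total, weights, caps):
    """Proportionally share a power bound across components, honouring
    per-component caps and re-sharing any freed power."""
    alloc = {c: 0.0 for c in weights}
    active, remaining = set(weights), total
    while active and remaining > 1e-9:
        wsum = sum(weights[c] for c in active)
        capped = {c for c in active
                  if alloc[c] + remaining * weights[c] / wsum >= caps[c]}
        if not capped:
            for c in active:                 # everyone fits: distribute and stop
                alloc[c] += remaining * weights[c] / wsum
            remaining = 0.0
        else:
            for c in capped:                 # pin these at their caps, free surplus
                remaining -= caps[c] - alloc[c]
                alloc[c] = caps[c]
            active -= capped
    return alloc

caps = {"cpu": 150.0, "gpu": 300.0}          # illustrative component limits, W
weights = {"cpu": 1.0, "gpu": 3.0}           # GPU-heavy workload
assert allocate_power(200.0, weights, caps) == {"cpu": 50.0, "gpu": 150.0}
assert allocate_power(500.0, weights, caps) == {"cpu": 150.0, "gpu": 300.0}
```

The same water-filling idea applies recursively: a cluster bound is divided among nodes, and each node's share among its components.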
17. The future is frozen: cryogenic CMOS for high-performance computing
Authors: R. Saligram, A. Raychowdhury, Suman Datta. 《Chip》 EI, 2024, No. 1, pp. 43-54.
Low-temperature complementary metal oxide semiconductor (CMOS), or cryogenic CMOS, is a promising avenue for the continuation of Moore's law while serving the needs of high-performance computing. With temperature as a control "knob" to steepen the subthreshold slope of CMOS devices, the supply voltage can be reduced with no impact on operating speed. With optimal threshold voltage engineering, the device ON current can be further enhanced, translating to higher performance. In this article, experimentally calibrated data were adopted to tune the threshold voltage and investigate the power, performance, and area of cryogenic CMOS at the device, circuit, and system levels. We also present results from measurement and analysis of functional memory chips fabricated in 28 nm bulk CMOS and 22 nm fully depleted silicon-on-insulator (FDSOI) operating at cryogenic temperature. Finally, the challenges and opportunities in the further development and deployment of such systems are discussed.
Keywords: cryogenic CMOS; design technology co-optimization; high-performance computing; parameter variation; threshold voltage engineering; cryogenic memories; interconnects
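The temperature "knob" works because the subthreshold slope scales with absolute temperature, SS ≈ n·(kT/q)·ln 10. The sketch below uses this idealized linear-in-T expression with an assumed ideality factor; measured cryogenic devices saturate above the ideal value because of band-tail states:

```python
import math

def subthreshold_slope(temp_k, n=1.2):
    """Ideal subthreshold slope in mV/decade; n is an assumed ideality factor."""
    k_over_q = 8.617e-5                 # Boltzmann constant / electron charge, V/K
    return n * k_over_q * temp_k * math.log(10) * 1e3

ss_room = subthreshold_slope(300.0)     # about 71 mV/dec at room temperature
ss_ln2 = subthreshold_slope(77.0)       # about 18 mV/dec at liquid nitrogen
assert ss_ln2 < ss_room / 3             # steeper slope permits a lower supply voltage
```

A steeper slope means the transistor turns off in a smaller gate-voltage swing, which is what lets the supply voltage drop without sacrificing on/off ratio or speed.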
18. Measurement and analysis of defects in high-performance concrete with three-dimensional micro-computer tomography (Cited: 8)
Authors: GUO Li-ping, Andrea Carpinteri, SUN Wei, QIN Wen-chao. 《Journal of Southeast University (English Edition)》 EI CAS, 2009, No. 1, pp. 83-88.
In order to investigate the effects of two mineral admixtures (i.e., fly ash and ground slag) on initial defects existing in concrete microstructures, a high-resolution X-ray micro-CT (micro-focus computer tomography) is employed to quantitatively analyze the initial defects in four series of high-performance concrete (HPC) specimens with additions of different mineral admixtures. The high-resolution 3D images of microstructures and filtered defects are reconstructed by micro-CT software. The size distribution and volume fractions of initial defects are analyzed based on 3D and 2D micro-CT images. The analysis results are verified by experimental results of water-suction tests. The results show that the additions of mineral admixtures in concrete as cementitious materials greatly change the geometrical properties of the microstructures and the spatial features of defects through the physical-chemistry actions of these mineral admixtures. This is the major cause of the differences between the mechanical behaviors of HPC with and without mineral admixtures when the water-to-binder ratio and the size distribution of aggregates are constant.
Keywords: high-performance concrete; defect; microstructure; X-ray micro-focus computer tomography; mineral admixtures
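After thresholding, measuring a defect volume fraction from the reconstructed volume reduces to counting pore voxels in a binary 3D array. The synthetic spherical pore below is an invented stand-in for a segmented micro-CT image:

```python
import numpy as np

def defect_volume_fraction(binary_volume):
    # Fraction of voxels labelled as defect (1) in the segmented volume.
    return float(binary_volume.mean())

# Synthetic 64^3 volume containing a single spherical pore of radius 8 voxels.
n, r = 64, 8
z, y, x = np.ogrid[:n, :n, :n]
pore = ((x - n // 2) ** 2 + (y - n // 2) ** 2 + (z - n // 2) ** 2) <= r ** 2
vf = defect_volume_fraction(pore.astype(np.uint8))
analytic = (4.0 / 3.0) * np.pi * r ** 3 / n ** 3
assert abs(vf - analytic) / analytic < 0.05      # voxelisation error stays small
```

Size distributions follow the same idea: label connected pore components in the binary volume and histogram their voxel counts.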
19. Efficient rock joint detection from large-scale 3D point clouds using vectorization and parallel computing approaches
Authors: Yunfeng Ge, Zihao Li, Huiming Tang, Qian Chen, Zhongxu Wen. 《Geoscience Frontiers》, 2025, No. 5, pp. 1-15.
The application of three-dimensional (3D) point cloud parametric analyses on exposed rock surfaces, enabled by Light Detection and Ranging (LiDAR) technology, has gained significant popularity due to its efficiency and the high quality of data it provides. However, as research extends to address more regional and complex geological challenges, the demand for algorithms that are both robust and highly efficient in processing large datasets continues to grow. This study proposes an advanced rock joint identification algorithm leveraging artificial neural networks (ANNs), incorporating parallel computing and the vectorization of high-performance computing. The algorithm utilizes point cloud attributes, specifically point normals and point curvatures, as input parameters for ANNs, which classify data into rock joints and non-rock joints. Subsequently, individual rock joints are extracted using the density-based spatial clustering of applications with noise (DBSCAN) technique. Principal component analysis (PCA) is then employed to calculate their orientations. By fully utilizing the computational power of parallel computing and vectorization, the algorithm increases the running speed by 3-4 times, enabling the processing of large-scale datasets within seconds. This breakthrough maximizes computational efficiency while maintaining high accuracy (compared with manual measurement, the deviation of the automatic measurement is within 2°), making it an effective solution for large-scale rock joint detection challenges.
Keywords: rock joints; point clouds; artificial neural network; high-performance computing; parallel computing; vectorization
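The final PCA step can be sketched directly: for one extracted joint cluster, the direction of least variance is the plane normal, from which dip direction and dip angle follow. The synthetic plane (dipping 30° toward azimuth 090°) and its noise level are made up for the demonstration:

```python
import numpy as np

def joint_orientation(points):
    """Dip direction and dip (degrees) of a joint from its 3D point cluster."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]                    # least-variance direction = plane normal
    if normal[2] < 0:
        normal = -normal               # orient the normal upward
    dip = np.degrees(np.arccos(normal[2]))
    dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip_direction, dip

# Synthetic joint plane: z = -x * tan(30 deg), i.e. dipping 30 deg toward east.
rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, 500)
y = rng.uniform(-5.0, 5.0, 500)
z = -np.tan(np.radians(30.0)) * x + rng.normal(0.0, 0.01, 500)
dd, dip = joint_orientation(np.column_stack([x, y, z]))
assert abs(dip - 30.0) < 1.0 and abs(dd - 90.0) < 1.0
```

This per-cluster computation is independent across joints, which is exactly the kind of loop that vectorization and parallel execution accelerate.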
20. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. 《Computers, Materials & Continua》 SCIE EI, 2025, No. 1, pp. 863-879.
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
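The priority-based ordering that precedes the learning components can be sketched with a heap: latency-sensitive tasks are dispatched first. The deadline values are an invented proxy for the paper's actual priority metric:

```python
import heapq

# Tasks arriving at a satellite edge server: (name, deadline in ms).
tasks = [("video", 50), ("telemetry", 10), ("bulk-sync", 500), ("voice", 20)]

# Min-heap keyed on the deadline: the most latency-sensitive task is served first.
heap = [(deadline, name) for name, deadline in tasks]
heapq.heapify(heap)
order = [heapq.heappop(heap)[1] for _ in range(len(heap))]
assert order == ["telemetry", "voice", "video", "bulk-sync"]
```

In the paper's pipeline this ordering only fixes the execution sequence; which server a task is offloaded to, and with what resources, is then decided by the DDQN and DDPG components.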