Journal Articles
4 articles found
Constitutive modeling of compression behavior of TC4 tube based on modified Arrhenius and artificial neural network models (Cited by: 5)
Authors: Zhi-Jun Tao, He Yang, Heng Li, Jun Ma, Peng-Fei Gao. Rare Metals (SCIE, EI, CAS, CSCD), 2016, No. 2, pp. 162-171.
Warm rotary draw bending provides a feasible method to form large-diameter thin-walled (LDTW) TC4 bent tubes, which are widely used in aircraft pneumatic systems. Accurate prediction of the flow behavior of TC4 tubes, considering the coupled effects of temperature, strain rate and strain, is critical for understanding the deformation behavior of the metal and for optimizing the processing parameters in warm rotary draw bending. In this study, isothermal compression tests of the TC4 tube alloy were performed from 573 to 873 K at intervals of 100 K and at strain rates of 0.001, 0.010 and 0.100 s^(-1). Flow behavior was predicted with two constitutive models: a modified Arrhenius model and an artificial neural network (ANN) model. Their predictions were compared using statistical measures, namely the correlation coefficient (R), the average absolute relative error (AARE), and the variation of these measures with the deformation parameters (temperature, strain rate and strain). The analysis reveals that both models achieve high prediction accuracy in terms of R and AARE, with the ANN model being the more accurate of the two. In addition, the prediction accuracy of the ANN model remains stable over the whole range of deformation parameters, whereas that of the modified Arrhenius model fluctuates with the deformation conditions: it is higher at temperatures of 573-773 K, strain rates of 0.010-0.100 s^(-1) and strains of 0.04-0.32, but lower at a temperature of 873 K, a strain rate of 0.001 s^(-1) and strains of 0.36-0.48. The applicability of the modified Arrhenius model is therefore limited by its relatively low accuracy under some deformation conditions, while the ANN model is highly accurate under all conditions and can be used to study the compression behavior of TC4 tubes over the temperature range of 573-873 K and strain rates of 0.001-0.100 s^(-1). The results can guide the design of processing parameters in warm rotary draw bending of LDTW TC4 tubes.
Keywords: TC4 tube; compression behavior; constitutive model; modified Arrhenius model; neural network model
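As a rough illustration of the Arrhenius side of this comparison, the sketch below evaluates a standard hyperbolic-sine Arrhenius flow-stress expression via the Zener-Hollomon parameter over the tested conditions. The material constants (Q, A, n, alpha) are placeholders for illustration only, not the values fitted in the paper.

```python
import numpy as np

# Hypothetical material constants -- the paper fits these from the
# isothermal compression data; the numbers below are placeholders.
Q = 3.0e5      # apparent activation energy, J/mol
A = 1.0e12     # structure factor, s^-1
n = 4.5        # stress exponent
alpha = 0.01   # stress multiplier, MPa^-1
R = 8.314      # gas constant, J/(mol*K)

def flow_stress(T, strain_rate):
    """Arrhenius-type flow stress via the Zener-Hollomon parameter Z."""
    Z = strain_rate * np.exp(Q / (R * T))       # temperature-compensated rate
    x = (Z / A) ** (1.0 / n)
    return np.arcsinh(x) / alpha                # inverse hyperbolic-sine form

# Conditions from the tests: 573-873 K, strain rates 0.001-0.100 s^-1
for T in (573.0, 673.0, 773.0, 873.0):
    for rate in (0.001, 0.010, 0.100):
        print(f"T={T:.0f} K, rate={rate:.3f} 1/s -> sigma={flow_stress(T, rate):.1f} MPa")
```

The model reproduces the expected trends: flow stress falls with temperature and rises with strain rate.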
Prediction of the static elastic modulus of shale based on a BP neural network (Cited by: 5)
Authors: HOU Lian-lang, LIANG Li-xi, LIU Xiang-jun, XIONG Jian. Science Technology and Engineering (Peking University Core Journal), 2016, No. 30, pp. 176-180, 195.
The static elastic modulus of shale is a key parameter throughout the exploration and development of shale oil and gas resources. At present it is usually predicted by first computing the dynamic elastic modulus from core P-wave transit time and density, and then establishing a relationship between the dynamic and static moduli. Differences in rock mineral composition, however, often lead to poor correlation between the dynamic and static moduli obtained by this conventional approach, so the predictions fail to meet engineering requirements. To predict the static elastic modulus of cores from the study block, density and P-wave transit time were first measured on the cores; whole-rock mineral analysis, clay mineral analysis and triaxial compression tests were then used for comparative analysis, with the static elastic modulus obtained from the triaxial compression tests. Three BP neural network prediction systems were built and trained with different input variables, and their performance was compared. The results show that the BP network using only core density and P-wave transit time as inputs performs poorly; the network that additionally uses quartz and illite contents performs better; and the network using core density, P-wave transit time, quartz content, calcite content, illite content and illite/smectite mixed-layer content performs best.
Keywords: shale; BP neural network; density; P-wave transit time; mineral composition; static elastic modulus
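The abstract's comparison of input sets can be sketched with a minimal one-hidden-layer BP (backpropagation) network. The data below are synthetic stand-ins, not the paper's core measurements; the point is only that adding informative mineral-content inputs lowers the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data. Columns: density, P-wave transit time,
# quartz content, illite content (all scaled to [0, 1]);
# target: static elastic modulus (arbitrary units).
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]).reshape(-1, 1)

def train_bp(X, y, hidden=8, lr=0.05, epochs=3000):
    """Train a one-hidden-layer BP network by batch gradient descent
    and return the trained prediction function."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # hidden activations
        err = (h @ W2 + b2) - y                  # prediction error
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)       # backpropagated signal
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda X: np.tanh(X @ W1 + b1) @ W2 + b2

# Compare two of the input sets discussed in the abstract.
f2 = train_bp(X[:, :2], y)   # density + transit time only
f4 = train_bp(X, y)          # plus quartz and illite contents
mse2 = float(np.mean((f2(X[:, :2]) - y) ** 2))
mse4 = float(np.mean((f4(X) - y) ** 2))
print(f"2 inputs: MSE={mse2:.4f}   4 inputs: MSE={mse4:.4f}")
```

Because the synthetic target genuinely depends on the mineral-content columns, the two-input network is left with irreducible error that the four-input network can remove, mirroring the paper's finding.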
Recent advances in efficient computation of deep convolutional neural networks (Cited by: 37)
Authors: Jian CHENG, Pei-song WANG, Gang LI, Qing-hao HU, Han-qing LU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2018, No. 1, pp. 64-77.
Deep neural networks have evolved remarkably over the past few years and are now the fundamental tools of many intelligent systems. At the same time, their computational complexity and resource consumption continue to increase. This poses a significant challenge to deployment, especially in real-time applications or on resource-limited devices, and network acceleration has therefore become a hot topic within the deep learning community. On the hardware side, a number of accelerators based on field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both the algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.
Keywords: deep neural networks; acceleration; compression; hardware accelerator
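Two of the surveyed compression techniques, magnitude-based network pruning and uniform network quantization, can be sketched in a few lines. The weight matrix here is random and the thresholding scheme is the simplest possible variant, for illustration only.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries (unstructured pruning)."""
    w = weights.copy()
    k = int(sparsity * w.size)
    if k > 0:
        # k-th smallest absolute value becomes the pruning threshold
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        w[np.abs(w) <= thresh] = 0.0
    return w

def uniform_quantize(weights, bits):
    """Uniform affine quantization to 2**bits discrete levels."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((weights - lo) / scale)      # integer levels 0 .. 2**bits - 1
    return q * scale + lo                     # de-quantized approximation

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))
Wp = magnitude_prune(W, 0.9)                  # 90% of weights set to zero
Wq = uniform_quantize(W, 4)                   # 4-bit quantization
print("sparsity:", float((Wp == 0.0).mean()))
print("levels after 4-bit quantization:", len(np.unique(Wq)))
```

Real pipelines combine such steps with fine-tuning to recover accuracy; this sketch only shows the compression operators themselves.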
Pushing to the Limit:An Attention-Based Dual-Prune Approach for Highly-Compacted CNN Filter Pruning
Authors: Yu-Chu Fang, Wen-Zhong Li, Yao Zeng, Qing-Ning Lu, Sang-Lu Lu. Journal of Computer Science & Technology, 2025, No. 3, pp. 805-820.
Filter pruning is an important technique for compressing convolutional neural networks (CNNs) into lightweight, high-performance models for practical deployment. However, existing filter pruning methods suffer sharp performance drops when the pruning ratio is large, probably due to unrecoverable information loss caused by aggressive pruning. In this paper, we propose a dual-attention-based pruning approach called DualPrune to push the limit of network pruning at ultra-high compression ratios. First, it adopts a graph attention network (GAT) to automatically extract filter-level and layer-level features from CNNs based on the roles of their filters in the whole computation graph. The extracted features are then fed to a side-attention network, which generates sparse attention weights for individual filters to guide model pruning. To avoid layer collapse, the side-attention network adopts a side-path design that properly preserves the information flow through the CNN model, allowing the model to be pruned at a high compression ratio at initialization and trained from scratch afterward. Extensive experiments on several well-known CNN models and real-world datasets show that the proposed DualPrune method outperforms state-of-the-art methods with significant performance improvement, particularly for model compression at a high pruning ratio.
Keywords: ultra-high compression; dual-attention-based structured pruning; inter-layer dependency; layer collapse; neural network compression
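A minimal sketch of the structured filter pruning that DualPrune performs: rank a convolution layer's output filters by an importance score and keep only the top fraction. Here a simple L1-norm score stands in for the paper's learned GAT/side-attention weights (an assumption; the real method learns these scores end-to-end).

```python
import numpy as np

def prune_filters(conv_w, keep_ratio):
    """Structured filter pruning: keep the top-scoring output filters.

    conv_w has shape (out_channels, in_channels, kH, kW). The L1 norm per
    filter is a stand-in importance score, not the paper's attention weights.
    """
    scores = np.abs(conv_w).sum(axis=(1, 2, 3))          # one score per filter
    n_keep = max(1, int(keep_ratio * conv_w.shape[0]))   # avoid layer collapse
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])    # indices of survivors
    return conv_w[keep], keep

rng = np.random.default_rng(2)
W = rng.normal(size=(32, 16, 3, 3))       # a 32-filter conv layer
Wk, kept = prune_filters(W, 0.25)         # prune 75% of the filters
print("remaining filters:", Wk.shape[0])
```

Note the `max(1, ...)` guard: keeping at least one filter per layer is the crude analogue of the side-path design the paper uses to prevent layer collapse at extreme ratios.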