Journal Articles
58 articles found
1. Modeling compression behaviors of freeze-thaw-impacted soils extending the disturbed state concept
Authors: Pan Zhang, Sai K. Vanapalli, Zhong Han. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 10, pp. 6606-6620 (15 pages)
This study investigates the volumetric behaviors of various soils during freeze-thaw (FT) cycles and subsequent one-dimensional (1D) compression from experimental and theoretical studies. Experimental studies were performed on saturated expansive soil specimens with varying compaction conditions and soil structures under different stress states. Experimental results demonstrate that the specimens expand during freezing and contract during thawing. All specimens converge to the same residual void ratio after seven FT cycles, irrespective of their different initial void ratio, stress state, and soil structure. The compression index of the expansive soil specimens increases with the initial void ratio, whereas their swelling index remains nearly constant. A model extending the disturbed state concept (DSC) is proposed to predict the 1D compression behaviors of FT-impacted soils. The model incorporates a parameter, b, to account for the impacts of FT cycles. Empirical equations have been developed to link the key model parameters (i.e. the normalized yield stress and parameter b) to the soil state parameter (i.e. the normalized void ratio) in order to simplify the prediction approach. The proposed model well predicts the results of the tested expansive soil. In addition, the model's feasibility for other types of soils, including low- and high-plastic clays, and high-plastic organic soils, has been validated using published data from the literature. The proposed model is simple yet reliable for predicting the compression behaviors of soils subjected to FT cycles.
Keywords: Initial state; Freeze-thaw (FT) cycle test; One-dimensional (1D) compression test; Disturbed state concept (DSC); Compression model
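For orientation, the generic disturbed state concept interpolation (after Desai) that such compression models build on is sketched below; the paper's parameter b and its empirical links to the normalized void ratio are not reproduced here, so the symbols are illustrative assumptions only.

```latex
% Generic DSC interpolation (illustrative, not the paper's extended model):
% the observed response is a disturbance-weighted mix of the relative-intact
% (i) and fully-adjusted (c) responses.
\[
  e^{a} = (1 - D)\, e^{i} + D\, e^{c},
  \qquad
  D = D_{u}\left[1 - \exp\!\left(-A\,\xi_{D}^{\,Z}\right)\right],
\]
% e^a: observed void ratio; D: disturbance function of the strain trajectory
% \xi_D, with material constants D_u, A, Z (all assumed symbols).
```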
2. Flow Stress Modeling for Aeronautical Aluminum Alloy 7050-T7451 in High-Speed Cutting (Cited by: 15)
Authors: 付秀丽, 艾兴, 万熠, 张松. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2007, Issue 2, pp. 139-144 (6 pages)
The high-temperature split Hopkinson pressure bar (SHPB) compression experiment is conducted to obtain the relationship among strain, strain rate and flow stress from room temperature to 550 °C for aeronautical aluminum alloy 7050-T7451. By combining high-speed orthogonal cutting experiments with cutting process simulations, the data relationship at high temperature, high strain rate and large strain in high-speed cutting is modified. The Johnson-Cook empirical model, which considers the effects of strain hardening, strain rate hardening and thermal softening, is selected to describe this relationship in high-speed cutting, and the material constants of the flow stress constitutive model for aluminum alloy 7050-T7451 are determined. Finally, the constitutive model of aluminum alloy 7050-T7451 is established through experiment and simulation verification in high-speed cutting. The model is shown to be reasonable by matching the measured cutting forces with the results estimated from FEM simulations.
Keywords: high-speed cutting; flow stress models; SHPB compression experiment; FEM simulation
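The abstract selects the Johnson-Cook empirical model; a minimal Python sketch of its standard form follows. The constants are placeholders, not the paper's fitted values for 7050-T7451, and the reference strain rate and melting temperature are assumptions.

```python
import numpy as np

def johnson_cook_stress(strain, strain_rate, temp,
                        A, B, n, C, m,
                        ref_strain_rate=1.0,   # assumed reference strain rate [1/s]
                        T_room=293.0, T_melt=908.0):  # assumed temperatures [K]
    """Johnson-Cook flow stress: strain hardening x rate hardening x thermal softening."""
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * np.log(np.maximum(strain_rate, 1e-12) / ref_strain_rate)
    T_star = np.clip((temp - T_room) / (T_melt - T_room), 0.0, 1.0)
    softening = 1.0 - T_star ** m
    return hardening * rate_term * softening

# Example call with illustrative constants (not the paper's values)
print(johnson_cook_stress(strain=0.2, strain_rate=1e4, temp=573.0,
                          A=450.0, B=300.0, n=0.3, C=0.02, m=1.0))
```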
3. Optimizing BERT for Bengali Emotion Classification: Evaluating Knowledge Distillation, Pruning, and Quantization
Authors: Md Hasibur Rahman, Mohammed Arif Uddin, Zinnat Fowzia Ria, Rashedur M. Rahman. Computer Modeling in Engineering & Sciences, 2025, Issue 2, pp. 1637-1666 (30 pages)
The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We have explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and 4× compression ratio with a combination of Distil + Prune + Quant that reduced the parameter count from 178 M to 46 M, while the memory size decreased from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
Keywords: Bengali NLP; black-box distillation; emotion classification; model compression; post-training quantization; unstructured pruning
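As a rough illustration of the pruning-plus-quantization side of such a pipeline (not the authors' exact distillation setup), the sketch below applies PyTorch's built-in unstructured magnitude pruning and post-training dynamic quantization to an mBERT classifier; the checkpoint name, pruning amount, and number of emotion labels are assumptions.

```python
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification

# Assumed checkpoint and label count; the paper's fine-tuned Bengali model is not public here.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=6)

# Unstructured magnitude pruning: zero out 30% of the smallest weights in every Linear layer.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Post-training dynamic quantization: int8 weights for Linear layers at inference time.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```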
4. A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment
Authors: Muhammad Arif, Muhammad Rashid. Computers, Materials & Continua, 2025, Issue 4, pp. 13-64 (52 pages)
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, learning strategies within EdgeML, and deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on major development boards, software frameworks, sensors, and algorithms used in various applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers. It assists them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications across various sectors.
Keywords: Edge machine learning; tiny machine learning; model compression; inference; learning algorithms
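As a concrete example of the model-conversion step the review surveys, the following sketch converts a small Keras model to a TensorFlow Lite flatbuffer with default post-training optimization; the toy architecture is an assumption, shown only to make the workflow tangible.

```python
import tensorflow as tf

# Assume a small, already-trained classifier; a toy model stands in for it here.
keras_model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)  # flatbuffer ready for a microcontroller runtime such as TFLite Micro
```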
5. An Improved Knowledge Distillation Algorithm and Its Application to Object Detection
Authors: Min Yao, Guofeng Liu, Yaozu Zhang, Guangjie Hu. Computers, Materials & Continua, 2025, Issue 5, pp. 2189-2205 (17 pages)
Knowledge distillation (KD) is an emerging model compression technique for learning compact object detector models. Previous KD often focused solely on distilling from the logits layer or the intermediate feature layers, which may limit the comprehensive learning of the student network. Additionally, the imbalance between foreground and background also affects the performance of the model. To address these issues, this paper employs feature-based distillation to enhance the detection performance of the bounding box localization part, and logit-based distillation to improve the detection performance of the category prediction part. Specifically, for intermediate-layer feature distillation, we introduce feature resampling to reduce the risk of the student model merely imitating the teacher model. At the same time, we incorporate a Spatial Attention Mechanism (SAM) to highlight the foreground features learned by the student model. In terms of output-layer feature distillation, we divide the traditional distillation targets into target-class objects and non-target-class objects, aiming to improve overall distillation performance. Furthermore, we introduce a one-to-many matching distillation strategy based on a Feature Alignment Module (FAM), which further enhances the student model's feature representation ability, making its feature distribution closer to that of the teacher model, and thus demonstrating superior localization and classification capabilities in object detection tasks. Experimental results demonstrate that our proposed methodology outperforms conventional distillation techniques in terms of object detection performance.
Keywords: Deep learning; model compression; knowledge distillation; object detection
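For reference, a minimal sketch of the standard logit-based distillation loss (temperature-scaled KL plus hard cross-entropy) is given below; it does not reproduce the paper's feature resampling, SAM, FAM, or target/non-target decomposition.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Standard logit distillation: soft KL term against the teacher plus
    hard cross-entropy against ground truth (temperature T, mixing weight alpha)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Example with random logits for a 20-class detection head
s = torch.randn(8, 20); t = torch.randn(8, 20); y = torch.randint(0, 20, (8,))
print(kd_loss(s, t, y).item())
```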
6. Hierarchical Shape Pruning for 3D Sparse Convolution Networks
Authors: Haiyan Long, Chonghao Zhang, Xudong Qiu, Hai Chen, Gang Chen. Computers, Materials & Continua, 2025, Issue 8, pp. 2975-2988 (14 pages)
3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face under-explored limitations: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous driving benchmarks. On KITTI's 3D object detection task, our method removes 93.47% of redundant kernel computations while maintaining comparable accuracy (1.56% mAP drop). Remarkably, on the more complex nuScenes benchmark, HSP-S achieves simultaneous computation reduction (21.94% sparsity) and accuracy gains (1.02% mAP (mean Average Precision) and 0.47% NDS (nuScenes detection score) improvement), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
Keywords: Shape pruning; model compression; 3D sparse convolution
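The sketch below illustrates only the general idea of threshold-based elimination of weak kernel positions on a dense 3D convolution weight; it is not the HSP-S algorithm (no layer-adaptive threshold training, no sparse-convolution backend), and the keep ratio is an assumption.

```python
import torch

def shape_prune_mask(weight, keep_ratio=0.1):
    """Zero out kernel positions whose aggregated L1 importance falls below a
    layer-level threshold. `weight` has shape (out_ch, in_ch, kD, kH, kW);
    masking whole spatial positions only illustrates removing non-contributory
    kernel 'stripes'."""
    importance = weight.abs().sum(dim=(0, 1))              # (kD, kH, kW)
    k = max(1, int(keep_ratio * importance.numel()))
    threshold = importance.flatten().topk(k).values.min()   # keep the k strongest positions
    mask = (importance >= threshold).to(weight.dtype)
    return weight * mask

w = torch.randn(32, 16, 3, 3, 3)
print((shape_prune_mask(w, keep_ratio=0.3) != 0).float().mean().item())
```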
7. Real-time modular dynamic modeling for compression system of altitude ground test facilities (Cited by: 2)
Authors: Yang Su, Xuejiang Chen, Xin Wang, Xiaodong Li, Xiaoming Liu. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, Issue 5, pp. 202-211 (10 pages)
Modeling of a centrifugal compressor is of great significance to surge characteristics and fluid dynamics in the Altitude Ground Test Facilities (AGTF). Real-time Modular Dynamic System Greitzer (MDSG) modeling for dynamic response and simulation of the compression system is introduced. The centrifugal compressor, pipeline network, and valve are divided into pressure output type and mass flow output type for module modeling, and the two types of components alternate when the system is established. The pressure loss and thermodynamics of the system are considered. An air supply compression system of AGTF is modeled and simulated by the MDSG model. The simulation results of mass flow, pressure, and temperature are compared with the experimental results, and the error is less than 5%, which demonstrates the reliability, practicability, and universality of the MDSG model.
Keywords: Altitude Ground Test Facilities (AGTF); Compression modeling; Dynamic simulation; Real-time modeling; Modular Dynamic System Greitzer (MDSG) modeling
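For context, the classical Greitzer lumped-parameter surge model that the MDSG formulation builds on can be integrated in a few lines; the compressor characteristic, throttle law, and parameter values below are illustrative assumptions, not the paper's modular pressure/mass-flow components.

```python
import numpy as np
from scipy.integrate import solve_ivp

B = 0.8           # Greitzer stability parameter (assumed value)
k_throttle = 0.6  # throttle coefficient (assumed value)

def psi_c(phi):
    # generic cubic compressor characteristic (illustrative coefficients)
    x = phi / 0.25 - 1.0
    return 0.3 + 0.18 * (1.0 + 1.5 * x - 0.5 * x ** 3)

def rhs(t, y):
    phi, psi = y
    dphi = B * (psi_c(phi) - psi)                              # compressor duct momentum
    dpsi = (phi - k_throttle * np.sqrt(max(psi, 0.0))) / B      # plenum mass balance
    return [dphi, dpsi]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.6], max_step=0.05)
print(sol.y[:, -1])  # final (mass-flow, pressure-rise) state
```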
8. Constitutive modeling of compression behavior of TC4 tube based on modified Arrhenius and artificial neural network models (Cited by: 5)
Authors: Zhi-Jun Tao, He Yang, Heng Li, Jun Ma, Peng-Fei Gao. Rare Metals (SCIE, EI, CAS, CSCD), 2016, Issue 2, pp. 162-171 (10 pages)
Warm rotary draw bending provides a feasible method to form the large-diameter thin-walled (LDTW) TC4 bent tubes, which are widely used in the pneumatic system of aircrafts. An accurate prediction of the flow behavior of TC4 tubes considering the coupled effects of temperature, strain rate and strain is critical for understanding the deformation behavior of metals and optimizing the processing parameters in warm rotary draw bending of TC4 tubes. In this study, isothermal compression tests of TC4 tube alloy were performed from 573 to 873 K with an interval of 100 K and strain rates of 0.001, 0.010 and 0.100 s^(-1). The prediction of flow behavior was done using two constitutive models, namely the modified Arrhenius model and the artificial neural network (ANN) model. The predictions of these constitutive models were compared using statistical measures like the correlation coefficient (R), the average absolute relative error (AARE) and its variation with the deformation parameters (temperature, strain rate and strain). Analysis of the statistical measures reveals that the two models show high predicted accuracy in terms of R and AARE. Comparatively speaking, the ANN model presents higher predicted accuracy than the modified Arrhenius model. In addition, the predicted accuracy of the ANN model is highly stable over the whole deformation parameter ranges, whereas the predictability of the modified Arrhenius model fluctuates at different deformation conditions. It presents higher predicted accuracy at temperatures of 573-773 K, strain rates of 0.010-0.100 s^(-1) and strain of 0.04-0.32, but low accuracy at a temperature of 873 K, a strain rate of 0.001 s^(-1) and strain of 0.36-0.48. Thus, the application of the modified Arrhenius model is limited by its relatively low predicted accuracy at some deformation conditions, while the ANN model presents very high predicted accuracy at all deformation conditions and can be used to study the compression behavior of TC4 tube in the temperature range of 573-873 K and the strain rate range of 0.001-0.100 s^(-1). It can provide a guideline for the design of processing parameters in warm rotary draw bending of LDTW TC4 tubes.
Keywords: TC4 tube; Compression behavior; Constitutive model; Modified Arrhenius model; Neural network model
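A minimal sketch of the hyperbolic-sine Arrhenius flow-stress relation (via the Zener-Hollomon parameter) is shown below; the constants are placeholders, whereas the paper's modified model makes them strain-dependent.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_flow_stress(strain_rate, T, A, alpha, n, Q):
    """Hyperbolic-sine Arrhenius model: strain_rate = A*[sinh(alpha*sigma)]^n * exp(-Q/(R*T)),
    solved for sigma via the Zener-Hollomon parameter Z = strain_rate*exp(Q/(R*T)).
    A, alpha, n, Q are illustrative placeholders, not the paper's fitted values."""
    Z = strain_rate * np.exp(Q / (R * T))
    x = (Z / A) ** (1.0 / n)
    return (1.0 / alpha) * np.log(x + np.sqrt(x * x + 1.0))  # arcsinh(x) / alpha

print(arrhenius_flow_stress(strain_rate=0.01, T=673.0,
                            A=1.0e10, alpha=0.01, n=4.0, Q=3.0e5))
```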
9. Failure behavior and strength model of blocky rock mass with and without rockbolts
Authors: Chun Zhu, Xiansen Xing, Manchao He, Zhicheng Tang, Feng Xiong, Zuyang Ye, Chaoshui Xu. International Journal of Mining Science and Technology (SCIE, EI, CAS, CSCD), 2024, Issue 6, pp. 747-762 (16 pages)
To better understand the failure behaviours and strength of bolt-reinforced blocky rocks, extensive large-scale laboratory experiments are carried out on blocky rock-like specimens with and without rockbolt reinforcement. The results show that both shear failure and tensile failure along joint surfaces are observed, but shear failure is the main controlling factor for the peak strength of the rock mass with and without rockbolts. In bolt-reinforced specimens, the rockbolts neck and undergo shear deformation simultaneously. As the joint dip angle increases, joint shear failure becomes more dominant. The number of rockbolts has a significant impact on the peak strain and uniaxial compressive strength (UCS), but little influence on the deformation modulus of the rock mass. Using the Winkler beam model to represent the rockbolt behaviour, an analytical model for the prediction of the strength of bolt-reinforced blocky rocks is proposed. Good agreement between the UCS values predicted by the proposed model and those obtained from experiments suggests an encouraging performance of the proposed model. In addition, the performance of the proposed model is further assessed using published results in the literature, indicating that the proposed model can be used effectively in the prediction of UCS of bolt-reinforced blocky rocks.
Keywords: Blocky rock mass; Rockbolt ground support; Uniaxial compression test; Failure mechanism; Uniaxial compressive strength model
10. An Efficient Approach to Escalate the Speed of Training Convolution Neural Networks
Authors: P. Pabitha, Anusha Jayasimhan. China Communications (SCIE, CSCD), 2024, Issue 2, pp. 258-269 (12 pages)
Deep neural networks excel at image identification and computer vision applications such as visual product search, facial recognition, medical image analysis, object detection, semantic segmentation, instance segmentation, and many others. In image and video recognition applications, convolutional neural networks (CNNs) are widely employed. These networks provide better performance but at a higher cost of computation. With the advent of big data, the growing scale of datasets has made processing and model training a time-consuming operation, resulting in longer training times. Moreover, these large-scale datasets contain redundant data points that have minimal impact on the final outcome of the model. To address these issues, an accelerated CNN system is proposed for speeding up training by eliminating the non-critical data points during training, along with a model compression method. Furthermore, the identification of the critical input data is performed by aggregating the data points at two levels of granularity, which are used for evaluating the impact on the model output. Extensive experiments are conducted using the proposed method on the CIFAR-10 dataset with ResNet models, giving a 40% reduction in the number of FLOPs with an accuracy degradation of just 0.11%.
Keywords: CNN; deep learning; image classification; model compression
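As a generic illustration of dropping low-impact samples (not the paper's two-granularity aggregation scheme), the sketch below keeps only the highest-loss samples in a batch before a training update; the keep fraction and toy model are assumptions.

```python
import torch

def select_critical_indices(model, inputs, targets, keep_fraction=0.6):
    """Keep only the samples with the largest per-sample loss, on the premise
    that low-loss (already well-fit) points contribute little to the update."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        losses = torch.nn.functional.cross_entropy(logits, targets, reduction="none")
    k = max(1, int(keep_fraction * len(losses)))
    return losses.topk(k).indices  # indices of the most informative samples

# Toy usage with a linear classifier on random data
model = torch.nn.Linear(32, 10)
x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))
idx = select_critical_indices(model, x, y, keep_fraction=0.5)
print(idx.shape)  # torch.Size([64])
```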
11. Optimized Binary Neural Networks for Road Anomaly Detection: A TinyML Approach on Edge Devices
Authors: Amna Khatoon, Weixing Wang, Asad Ullah, Limin Li, Mengfei Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 527-546 (20 pages)
Integrating Tiny Machine Learning (TinyML) with edge computing in remotely sensed images enhances the capabilities of road anomaly detection on a broader level. Constrained devices efficiently implement a Binary Neural Network (BNN) for road feature extraction, utilizing quantization and compression through a pruning strategy. The modifications resulted in a 28-fold decrease in memory usage and a 25% enhancement in inference speed, while only experiencing a 2.5% decrease in accuracy. The approach showcases its superiority over conventional detection algorithms in different road image scenarios. Although constrained by computing resources and training datasets, our results indicate opportunities for future research, demonstrating that quantization and focused optimization can significantly improve machine learning models' accuracy and operational efficiency. Deploying our optimized BNN model on the low-power ARM Cortex-M0 shows practical feasibility and substantial benefits: advanced machine learning in edge computing. The analysis delves into the educational significance of TinyML and its essential function in analyzing road networks using remote sensing, suggesting ways to improve smart city frameworks in road network assessment, traffic management, and autonomous vehicle navigation systems by emphasizing the importance of new technologies for maintaining and safeguarding road networks.
Keywords: Edge computing; remote sensing; TinyML; optimization; BNNs; road anomaly detection; quantization; model compression
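For reference, the textbook BNN weight binarizer with a straight-through estimator is sketched below; the paper's network architecture, pruning schedule, and Cortex-M0 deployment are not reproduced.

```python
import torch

class BinarizeWeight(torch.autograd.Function):
    """Sign binarization with a straight-through estimator: the forward pass uses
    sign(w), the backward pass passes gradients through where |w| <= 1."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1.0).to(grad_out.dtype)

w = torch.randn(4, 4, requires_grad=True)
loss = BinarizeWeight.apply(w).sum()
loss.backward()
print(w.grad)
```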
12. De-biased knowledge distillation framework based on knowledge infusion and label de-biasing techniques
Authors: Yan Li, Tai-Kang Tian, Meng-Yu Zhuang, Yu-Ting Sun. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2024, Issue 3, pp. 57-68 (12 pages)
Knowledge distillation, as a pivotal technique in the field of model compression, has been widely applied across various domains. However, the problem of student model performance being limited due to inherent biases in the teacher model during the distillation process still persists. To address the inherent biases in knowledge distillation, we propose a de-biased knowledge distillation framework tailored for binary classification tasks. For the pre-trained teacher model, biases in the soft labels are mitigated through knowledge infusion and label de-biasing techniques. Based on this, a de-biased distillation loss is introduced, allowing the de-biased labels to replace the soft labels as the fitting target for the student model. This approach enables the student model to learn from the corrected model information, achieving high-performance deployment on lightweight student models. Experiments conducted on multiple real-world datasets demonstrate that deep learning models compressed under the de-biased knowledge distillation framework significantly outperform traditional response-based and feature-based knowledge distillation models across various evaluation metrics, highlighting the effectiveness and superiority of the de-biased knowledge distillation framework in model compression.
Keywords: De-biasing; Deep learning; Knowledge distillation; Model compression
13. A Novel Quantization and Model Compression Approach for Hardware Accelerators in Edge Computing
Authors: Fangzhou He, Ke Ding, Dingjiang Yan, Jie Li, Jiajun Wang, Mingzhe Chen. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 3021-3045 (25 pages)
The massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization is proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach on the basis of PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution loss regularizer, wherein the regularizer minimizes quantization errors and training disturbances. Additionally, two-stage model compression is developed to effectively reduce memory requirements and alleviate bandwidth usage in communications of networked heterogeneous learning systems. The product look-up table (P-LUT) inference scheme is leveraged to replace bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2×-10× reduction in both weight size and computation cost in comparison to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations, with performance results showing that P-LUT reduces memory footprint by 1.45×, and achieves more than 3× power efficiency and 2× resource efficiency compared to the conventional bit-shifting scheme.
Keywords: Edge computing; model compression; hardware accelerator; power-of-two quantization
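A generic power-of-two weight quantizer is sketched below to make the PoT idea concrete; the paper's IOS-PoT adds an integer-only scale, a distribution-loss regularizer, and P-LUT inference, none of which are shown here, and the bit width is an assumption.

```python
import torch

def power_of_two_quantize(w, bits=4):
    """Quantize each weight to sign(w) * 2^k, with the integer exponent k clipped
    to the 2^bits levels just below the tensor's maximum magnitude."""
    sign = torch.sign(w)
    mag = w.abs().clamp(min=1e-12)
    exp_max = torch.floor(torch.log2(mag.max())).item()
    exp = torch.round(torch.log2(mag)).clamp(exp_max - (2 ** bits - 1), exp_max)
    return sign * torch.pow(2.0, exp)

w = torch.randn(6)
print(w)
print(power_of_two_quantize(w))
```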
14. Low rank optimization for efficient deep learning: making a balance between compact architecture and fast training
Authors: Ou Xinwei, Chen Zhangxin, Zhu Ce, Liu Yipeng. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, Issue 3, pp. 509-531, F0002 (24 pages)
Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to use on resource-constrained devices, and it is not environmentally friendly with much power cost. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement with a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training for fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of integrable techniques discussed, such as sparse pruning, quantization, and entropy coding, we can ensemble them in an integration framework with lower computational complexity and storage. In addition to a summary of recent technical advances, we have two findings for motivating future works. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms other conventional sparse measures such as the ℓ1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks. For accelerating the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training.
Keywords: model compression; subspace training; effective rank; low rank tensor optimization; efficient deep learning
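Two quantities highlighted by the survey can be illustrated in a few lines: the effective rank (exponential of the Shannon entropy of the normalized singular values) and a truncated-SVD factorization of a dense weight matrix. This is an illustrative sketch, not the survey's full framework.

```python
import numpy as np

def effective_rank(W):
    """Effective rank: exp of the Shannon entropy of the normalized singular values."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

def low_rank_factorize(W, rank):
    """Truncated-SVD factorization W ~ A @ B for compressing a dense layer:
    an (m x n) weight becomes (m x r) and (r x n), saving parameters when r < m*n/(m+n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank, :]
    return A, B

W = np.random.randn(256, 512)
print(effective_rank(W))
A, B = low_rank_factorize(W, rank=32)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative approximation error
```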
15. DPAL-BERT: A Faster and Lighter Question Answering Model
Authors: Lirong Yin, Lei Wang, Zhuohang Cai, Siyu Lu, Ruiyang Wang, Ahmed AlSanad, Salman A. AlQahtani, Xiaobing Chen, Zhengtong Yin, Xiaolu Li, Wenfeng Zheng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 771-786 (16 pages)
Recent advancements in natural language processing have given rise to numerous pre-trained language models in question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to increased training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes a novel distillation model for PAL-BERT (DPAL-BERT); specifically, it employs knowledge distillation, using the PAL-BERT model as the teacher model to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. This research enhances the dataset through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results showed that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared to PAL-BERT, they significantly reduce inference time to just 0.25% of the original. This demonstrates the effectiveness of the proposed approach in balancing model performance and efficiency.
Keywords: DPAL-BERT; question answering systems; knowledge distillation; model compression; BERT; Bi-directional long short-term memory (BiLSTM); knowledge information transfer; PAL-BERT; training efficiency; natural language processing
16. Effect of warm acupuncture on nitric oxide synthase and calcitonin gene-related peptide in a rat model of lumbar nerve root compression (Cited by: 5)
Authors: Yaochi Wu, Yiqun Mi, Peng Zhang, Junfeng Zhang, Wei Chen. Neural Regeneration Research (SCIE, CAS, CSCD), 2009, Issue 6, pp. 449-454 (6 pages)
BACKGROUND: Varying degrees of inflammatory responses occur during lumbar nerve root compression. Studies have shown that nitric oxide synthase (NOS) and calcitonin gene-related peptide (CGRP) are involved in secondary disc inflammation. OBJECTIVE: To observe the effects of warm acupuncture on the ultrastructure and on inflammatory mediators, including NOS and CGRP contents, in a rat model of lumbar nerve root compression. DESIGN, TIME AND SETTING: A randomized, controlled study, with molecular biological analysis, was performed at the Experimental Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University, between September 2006 and April 2007. MATERIALS: Acupuncture needles and refined moxa grains were purchased from Shanghai Taicheng Technology Development Co., Ltd., China; Mobic tablets were purchased from Shanghai Boehringer Ingelheim Pharmaceuticals Co., Ltd., China; enzyme-linked immunosorbent assay (ELISA) kits for NOS and CGRP were purchased from ADL Biotechnology, Inc., USA. METHODS: A total of 50 healthy, adult Sprague-Dawley rats were randomly divided into five groups: normal, model, warm acupuncture, acupuncture, and drug, with 10 rats in each group. Rats in the four groups, excluding the normal group, were used to establish models of lumbar nerve root compression. After 3 days, Jiaji points were needled using reinforcing-reducing manipulation in the warm acupuncture group. Moxa grains were burned on each needle, with 2 grains each daily. The acupuncture group was treated the same as the warm acupuncture group, except that no moxibustion was applied. Mobic suspension (3.75 mg/kg) was used in the oral drug group, once a day. Treatment of each group lasted for 14 consecutive days. Modeling and medication were not performed in the normal group. MAIN OUTCOME MEASURES: The ultrastructure of damaged nerve roots was observed with transmission electron microscopy; NOS and CGRP contents were measured using ELISA. RESULTS: The changes in the radicular ultramicrostructure were characterized by Wallerian degeneration; nerve fibers were clearly demyelinated; axons collapsed or degenerated; the outer Schwann cell cytoplasm was swollen and its nucleus was compacted. Compared with the normal group, NOS and CGRP contents in the nerve root compression zone in the model group were significantly increased (P < 0.01). Nerve root edema was improved in the drug, acupuncture and warm acupuncture groups over the model group. NOS and CGRP expressions were also decreased, with the warm acupuncture group having the lowest concentration (P < 0.01). CONCLUSION: In comparison to the known effects of the Mobic drug and acupuncture treatments, warm acupuncture significantly decreased NOS and CGRP expression, which helped improve the ultrastructure of the compressed nerve root.
Keywords: warm acupuncture; nerve root compression model; ultrastructure; nitric oxide synthase; calcitonin gene-related peptide
17. A Novel Deep Neural Network Compression Model for Airport Object Detection (Cited by: 4)
Authors: Lyu Zonglei, Pan Fuxi, Xu Xianhong. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2020, Issue 4, pp. 562-573 (12 pages)
A novel deep neural network compression model for airport object detection is presented. This model addresses the disadvantages of deep neural networks, i.e., model complexity and the great cost of calculation. According to the requirements of airport object detection, the model obtains temporal and spatial semantic rules from the uncompressed model. These spatial semantic rules are added to the model after parameter compression to assist detection. The rules can improve the accuracy of the detection model in order to make up for the loss caused by parameter compression. The experiments show that the effect of the novel compression detection model is no worse than that of the uncompressed original model. Some of the original model's false detections can even be eliminated through the prior knowledge.
Keywords: compression model; semantic rules; pruning; prior probability; lightweight detection
18. Blowup Criterion for the Compressible Fluid-Particle Interaction Model in 3D with Vacuum (Cited by: 3)
Authors: 丁时进, 黄炳远, 卢友波. Acta Mathematica Scientia (SCIE, CSCD), 2016, Issue 4, pp. 1030-1048 (19 pages)
In this article, we consider the blowup criterion for the local strong solution to the compressible fluid-particle interaction model in dimension three with vacuum. We establish a BKM-type criterion for possible breakdown of such solutions at the critical time in terms of both the L^∞(0, T; L^6)-norm of the density of particles and the L^1(0, T; L^∞)-norm of the deformation tensor of the velocity gradient.
Keywords: Blowup criterion; compressible fluid-particle interaction model; vacuum
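A schematic LaTeX rendering of a BKM-type statement consistent with the abstract is given below; the exact statement and notation belong to the paper, so the symbols (η for the particle density, D(u) for the deformation tensor) and the summed form are assumptions.

```latex
% Schematic BKM-type criterion (illustrative form only): if the strong solution
% breaks down at a finite maximal time T*, then the two norms named in the
% abstract cannot both stay bounded up to T*.
\[
  T^{*} < \infty
  \;\Longrightarrow\;
  \lim_{T \to T^{*}}
  \Bigl( \|\eta\|_{L^{\infty}(0,T;L^{6})}
       + \|D(u)\|_{L^{1}(0,T;L^{\infty})} \Bigr) = \infty .
\]
```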
19. Review on Mathematical Analysis of Some Two-Phase Flow Models (Cited by: 3)
Authors: Huanyao Wen, Lei Yao, Changjiang Zhu. Acta Mathematica Scientia (SCIE, CSCD), 2018, Issue 5, pp. 1617-1636 (20 pages)
The two-phase flow models are commonly used in industrial applications, such as nuclear, power, chemical-process, oil-and-gas, cryogenics, bio-medical, micro-technology and so on. This is a survey paper on the study of the compressible nonconservative two-fluid model, the drift-flux model and the viscous liquid-gas two-phase flow model. We give the research developments of these three two-phase flow models, respectively. In the last part, we give some open problems about the above models.
Keywords: compressible nonconservative two-fluid model; drift-flux model; viscous liquid-gas two-phase flow model; well-posedness
20. Numerical study of compression corner flowfield using Gao-Yong turbulence model (Cited by: 2)
Authors: Gao Ge, Zhang Chang-xian, Yan Wen-hui, Wang Yong. 航空动力学报 (EI, CAS, CSCD, PKU Core), 2012, Issue 1, pp. 124-128 (5 pages)
A numerical simulation of the shock wave/turbulent boundary layer interaction induced by a 24° compression corner, based on the Gao-Yong compressible turbulence model, is presented. The convection terms and the diffusion terms were calculated using the second-order AUSM (advection upstream splitting method) scheme and the second-order central difference scheme, respectively. The Runge-Kutta time marching method was employed to solve the governing equations for steady-state solutions. A significant flow separation region, which indicates a highly non-isotropic turbulence structure, has been found in the present work due to the intense interaction at the 24° compression corner. Comparisons between the calculated results and experimental data have been carried out, including surface pressure distribution, boundary-layer static pressure profiles and mean velocity profiles. The numerical results agree well with the experimental values, which indicates that the Gao-Yong compressible turbulence model is suitable for the prediction of shock wave/turbulent boundary layer interaction in two-dimensional compression corner flows.
Keywords: shock wave turbulent boundary layer interaction; Gao-Yong compressible turbulence model; compression corner flow