Funding: Supported by the National Natural Science Foundation of China (Nos. 31760290 and 82160688) and the Key Development Areas Project of Ganzhou Science and Technology (No. 2022B-SF9554), all to XL.
Abstract: Synaptic pruning is a crucial process in synaptic refinement, eliminating unstable synaptic connections in neural circuits. This process is triggered and regulated primarily by spontaneous neural activity and experience-dependent mechanisms. The pruning process involves multiple molecular signals and a series of regulatory activities governing the "eat me" and "don't eat me" states. Under physiological conditions, the interaction between glial cells and neurons results in the clearance of unnecessary synapses, maintaining normal neural circuit functionality via synaptic pruning. Alterations in genetic and environmental factors can lead to imbalanced synaptic pruning, thus promoting the occurrence and development of autism spectrum disorder, schizophrenia, Alzheimer's disease, and other neurological disorders. In this review, we examine the molecular mechanisms responsible for synaptic pruning during neural development, focusing on how synaptic pruning regulates neural circuits and on its association with neurological disorders. Furthermore, we discuss the application of emerging optical and imaging technologies to observe synaptic structure and function, as well as their potential for clinical translation. Our aim is to enhance understanding of synaptic pruning during neural development, including the molecular basis underlying the regulation of synaptic function and the dynamic changes in synaptic density, and to investigate the potential role of these mechanisms in the pathophysiology of neurological diseases, thus providing a theoretical foundation for the treatment of neurological disorders.
Funding: Financially supported by the Guangdong Province Basic and Applied Basic Research Fund Project (Grant No. 2022B1515250009), the Liaoning Provincial Natural Science Foundation Doctoral Research Start-up Fund Project (Grant No. 2024-BSBA-05), the Major Science and Technology Innovation Project in Shandong Province (Grant No. 2024CXGC010803), and the National Natural Science Foundation of China (Grant Nos. 52271269 and 12302147).
Abstract: The umbilical, a key component in offshore energy extraction, plays a vital role in ensuring the stable operation of the entire production system. The extensive variety of cross-sectional components creates highly complex layout combinations. Furthermore, due to constraints in component quantity and geometry within the cross-sectional layout, filler bodies must be incorporated to maintain cross-section performance. Conventional design approaches based on manual experience suffer from inefficiency, high variability, and difficulties in quantification. This paper presents a multi-level automatic filling optimization design method for umbilical cross-sectional layouts to address these limitations. Initially, the research establishes a multi-objective optimization model that considers compactness, balance, and wear resistance of the cross-section, employing an enhanced genetic algorithm to achieve a near-optimal layout. Subsequently, the study implements an image processing-based vacancy detection technique to accurately identify cross-sectional gaps. To manage the variability and diversity of these vacant regions, the research introduces a multi-level filling method that strategically selects and places filler bodies of varying dimensions, overcoming the constraints of uniform-size fillers. Additionally, the method incorporates a hierarchical strategy that subdivides the complex cross-section into multiple layers, enabling layer-by-layer optimization and filling. This approach reduces manufacturing equipment requirements while ensuring practical production process feasibility. The methodology is validated through a specific umbilical case study. The results demonstrate improvements in compactness, balance, and wear resistance compared with the initial cross-section, offering novel insights and valuable references for filler design in umbilical cross-sections.
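The multi-level filling step described above lends itself to a simple greedy selection rule: fill each detected vacancy with the largest filler level that fits, falling back to smaller levels. The Python sketch below illustrates this idea only; the filler diameters, clearance, and vacancy radii are assumed example values, not parameters from the paper.

```python
# Illustrative multi-level filler selection: each detected vacancy is filled with
# the largest filler diameter that fits, falling back to smaller levels.
# Filler levels, clearance, and vacancy radii are assumed example values.

FILLER_LEVELS_MM = [12.0, 8.0, 5.0, 3.0]   # available filler diameters, largest first
CLEARANCE_MM = 0.2                          # assumed manufacturing clearance

def select_fillers(vacancy_radii_mm):
    """Map each detected vacancy (inscribed-circle radius) to a filler level."""
    plan = []
    for radius in sorted(vacancy_radii_mm, reverse=True):
        usable_diameter = 2.0 * radius - CLEARANCE_MM
        # Largest level that still fits; None means the gap is too small to fill
        level = next((d for d in FILLER_LEVELS_MM if d <= usable_diameter), None)
        plan.append({"vacancy_radius": radius, "filler_diameter": level})
    return plan

# Vacancy radii as they might come out of image-based gap detection
for item in select_fillers([6.3, 4.1, 2.6, 1.2]):
    print(item)
```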
Funding: National Natural Science Foundation of China (Nos. 42301473, 42271424, and 42171397); Chinese Postdoctoral Innovation Talents Support Program (No. BX20230299); China Postdoctoral Science Foundation (No. 2023M742884); Natural Science Foundation of Sichuan Province (Nos. 24NSFSC2264 and 2025ZNSFSC0322); Key Research and Development Project of Sichuan Province (No. 24ZDYF0633).
Abstract: As a key node of the modern transportation network, the information-based management of road tunnels is crucial to ensuring operation safety and traffic efficiency. However, existing tunnel vehicle modeling methods generally suffer from insufficient 3D scene description capability and low dynamic update efficiency, making it difficult to meet the demand for real-time, accurate management. For this reason, this paper proposes a vehicle twin modeling method for road tunnels. Starting from actual management needs, the approach supports multi-level dynamic modeling from vehicle type and size to color by constructing a vehicle model library that can be flexibly invoked; at the same time, semantic constraint rules covering geometric layout, behavioral attributes, and spatial relationships are designed to ensure that the virtual model matches the real vehicle with a high degree of similarity. Finally, a prototype system is constructed and case experiments are conducted in selected areas, where real-time monitoring data are combined with the semantic constraints to realize dynamic updating and three-dimensional visualization of vehicle states in tunnels with precise virtual-real mapping. The experiments show that the proposed method runs smoothly with an average rendering time of 17.70 ms while guaranteeing modeling accuracy (composite similarity of 0.867), significantly improving the real-time performance and intuitiveness of tunnel management. The research results provide reliable technical support for intelligent operation and emergency response of road tunnels, and offer new ideas for digital twin modeling of complex scenes.
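The composite similarity reported above (0.867) suggests a weighted combination of per-attribute matching scores between a virtual vehicle and its observed counterpart. The sketch below shows one plausible way to compute such a score; the attribute set, weights, and scoring rules are illustrative assumptions rather than the paper's actual formulation.

```python
# Hypothetical composite-similarity check for virtual-real vehicle matching.
# Attribute names, weights, and scoring rules are illustrative assumptions.

def size_similarity(virtual, real):
    """Ratio-based similarity of length/width/height (1.0 = identical)."""
    scores = [min(v, r) / max(v, r) for v, r in zip(virtual, real)]
    return sum(scores) / len(scores)

def composite_similarity(virtual, real, weights=(0.4, 0.4, 0.2)):
    """Weighted blend of type, size, and color agreement."""
    w_type, w_size, w_color = weights
    s_type = 1.0 if virtual["type"] == real["type"] else 0.0
    s_size = size_similarity(virtual["size"], real["size"])
    s_color = 1.0 if virtual["color"] == real["color"] else 0.0
    return w_type * s_type + w_size * s_size + w_color * s_color

if __name__ == "__main__":
    virtual = {"type": "truck", "size": (8.2, 2.5, 3.1), "color": "white"}
    real = {"type": "truck", "size": (8.0, 2.5, 3.0), "color": "white"}
    print(f"composite similarity: {composite_similarity(virtual, real):.3f}")
```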
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62172132.
Abstract: The surge of large-scale models in recent years has led to breakthroughs in numerous fields, but it has also introduced higher computational costs and more complex network architectures. These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization. To address this issue, various network compression techniques have been developed, such as network pruning. A typical pruning algorithm follows a three-step pipeline involving training, pruning, and retraining. Existing methods often directly set the pruned filters to zero during retraining, significantly reducing the parameter space. However, this direct pruning strategy frequently results in irreversible information loss. In the early stages of training, a network still contains much uncertainty, and evaluating filter importance may not be sufficiently rigorous. To manage the pruning process effectively, this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation, considering the characteristics of network training. Unlike other pruning algorithms that directly reduce filter weights, this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation. It employs a gentle decay rate in the initial training stage, a rapid decay rate during the intermediate stage, and a slower decay rate in the network convergence stage. Additionally, the decay rate is adjusted adaptively based on the filter weights at each stage. By controlling the adaptive decay rate at each stage, the pruning of neural network filters can be managed effectively. In experiments on the CIFAR-10 and ILSVRC-2012 datasets, the proposed method significantly reduces floating-point operations at the same pruning rate. Specifically, when implementing a 30% pruning rate on the ResNet-110 network, the pruned network not only decreases floating-point operations by 40.8% but also enhances classification accuracy by 0.49% compared with the original network.
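The three-stage decay strategy follows the shape of a logistic growth curve: gentle at the start, fastest around the middle of training, and slow again near convergence. The minimal sketch below derives a per-epoch weight multiplier from a logistic function and applies it to filters selected for pruning; the steepness and midpoint parameters, and the per-epoch application, are assumptions of this illustration, not the authors' exact update rule.

```python
import math

def logistic_decay_factor(epoch, total_epochs, k=10.0, midpoint=0.5):
    """Per-epoch weight multiplier shaped by a logistic growth curve.

    Close to 1 early in training (gentle decay) and near convergence (slow decay),
    smallest around the midpoint (rapid decay). k and midpoint are assumed values.
    """
    t = epoch / total_epochs                                   # normalized progress
    growth = 1.0 / (1.0 + math.exp(-k * (t - midpoint)))       # logistic S-curve
    return 1.0 - growth                                        # fraction retained this epoch

def decay_pruned_filters(filters, pruned_ids, epoch, total_epochs):
    """Scale pruned filters toward zero instead of zeroing them outright."""
    factor = logistic_decay_factor(epoch, total_epochs)
    for i in pruned_ids:
        filters[i] = [w * factor for w in filters[i]]
    return filters

if __name__ == "__main__":
    for e in (0, 30, 50, 70, 100):
        print(e, round(logistic_decay_factor(e, 100), 3))
```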
Abstract: 3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face under-explored limitations: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous driving benchmarks. On KITTI's 3D object detection task, our method reduces redundant kernel computations by 93.47% while maintaining comparable accuracy (1.56% mAP drop). Remarkably, on the more complex NuScenes benchmark, HSP-S achieves simultaneous computation reduction (21.94% sparsity) and accuracy gains (1.02% mAP (mean Average Precision) and 0.47% NDS (nuScenes detection score) improvement), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
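One way to picture layer-adaptive shape pruning is as a binary mask over kernel positions whose aggregated weight magnitude falls below a threshold that tightens during training and is taken from each layer's own statistics. The PyTorch sketch below is a simplified illustration of that idea; the aggregation rule and threshold schedule are assumptions, not the HSP-S implementation.

```python
import torch

def shape_prune_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask over kernel positions ("stripes") of a 3D conv weight.

    weight: (out_ch, in_ch, k, k, k). Scores aggregate magnitude over channels,
    so a zeroed position is removed from every filter's kernel shape. The
    threshold comes from this layer's own score distribution (layer-adaptive).
    """
    position_score = weight.abs().mean(dim=(0, 1))                   # (k, k, k)
    threshold = torch.quantile(position_score.flatten(), sparsity)   # layer-adaptive cutoff
    return (position_score > threshold).float()

def progressive_shape_pruning(weight, final_sparsity, epoch, total_epochs):
    """Tighten the pruned fraction of kernel positions as training progresses."""
    current_sparsity = final_sparsity * epoch / total_epochs
    mask = shape_prune_mask(weight, current_sparsity)
    return weight * mask          # broadcasts over (out_ch, in_ch, k, k, k)

w = torch.randn(16, 8, 3, 3, 3)                   # toy 3D conv weight
pruned = progressive_shape_pruning(w, final_sparsity=0.9, epoch=60, total_epochs=80)
print((pruned == 0).float().mean().item())        # fraction of zeroed weights
```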
Funding: Supported by the National Key Research and Development Program of China (No. 2022ZD0119003) and the National Natural Science Foundation of China (No. 61834005).
Abstract: The dynamic routing mechanism in evolvable networks enables adaptive reconfiguration of topological structures and transmission pathways based on real-time task requirements and data characteristics. However, the heightened architectural complexity and expanded parameter dimensionality of evolvable networks present significant implementation challenges when deployed in resource-constrained environments. Because traditional pruning strategies ignore critical paths, they cannot achieve a desirable trade-off between accuracy and efficiency. For this reason, a critical path retention pruning (CPRP) method is proposed. By deeply traversing the computational graph, the dependency relationships among nodes are derived. The nodes are then grouped and sorted according to their contribution values, and redundant operations are removed as far as possible while ensuring that the critical path is not affected. As a result, computational efficiency is improved while higher accuracy is maintained. On the CIFAR benchmark, the experimental results demonstrate that CPRP-induced pruning incurs accuracy degradation below 4.00% while outperforming traditional feature-agnostic grouping methods by an average 8.98% accuracy improvement. Simultaneously, the pruned model attains a 2.41× inference acceleration while achieving 48.92% parameter compression and a 53.40% reduction in floating-point operations (FLOPs).
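The core of critical-path retention can be sketched on a toy computational graph: derive node dependencies, find the highest-contribution path, and prune low-contribution nodes only if they are off that path. The example below, with an assumed graph and contribution scores, is a minimal illustration of this idea rather than the CPRP algorithm itself.

```python
# Toy sketch of critical-path-retention pruning on a computational DAG.
# The graph, contribution scores, and keep ratio are illustrative assumptions.
from collections import deque

def topological_order(graph):
    nodes = set(graph) | {m for succs in graph.values() for m in succs}
    indeg = {n: 0 for n in nodes}
    for succs in graph.values():
        for m in succs:
            indeg[m] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in graph.get(n, []):
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

def critical_path(graph, contribution):
    """Nodes on the maximum-contribution path through the DAG."""
    order = topological_order(graph)
    score = {n: contribution[n] for n in order}
    parent = {n: None for n in order}
    for n in order:
        for m in graph.get(n, []):
            if score[n] + contribution[m] > score[m]:
                score[m] = score[n] + contribution[m]
                parent[m] = n
    end = max(order, key=lambda n: score[n])
    path = set()
    while end is not None:
        path.add(end)
        end = parent[end]
    return path

def prune(graph, contribution, keep_ratio=0.6):
    """Keep the top-contribution nodes plus every node on the critical path."""
    protected = critical_path(graph, contribution)
    ranked = sorted(contribution, key=contribution.get, reverse=True)
    budget = max(len(protected), int(keep_ratio * len(ranked)))
    return set(ranked[:budget]) | protected

graph = {"conv1": ["conv2", "skip"], "conv2": ["conv3"], "skip": ["conv3"], "conv3": []}
contribution = {"conv1": 0.9, "conv2": 0.7, "skip": 0.1, "conv3": 0.8}
print(prune(graph, contribution, keep_ratio=0.5))   # "skip" is pruned
```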
Abstract: The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced parameters from 178 M to 46 M, while the memory size decreased from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
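Knowledge distillation of the kind used here typically trains the compact student against a blend of hard labels and the teacher's softened output distribution. The PyTorch sketch below shows a standard distillation loss of that form; the temperature and mixing weight are assumed hyperparameters, not values reported in this work.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL divergence."""
    # Soft targets: teacher distribution softened by the temperature
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example with random logits for a 6-class emotion task (batch of 4)
student = torch.randn(4, 6)
teacher = torch.randn(4, 6)
labels = torch.randint(0, 6, (4,))
print(distillation_loss(student, teacher, labels))
```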
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62073085, 61973330, and 62350055; in part by the Shenzhen Science and Technology Program, China, under Grant JCYJ20230807093513027; and in part by the Fundamental Research Funds for the Central Universities, China, under Grant 1243300008.
Abstract: Filter pruning effectively compresses a neural network by reducing both its parameters and its computational cost. Existing pruning methods typically rely on pre-designed pruning criteria to measure filter importance and remove those deemed unimportant. However, different layers of a neural network exhibit varying filter distributions, making it inappropriate to apply the same pruning criterion to all layers. Some approaches apply different criteria from a set of pre-defined pruning rules to different layers, but the limited rule space makes it difficult to cover all layers, while manually designing criteria for every layer is costly and generalizes poorly to other networks. To solve this problem, we present a novel neural network pruning method based on a Criterion Learner and Attention Distillation (CLAD). Specifically, CLAD develops a differentiable criterion learner that is integrated into each layer of the network. The learner automatically learns an appropriate pruning criterion from the filter parameters of each layer, eliminating the requirement for manual design. Furthermore, the criterion learner is trained end-to-end by gradient optimization to achieve efficient pruning. In addition, attention distillation, which fully utilizes the knowledge of the unpruned network to guide the optimization of the learner and improve the pruned network's performance, is introduced into the learner optimization process. Experiments conducted on various datasets and networks demonstrate the effectiveness of the proposed method. Notably, CLAD reduces the FLOPs of ResNet-110 by about 53% on the CIFAR-10 dataset while simultaneously improving the network's accuracy by 0.05%. Moreover, it reduces the FLOPs of ResNet-50 by about 46% on the ImageNet-1K dataset while maintaining a top-1 accuracy of 75.45%.
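A differentiable criterion learner can be pictured as a tiny per-layer network that maps simple filter statistics to a soft keep probability, trainable by gradient descent along with the rest of the model. The sketch below illustrates that structure under assumed inputs (mean absolute weight, L2 norm, and standard deviation) and an assumed module size; it is not the CLAD architecture itself.

```python
import torch
import torch.nn as nn

class CriterionLearner(nn.Module):
    """Tiny per-layer module that learns a pruning criterion from filter statistics.

    Input: conv weight of shape (out_channels, in_channels, k, k).
    Output: a soft keep probability per filter, trainable end-to-end.
    """
    def __init__(self, hidden=16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        flat = weight.flatten(start_dim=1)                  # (out_ch, in_ch*k*k)
        stats = torch.stack(
            [flat.abs().mean(dim=1),                        # mean |w| (L1-style)
             flat.norm(dim=1),                              # L2 norm
             flat.std(dim=1)],                              # spread of the weights
            dim=1,
        )                                                   # (out_ch, 3)
        return torch.sigmoid(self.scorer(stats)).squeeze(1)

# Example: score the 64 filters of one conv layer
learner = CriterionLearner()
weight = torch.randn(64, 3, 3, 3)
keep_prob = learner(weight)
print(keep_prob.shape, keep_prob.min().item(), keep_prob.max().item())
```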
Funding: Shanghai Municipal Commission of Economy and Information Technology, China (No. 202301054).
Abstract: The end-to-end object detection Transformer (DETR) successfully established the paradigm of the Transformer architecture in the field of object detection. Its end-to-end detection process and set-prediction formulation have made it one of the most popular network architectures in recent years, and there has been an abundance of work improving upon DETR. However, DETR and its variants require substantial memory resources and computational costs, and the vast number of parameters in these networks is unfavorable for model deployment. To address this issue, a greedy pruning (GP) algorithm is proposed and applied to the denoising DETR (DN-DETR) variant, eliminating redundant parameters in the Transformer architecture of DN-DETR. Considering the different roles of the multi-head attention (MHA) module and the feed-forward network (FFN) module in the Transformer architecture, a modular greedy pruning (MGP) algorithm is further proposed, which separates the two modules and applies their respective optimal strategies and parameters. The effectiveness of the proposed algorithm is validated on the COCO 2017 dataset. The model obtained through the MGP algorithm reduces the parameters by 49% and the number of floating-point operations (FLOPs) by 44% compared with the Transformer architecture of DN-DETR, while the mean average precision (mAP) of the model increases from 44.1% to 45.3%.
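Modular greedy pruning can be summarized as two independent greedy loops, one over attention heads and one over FFN units, each repeatedly removing the candidate whose removal hurts a validation metric the least under its own budget. The sketch below shows such a loop with a placeholder evaluation function; the component naming, budgets, and scoring are assumptions for illustration only.

```python
# Generic greedy-pruning loop in the spirit of modular greedy pruning:
# attention heads and FFN units form separate candidate pools, each with its
# own removal budget. evaluate() is a stand-in for a validation-mAP measurement
# and is an assumption of this sketch.
import random

def evaluate(removed):
    """Placeholder for validation accuracy after removing the given components."""
    return 1.0 - 0.001 * len(removed) - random.random() * 0.01

def greedy_prune(candidates, budget, removed=None):
    """Repeatedly remove the single candidate whose removal hurts the least."""
    removed = set(removed or ())
    for _ in range(budget):
        best, best_score = None, float("-inf")
        for c in candidates - removed:
            score = evaluate(removed | {c})
            if score > best_score:
                best, best_score = c, score
        removed.add(best)
    return removed

heads = {f"layer{l}.head{h}" for l in range(6) for h in range(8)}
ffn_units = {f"layer{l}.ffn{u}" for l in range(6) for u in range(4)}

# Modular: separate budgets/strategies for the two module types
pruned = greedy_prune(heads, budget=12)
pruned |= greedy_prune(ffn_units, budget=6, removed=pruned)
print(len(pruned), "components pruned")
```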
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52308340), the Chongqing Talent Innovation and Entrepreneurship Demonstration Team Project (Grant No. cstc2024ycjh-bgzxm0012), and the Science and Technology Projects of China Coal Technology and Engineering Chongqing Design and Research Institute (Group) Co., Ltd. (Grant No. H20230317).
Abstract: Accurate prediction of landslide displacement is crucial for effective early warning of landslide disasters. While most existing prediction methods focus on time-series forecasting for individual monitoring points, there is limited research on the spatiotemporal characteristics of landslide deformation. This paper proposes a novel Multi-Relation Spatiotemporal Graph Residual Network with Multi-Level Feature Attention (MFA-MRSTGRN) that effectively improves landslide displacement prediction through spatiotemporal fusion. The model integrates internal seepage factors as data feature enhancements together with external triggering factors, allowing accurate capture of the complex spatiotemporal characteristics of landslide displacement and the construction of a multi-source heterogeneous dataset. The MFA-MRSTGRN model incorporates dynamic graph theory and four key modules: multi-level feature attention, temporal-residual decomposition, spatial multi-relational graph convolution, and spatiotemporal fusion prediction. This comprehensive approach enables efficient analysis of multi-source heterogeneous datasets, facilitating adaptive exploration of the evolving multi-relational, multi-dimensional spatiotemporal complexities in landslides. When applying this model to predict the displacement of the Liangshuijing landslide, we demonstrate that the MFA-MRSTGRN model surpasses traditional models such as random forest (RF), long short-term memory (LSTM), and spatial-temporal graph convolutional network (ST-GCN) models in terms of various evaluation metrics, including mean absolute error (MAE = 1.27 mm), root mean square error (RMSE = 1.49 mm), mean absolute percentage error (MAPE = 0.026), and R-squared (R² = 0.88). Furthermore, feature ablation experiments indicate that incorporating internal seepage factors improves the predictive performance of landslide displacement models. This research provides an advanced and reliable method for landslide displacement prediction.
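The evaluation metrics quoted above have standard definitions; the short NumPy sketch below computes MAE, RMSE, MAPE, and R² for a predicted displacement series, with toy values standing in for actual monitoring data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, MAPE, and R-squared for a displacement prediction series."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_true))            # assumes no zero displacements
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

# Toy cumulative-displacement series (mm) standing in for real monitoring data
truth = [102.4, 104.1, 106.9, 110.3, 114.0]
pred = [101.8, 104.9, 106.1, 111.5, 113.2]
print(regression_metrics(truth, pred))
```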
Funding: Co-supported by the National Key Research and Development Program of China (No. 2022YFF0503100) and the Youth Innovation Project of the Pandeng Program of the National Space Science Center, Chinese Academy of Sciences (No. E3PD40012S).
Abstract: As we look ahead to future lunar exploration missions, such as crewed lunar exploration and the establishment of lunar scientific research stations, lunar rovers will need to cover vast distances, ranging from kilometers to tens of kilometers and even hundreds or thousands of kilometers. It is therefore crucial to develop effective long-range path planning for lunar rovers to meet the demands of lunar patrol exploration. This paper presents a hierarchical map model path planning method that utilizes existing high-resolution images, digital elevation models, and mineral abundance maps. The objective is to address the construction of lunar rover travel costs in the absence of large-scale, high-resolution digital elevation models. The method models the reference and semantic layers using middle- and low-resolution remote sensing data. Multi-scale obstacles on the lunar surface are extracted by applying a deep learning algorithm to the high-resolution imagery, and the obstacle avoidance layer is modeled on this basis. A two-stage exploratory path planning decision is employed for long-distance driving path planning on a global-local scale. The proposed method analyzes the long-distance accessibility of various areas of scientific significance, such as Rima Bode, and a high-precision digital elevation model created from stereo images is used to validate the method. Based on the findings, the entire route spans a distance of 930.32 km, avoids meter-level impact craters and linear structures, and maintains an average slope of less than 8°. The planned route supports scientific exploration by traversing at least seven basalt units, probing the history of lunar volcanic activity, and establishing 'golden spike' reference points for lunar stratigraphy. The final path planning result can serve as a valuable reference for the design, mission demonstration, and subsequent project implementation of the new crewed lunar rover.
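At the obstacle-avoidance level, path search on a cost map can be illustrated with a grid Dijkstra that penalizes steep cells and skips cells flagged as obstacles. The sketch below is a minimal stand-in for the two-stage planner described above; the grid, cost weighting, and per-cell 8° slope cutoff are simplifying assumptions.

```python
import heapq

def plan_path(slope, obstacle, start, goal, slope_weight=0.5, max_slope=8.0):
    """Dijkstra search on a grid cost map built from slope and obstacle layers.

    slope: 2D list of slope values in degrees; obstacle: 2D list of 0/1 flags.
    Cells steeper than max_slope or flagged as obstacles are impassable.
    """
    rows, cols = len(slope), len(slope[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if obstacle[nr][nc] or slope[nr][nc] > max_slope:
                continue                                 # obstacle-avoidance layer
            step = 1.0 + slope_weight * slope[nr][nc]    # steeper cells cost more
            nd = d + step
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = node
                heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path))

slope = [[1, 2, 9], [2, 3, 4], [1, 2, 1]]       # toy slope map (degrees)
obstacle = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]    # toy obstacle flags
print(plan_path(slope, obstacle, start=(0, 0), goal=(2, 2)))
```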
Funding: Supported by the Fund of the Key Laboratory of Biomedical Engineering of Hainan Province (No. BME20240001), the STI2030-Major Projects (No. 2021ZD0200104), and the National Natural Science Foundation of China under Grant 61771437.
Abstract: Deep learning networks are increasingly exploited for neuronal soma segmentation. However, annotating datasets is an expensive and time-consuming task. Unsupervised domain adaptation is an effective way to mitigate this problem, as it learns an adaptive segmentation model by transferring knowledge from a richly labeled source domain. In this paper, we propose a multi-level distribution alignment-based unsupervised domain adaptation network (MDA-Net) for segmentation of 3D neuronal soma images. Distribution alignment is performed in both the feature space and the output space. In the feature space, features from different scales are adaptively fused to enhance the feature extraction capability for small target somata and are constrained to be domain invariant by an adversarial adaptation strategy. In the output space, local discrepancy maps that reveal the spatial structures of somata are constructed on the predicted segmentation results; distribution alignment is then performed on the local discrepancy maps across domains to obtain a superior discrepancy map in the target domain, achieving refined segmentation of neuronal somata. Additionally, after a period of the distribution alignment procedure, a portion of target samples with highly confident pseudo-labels are selected as training data, which assists in learning a more adaptive segmentation network. We verified the superiority of the proposed algorithm by comparing it with several domain adaptation networks on two 3D mouse brain neuronal soma datasets and one macaque brain neuronal soma dataset.
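Adversarial distribution alignment in the feature space typically pairs the segmentation network with a domain discriminator: the discriminator learns to separate source from target features, while the segmenter is trained to fool it. The PyTorch sketch below shows that loss structure with pooled feature vectors; the feature dimension and discriminator architecture are assumptions, not the MDA-Net design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """Small classifier that predicts whether a feature vector is source or target."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1)
        )

    def forward(self, feats):
        return self.net(feats)   # raw logits

def alignment_losses(disc, source_feats, target_feats):
    """Discriminator loss (classify domains) and adversarial loss (fool it)."""
    src_logit = disc(source_feats)
    tgt_logit = disc(target_feats.detach())
    d_loss = F.binary_cross_entropy_with_logits(src_logit, torch.ones_like(src_logit)) + \
             F.binary_cross_entropy_with_logits(tgt_logit, torch.zeros_like(tgt_logit))
    # Segmenter side: push target features toward the "source" label
    adv_loss = F.binary_cross_entropy_with_logits(disc(target_feats),
                                                  torch.ones_like(tgt_logit))
    return d_loss, adv_loss

disc = DomainDiscriminator()
src = torch.randn(8, 256)   # pooled features from labeled source volumes
tgt = torch.randn(8, 256)   # pooled features from unlabeled target volumes
print(alignment_losses(disc, src, tgt))
```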
Abstract: Thyroid nodules, a common disorder of the endocrine system, require accurate segmentation in ultrasound images for effective diagnosis and treatment. However, achieving precise segmentation remains a challenge due to various factors, including scattering noise, low contrast, and limited resolution in ultrasound images. Although existing segmentation models have made progress, they still suffer from several limitations, such as high error rates, low generalizability, overfitting, and limited feature learning capability. To address these challenges, this paper proposes a Multi-level Relation Transformer-based U-Net (MLRT-UNet) to improve thyroid nodule segmentation. The MLRT-UNet leverages a novel Relation Transformer, which processes images at multiple scales, overcoming the limitations of traditional encoding methods. This transformer integrates local and global features effectively through self-attention and cross-attention units, capturing intricate relationships within the data. The approach also introduces a Co-operative Transformer Fusion (CTF) module to combine multi-scale features from different encoding layers, enhancing the model's ability to capture complex patterns. Furthermore, the Relation Transformer block enhances long-distance dependencies during the decoding process, improving segmentation accuracy. Experimental results show that the MLRT-UNet achieves high segmentation accuracy, reaching 98.2% on the Digital Database Thyroid Image (DDT) dataset, 97.8% on the Thyroid Nodule 3493 (TG3K) dataset, and 98.2% on the Thyroid Nodule 3K (TN3K) dataset. These findings demonstrate that the proposed method significantly enhances the accuracy of thyroid nodule segmentation, addressing the limitations of existing models.
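The cross-attention fusion of features from different encoder scales can be illustrated with a standard multi-head attention layer in which tokens from one scale query tokens from another. The sketch below is a generic example of that pattern under assumed token counts and embedding size; it is not the Relation Transformer or CTF module itself.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse two feature sequences: one scale queries the other (cross-attention)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fine_tokens, coarse_tokens):
        # fine_tokens attend to coarse_tokens (query = fine, key/value = coarse)
        fused, _ = self.attn(fine_tokens, coarse_tokens, coarse_tokens)
        return self.norm(fine_tokens + fused)    # residual connection

fusion = CrossAttentionFusion()
fine = torch.randn(2, 64 * 64, 128)     # tokens from a high-resolution encoder stage
coarse = torch.randn(2, 16 * 16, 128)   # tokens from a deeper, lower-resolution stage
print(fusion(fine, coarse).shape)        # torch.Size([2, 4096, 128])
```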
Abstract: Accurate traffic flow prediction (TFP) is vital for efficient and sustainable transportation management and the development of intelligent traffic systems. However, missing data in real-world traffic datasets poses a significant challenge to maintaining prediction precision. This study introduces REPTF-TMDI, a novel method that combines a Reduced Error Pruning Tree Forest (REPTree Forest) with a newly proposed Time-based Missing Data Imputation (TMDI) approach. The REPTree Forest, an ensemble learning approach, is tailored to time-related traffic data to enhance predictive accuracy and support the evolution of sustainable urban mobility solutions. Meanwhile, the TMDI approach exploits temporal patterns to estimate missing values reliably whenever empty fields are encountered. The proposed method was evaluated using hourly traffic flow data from a major U.S. roadway spanning 2012-2018, incorporating temporal features (e.g., hour, day, month, year, weekday), a holiday indicator, and weather conditions (temperature, rain, snow, and cloud coverage). Experimental results demonstrated that the REPTF-TMDI method outperformed conventional imputation techniques across various missing data ratios, achieving an average 11.76% improvement in the correlation coefficient (R). Furthermore, the REPTree Forest achieved improvements of 68.62% in RMSE and 70.52% in MAE compared with existing state-of-the-art models. These findings highlight the method's ability to significantly boost traffic flow prediction accuracy even in the presence of missing data, thereby contributing to the broader objectives of sustainable urban transportation systems.
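A time-based imputation consistent with the description above is to fill a missing hourly volume with the mean of observed volumes sharing the same weekday and hour. The exact TMDI rule is not spelled out here, so the sketch below should be read as an assumed, simplified variant.

```python
from collections import defaultdict
from statistics import mean

def time_based_impute(records):
    """Fill missing hourly traffic volumes using the mean of the same (weekday, hour).

    records: list of dicts with keys 'weekday' (0-6), 'hour' (0-23), and 'volume'
    (a number, or None when missing). Returns a new list with None volumes imputed.
    """
    by_slot = defaultdict(list)
    for r in records:
        if r["volume"] is not None:
            by_slot[(r["weekday"], r["hour"])].append(r["volume"])
    overall = mean(v for vs in by_slot.values() for v in vs)   # global fallback
    filled = []
    for r in records:
        if r["volume"] is None:
            slot = by_slot.get((r["weekday"], r["hour"]))
            r = {**r, "volume": mean(slot) if slot else overall}
        filled.append(r)
    return filled

data = [
    {"weekday": 0, "hour": 8, "volume": 5200},
    {"weekday": 0, "hour": 8, "volume": 4900},
    {"weekday": 0, "hour": 8, "volume": None},   # missing field to impute
]
print(time_based_impute(data)[-1]["volume"])     # 5050
```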
Funding: The National Natural Science Foundation of China (Nos. 60403027, 60773191, and 70771043) and the National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z403).
Abstract: An access control model is proposed based on the well-known Bell-LaPadula (BLP) model. In the proposed model, hierarchical relationships among departments are built, a new concept named post is proposed, and the assignment of security tags to subjects and objects is greatly simplified. Interoperation among different departments is implemented by assigning multiple security tags to one post, and the closer two departments are on the organization tree, the more secret objects can be exchanged by their staff. Access control matrices for the department, post, and staff are defined. By using these three access control matrices, a multi-granularity and flexible discretionary access control policy is implemented. The outstanding merit of the BLP model is inherited, and the new model guarantees that all information flow remains under control. Finally, our study shows that, compared with the BLP model, the proposed model is more flexible.
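The mandatory-access side of a BLP-style model can be illustrated with the two classic checks, no read up and no write down, applied to a post that carries multiple security tags (one per department it interoperates with). The sketch below simplifies tags to (department, level) pairs and omits the access control matrices, so it is an illustration of the idea rather than the proposed model.

```python
# Minimal sketch of BLP-style mandatory access checks where a "post" carries
# multiple security tags, one per department it may interoperate with.
# Tags are simplified to (department, level) pairs; this is an illustration,
# not the paper's full model with department/post/staff access control matrices.

LEVELS = {"public": 0, "internal": 1, "secret": 2, "top-secret": 3}

def can_read(post_tags, obj_dept, obj_level):
    """Simple security property (no read up): a tag must dominate the object."""
    return any(dept == obj_dept and LEVELS[lvl] >= LEVELS[obj_level]
               for dept, lvl in post_tags)

def can_write(post_tags, obj_dept, obj_level):
    """Star property (no write down): the object must dominate the post's tag."""
    return any(dept == obj_dept and LEVELS[lvl] <= LEVELS[obj_level]
               for dept, lvl in post_tags)

# A post assigned tags in two departments, enabling cross-department exchange
post_tags = [("finance", "secret"), ("audit", "internal")]
print(can_read(post_tags, "finance", "internal"))   # True: secret dominates internal
print(can_write(post_tags, "finance", "internal"))  # False: would be a write-down
print(can_read(post_tags, "audit", "secret"))       # False: no read up in audit
```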