The surge of large-scale models in recent years has led to breakthroughs in numerous fields, but it has also introduced higher computational costs and more complex network architectures. These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization. To address this issue, various network compression techniques have been developed, such as network pruning. A typical pruning algorithm follows a three-step pipeline of training, pruning, and retraining. Existing methods often directly set the pruned filters to zero during retraining, significantly reducing the parameter space. However, this direct pruning strategy frequently results in irreversible information loss. In the early stages of training, a network still contains much uncertainty, and evaluating filter importance may not be sufficiently rigorous. To manage the pruning process effectively, this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation, tailored to the characteristics of network training. Unlike other pruning algorithms that directly reduce filter weights, this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation. It employs a gentle decay rate in the initial training stage, a rapid decay rate during the intermediate stage, and a slower decay rate in the network convergence stage. Additionally, the decay rate is adjusted adaptively based on the filter weights at each stage. By controlling the adaptive decay rate at each stage, the pruning of neural network filters can be effectively managed. In experiments on the CIFAR-10 and ILSVRC-2012 datasets, the proposed method significantly reduces floating-point operations at the same pruning rate. Specifically, at a 30% pruning rate on the ResNet-110 network, the pruned network not only decreases floating-point operations by 40.8% but also improves classification accuracy by 0.49% compared to the original network.
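The slow–fast–slow decay schedule described above can be illustrated with a small sketch. This is a minimal interpretation, not the authors' implementation: the function names, the steepness parameter `r`, and the use of a normalized logistic S-curve as the cumulative decay factor are assumptions, and the per-stage adaptation to filter weights mentioned in the abstract is omitted.

```python
import math

def logistic_decay_factor(epoch, total_epochs, r=8.0):
    """Cumulative logistic progress in [0, 1]: slow at the start,
    fast in the middle, slow again near convergence (an S-curve).
    Solution shape of the logistic growth ODE dW/dt = rW(1 - W/K)."""
    t = epoch / total_epochs  # normalized training time in [0, 1]
    return 1.0 / (1.0 + math.exp(-r * (t - 0.5)))

def decay_pruned_filter(weight_norm, epoch, total_epochs):
    """Shrink a pruning-candidate filter gradually instead of zeroing
    it outright: the shrink factor follows the logistic S-curve, so
    decay is gentle early, rapid mid-training, and slow near the end."""
    progress = logistic_decay_factor(epoch, total_epochs)
    return weight_norm * (1.0 - progress)
```

With `r=8.0`, the factor stays near 0 for the first epochs, rises steeply around the midpoint, and saturates near 1, matching the gentle/rapid/slower three-stage description.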
The high similarity of shellfish images and unbalanced samples are key factors affecting the accuracy of shellfish recognition. This study proposes a new shellfish recognition method, FL_Net, based on a Convolutional Neural Network (CNN). We first establish a shellfish image (SI) dataset with 68 species and 93,574 images, and then propose a filter pruning and repairing model driven by an output-entropy and orthogonality measurement for recognizing shellfish with highly similar features, improving the feature expression ability of valid information. For shellfish recognition with unbalanced samples, a hybrid loss function, comprising a regularization term and a focal loss term, is employed to reduce the weight of easily classified samples by controlling each sample species' share of the total loss. Experimental results show that the proposed method achieves a shellfish recognition accuracy of 93.95%, 13.68% higher than the benchmark network (VGG16), and improves accuracy by 0.46%, 17.41%, 17.36%, 4.46%, 1.67%, and 1.03% over AlexNet, GoogLeNet, ResNet50, SN_Net, MutualNet, and ResNeSt, respectively, verifying the efficiency of the proposed method.
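The hybrid loss described above can be sketched as a focal term plus an L2 regularization term. This is an illustrative reading, not the paper's exact formulation: the function names and the `gamma` and `lam` values are assumptions, and the per-species weight sharing is omitted.

```python
import math

def focal_loss(p, gamma=2.0):
    """Focal term: p is the predicted probability of the true class.
    The (1 - p)**gamma modulator shrinks the loss contribution of
    easily classified (high-p) samples relative to hard ones."""
    return -((1.0 - p) ** gamma) * math.log(p)

def hybrid_loss(p, weights, gamma=2.0, lam=1e-4):
    """Focal term plus an L2 regularization penalty on the model
    weights, as a sketch of the hybrid objective described above."""
    reg = lam * sum(w * w for w in weights)
    return focal_loss(p, gamma) + reg
```

For a well-classified sample (p = 0.9) the focal term is orders of magnitude smaller than for a hard sample (p = 0.1), which is how easy samples' share of the total loss is suppressed.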
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62172132.
Funding: Jointly supported by the National Key R&D Program "Blue Granary" Technology Innovation Key Special Project (2020YFD0900204) and the Yantai Key R&D Project (2019XDHZ084).