Abstract: This study proposes a lightweight apple detection method employing cascaded knowledge distillation (KD) to address the critical challenges of excessive parameters and high deployment costs in existing models. We introduce a Lightweight Feature Pyramid Network (LFPN) integrated with Lightweight Downsampling Convolutions (LDConv) to substantially reduce model complexity without compromising accuracy. A Lightweight Multi-channel Attention (LMCA) mechanism is incorporated between the backbone and neck networks to effectively suppress complex background interference in orchard environments. Furthermore, model size is compressed via Group_Slim channel pruning combined with a cascaded distillation strategy. Experimental results demonstrate that the proposed model achieves 1% higher Average Precision (AP) than the baseline while remaining extremely lightweight (only 800K parameters). Notably, the two-stage KD version achieves over 20 Frames Per Second (FPS) on Central Processing Unit (CPU) devices, confirming its practical deployability in real-world applications.
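The abstract names a cascaded (two-stage) KD strategy but gives no implementation details. The sketch below illustrates the generic idea of a distillation cascade, where a large teacher first distills into a mid-sized assistant, which in turn distills into the lightweight student; the soft-label KL loss, temperature, loss weighting, and all model/loader names are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch of two-stage cascaded knowledge distillation (PyTorch).
# Hypothetical: the paper's actual losses and networks are not specified
# in the abstract; this shows only the generic teacher -> assistant ->
# student cascade with standard soft-label (Hinton-style) distillation.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions; scaled by T*T to keep gradient magnitudes comparable."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def distill(teacher, student, loader, epochs=1, alpha=0.5, lr=1e-3):
    """One cascade stage: train `student` against a frozen `teacher`,
    blending hard-label cross-entropy with the soft KD term."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                t_logits = teacher(x)          # teacher stays frozen
            s_logits = student(x)
            loss = alpha * F.cross_entropy(s_logits, y) \
                 + (1 - alpha) * kd_loss(s_logits, t_logits)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student

# Cascade: run two KD stages back to back. teacher_net, assistant_net,
# student_net, and train_loader are hypothetical placeholders.
# assistant_net = distill(teacher_net, assistant_net, train_loader)
# student_net   = distill(assistant_net, student_net, train_loader)
```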
Funding: This work was funded by the Jilin Provincial Department of Education Project Fund, grant number JJKH20240315KJ, and the National Natural Science Foundation of China under Grant 52175538.