Funding: supported by the National Natural Science Foundation of China (Nos. 62373215, 62373219, and 62073193), the Natural Science Foundation of Shandong Province (No. ZR2023MF100), the Key Projects of the Ministry of Industry and Information Technology (No. TC220H057-2022), and the Independently Developed Instrument Funds of Shandong University (No. zy20240201).
Abstract: Current you-only-look-once (YOLO)-based models face the challenge of excessive parameter counts and computational complexity in printed circuit board (PCB) defect detection scenarios. To address this problem, we propose a new method that combines the lightweight mobile vision transformer (MobileViT) network with the convolutional block attention module (CBAM) and a new bounding-box regression loss function. The method requires fewer computational resources, making it more suitable for embedded edge detection devices, while the new loss function improves the localization accuracy of the bounding box and enhances the robustness of the model. Experiments on public datasets demonstrate that the improved model achieves an average accuracy of 87.9% across six typical defect detection tasks while reducing computational cost by nearly 90%. It thus significantly reduces the model's computational requirements while maintaining accuracy, ensuring reliable performance for edge deployment.
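The abstract does not give the authors' implementation, but the CBAM mechanism it names follows a well-known pattern: a channel-attention gate built from pooled statistics fed through a shared MLP, followed by a spatial gate built from channel-wise pooling. The sketch below illustrates that pattern in NumPy; the function names, the reduction ratio implied by the MLP weights, and the replacement of the usual 7×7 spatial convolution with a simple additive gate are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """Channel gate on a (C, H, W) feature map.

    w1 (C//r, C) and w2 (C, C//r) form the shared two-layer MLP
    applied to both average- and max-pooled channel descriptors.
    """
    avg = feat.mean(axis=(1, 2))                     # (C,)
    mx = feat.max(axis=(1, 2))                       # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0) +   # ReLU in the MLP
                  w2 @ np.maximum(w1 @ mx, 0.0))     # shared weights
    return feat * att[:, None, None]                 # broadcast over H, W

def cbam_spatial_attention(feat):
    """Spatial gate: pool across channels, then squash to (0, 1).

    A simple additive gate stands in for the 7x7 convolution used in
    the standard CBAM formulation (assumption for brevity).
    """
    avg = feat.mean(axis=0, keepdims=True)           # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)             # (1, H, W)
    return feat * sigmoid(avg + mx)
```

Because both gates lie in (0, 1), the module can only rescale features, never amplify them, which is why it adds negligible parameters and fits the lightweight, edge-oriented design the abstract describes.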
Abstract: With the rapid advancement of virtual reality, dynamic gesture recognition has become an indispensable technique for human–computer interaction in virtual environments. Recognizing dynamic gestures is challenging because of their high degrees of freedom, individual differences among users, and variation in gesture space. To address the low recognition accuracy of existing networks, an improved dynamic gesture recognition algorithm based on the ResNeXt architecture is proposed. The algorithm employs three-dimensional convolution to capture the spatiotemporal features intrinsic to dynamic gestures. To sharpen the model's focus and improve its accuracy in identifying dynamic gestures, a lightweight convolutional attention mechanism is introduced; it improves precision and also speeds convergence during training. To further optimize performance, a deep attention submodule is added to the convolutional attention module to strengthen the network's temporal feature extraction. Empirical evaluations on the EgoGesture and NvGesture datasets show that the proposed model reaches accuracies of 95.03% and 86.21%, respectively; when operating in RGB mode, it reaches 93.49% and 80.22%. These results underscore the effectiveness of the proposed algorithm in recognizing dynamic gestures with high accuracy, showcasing its potential for advanced human–computer interaction systems.
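The three-dimensional convolution this abstract relies on differs from its 2-D counterpart only in that the kernel also slides along the time axis, so one kernel responds to motion patterns across frames as well as spatial patterns within a frame. A minimal single-channel, valid-padding sketch in NumPy (loop-based for clarity; real networks use optimized library kernels, and the function name here is illustrative):

```python
import numpy as np

def conv3d_single(clip, kern):
    """Valid 3-D convolution of one (T, H, W) clip with one (kt, kh, kw)
    kernel, stride 1. Each output value pools evidence from a small
    spatiotemporal window, capturing motion as well as appearance."""
    kt, kh, kw = kern.shape
    T, H, W = clip.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                window = clip[t:t + kt, i:i + kh, j:j + kw]
                out[t, i, j] = np.sum(window * kern)
    return out

# A uniform clip with an all-ones 2x2x2 kernel: every output entry is
# the sum over 8 voxels.
clip = np.ones((4, 4, 4))
kern = np.ones((2, 2, 2))
result = conv3d_single(clip, kern)   # shape (3, 3, 3), all entries 8.0
```

In a ResNeXt-style network, stacks of such kernels (with many channels and grouped convolutions) form the residual blocks; the attention modules described above then reweight the resulting spatiotemporal feature maps.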