Abstract: Bees play a crucial role in the global food chain, pollinating over 75% of food crops and producing valuable products such as bee pollen, propolis, and royal jelly. However, the Asian hornet poses a serious threat to bee populations by preying on them and disrupting agricultural ecosystems. To address this issue, this study developed a modified YOLOv7-tiny (You Only Look Once) model for efficient hornet detection. The model incorporated space-to-depth (SPD) and squeeze-and-excitation (SE) attention mechanisms and involved detailed annotation of the hornet's head and full body, significantly enhancing the detection of small objects. The Taguchi method was also used to optimize the training parameters, resulting in optimal performance. Data for this study were collected from the Roboflow platform as a 640×640 resolution dataset, on which the YOLOv7-tiny model was trained. After optimizing the training parameters with the Taguchi method, significant improvements were observed in accuracy, precision, recall, F1 score, and mean average precision (mAP) for hornet detection. Without the hornet head label, incorporating the SPD attention mechanism resulted in a peak mAP of 98.7%, an 8.58% increase over the original YOLOv7-tiny. With the hornet head label, applying the SPD attention mechanism and the Soft-CIoU loss function raised the mAP to 97.3%, a 7.04% increase over the original YOLOv7-tiny. Furthermore, the Soft-CIoU loss function contributed additional performance gains during the validation phase.
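For readers unfamiliar with the space-to-depth (SPD) module mentioned above, the sketch below shows a minimal PyTorch version of the SPD-Conv idea: each 2×2 spatial neighbourhood is folded into the channel dimension before a stride-1 convolution, so the feature map is downsampled without discarding pixel information that small objects depend on. The class name, kernel size, and activation are illustrative assumptions, not the authors' exact YOLOv7-tiny configuration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a convolution (a common SPD-Conv layout)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # After space-to-depth the channel count is multiplied by 4.
        self.conv = nn.Conv2d(4 * in_channels, out_channels, kernel_size=3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the map into its four 2x2 sub-grids and stack them on the channel axis:
        # (C, H, W) -> (4C, H/2, W/2), keeping every pixel.
        x = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,
        )
        return self.act(self.conv(x))


# Example: a 640x640 RGB input is halved spatially without losing information.
feats = SPDConv(3, 32)(torch.randn(1, 3, 640, 640))
print(feats.shape)  # torch.Size([1, 32, 320, 320])
```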
Funding: Supported, in part, by the National Natural Science Foundation of China under Grant Numbers 61502240, 61502096, 61304205, and 61773219; in part, by the Natural Science Foundation of Jiangsu Province under Grant Numbers BK20201136 and BK20191401; and in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: With the increasing deployment of surveillance cameras, vehicle re-identification (Re-ID) has attracted growing attention in the field of public security. Vehicle Re-ID is challenging because of the large intra-class differences caused by the different views of a vehicle as it travels and the obvious inter-class similarities caused by similar appearances. Many existing methods focus on local attributes by marking local locations; however, these methods require additional annotations, resulting in complex algorithms and excessive computation time. To cope with these challenges, this paper proposes a vehicle Re-ID model based on an optimized DenseNet121 with a joint loss. The model applies the SE block to automatically obtain the importance of each channel feature and assign it a corresponding weight; features are then transferred to the deeper layers according to these weights, which reduces the transmission of redundant information during feature reuse in DenseNet121. At the same time, the proposed model leverages the complementary expressive power of the CNN's intermediate features to enhance feature representation. Additionally, a joint loss combining focal loss and triplet loss is proposed for vehicle Re-ID to strengthen the model's ability to discriminate hard-to-separate samples by enlarging their weight during training. Experimental results on the VeRi-776 dataset show that mAP and Rank-1 reach 75.5% and 94.8%, respectively. In addition, Rank-1 on the small, medium, and large sub-datasets of the VehicleID dataset reaches 81.3%, 78.9%, and 76.5%, respectively, surpassing most existing vehicle Re-ID methods.
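The channel reweighting performed by the SE block described above fits in a few lines; the following PyTorch sketch is a generic squeeze-and-excitation module, with the reduction ratio of 16 and the layer names chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn one importance weight per channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # excitation: channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # reweight each channel feature map


# Example: reweight a 256-channel DenseNet-style feature map.
out = SEBlock(256)(torch.randn(2, 256, 28, 28))
print(out.shape)  # torch.Size([2, 256, 28, 28])
```

The joint loss in the abstract would then combine a focal classification term with a triplet metric-learning term, e.g. L = L_focal + λ·L_triplet, where λ is a balancing weight; the exact weighting is not stated in the abstract.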
Abstract: Co0.85Se magnetic nanoparticles supported on carbon nanotubes were prepared by a one-step hydrothermal method. The saturation magnetization and coercivity of the MWCNTs/Co0.85Se nanocomposites increased owing to the smaller Co0.85Se nanoparticle size in the nanocomposites and the larger spacing between the Co0.85Se nanoparticles, which increased the specific surface area and thereby benefited the electrocatalytic performance of the catalyst. Moreover, the MWCNTs/Co0.85Se nanocomposites exhibited excellent hydrogen evolution reaction (HER) performance owing to the presence of MWCNTs, which enhanced mass transport during the electrocatalytic reactions. Furthermore, in an acid solution, the 30 wt% MWCNTs/Co0.85Se composite catalyst delivered a current density of 10 mA cm⁻² at a small overpotential of 266 mV vs. RHE, with a small Tafel slope of 60.5 mV dec⁻¹ and good stability for the HER.
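As a rough, back-of-the-envelope use of the reported figures, the Tafel relation η = a + b·log₁₀(j) lets one extrapolate the overpotential at higher current densities from the reported 60.5 mV dec⁻¹ slope and the 266 mV @ 10 mA cm⁻² operating point; the short script below does this, assuming Tafel behaviour persists at those current densities (an assumption, not a reported measurement).

```python
import math

# Reported operating point and Tafel slope for the 30 wt% MWCNTs/Co0.85Se catalyst.
ETA_REF = 266.0      # overpotential in mV at 10 mA cm^-2
TAFEL_SLOPE = 60.5   # mV per decade of current density

def overpotential(j_target: float, j_ref: float = 10.0, eta_ref: float = ETA_REF) -> float:
    """Extrapolate the overpotential (mV) at j_target (mA cm^-2) from the Tafel law."""
    return eta_ref + TAFEL_SLOPE * math.log10(j_target / j_ref)

# One extra decade of current costs one Tafel slope of extra overpotential:
print(overpotential(100.0))  # 326.5 mV, if Tafel behaviour extends to 100 mA cm^-2
```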
Funding: Supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: Segmenting brain tumors in Magnetic Resonance Imaging (MRI) volumes is challenging because of their diffuse and irregular shapes. Recently, 2D and 3D deep neural networks have become popular for medical image segmentation owing to the availability of labelled datasets. However, 3D networks can be computationally expensive and require significant training resources. This research proposes a 3D deep learning model for brain tumor segmentation that uses lightweight feature extraction modules to improve performance without compromising contextual information or accuracy. The proposed model, called the Hybrid Attention-Based Residual Unet (HA-RUnet), is based on the Unet architecture and utilizes residual blocks to extract low- and high-level features from MRI volumes. Attention and Squeeze-and-Excitation (SE) modules are also integrated at different levels to adaptively learn attention-aware features within local and global receptive fields. The proposed model was trained on the BraTS-2020 dataset and, on the test dataset, achieved Dice scores of 0.867, 0.813, and 0.787 and sensitivities of 0.93, 0.88, and 0.83 for the Whole Tumor, Tumor Core, and Enhancing Tumor classes, respectively. Experimental results show that the proposed HA-RUnet model outperforms the ResUnet and AResUnet base models while having fewer parameters than other state-of-the-art models. Overall, the proposed HA-RUnet model can improve brain tumor segmentation accuracy and facilitate appropriate diagnosis and treatment planning for medical practitioners.
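The Dice scores quoted above measure the overlap between a predicted tumor mask and the ground-truth mask; the snippet below is a minimal NumPy illustration of the metric (the smoothing constant is an assumption added to avoid division by zero), not the authors' evaluation code.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, smooth: float = 1e-6) -> float:
    """Dice = 2 * |P ∩ T| / (|P| + |T|) for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)

# Example with two small synthetic 3D masks.
pred = np.zeros((4, 4, 4), dtype=bool)
target = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True      # 8 predicted voxels
target[1:3, 1:3, 2:4] = True    # 8 ground-truth voxels, half of them overlapping
print(round(dice_score(pred, target), 3))  # 0.5
```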