Funding: Supported, in part, by the National Natural Science Foundation of China under Grant Nos. 61502240, 61502096, 61304205, and 61773219; in part, by the Natural Science Foundation of Jiangsu Province under Grant Nos. BK20201136 and BK20191401; and in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: With the increasing deployment of surveillance cameras, vehicle re-identification (Re-ID) has attracted growing attention in the field of public security. Vehicle Re-ID is challenging owing to large intra-class differences caused by the different views of a vehicle as it travels, and to obvious inter-class similarities caused by similar appearances. Many existing methods focus on local attributes by marking local locations. However, these methods require additional annotations, resulting in complex algorithms and prohibitive computation time. To cope with these challenges, this paper proposes a vehicle Re-ID model based on an optimized DenseNet121 with a joint loss. The model applies the SE block to automatically learn the importance of each channel feature and assign it a corresponding weight; features are then transferred to the deeper layers with these adjusted weights, which reduces the transmission of redundant information during feature reuse in DenseNet121. At the same time, the proposed model leverages the complementary expressive advantages of the CNN's middle-layer features to enhance feature representation. Additionally, a joint loss combining focal loss and triplet loss is proposed for vehicle Re-ID to improve the model's ability to discriminate hard-to-separate samples by enlarging their weight during training. Experimental results on the VeRi-776 dataset show that mAP and Rank-1 reach 75.5% and 94.8%, respectively. Moreover, Rank-1 on the small, medium, and large sub-datasets of the VehicleID dataset reaches 81.3%, 78.9%, and 76.5%, respectively, surpassing most existing vehicle Re-ID methods.
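The abstract names the two components of the joint loss but not how they are combined; a minimal NumPy sketch, assuming the standard formulations of focal loss and triplet loss and a hypothetical balancing weight `lam` (not specified in the abstract), could look like:

```python
import numpy as np

def focal_loss(probs, label, gamma=2.0, alpha=0.25):
    # Focal loss down-weights easy samples: FL = -alpha * (1 - p_t)^gamma * log(p_t),
    # so hard-to-separate samples (low p_t) dominate the gradient.
    p_t = probs[label]
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

def triplet_loss(anchor, positive, negative, margin=0.3):
    # Hinge on the gap between anchor-positive and anchor-negative distances.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

def joint_loss(probs, label, anchor, positive, negative, lam=1.0):
    # Weighted sum of the two terms; lam is a hypothetical balancing weight.
    return focal_loss(probs, label) + lam * triplet_loss(anchor, positive, negative)
```

The triplet term vanishes once embeddings of different identities are separated by more than the margin, while the focal term keeps pressure on misclassified samples.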
Funding: Supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. 3,363].
Abstract: As ocular computer-aided diagnostic (CAD) tools become more widely accessible, many researchers are developing deep learning (DL) methods to aid in ocular disease (OHD) diagnosis. Common eye diseases such as cataracts (CATR), glaucoma (GLU), and age-related macular degeneration (AMD) are the focus of this study, which uses DL to examine their identification. Data imbalance and outliers are widespread in fundus images, which can make it difficult for many DL algorithms to accomplish this analytical task. The creation of efficient and reliable DL algorithms is seen as the key to further enhancing detection performance. Using the analysis of color retinal fundus images, this study offers a DL model combined with a novel concoction loss function (CLF) for the automated identification of OHD. The study combines focal loss (FL) and a correntropy-induced loss function (CILF) in the proposed DL model to improve the recognition performance of classifiers for biomedical data, motivated by the good generalization and robustness of these two losses on complex datasets with class imbalance and outliers. The classification performance of the DL model with the proposed loss function is compared to that of the baseline models using accuracy (ACU), recall (REC), specificity (SPF), Kappa, and the area under the receiver operating characteristic curve (AUC) as evaluation metrics. The testing shows that the method is reliable and efficient.
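The abstract names the two components of the concoction loss but not their exact combination; a minimal NumPy sketch, assuming the standard bounded form of the correntropy-induced loss and a hypothetical mixing weight `lam` (neither is specified in the abstract), might be:

```python
import numpy as np

def correntropy_loss(pred, target, sigma=1.0):
    # Correntropy-induced loss: 1 - exp(-e^2 / (2*sigma^2)).
    # Bounded in [0, 1), so outlier errors saturate instead of
    # dominating the objective.
    e = pred - target
    return float(np.mean(1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2))))

def batch_focal_loss(probs, onehot, gamma=2.0, alpha=0.25):
    # Mean focal loss over a batch with one-hot labels; down-weights
    # well-classified samples to counter class imbalance.
    p_t = np.sum(probs * onehot, axis=1)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))

def concoction_loss(probs, onehot, pred, target, lam=0.5):
    # Hypothetical weighted sum of the two terms.
    return batch_focal_loss(probs, onehot) + lam * correntropy_loss(pred, target)
```

The boundedness of the correntropy term is what gives robustness to outliers: an error of 100 contributes barely more than an error of 3, whereas a squared-error term would explode.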