In its 2023 global health statistics, the World Health Organization noted that noncommunicable diseases (NCDs) remain the leading cause of disease burden worldwide, with cardiovascular diseases (CVDs) resulting in more deaths than the three other major NCDs combined. In this study, we developed a method that can comprehensively detect which CVDs are present in a patient. Specifically, we propose a multi-label classification method that utilizes photoplethysmography (PPG) signals and physiological characteristics from public datasets to classify four types of CVDs and related conditions: hypertension, diabetes, cerebral infarction, and cerebrovascular disease. Our approach to multi-disease classification of CVDs using PPG signals achieves the highest classification performance when encompassing the broadest range of disease categories, thereby offering a more comprehensive assessment of human health. We employ a multi-label classification strategy to simultaneously predict the presence or absence of multiple diseases. Specifically, we first apply the Savitzky-Golay (S-G) filter to the PPG signals to reduce noise and then transform the filtered signals into statistical features. We integrate the processed PPG signals with individual physiological features as a multimodal input, thereby expanding the learned feature space. Notably, even with a simple machine learning method, this approach achieves relatively high accuracy. The proposed method achieved a maximum F1-score of 0.91, a minimum Hamming loss of 0.04, and an accuracy of 0.95. Thus, our method represents an effective and rapid solution for detecting multiple diseases simultaneously, which is beneficial for comprehensively managing CVDs.
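To make the pipeline concrete, the following Python sketch mirrors the described steps (Savitzky-Golay smoothing, statistical feature extraction, and fusion with physiological variables for multi-label prediction) on synthetic data; the filter window, the chosen statistics, the age/BMI inputs, and the random-forest classifier are illustrative assumptions rather than the authors' exact configuration.

    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.stats import skew, kurtosis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.multioutput import MultiOutputClassifier

    def ppg_to_features(ppg, age, bmi):
        # Denoise the raw PPG segment with a Savitzky-Golay filter (window/order are assumptions).
        smoothed = savgol_filter(ppg, window_length=31, polyorder=3)
        # Simple statistical descriptors of the smoothed signal.
        stats = [smoothed.mean(), smoothed.std(), skew(smoothed), kurtosis(smoothed),
                 smoothed.min(), smoothed.max()]
        # Fuse signal statistics with physiological characteristics (multimodal input).
        return np.array(stats + [age, bmi])

    # X: one feature vector per subject; Y: four binary disease labels per subject.
    rng = np.random.default_rng(0)
    X = np.stack([ppg_to_features(rng.normal(size=1000), age, bmi)
                  for age, bmi in zip(rng.integers(30, 80, 50), rng.uniform(18, 35, 50))])
    Y = rng.integers(0, 2, size=(50, 4))  # hypertension, diabetes, cerebral infarction, cerebrovascular disease
    clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0)).fit(X, Y)
    print(clf.predict(X[:3]))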
Automated and accurate movie genre classification is crucial for content organization, recommendation systems, and audience targeting in the film industry. Although most existing approaches focus on audiovisual features such as trailers and posters, text-based classification remains underexplored despite its accessibility and semantic richness. This paper introduces the Genre Attention Model (GAM), a deep learning architecture that integrates transformer models with a hierarchical attention mechanism to extract and leverage contextual information from movie plots for multi-label genre classification. To assess its effectiveness, we evaluate multiple transformer-based models, including Bidirectional Encoder Representations from Transformers (BERT), A Lite BERT (ALBERT), Distilled BERT (DistilBERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Efficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA), XLNet, and Decoding-enhanced BERT with Disentangled Attention (DeBERTa). Experimental results demonstrate the superior performance of the DeBERTa-based GAM, which employs a two-tier hierarchical attention mechanism: word-level attention highlights key terms, while sentence-level attention captures critical narrative segments, ensuring a refined and interpretable representation of movie plots. Evaluated on three benchmark datasets, Trailers12K, the Large Movie Trailer Dataset-9 (LMTD-9), and MovieLens37K, GAM achieves micro-average precision scores of 83.63%, 83.32%, and 83.34%, respectively, surpassing state-of-the-art models. Additionally, GAM is computationally efficient, requiring just 6.10 Giga Floating Point Operations Per Second (GFLOPS), making it a scalable and cost-effective solution. These results highlight the growing potential of text-based deep learning models in genre classification and GAM's effectiveness in improving predictive accuracy while maintaining computational efficiency. With its robust performance, GAM offers a versatile and scalable framework for content recommendation, film indexing, and media analytics, providing an interpretable alternative to traditional audiovisual-based classification techniques.
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on the single-scale deep feature, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Label correlation is an essential technique in data mining that addresses the possible correlations between different labels in multi-label classification. Although this technique is widely used in multi-label classification problems, most existing methods rely on batch learning, which consumes considerable time and space resources. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale datasets. However, existing online learning research has done little to consider correlations between labels. Building on existing research, this paper proposes a multi-label online learning algorithm based on label correlations that maximizes the margin between related and unrelated labels in multi-label samples. We evaluate the performance of the proposed algorithm on several public datasets, and the experiments show the effectiveness of our algorithm.
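The abstract does not give the update rule, so the following Python sketch only illustrates the underlying idea of an online, margin-based update that pushes the scores of relevant labels above those of irrelevant labels; the linear scoring model, unit margin, and learning rate are assumptions.

    import numpy as np

    def online_multilabel_update(W, x, y, lr=0.1):
        """One online step: push scores of relevant labels above irrelevant ones.
        W: (n_labels, n_features) weight matrix, x: feature vector, y: binary label vector."""
        scores = W @ x
        rel = np.where(y == 1)[0]
        irr = np.where(y == 0)[0]
        for r in rel:
            for s in irr:
                # If a relevant label is not separated from an irrelevant one by a unit margin,
                # nudge both weight vectors to widen the gap.
                if scores[r] - scores[s] < 1.0:
                    W[r] += lr * x
                    W[s] -= lr * x
        return W

    rng = np.random.default_rng(0)
    W = np.zeros((4, 10))
    for _ in range(200):
        x = rng.normal(size=10)
        y = (x[:4] > 0).astype(int)          # synthetic labels tied to the first four features
        W = online_multilabel_update(W, x, y)
    print((W @ rng.normal(size=10)) > 0)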
Phononic crystals, as artificial composite materials, have sparked significant interest due to their novel characteristics that emerge upon the introduction of nonlinearity. Among these properties, second-harmonic features exhibit potential applications in acoustic frequency conversion, non-reciprocal wave propagation, and non-destructive testing. Precisely manipulating the harmonic band structure presents a major challenge in the design of nonlinear phononic crystals. Traditional design approaches based on parameter adjustments to meet specific application requirements are inefficient and often yield suboptimal performance. Therefore, this paper develops a design methodology using Softmax logistic regression and multi-label classification learning to inversely design the material distribution of nonlinear phononic crystals by exploiting information from harmonic transmission spectra. The results demonstrate that the neural network-based inverse design method can effectively tailor nonlinear phononic crystals with desired functionalities. This work establishes a mapping relationship between the band structure and the material distribution within phononic crystals, providing valuable insights into the inverse design of metamaterials.
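As a toy illustration of casting inverse design as multi-label classification, the scikit-learn sketch below maps a synthetic transmission spectrum to a per-unit-cell material choice; the paper trains a neural network, and the spectrum length, cell count, and logistic-regression stand-in here are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    # Toy stand-in data: each sample is a transmission spectrum (input) and the material
    # index of each of 8 unit cells (output), so inverse design becomes one classification
    # problem per cell.  Real spectra and cell counts will differ.
    rng = np.random.default_rng(0)
    layouts = rng.integers(0, 2, size=(300, 8))                  # 0/1 material choice per cell
    spectra = layouts @ rng.normal(size=(8, 64)) + 0.1 * rng.normal(size=(300, 64))

    inverse_designer = MultiOutputClassifier(
        LogisticRegression(max_iter=1000)                        # logistic/softmax output per cell
    ).fit(spectra, layouts)

    target_spectrum = spectra[:1]                                 # spectrum with a desired response
    print("predicted material layout:", inverse_designer.predict(target_spectrum)[0])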
Objective: To explore the feasibility of remotely obtaining complex information on traditional Chinese medicine (TCM) pulse conditions through voice signals. Methods: We used multi-label pulse conditions as the entry point and modeled and analyzed TCM pulse diagnosis by combining voice analysis and machine learning. Audio features were extracted from voice recordings in the TCM pulse condition dataset. The obtained features were combined with information from tongue and facial diagnoses. A multi-label pulse condition voice classification DNN model was built using 10-fold cross-validation, and the modeling methods were validated using publicly available datasets. Results: The analysis showed that the proposed method achieved an accuracy of 92.59% on the public dataset. The accuracies of the three single-label pulse manifestation models in the test set were 94.27%, 96.35%, and 95.39%. The absolute accuracy of the multi-label model was 92.74%. Conclusion: Voice data analysis may serve as a remote adjunct to the TCM diagnostic method for pulse condition assessment.
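A rough Python sketch of the modeling recipe (audio feature extraction followed by a multi-label DNN evaluated with 10-fold cross-validation) is shown below on synthetic waveforms; MFCC features, the network size, and the three-label setup are assumptions, since the abstract does not specify them.

    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import KFold

    def voice_to_features(waveform, sr=16000):
        # MFCCs are used here as a stand-in for the unspecified audio features.
        mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    rng = np.random.default_rng(0)
    X = np.stack([voice_to_features(rng.normal(size=16000).astype(np.float32)) for _ in range(40)])
    Y = rng.integers(0, 2, size=(40, 3))   # three pulse-condition labels per recording

    # 10-fold cross-validation of a multi-label DNN (sklearn's MLP accepts a binary indicator matrix).
    accs = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
        model.fit(X[train_idx], Y[train_idx])
        accs.append((model.predict(X[test_idx]) == Y[test_idx]).mean())
    print(f"mean label-wise accuracy: {np.mean(accs):.3f}")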
As digital technologies have advanced rapidly, the number of paper documents converted into a digital format has increased exponentially. To respond to the urgent need to categorize this growing number of digitized documents, real-time classification of digitized documents was identified as the primary goal of our study. Paper classification is the first stage in automating document control and efficient knowledge discovery with little or no human involvement. Artificial intelligence methods such as deep learning are now combined with segmentation to study and interpret traits that were not conceivable ten years ago. Deep learning aids in comprehending input patterns so that object classes may be predicted. The segmentation process divides the input image into separate segments for a more thorough image study. This study proposes a deep learning-enabled framework for automated document classification, which can be implemented in higher education. To further this goal, a dataset was developed that includes seven categories: Diplomas, Personal documents, Journal of Accounting of higher education diplomas, Service letters, Orders, Production orders, and Student orders. Subsequently, a deep learning model based on Conv2D layers is proposed for the document classification process. In the final part of this research, the proposed model is evaluated and compared with other machine learning techniques. The results demonstrate that the proposed deep learning model achieves high document categorization performance, surpassing the other machine learning models with 94.84%, 94.79%, 94.62%, 94.43%, and 94.07% in accuracy, precision, recall, F-score, and AUC-ROC, respectively. The achieved results show that the proposed deep model is acceptable for use in practice as an assistant to an office worker.
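A minimal Keras sketch of a Conv2D-based classifier for the seven document categories is given below; the input resolution, layer sizes, and training settings are illustrative assumptions, not the architecture reported in the paper.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 7  # Diplomas, Personal documents, ..., Student orders

    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),            # grayscale scans resized to 128x128 (assumption)
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one mutually exclusive category per document
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()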
The Internet of Medical Things (IoMT) will come to be of great importance in the mediation of medical disputes, as it is emerging as the core of intelligent medical treatment. First, IoMT can track the entire medical treatment process in order to provide detailed trace data for medical dispute resolution. Second, IoMT can infiltrate the ongoing treatment and provide timely intelligent decision support to medical staff. This information includes recommendations of similar historical cases, guidance for medical treatment, alerts about hired dispute profiteers, and so on. The multi-label classification of medical dispute documents (MDDs) plays an important role as a front-end process for intelligent decision support, especially in the recommendation of similar historical cases. However, MDDs usually appear as long texts containing a large amount of redundant information, and there is a serious distribution imbalance in the dataset, which directly leads to weaker classification performance. Accordingly, this paper proposes a multi-label classification method based on key sentence extraction for MDDs. The method is divided into two parts. First, an attention-based hierarchical bi-directional long short-term memory (BiLSTM) model is used to extract key sentences from documents; second, random comprehensive sampling Bagging (RCS-Bagging), an ensemble multi-label classification model, is employed to classify MDDs based on the extracted key sentence sets. This approach greatly improves classification performance. Experiments show that the performance of the two models proposed in this paper is remarkably better than that of the baseline methods.
Supervised topic modeling algorithms have been successfully applied to multi-label document classification tasks. Representative models include labeled latent Dirichlet allocation (L-LDA) and dependency-LDA. However, these models neglect the class frequency information of words (i.e., the number of classes in which a word has occurred in the training data), which is significant for classification. To address this, we propose a method, namely the class frequency weight (CF-weight), to weight words by considering class frequency knowledge. The CF-weight is based on the intuition that a word with a higher (lower) class frequency will be less (more) discriminative. In this study, the CF-weight is used to improve L-LDA and dependency-LDA. A number of experiments have been conducted on real-world multi-label datasets. Experimental results demonstrate that CF-weight-based algorithms are competitive with existing supervised topic models.
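The abstract states the intuition but not the exact formula, so the toy Python sketch below assumes an IDF-like form in which a word occurring in many classes receives a lower weight; the helper class_frequency_weights is hypothetical and only demonstrates the idea.

    import numpy as np

    def class_frequency_weights(docs_words, docs_labels, n_classes):
        """Toy CF-style weighting: words occurring in many classes get lower weight.
        The exact CF-weight formula is not given in the abstract; an IDF-like form is assumed."""
        class_sets = {}
        for words, labels in zip(docs_words, docs_labels):
            for w in words:
                class_sets.setdefault(w, set()).update(labels)
        return {w: np.log(n_classes / len(classes)) + 1.0 for w, classes in class_sets.items()}

    docs = [["gene", "protein", "model"], ["market", "stock", "model"], ["gene", "market"]]
    labels = [[0], [1], [0, 1]]
    print(class_frequency_weights(docs, labels, n_classes=2))
    # Words seen in both classes ("gene", "model", "market") get the lower weight 1.0,
    # while class-specific words ("protein", "stock") get about 1.69.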
A rough set based corner classification neural network, the Rough-CC4, is presented to solve document classification problems such as document representation for different document sizes, document feature selection, and document feature encoding. In the Rough-CC4, documents are described by the equivalence classes of approximate words. By this method, the dimensionality of the document representation can be reduced, which solves the precision problems caused by different document sizes and also blurs the differences caused by approximate words. In the Rough-CC4, a binary encoding method is introduced, through which the importance of documents relative to each equivalence class is encoded. This encoding method greatly improves the precision of the Rough-CC4 and reduces its space complexity. The Rough-CC4 can be used in the automatic classification of documents.
Driven by real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes multi-label data integrally and realizes nonlinear dimensionality reduction with an idea similar to linear discriminant analysis (LDA). For the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm that maintains good accuracy. MLKDA, combined with ELM, shows good performance in multi-label learning experiments on several datasets. Experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) in cases of balanced datasets and stronger correlation between tags, and that ELM is also a good choice for multi-label classification.
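The kernelized discriminant step is beyond a short sketch, but the ELM classifier itself is simple: a random nonlinear hidden layer followed by closed-form least-squares output weights. The minimal multi-label ELM below is a generic illustration; the hidden size, activation, and 0.5 threshold are assumptions.

    import numpy as np

    class MultiLabelELM:
        """Minimal extreme learning machine: random hidden layer + least-squares output weights."""
        def __init__(self, n_hidden=100, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

        def fit(self, X, Y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)           # random nonlinear hidden features
            self.beta = np.linalg.pinv(H) @ Y          # closed-form output weights
            return self

        def predict(self, X):
            scores = np.tanh(X @ self.W + self.b) @ self.beta
            return (scores > 0.5).astype(int)          # threshold each label independently

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    Y = (X[:, :5] @ rng.normal(size=(5, 3)) > 0).astype(int)   # three correlated synthetic labels
    model = MultiLabelELM().fit(X, Y)
    print("training subset accuracy:", (model.predict(X) == Y).all(axis=1).mean())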
Aiming at the problem of multi-label classification, a multi-label classification algorithm based on label-specific features is proposed in this paper. In this algorithm, we first compute the feature density on the positive and negative instance sets of each class, and then select the mk features with the highest density from the positive and negative instance sets of each class, respectively; their intersection is taken as the label-specific features of the corresponding class. Finally, multi-label data are classified on the basis of these label-specific features. The algorithm can reveal the label-specific features of each class. Experiments show that our proposed method, the MLSF algorithm, performs significantly better than other state-of-the-art multi-label learning approaches.
In the realm of Multi-Label Text Classification (MLTC), the dual challenges of extracting rich semantic features from text and discerning inter-label relationships have spurred innovative approaches. Many studies in semantic feature extraction have turned to external knowledge to augment the model's grasp of textual content, often overlooking intrinsic textual cues such as label statistical features. In contrast, these endogenous insights naturally align with the classification task. In our paper, to complement this focus on intrinsic knowledge, we introduce a novel Gate-Attention mechanism. This mechanism adeptly integrates statistical features from the text itself into the semantic fabric, enhancing the model's capacity to understand and represent the data. Additionally, to address the intricate task of mining label correlations, we propose a Dual-end enhancement mechanism. This mechanism effectively mitigates the challenges of information loss and erroneous transmission inherent in traditional long short-term memory propagation. We conducted an extensive battery of experiments on the AAPD and RCV1-2 datasets. These experiments serve the dual purpose of confirming the efficacy of both the Gate-Attention mechanism and the Dual-end enhancement mechanism. Our final model unequivocally outperforms the baseline model, attesting to its robustness. These findings emphatically underscore the importance of taking into account not just external knowledge but also the inherent intricacies of textual data when crafting potent MLTC models.
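As a loose illustration of gating statistical features into a semantic representation, the PyTorch sketch below mixes the two sources through a learned sigmoid gate; the real Gate-Attention mechanism is more elaborate, and the dimensions and fusion form here are assumptions.

    import torch
    import torch.nn as nn

    class GateFusion(nn.Module):
        """Gated fusion of a semantic text vector h with textual/label statistical features s.
        This only shows the gating idea, not the paper's full Gate-Attention mechanism."""
        def __init__(self, sem_dim, stat_dim):
            super().__init__()
            self.proj = nn.Linear(stat_dim, sem_dim)          # lift statistics into the semantic space
            self.gate = nn.Linear(sem_dim + stat_dim, sem_dim)

        def forward(self, h, s):
            g = torch.sigmoid(self.gate(torch.cat([h, s], dim=-1)))
            return g * h + (1 - g) * self.proj(s)             # element-wise mix controlled by the gate

    h = torch.randn(8, 256)        # batch of semantic representations (e.g., from a text encoder)
    s = torch.randn(8, 32)         # batch of statistical feature vectors
    print(GateFusion(256, 32)(h, s).shape)   # torch.Size([8, 256])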
An approach that automatically constructs an effective domain ontology is proposed in this paper. The main idea is to use Formal Concept Analysis to automatically establish the domain ontology. The ontology then serves as the basis for a Naive Bayes classifier, in order to demonstrate the effectiveness of the domain ontology for document classification. A collection of 1752 documents divided into 10 categories is used to assess the effectiveness of the ontology, where 1252 documents are used for training and 500 for testing. The F1-measure is used as the assessment criterion, and the following three results are obtained. The average recall of the Naive Bayes classifier is 0.94; therefore, in terms of recall, the performance of the Naive Bayes classifier based on the automatically constructed ontology is excellent. The average precision of the Naive Bayes classifier is 0.81; therefore, in terms of precision, the performance of the Naive Bayes classifier based on the automatically constructed ontology is good. The average F1-measure over the 10 categories for the Naive Bayes classifier is 0.86; therefore, in terms of F1-measure, the Naive Bayes classifier based on the automatically constructed ontology is effective. Thus, the automatically constructed domain ontology can indeed serve as the basis for the document categories and achieve effective document classification.
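For reference, the F1-measure combines precision P and recall R as their harmonic mean, F1 = 2PR/(P + R); plugging in the reported averages gives 2 x 0.81 x 0.94 / (0.81 + 0.94) ≈ 0.87, close to the reported per-category average of 0.86 (the small gap arises because averaging F1 over categories is not the same as computing F1 from the averaged precision and recall).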
To reduce the discrepancy between the source and target domains, a new multi-label adaptation network (ML-ANet) based on multiple kernel variants with maximum mean discrepancies is proposed in this paper. The hidden representations of the task-specific layers in ML-ANet are embedded in the reproducing kernel Hilbert space (RKHS) so that the mean embeddings of specific features in different domains can be precisely matched. Multiple kernel functions are used to improve feature distribution efficiency for explicit mean embedding matching, which can further reduce the domain discrepancy. Adverse-weather and cross-camera adaptation experiments are conducted to verify the effectiveness of the proposed ML-ANet. The results show that ML-ANet achieves higher accuracies than the compared state-of-the-art methods for multi-label image classification in both the adverse-weather and cross-camera adaptation experiments. These results indicate that ML-ANet can alleviate the reliance on fully labeled training data and improve the accuracy of multi-label image classification in various domain shift scenarios.
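Since the core quantity is the multi-kernel maximum mean discrepancy, the short PyTorch sketch below computes an empirical MMD^2 between source and target feature batches using a sum of RBF kernels; the bandwidths and the simple biased estimator are assumptions, not ML-ANet's exact formulation.

    import torch

    def multi_kernel_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
        """Simple empirical MMD^2 between two feature batches with a sum of RBF kernels.
        The kernel bandwidths are assumptions; ML-ANet's exact kernel family is not specified here."""
        def rbf(a, b):
            d2 = torch.cdist(a, b).pow(2)
            return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
        return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

    source = torch.randn(64, 128)          # task-specific layer activations from the source domain
    target = torch.randn(64, 128) + 0.5    # shifted target-domain activations
    print(multi_kernel_mmd(source, target))  # larger values indicate a larger domain discrepancy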
In this paper, we utilize the framework of multi-label learning for face demographic classification and attempt to explore suitable classifiers and features for this task. The three most popular types of demographic information, gender, ethnicity, and age, are considered in the experiments. Based on the results of demographic classification, we use statistical analysis to explore the correlations among various types of face demographic information. Through this analysis, we draw several conclusions on the correlation and interaction among these high-level face semantics, and the obtained results can be helpful in automatic face semantic annotation and other face analysis tasks.
Pedestrian attribute classification from a pedestrian image captured in surveillance scenarios is challenging due to diverse clothing appearances, varied poses, and different camera views. A multi-scale and multi-label convolutional neural network (MSMLCNN) is proposed to predict multiple pedestrian attributes simultaneously. The pedestrian attribute classification problem is first transformed into a multi-label problem comprising multiple binary attributes to be classified. Then, the multi-label problem is solved by fully connecting all binary attributes to multi-scale features with logistic regression functions. Moreover, the multi-scale features are obtained by concatenating the feature maps produced by multiple pooling layers of the MSMLCNN at different scales. Extensive experimental results show that the proposed MSMLCNN outperforms state-of-the-art pedestrian attribute classification methods by a large margin.
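A compact PyTorch sketch of the multi-scale idea (pooling feature maps from several depths, concatenating them, and attaching an independent sigmoid/logistic output per binary attribute) is shown below; the backbone, attribute count, and input size are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleAttributeNet(nn.Module):
        """Sketch of the multi-scale idea: pool feature maps from several depths,
        concatenate them, and attach one sigmoid (logistic) output per binary attribute."""
        def __init__(self, n_attributes=10):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.head = nn.Linear(16 + 32 + 64, n_attributes)

        def forward(self, x):
            f1 = self.block1(x)
            f2 = self.block2(f1)
            f3 = self.block3(f2)
            # Global-average-pool each scale and concatenate into one multi-scale descriptor.
            pooled = torch.cat([F.adaptive_avg_pool2d(f, 1).flatten(1) for f in (f1, f2, f3)], dim=1)
            return torch.sigmoid(self.head(pooled))   # independent probability per attribute

    net = MultiScaleAttributeNet()
    print(net(torch.randn(4, 3, 128, 256)).shape)   # torch.Size([4, 10])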
Sum-product networks (SPNs) are an expressive deep probabilistic architecture with solid theoretical foundations, which allows tractable and exact inference. SPNs usually act as black-box inference machines in many artificial intelligence tasks. Due to their recursive definition, SPNs can also be naturally employed as hierarchical feature extractors. Recently, SPNs have been successfully employed as an autoencoder framework in representation learning. However, the SPN autoencoder ignores the structural duality of the models and trains them separately and independently. In this work, we propose a Dual-SPNs autoencoder, which designs two SPN autoencoders composed in a dual form. This approach trains the models simultaneously and explicitly exploits the structural duality between them to enhance the training process. Experimental results on several multi-label classification problems demonstrate that the Dual-SPNs autoencoder is highly competitive with state-of-the-art autoencoder architectures.
Funding: supported by the National Science and Technology Council (NSTC) (grant nos. NSTC 112-2221-E-019-023 and NSTC 113-2221-E-019-039) and the Taiwan University of Science and Technology.
Funding: the authors would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).
Funding: supported by the National Natural Science Foundation of China (62302167, 62477013), the Natural Science Foundation of Shanghai (No. 24ZR1456100), the Science and Technology Commission of Shanghai Municipality (No. 24DZ2305900), and the Shanghai Municipal Special Fund for Promoting High-Quality Development of Industries (2211106).
Funding: supported by the State Grid Technology Item (52460D230002).
Funding: supported by the National Key Research and Development Program of China (Grant No. 2020YFA0211400), the State Key Program of the National Natural Science Foundation of China (Grant No. 11834008), the National Natural Science Foundation of China (Grant Nos. 12174192, 12174188, and 11974176), the State Key Laboratory of Acoustics, Chinese Academy of Sciences (Grant No. SKLA202410), and the Fund from the Key Laboratory of Underwater Acoustic Environment, Chinese Academy of Sciences (Grant No. SSHJ-KFKT-1701).
Funding: supported by the Fundamental Research Funds from the Beijing University of Chinese Medicine (2023-JYB-KYPT-13) and the Developmental Fund of Beijing University of Chinese Medicine (2020-ZXFZJJ-083).
Funding: supported by the National Key R&D Program of China (2018YFC0830200, Zhang B, www.most.gov.cn), the Fundamental Research Funds for the Central Universities (2242018S30021 and 2242017S30023, Zhou S, www.seu.edu.cn), and the Open Research Fund from the Key Laboratory of Computer Network and Information Integration in Southeast University, Ministry of Education, China (3209012001C3, Zhang B, www.seu.edu.cn).
Funding: project supported by the National Natural Science Foundation of China (No. 61602204).
Funding: the National Natural Science Foundation of China (Nos. 60503020, 60373066, 60403016, 60425206), the Natural Science Foundation of Jiangsu Higher Education Institutions (No. 04KJB520096), and the Doctoral Foundation of Nanjing University of Posts and Telecommunications (No. 0302).
Funding: supported by the National Natural Science Foundation of China (51105052, 61173163) and the Liaoning Provincial Natural Science Foundation of China (201102037).
Funding: supported by the Opening Fund of the Key Laboratory of Symbol Computation and Knowledge Engineering of the Ministry of Education (93K-17-2010-K02) and the Opening Fund of the Key Discipline of Computer Software and Theory of Zhejiang Province at Zhejiang Normal University (ZSDZZZZXK05).
Funding: supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022, 62162024), the Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2020040, ZDYF2021GXJS003), the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012), the Hainan Provincial Natural Science Foundation of China (Grant Nos. 620MS021, 621QN211), and the Science and Technology Development Center of the Ministry of Education Industry-University-Research Innovation Fund (2021JQR017).
Funding: supported by the National High Technology Research and Development Program of China (863 Program) (2008AA01Z144) and the National Natural Science Foundation of China (60803093, 60975055).
Funding: supported by the Shenzhen Fundamental Research Fund of China (Grant No. JCYJ20190808142613246), the National Natural Science Foundation of China (Grant No. 51805332), and the Young Elite Scientists Sponsorship Program funded by the China Society of Automotive Engineers.
Funding: project supported by the National Natural Science Foundation of China (Grant No. 60605012), the Natural Science Foundation of Shanghai (Grant No. 08ZR1408200), the Open Project Program of the National Laboratory of Pattern Recognition of China (Grant No. 08-2-16), and the Shanghai Leading Academic Discipline Project (Grant No. J50103).
Funding: supported by the National Natural Science Foundation of China (Nos. 61602191, 61672521, 61375037, 61473291, 61572501, 61572536, 61502491, 61372107, 61401167), the Natural Science Foundation of Fujian Province (No. 2016J01308), the Science and Technology Funds of Quanzhou (No. 2015Z114), the Science and Technology Funds of Xiamen (No. 3502Z20173045), the Promotion Program for Young and Middle-aged Teachers in Science and Technology Research of Huaqiao University (Nos. ZQN-PY418, ZQN-YX403), and the Scientific Research Funds of Huaqiao University (No. 16BS108).
Funding: the National Natural Science Foundation of China (No. 61472161) and the Science & Technology Development Project of Jilin Province (Nos. 20180101334JC and 20160520099JH).