Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on the single-scale deep feature, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
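The exact two-stage, three-branch architecture is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of the general idea behind hierarchical scale attention, i.e., weighting backbone features from several scales with learned attention scores before fusing them. All module names, channel counts, and scale sizes are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class ScaleAttentionFusion(nn.Module):
    """Toy sketch: weight feature maps from several backbone stages by a
    learned per-scale attention score, then sum them at a common resolution."""
    def __init__(self, channels, num_scales):
        super().__init__()
        self.score = nn.ModuleList(
            [nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(channels, 1)) for _ in range(num_scales)]
        )

    def forward(self, feats):          # feats: list of (B, C, Hi, Wi) tensors
        target = feats[-1].shape[-2:]  # resize everything to the deepest scale
        resized = [nn.functional.interpolate(f, size=target, mode="bilinear",
                                             align_corners=False) for f in feats]
        scores = torch.cat([head(f) for head, f in zip(self.score, resized)], dim=1)
        weights = torch.softmax(scores, dim=1)          # (B, num_scales)
        fused = sum(w.view(-1, 1, 1, 1) * f
                    for w, f in zip(weights.unbind(dim=1), resized))
        return fused

# Example: three scales with 256 channels each
feats = [torch.randn(2, 256, s, s) for s in (56, 28, 14)]
fused = ScaleAttentionFusion(256, 3)(feats)
print(fused.shape)  # torch.Size([2, 256, 14, 14])
```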
On the evening of May 30th, the parallel forum "Equality and Inclusiveness & Harmonious Coexistence: Multi-dimensional Narratives of Civilisations from Writers' Perspective", as part of the 4th Dialogue on Exchanges and Mutual Learning among Civilisations, was held in Dunhuang. The forum was organised by the China Writers Association and co-organised by China National Publications Import & Export (Group) Corporation.
We propose a hierarchical multi-scale attention mechanism-based model in response to the low accuracy and inefficient manual classification of existing oceanic biological image classification methods. Firstly, the hierarchical efficient multi-scale attention (H-EMA) module is designed for lightweight feature extraction, achieving outstanding performance at a relatively low cost. Secondly, an improved EfficientNetV2 block is used to better integrate information from different scales and enhance inter-layer message passing. Furthermore, introducing the convolutional block attention module (CBAM) enhances the model's perception of critical features, optimizing its generalization ability. Lastly, Focal Loss is introduced to adjust the weights of complex samples to address the issue of imbalanced categories in the dataset, further improving the model's performance. The model achieved 96.11% accuracy on the intertidal marine organism dataset of Nanji Islands and 84.78% accuracy on the CIFAR-100 dataset, demonstrating its strong generalization ability to meet the demands of oceanic biological image classification.
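Focal Loss, which the model uses to re-weight hard samples, has a standard formulation: a cross-entropy term scaled by (1 - p_t)^gamma. The sketch below shows that formulation in PyTorch; the alpha and gamma values and the 100-class setup are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Multi-class focal loss: scales cross-entropy by (1 - p_t)**gamma so that
    well-classified (easy) samples contribute less to the gradient."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t per sample
    p_t = torch.exp(-ce)                                     # recover p_t
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 100)             # e.g. a CIFAR-100-style 100-way output
targets = torch.randint(0, 100, (8,))
print(focal_loss(logits, targets).item())
```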
Cervical cancer, a leading malignancy globally, poses a significant threat to women's health, with an estimated 604,000 new cases and 342,000 deaths reported in 2020 [1]. As cervical cancer is closely linked to human papillomavirus (HPV) infection, early detection relies on HPV screening; however, late-stage prognosis remains poor, underscoring the need for novel diagnostic and therapeutic targets [2].
Driven by the needs of real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and will hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multi-label data integrally and realizes nonlinear dimensionality reduction with an idea similar to linear discriminant analysis (LDA). For the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm with good accuracy. MLKDA, combined with ELM, shows good performance in multi-label learning experiments on several datasets. The experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) in cases of balanced datasets and stronger correlation between tags, and ELM is also a good choice for multi-label classification.
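The ELM classifier used after dimensionality reduction has a simple closed-form training rule: hidden-layer weights are drawn at random and fixed, and only the output weights are solved by least squares. Below is a minimal NumPy sketch of that rule for a single-label case (a multi-label variant would threshold per-label outputs instead of taking an argmax); the sigmoid activation and hidden size are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Extreme Learning Machine: hidden-layer weights are random and fixed;
    only the output weights are solved in closed form (least squares)."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid features

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(y.max() + 1)[y]                 # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T          # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

X = np.random.randn(120, 30)
y = (X[:, 0] > 0).astype(int)
print((ELM().fit(X, y).predict(X) == y).mean())    # training accuracy
```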
Some dimensionality reduction (DR) approaches based on support vector machines (SVM) have been proposed. However, the derivation of the projection matrix in these approaches considers only the between-class margin based on SVM while ignoring the within-class information in the data. This paper presents a new DR approach, called dimensionality reduction based on SVM and LDA (DRSL). DRSL considers the between-class margins from SVM and LDA, and the within-class compactness from LDA, to obtain the projection matrix. As a result, DRSL combines the between-class and within-class information and fits the between-class and within-class structures in the data. Hence, the obtained projection matrix increases the generalization ability of subsequent classification techniques. Experiments applied to classification techniques show the effectiveness of the proposed method.
It is quite common for both categorical and continuous covariates to appear in the data, but most feature screening methods for ultrahigh-dimensional classification assume the covariates are continuous, and applicable feature screening methods are very limited. To handle this non-trivial situation, we propose a model-free feature screening method for ultrahigh-dimensional multi-classification with both categorical and continuous covariates. The proposed feature screening method is based on the Gini impurity to evaluate the predictive power of covariates. Under certain regularity conditions, it is proved that the proposed screening procedure possesses the sure screening property and ranking consistency properties. We demonstrate the finite sample performance of the proposed procedure by simulation studies and illustrate it using a real data analysis.
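One way to read a Gini-impurity screening utility is as the reduction in the response's Gini impurity after slicing a covariate into groups; the sketch below implements that reading with quantile bins for continuous covariates. The binning scheme and the toy data are assumptions for illustration, and the paper's exact statistic may differ.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_screening_score(x, y, n_bins=10):
    """Reduction in Gini impurity of the response y when x is sliced into bins;
    a categorical x can be passed as integer codes with n_bins >= its levels."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    groups = np.digitize(x, bins)
    weighted = sum((groups == g).mean() * gini(y[groups == g])
                   for g in np.unique(groups))
    return gini(y) - weighted        # larger = more predictive covariate

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1000))                     # ultrahigh-dimensional design
y = (X[:, 3] + 0.5 * rng.normal(size=300) > 0).astype(int)
scores = np.array([gini_screening_score(X[:, j], y) for j in range(X.shape[1])])
print(np.argsort(scores)[::-1][:5])                  # top-ranked covariates
```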
It is common for datasets to contain both categorical and continuous variables. However, many feature screening methods designed for high-dimensional classification assume that the variables are continuous, which limits the applicability of existing methods in handling this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. Our proposed feature screening method utilizes the Maximal Information Coefficient to assess the predictive power of the variables. Under certain regularity conditions, we prove that our screening procedure possesses the sure screening property and ranking consistency properties. To validate the effectiveness of our approach, we conduct simulation studies and provide real data analysis examples to demonstrate its performance in finite samples. In summary, our proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) dataset is an extensively used benchmark for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when they are evaluated on new attacks. This has led to the utilization of deep learning techniques, which have showcased significant potential for processing large datasets and therefore improving detection accuracy. For that reason, this paper focuses on the role of stacking deep learning models, including a convolutional neural network (CNN) and a deep neural network (DNN), in improving the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated from the proposed base models. The combination of the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate. Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection. The performance of the ensemble of base models, combined with the meta-model, exceeds the performance of individual models. Our stacking model has attained an accuracy of 99% and an average F1-score of 93% for the multi-classification scenario. Besides, the training time of the proposed ensemble model is lower than the training time of benchmark techniques, demonstrating its efficiency and robustness.
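The stacking mechanics described above (base models produce predictions, a simple meta-model learns from them) can be sketched with lightweight scikit-learn stand-ins. The MLP base learners below are placeholders for the paper's CNN and DNN, the synthetic data stands in for NSL-KDD, and out-of-fold class probabilities are used as meta-features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=40, n_classes=5,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-ins for the CNN/DNN base models of the paper.
base_models = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=i)
               for i in range(2)]

# Out-of-fold class probabilities from each base model become meta-features.
meta_train = np.hstack([cross_val_predict(m, X_tr, y_tr, cv=3,
                                          method="predict_proba")
                        for m in base_models])
for m in base_models:
    m.fit(X_tr, y_tr)
meta_test = np.hstack([m.predict_proba(X_te) for m in base_models])

# A simple meta-model combines the base predictions.
meta_model = LogisticRegression(max_iter=1000).fit(meta_train, y_tr)
print("stacked accuracy:", meta_model.score(meta_test, y_te))
```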
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
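A depthwise dilated separable convolution combines a per-channel (depthwise) convolution with dilation, which enlarges the receptive field, and a 1x1 pointwise convolution that mixes channels. A minimal PyTorch sketch follows; the channel counts, dilation rate, and BatchNorm/ReLU choices are illustrative, not the DDSC configuration used in the paper.

```python
import torch
import torch.nn as nn

class DDSC(nn.Module):
    """Depthwise dilated separable convolution: a per-channel (depthwise) 3x3
    convolution with dilation to enlarge the receptive field, followed by a
    1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(DDSC(32, 64)(x).shape)   # torch.Size([1, 64, 56, 56])
```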
The framework of a text classification system was presented, and the high dimensionality of the feature space for text classification was studied. Mutual information is a widely used information-theoretic measure of the stochastic dependency between discrete random variables. This measure was used as a criterion to reduce the high dimensionality of feature vectors in Web text classification. Feature selection and conversion were performed using maximum mutual information, including linear and non-linear feature conversions. Entropy was used and extended to find suitable features for pattern recognition systems. This establishes a favorable foundation for text classification mining.
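A hedged sketch of mutual-information-based feature ranking on a bag-of-words matrix: scikit-learn's mutual_info_classif stands in for the MI criterion described above, and the toy corpus and labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

docs = ["cheap loans apply now", "meeting agenda attached",
        "win a free prize now", "project schedule and agenda",
        "free cheap offer prize", "please review the attached schedule"]
labels = np.array([1, 0, 1, 0, 1, 0])            # 1 = spam-like, 0 = work-like

vec = CountVectorizer().fit(docs)                # bag-of-words feature space
counts = vec.transform(docs).toarray()
mi = mutual_info_classif(counts, labels, discrete_features=True, random_state=0)

terms = np.array(vec.get_feature_names_out())
print(terms[np.argsort(mi)[::-1][:5]])           # highest-MI terms kept as features
```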
This paper presents a supervised classification method for sonar images that takes advantage of both multi-fractal theory and wavelet analysis. In the feature extraction stage, image transformation and wavelet decomposition are combined, and a feature set based on the multi-fractal dimension is obtained. For classifier construction, the Learning Vector Quantization (LVQ) network is adopted as the classifier. Experiments on sonar image classification were carried out with satisfactory results, which verify the effectiveness of this method.
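The feature-extraction step can be approximated as follows: decompose the image with a 2-D discrete wavelet transform (using the PyWavelets package) and compute a statistic per sub-band. The sub-band energy used below is only a stand-in for the paper's multi-fractal dimension features, and the wavelet name and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_subband_features(image, wavelet="db2", level=2):
    """Decompose an image with a 2-D discrete wavelet transform and return a
    simple log-energy statistic per sub-band (a stand-in feature vector; the
    cited paper derives multi-fractal dimensions from the sub-bands instead)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for band in [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]:
        energy = np.mean(band ** 2)          # sub-band energy
        feats.append(np.log1p(energy))
    return np.array(feats)

image = np.random.rand(128, 128)             # placeholder sonar patch
print(wavelet_subband_features(image))       # 7 features for level=2
```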
The advent of the digital era has provided unprecedented opportunities for businesses to collect and analyze customer behavior data. Precision marketing, as a key means to improve marketing efficiency, highly depends on a deep understanding of customer behavior. This study proposes a theoretical framework for multi-dimensional customer behavior analysis, aiming to comprehensively capture customer behavioral characteristics in the digital environment. This framework integrates concepts of multi-source data including transaction history, browsing trajectories, social media interactions, and location information, constructing a theoretically more comprehensive customer profile. The research discusses the potential applications of this theoretical framework in precision marketing scenarios such as personalized recommendations, cross-selling, and customer churn prevention. Through analysis, the study points out that multi-dimensional analysis may significantly improve the targeting and theoretical conversion rates of marketing activities. However, the research also explores theoretical challenges that may be faced in the application process, such as data privacy and information overload, and proposes corresponding conceptual coping strategies. This study provides a new theoretical perspective on how businesses can optimize marketing decisions using big data thinking while respecting customer privacy, laying a foundation for future empirical research.
To solve the high-dimensionality issue and improve accuracy in credit risk assessment, a high-dimensionality-trait-driven learning paradigm is proposed for feature extraction and classifier selection. The proposed paradigm consists of three main stages: categorization of high-dimensional data, high-dimensionality-trait-driven feature extraction, and high-dimensionality-trait-driven classifier selection. In the first stage, according to the definition of high dimensionality and the relationship between sample size and feature dimensions, the high-dimensionality traits of credit datasets are categorized into two types: 100 < feature dimensions < sample size, and feature dimensions ≥ sample size. In the second stage, some typical feature extraction methods are tested with respect to the two categories of high dimensionality. In the final stage, four types of classifiers are applied to evaluate credit risk considering the different high-dimensionality traits. For the purpose of illustration and verification, credit classification experiments are performed on two publicly available credit risk datasets, and the results show that the proposed high-dimensionality-trait-driven learning paradigm for feature extraction and classifier selection is effective in handling high-dimensional credit classification issues and improving credit classification accuracy relative to the benchmark models listed in this study.
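The first-stage categorization reduces to a simple decision rule on the sample size and feature count. A minimal sketch following the two categories quoted in the abstract (the function name and the fallback label are illustrative):

```python
def dimensionality_trait(n_samples: int, n_features: int) -> str:
    """Categorize a credit dataset by its high-dimensionality trait, following
    the two categories described in the abstract."""
    if n_features >= n_samples:
        return "feature dimensions >= sample size"
    if 100 < n_features < n_samples:
        return "100 < feature dimensions < sample size"
    return "not high-dimensional under this definition"

print(dimensionality_trait(n_samples=1000, n_features=150))
print(dimensionality_trait(n_samples=200, n_features=500))
```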
We developed a ground observation system for solid precipitation using a two-dimensional video disdrometer (2DVD). Among the 16,010 particles observed by the system, around 10% were randomly sampled and manually classified into five classes: snowflake, snowflake-like, intermediate, graupel-like, and graupel. First, each particle was represented as a vector of 72 features, including fractal dimension and box-count features that capture the complexity of particle shape. Feature analysis on the dataset clarified the importance of the fractal dimension and box-count features for characterizing particles varying from snowflakes to graupels. In addition, a performance evaluation of two-class classification with a Support Vector Machine (SVM) was conducted. The experimental results revealed that, by selecting only 10 of the 72 features, the average accuracy of classifying particles into snowflakes and graupels could reach around 95.4%, which had not been achieved by previous studies.
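The box-count and fractal-dimension features can be illustrated with the textbook box-counting estimate on a binarized particle silhouette; the toy silhouette and box sizes below are invented for illustration, and the 2DVD system's exact feature definitions may differ.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary silhouette by counting
    occupied boxes at several scales and fitting log N(s) against log(1/s)."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy "snowflake-like" silhouette: a noisy cross on a 128x128 grid
mask = np.zeros((128, 128), dtype=bool)
mask[60:68, :] = True
mask[:, 60:68] = True
mask |= np.random.default_rng(0).random((128, 128)) > 0.97
print(round(box_counting_dimension(mask), 2))
```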
Real-world objects exhibit intricate semantic properties that can be characterized from a multitude of perspectives, which necessitates the development of a model capable of discerning multiple patterns within data while concurrently predicting several Labeling Dimensions (LDs), a task known as Multi-dimensional Classification (MDC). While the class imbalance issue has been extensively investigated within the multi-class paradigm, its study in the MDC context has been limited due to the imbalance shift phenomenon: a sample's classification as a minor or major class instance becomes ambiguous when it belongs to a minor class in one LD and a major class in another. Previous MDC methodologies predominantly emphasized instance-wise criteria, neglecting prediction capability from the dimension aspect, i.e., the average classification performance across LDs. We assert the significance of dimension-wise metrics in real-world MDC applications and introduce two such metrics. Furthermore, we observe imbalanced class distributions within each LD and propose a novel Imbalance-Aware fusion Model (IMAM) for addressing the MDC problem. Specifically, we first decompose the task into multiple multi-class classification problems, creating imbalance-aware deep models for each LD separately. This straightforward method performs well across LDs without sacrificing performance on instance-wise criteria. Subsequently, we employ the LD-wise models as multiple teachers and transfer their knowledge across all LDs to a unified student model. Experimental results on several real-world datasets demonstrate that our IMAM approach excels in both instance-wise evaluations and the proposed dimension-wise metrics.
In ultra-high-dimensional data, it is common for the response variable to be multi-classified. Therefore, this paper proposes a model-free screening method for variables whose response is multi-classified, introducing the Jensen-Shannon divergence to measure the importance of covariates. The idea of the method is to calculate the Jensen-Shannon divergence between the conditional probability distribution of a covariate given the response variable and the unconditional probability distribution of the covariate, and then use the probabilities of the response classes as weights to compute a weighted Jensen-Shannon divergence, where a larger weighted Jensen-Shannon divergence means that the covariate is more important. Additionally, we investigate an adapted version of the method, which measures the relationship between the covariates and the response variable using the weighted Jensen-Shannon divergence adjusted by a logarithmic factor of the number of categories when the number of categories in each covariate varies. Through both theoretical analysis and simulation experiments, it is demonstrated that the proposed methods have the sure screening and ranking consistency properties. Finally, the results from simulation and real-dataset experiments show that, in feature screening, the proposed methods are robust in performance and faster in computational speed compared with an existing method.
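A sketch of the weighted Jensen-Shannon utility for a single covariate: compare each conditional distribution of the covariate given a response class with its marginal distribution, and weight the divergences by the class probabilities. The histogram binning used to discretize a continuous covariate and the toy data are illustrative assumptions.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def js(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def weighted_js_score(x, y, n_bins=8):
    """Weighted Jensen-Shannon divergence between P(x | y=r) and P(x),
    weighted by P(y=r); larger values indicate a more informative covariate."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    marginal = np.histogram(x, bins=edges)[0] / len(x)
    score = 0.0
    for r in np.unique(y):
        cond = np.histogram(x[y == r], bins=edges)[0] / np.sum(y == r)
        score += np.mean(y == r) * js(cond, marginal)
    return score

rng = np.random.default_rng(1)
y = rng.integers(0, 3, size=500)                  # multi-class response
x_signal = y + rng.normal(scale=0.5, size=500)    # informative covariate
x_noise = rng.normal(size=500)                    # irrelevant covariate
print(weighted_js_score(x_signal, y), weighted_js_score(x_noise, y))
```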
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. MRI is a vital component of medical diagnosis and requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training the models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
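The fusion idea (concatenating deep features from two backbones before a softmax head) can be sketched in Keras as below; the input size, four-class head, and untrained weights are assumptions for illustration, and the convolutional deep belief network branch and the data augmentation pipeline are omitted.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

inp = layers.Input(shape=(224, 224, 3))

# Two frozen backbones act as deep-feature extractors
# (weights="imagenet" would be used in practice; None avoids a download here).
vgg = VGG16(include_top=False, weights=None, pooling="avg")
res = ResNet50(include_top=False, weights=None, pooling="avg")
vgg.trainable = False
res.trainable = False

fused = layers.Concatenate()([vgg(inp), res(inp)])   # fusion of the two feature vectors
out = layers.Dense(4, activation="softmax")(fused)   # e.g. four tumour classes
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```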
This article proposes a VGG network with histogram of oriented gradients (HOG) feature fusion (HOG-VGG) for polarization synthetic aperture radar (PolSAR) image terrain classification. VGG-Net has a strong ability for deep feature extraction and can fully extract the global deep features of different terrains in PolSAR images, so it is widely used in PolSAR terrain classification. However, VGG-Net ignores local edge and shape features, resulting in an incomplete feature representation of the PolSAR terrains; as a consequence, the terrain classification accuracy is not promising. In fact, edge and shape features play an important role in PolSAR terrain classification. To solve this problem, a new VGG network with HOG feature fusion is proposed for high-precision PolSAR terrain classification. HOG-VGG extracts both the global deep semantic features and the local edge and shape features of the PolSAR terrains, so the completeness of the terrain feature representation is greatly elevated. Moreover, HOG-VGG optimally fuses the global deep features and the local edge and shape features to achieve the best classification results. The superiority of HOG-VGG is verified on the Flevoland, San Francisco, and Oberpfaffenhofen datasets. Experiments show that the proposed HOG-VGG achieves much better PolSAR terrain classification performance, with overall accuracies of 97.54%, 94.63%, and 96.07%, respectively.
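The HOG side of the fusion can be computed with scikit-image; the parameters below are common HOG defaults rather than the paper's settings, and the random arrays stand in for a single PolSAR channel and for the VGG deep features.

```python
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 128)        # stand-in for one PolSAR channel
hog_vec = hog(image, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2), block_norm="L2-Hys")
print(hog_vec.shape)                    # local edge & shape descriptor

# In a HOG-VGG-style model, a vector like this is fused (e.g., concatenated)
# with the global deep features produced by the VGG branch before classification.
deep_features = np.random.rand(512)     # placeholder for VGG deep features
fused = np.concatenate([deep_features, hog_vec])
print(fused.shape)
```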
As an essential function of encrypted Internet traffic analysis, encrypted traffic service classification can support both coarse-grained network service traffic management and security supervision. However, the traditional plaintext-based Deep Packet Inspection (DPI) method cannot be applied to such a classification. Moreover, existing machine learning-based methods encounter two problems during feature selection: complex feature over-cost processing and Transport Layer Security (TLS) version discrepancy. In this paper, we consider the differences between encryption network protocol stacks and propose a composite deep learning-based method for multiprotocol environments that uses a sliding multiple Protocol Data Unit (multiPDU) length sequence as features, fully utilizing the Markov property in a multiPDU length sequence and maintaining suitability for a TLS 1.3 environment. Control experiments show that both a Length-Sensitive (LS) composite deep learning model using a capsule neural network and an LS long short-term memory model achieve satisfactory effectiveness in F1-score and performance. Owing to faster feature extraction, our method is suitable for actual network environments and superior to state-of-the-art methods.
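Turning a flow's PDU lengths into fixed-size sliding windows is the kind of length-sequence feature the method consumes; a small sketch follows, with the window size and the toy flow invented for illustration.

```python
import numpy as np

def sliding_pdu_length_windows(pdu_lengths, window=8, step=1):
    """Turn a per-flow sequence of protocol-data-unit lengths into overlapping
    fixed-length windows that a sequence model (capsule net / LSTM) can consume."""
    lengths = np.asarray(pdu_lengths, dtype=np.float32)
    n = (len(lengths) - window) // step + 1
    if n <= 0:
        raise ValueError("flow shorter than the window")
    idx = np.arange(window) + step * np.arange(n)[:, None]
    return lengths[idx]

# Toy flow: TLS-like handshake records followed by application records
flow = [517, 1400, 1400, 230, 51, 1200, 900, 31, 1380, 1380, 640]
windows = sliding_pdu_length_windows(flow, window=8)
print(windows.shape)        # (4, 8): one feature vector per sliding position
```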
Funding (multi-label image classification framework study): supported by the National Natural Science Foundation of China (62302167, 62477013), the Natural Science Foundation of Shanghai (No. 24ZR1456100), the Science and Technology Commission of Shanghai Municipality (No. 24DZ2305900), and the Shanghai Municipal Special Fund for Promoting High-Quality Development of Industries (2211106).
Funding (oceanic biological image classification study): supported by the National Natural Science Foundation of China (Nos. 61806107 and 61702135).
Funding (cervical cancer study): supported by a project funded by the Hebei Provincial Central Guidance Local Science and Technology Development Fund (236Z7714G).
Funding (MLKDA multi-label learning study): supported by the National Natural Science Foundation of China (51105052, 61173163) and the Liaoning Provincial Natural Science Foundation of China (201102037).
Funding (credit risk assessment study): this work is partially supported by grants from the Key Program of the National Natural Science Foundation of China (NSFC Nos. 71631005 and 71731009) and the Major Program of the National Social Science Foundation of China (No. 19ZDA103).
Funding (multi-dimensional classification / IMAM study): supported by the National Key R&D Program of China (2020AAA0109401, 2020AAA0109405), grants 62376118, 62006112, 62250069, and 62206245, the Young Elite Scientists Sponsorship Program of the Jiangsu Association for Science and Technology (2021-020), and the Collaborative Innovation Center of Novel Software Technology and Industrialization.
Funding (brain tumour classification study): Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under the REFRESH, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
Funding (HOG-VGG PolSAR study): sponsored by the Fundamental Research Funds for the Central Universities of China (Grant No. PA2023IISL0098), the Hefei Municipal Natural Science Foundation (Grant No. 202201), the National Natural Science Foundation of China (Grant No. 62071164), and the Open Fund of the Information Materials and Intelligent Sensing Laboratory of Anhui Province (Anhui University) (Grant Nos. IMIS202214 and IMIS202102).
Funding (encrypted traffic classification study): supported by the General Program of the National Natural Science Foundation of China under Grant No. 62172093, the National Key R&D Program of China under Grant No. 2018YFB1800602, the 2019 Industrial Internet Innovation and Development Project of the Ministry of Industry and Information Technology (MIIT) under Grant No. 6709010003, and the Ministry of Education-China Mobile Research Fund under Grant No. MCM20180506.