Funding: Supported in part by the Guangxi Science and Technology Department Key Research and Development Project (Grant No. 23026149) and in part by the Guangxi Key Research and Development Plan Project (Grant No. AB24010073).
Abstract: In contemporary society, rapid and accurate optical cable fault detection is of paramount importance for ensuring the stability and reliability of optical networks. The emergence of novel faults in optical networks has introduced new challenges, significantly compromising their normal operation. Machine learning has emerged as a highly promising approach. Consequently, it is imperative to develop an automated and reliable algorithm that utilizes telemetry data acquired from Optical Time-Domain Reflectometers (OTDR) to enable real-time fault detection and diagnosis in optical fibers. In this paper, we introduce a multi-scale Convolutional Neural Network–Bidirectional Long Short-Term Memory (CNN-BiLSTM) deep learning model for accurate optical fiber fault detection. The proposed multi-scale CNN-BiLSTM comprises three variants: the Independent Multi-scale CNN-BiLSTM (IMC-BiLSTM), the Combined Multi-scale CNN-BiLSTM (CMC-BiLSTM), and the Shared Multi-scale CNN-BiLSTM (SMC-BiLSTM). These models employ convolutional kernels of varying sizes to extract spatial features from time-series data, while leveraging BiLSTM to enhance the capture of global event characteristics. Experiments were conducted using the publicly available OTDR_data dataset, and comparisons with existing methods demonstrate the effectiveness of our approach. The results show that (i) IMC-BiLSTM, CMC-BiLSTM, and SMC-BiLSTM achieve F1-scores of 97.37%, 97.25%, and 97.10%, respectively, and (ii) accuracies of 97.36%, 97.23%, and 97.12%. These performances surpass those of traditional techniques.
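The abstract does not specify layer sizes, so the following is only a minimal sketch of the general multi-scale CNN-BiLSTM idea: parallel 1D convolutions with different kernel sizes over an OTDR trace window, fused and fed to a bidirectional LSTM. The kernel sizes, filter counts, window length, and class count are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a multi-scale CNN-BiLSTM for OTDR trace classification.
# Kernel sizes, filter counts, sequence length, and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, N_CLASSES = 128, 8  # assumed OTDR window length / number of fault classes

def multi_scale_cnn_bilstm(seq_len=SEQ_LEN, n_classes=N_CLASSES):
    inp = layers.Input(shape=(seq_len, 1))  # one OTDR trace window
    # Parallel convolutional branches with different kernel sizes (multi-scale features)
    branches = []
    for k in (3, 5, 7):
        x = layers.Conv1D(32, kernel_size=k, padding="same", activation="relu")(inp)
        x = layers.MaxPooling1D(pool_size=2)(x)
        branches.append(x)
    x = layers.Concatenate()(branches)            # fuse the multi-scale feature maps
    x = layers.Bidirectional(layers.LSTM(64))(x)  # capture global event context
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inp, out)

model = multi_scale_cnn_bilstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

This sketch corresponds roughly to an "independent" multi-branch layout; the combined and shared variants described in the paper would differ in how the branches share weights or are merged.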
Abstract: Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces the Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
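HCL Net's exact architecture is not given in the abstract, so the following is only a minimal sketch of the stated recipe: a ResNet50V2 backbone extended with an extra residual block and a three-class head (normal / HCL / GGO). The input size, filter counts, and the single added block are illustrative assumptions.

```python
# Hedged sketch of a ResNet50V2-based three-class CT classifier (normal / HCL / GGO).
# The extra residual block and head sizes are assumptions, not HCL Net's actual design.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50V2

def residual_block(x, filters=256):
    # Project the shortcut to the same channel count, then add two conv layers
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

backbone = ResNet50V2(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3))
x = residual_block(backbone.output)              # one illustrative additional block
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(3, activation="softmax")(x)   # normal / honeycombing / GGO
model = Model(backbone.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```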
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A2C1010362) and the Soonchunhyang University Research Fund.
Abstract: While the use of digital ocular fundus images has become widespread in ophthalmology practice, interpretation of the images still rests in the hands of ophthalmologists, which is quite costly. We explored a robust deep learning system that detects three major ocular diseases: diabetic retinopathy (DR), glaucoma (GLC), and age-related macular degeneration (AMD). The proposed method is composed of two steps. First, an initial quality evaluation in the classification system is proposed to filter out poor-quality images and enhance its performance, a technique that has not been explored previously. Second, transfer learning is used with various convolutional neural network (CNN) models that automatically learn a thousand features from the digital retinal image and diagnose eye diseases based on those features. The performance of many models is compared to find the optimal model for fundus classification. Among the different CNN models, DenseNet-201 outperforms the others with an area under the receiver operating characteristic curve of 0.99. Furthermore, the corresponding specificities for healthy, DR, GLC, and AMD patients are found to be 89.52%, 96.69%, 89.58%, and 100%, respectively. These results demonstrate that the proposed method can reduce time consumption by automatically diagnosing multiple eye diseases using computer-aided assistance tools.
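As a rough illustration of the transfer-learning step, the sketch below builds a four-class fundus classifier (healthy / DR / GLC / AMD) on a DenseNet-201 backbone and represents the quality-evaluation stage only as a placeholder filter. The image size, dropout rate, quality threshold, and the `quality_model` itself are assumptions; the paper's actual quality model and training setup are not specified in the abstract.

```python
# Hedged sketch: DenseNet-201 transfer learning for four-class fundus classification,
# preceded by a placeholder image-quality filter. Sizes and thresholds are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201

def build_classifier(n_classes=4, img_size=224):
    backbone = DenseNet201(include_top=False, weights="imagenet",
                           input_shape=(img_size, img_size, 3), pooling="avg")
    x = layers.Dropout(0.3)(backbone.output)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model(backbone.input, out)

def is_good_quality(image, quality_model, threshold=0.5):
    """Placeholder for the quality-evaluation step: keep only images that a
    separately trained quality model scores above the threshold (assumption)."""
    score = float(quality_model(image[None, ...], training=False)[0, 0])
    return score >= threshold

classifier = build_classifier()
classifier.compile(optimizer="adam", loss="categorical_crossentropy",
                   metrics=["accuracy", tf.keras.metrics.AUC()])
```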
Abstract: Quantum computing is a promising new approach to tackling complex real-world computational problems by harnessing the power of quantum mechanics principles. The inherent parallelism and exponential computational power of quantum systems hold the potential to outpace classical counterparts in solving complex optimization problems, which are pervasive in machine learning. The Quantum Support Vector Machine (QSVM) is a quantum machine learning algorithm inspired by the classical Support Vector Machine (SVM) that exploits quantum parallelism to efficiently classify data points in high-dimensional feature spaces. We provide a comprehensive overview of the underlying principles of QSVM, elucidating how different quantum feature maps and quantum kernels enable the manipulation of quantum states to perform classification tasks. Through a comparative analysis, we reveal the quantum advantage achieved by these algorithms in terms of speedup and solution quality. As a case study, we explored the potential of quantum paradigms in the context of a real-world problem: classifying pancreatic cancer biomarker data. The Support Vector Classifier (SVC) algorithm was employed for the classical approach, while the QSVM algorithm was executed on a quantum simulator provided by the Qiskit quantum computing framework. The classical approach and the quantum-based techniques reported similar accuracy. This uniformity suggests that these methods effectively captured similar underlying patterns in the dataset. Remarkably, the quantum implementations exhibited substantially reduced execution times, demonstrating the potential of quantum approaches in enhancing classification efficiency. This affirms the growing significance of quantum computing as a transformative tool for augmenting machine learning paradigms and underscores the potency of quantum execution for computational acceleration.
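A minimal sketch of the classical-versus-quantum comparison described above is shown below. It assumes the qiskit-machine-learning add-on package (the `FidelityQuantumKernel` / `QSVC` API of recent releases, roughly 0.5+); the ZZ feature-map depth and the synthetic stand-in data are illustrative assumptions, not the pancreatic cancer biomarker dataset or the feature maps used in the paper.

```python
# Hedged sketch: classical SVC vs. quantum-kernel QSVC on synthetic stand-in data.
# Requires qiskit and qiskit-machine-learning; API assumed from recent releases.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# Synthetic stand-in for the biomarker data (4 assumed features)
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classical baseline
svc = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVC accuracy:", svc.score(X_te, y_te))

# Quantum kernel: encode each sample with a ZZ feature map, then classify
feature_map = ZZFeatureMap(feature_dimension=4, reps=2)
qkernel = FidelityQuantumKernel(feature_map=feature_map)
qsvc = QSVC(quantum_kernel=qkernel).fit(X_tr, y_tr)
print("QSVC accuracy:", qsvc.score(X_te, y_te))
```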
Funding: Supported by the National Natural Science Foundation of China (61501204, 61601198), the Hebei Province Natural Science Foundation (E2016202341), the Hebei Province Foundation for Returned Scholars (C2012003038), the Shandong Province Natural Science Foundation (ZR2015FL010), and the Science and Technology Program of University of Jinan (XKY1710).
Abstract: Speech emotion recognition (SER) in noisy environments is a vital issue in artificial intelligence (AI). In this paper, added noise is removed by reconstructing the speech samples. Acoustic features extracted from the reconstructed samples are selected to build an optimal feature subset with better emotional recognizability. A multiple-kernel (MK) support vector machine (SVM) classifier solved by semi-definite programming (SDP) is adopted in the SER procedure. The proposed method is demonstrated on the Berlin Database of Emotional Speech. Recognition accuracies of the original, noisy, and reconstructed samples classified by both single-kernel (SK) and MK classifiers are compared and analyzed. The experimental results show that the proposed method is effective and robust when noise exists.
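The sketch below illustrates only the multiple-kernel idea in simplified form: several base kernels are combined into one kernel matrix and passed to an SVC with a precomputed kernel. The paper learns the combination weights via semi-definite programming; here the weights are fixed and uniform purely for illustration, and the feature matrices and labels are random stand-ins for the selected acoustic features.

```python
# Simplified multiple-kernel SVM sketch: fixed uniform kernel weights stand in
# for the SDP-learned weights used in the paper; data are random placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel
from sklearn.svm import SVC

def combined_kernel(A, B, weights=(1/3, 1/3, 1/3)):
    kernels = (rbf_kernel(A, B, gamma=0.1),
               linear_kernel(A, B),
               polynomial_kernel(A, B, degree=2))
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 20))          # stand-in acoustic feature vectors
y_train = rng.integers(0, 4, 60)             # stand-in emotion labels
X_test = rng.normal(size=(10, 20))

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(combined_kernel(X_train, X_train), y_train)
pred = clf.predict(combined_kernel(X_test, X_train))  # rows: test, cols: train
```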
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61802085 and 61563012), the Guangxi Provincial Natural Science Foundation, China (Nos. 2021GXNSFAA220074 and 2020GXNSFAA159038), the Guangxi Key Laboratory of Embedded Technology and Intelligent System Foundation, China (No. 2018A-04), and the Guangxi Key Laboratory of Trusted Software Foundation, China (No. kx202011).
Abstract: Traditional machine learning methods are sensitive to skewed distributions and do not consider the characteristics of multiclass imbalance problems, so the skewed distribution of multiclass data poses a major challenge to machine learning algorithms. To tackle such issues, we propose a new splitting criterion for decision trees based on the one-against-all-based Hellinger distance (OAHD). Two crucial elements are included in OAHD. First, the one-against-all scheme is integrated into the process of computing the Hellinger distance in OAHD, thereby extending the Hellinger distance decision tree to cope with the multiclass imbalance problem. Second, for the multiclass imbalance problem, the distribution and the number of distinct classes are taken into account, and a modified Gini index is designed. Moreover, we give theoretical proofs of the properties of OAHD, including skew insensitivity and the ability to seek a purer node in the decision tree. Finally, we collect 20 public real-world imbalanced data sets from the Knowledge Extraction based on Evolutionary Learning (KEEL) repository and the University of California, Irvine (UCI) repository. Experimental and statistical results show that OAHD significantly improves performance compared with five other well-known decision trees in terms of precision, F-measure, and multiclass area under the receiver operating characteristic curve (MAUC). Moreover, the Friedman and Nemenyi tests are used to confirm the advantage of OAHD over the five other decision trees.
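To make the one-against-all Hellinger idea concrete, the sketch below scores one candidate binary split: for each class c, it computes the binary Hellinger distance between the child-node distributions of class c and "rest", and sums the per-class scores. The summation, any class weighting, and the paper's modified Gini index are not specified in the abstract, so this is only an illustrative approximation of the Hellinger component.

```python
# Hedged sketch of a one-against-all Hellinger split score; the aggregation
# (plain sum over classes) is an assumption, and the modified Gini index
# described in the paper is not reproduced here.
import numpy as np

def hellinger_binary(left_mask, pos_mask):
    """Binary Hellinger distance of one split for class-vs-rest."""
    pos, neg = pos_mask, ~pos_mask
    n_pos, n_neg = pos.sum(), neg.sum()
    if n_pos == 0 or n_neg == 0:
        return 0.0
    d = 0.0
    for child in (left_mask, ~left_mask):
        p_child_pos = (child & pos).sum() / n_pos   # P(child | class)
        p_child_neg = (child & neg).sum() / n_neg   # P(child | rest)
        d += (np.sqrt(p_child_pos) - np.sqrt(p_child_neg)) ** 2
    return np.sqrt(d)

def oahd_score(feature, threshold, labels):
    """Sum the class-vs-rest Hellinger distances over all classes (assumed aggregation)."""
    left = feature <= threshold
    return sum(hellinger_binary(left, labels == c) for c in np.unique(labels))

# Example: score one candidate split on a toy imbalanced three-class sample
x = np.array([0.1, 0.2, 0.3, 0.4, 0.8, 0.9, 1.0, 1.1])
y = np.array([0, 0, 0, 0, 1, 1, 2, 2])
print(oahd_score(x, 0.5, y))
```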