The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against swiftly evolving malware threats and are constrained by limited resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance with models such as Random Forest and Logistic Regression while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. The study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations. By offering a security solution that is both efficient and scalable, GENOME provides comprehensive protection for edge computing environments.
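The abstract does not specify GENOME's actual encoding tables, but the core idea of mapping binary malware artifacts to compact nucleotide strings can be sketched as follows. The 2-bit base mapping and the run-length "compressed" variant below are illustrative assumptions, not the paper's algorithms.

```python
# Illustrative sketch of DNA-style encoding of binary artifacts (assumed scheme).
NUCLEOTIDES = "ACGT"

def dna_encode(data: bytes) -> str:
    """Encode each byte as four nucleotides, two bits per base."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(NUCLEOTIDES[(b >> shift) & 0b11])
    return "".join(out)

def dna_compress(seq: str) -> str:
    """Toy run-length compression of repeated bases, e.g. 'AAAA' -> 'A4'."""
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        run = j - i
        out.append(seq[i] + (str(run) if run > 1 else ""))
        i = j
    return "".join(out)
```

Repetitive byte regions, common in malware padding and headers, are exactly where a run-length pass pays off, which is consistent with the reported size reduction.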
Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose the Wasserstein Generative Adversarial Network Variational Density Estimation (WGAN-VDE) framework, a computationally efficient density-aware adversarial resampling approach that enhances minority class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, which ensures that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework ensures the generation of well-separated and diverse synthetic samples, improving class separability and reducing redundancy. Experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results on F1-score, Accuracy, G-Mean, and AUC metrics. These results establish the proposed method as an effective and robust computational approach, suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
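A minimal sketch of the density-aware selection idea, with a Gaussian-jitter generator standing in for the paper's WGAN (an assumption made for brevity): candidate minority samples are scored against a kernel density estimate of the minority class, and only candidates falling in the sparser half are kept, so synthesis fills underrepresented regions rather than crowding dense ones.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Toy minority class clustered near the origin
minority = rng.normal(loc=0.0, scale=1.0, size=(50, 2))

# Stand-in generator: jittered copies (the paper uses a WGAN; this is a placeholder)
candidates = minority[rng.integers(0, 50, size=200)] + rng.normal(scale=0.5, size=(200, 2))

# Density-based selection: keep candidates lying in the sparser regions of the
# minority distribution, so new samples improve coverage instead of redundancy
kde = KernelDensity(bandwidth=0.5).fit(minority)
scores = kde.score_samples(candidates)          # log-density of each candidate
keep = candidates[scores < np.median(scores)]   # prefer low-density regions
```

Swapping the median threshold for a quantile tuned per dataset is the natural next step; the key design choice is that selection is driven by estimated density, not by raw distance to existing samples.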
There is widespread agreement that lung cancer is one of the deadliest types of cancer, affecting both women and men. As a result, detecting lung cancer at an early stage is crucial for creating an accurate treatment plan and forecasting the patient's response to the adopted treatment. For this reason, the development of convolutional neural networks (CNNs) for the task of lung cancer classification has recently attracted growing attention. CNNs have great potential, but they need a lot of training data and struggle with input alterations. To address these limitations of CNNs, a novel machine-learning architecture, the capsule network, has been presented, which has the potential to transform the field of deep learning. Capsule networks, the focus of this work, are interesting because they can withstand rotation and affine translation with relatively little training data. This research optimizes the performance of CapsNets by designing a new architecture that allows them to perform better on the challenge of lung cancer classification. The findings demonstrate that the proposed capsule network method outperforms CNNs on the lung cancer classification challenge. CapsNet with a single convolution layer and 32 features (CN-1-32), CapsNet with a single convolution layer and 64 features (CN-1-64), and CapsNet with a double convolution layer and 64 features (CN-2-64) are the three capsule networks developed in this research for lung cancer classification. Lung nodules, both benign and malignant, are classified by these networks using CT images. The LIDC-IDRI database was utilized to assess the performance of the networks. Based on the testing results, the CN-2-64 network performed best of the three, with a specificity of 98.37%, a sensitivity of 97.47%, and an accuracy of 97.92%.
Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: inability to characterize traffic that exhibits great dispersion, inability to classify traffic with multi-level features, and degradation due to limited training traffic size. To address these problems, this paper proposes a traffic granularity-based encrypted traffic classification method, called the Granular Classifier (GC). A novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic, considering the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format of traffic based on granular computing, named Traffic Granules (TG), is presented to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary that excludes outliers. Based on TG, the GC is constructed to perform traffic classification using multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited size of training traffic and maintains accurate classification under dynamic network conditions.
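CCFCM's cardinality constraints are specific to the paper, but the method builds on standard fuzzy c-means, which can be sketched in a few lines. The sketch below implements plain FCM only; the cardinality-ratio constraint and the traffic-granule construction are omitted, and the two-feature "flow" data are synthetic stand-ins.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=60, seed=0):
    """Plain fuzzy c-means; the paper's CCFCM adds cardinality constraints."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per flow
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return centers, U

# Two toy "traffic feature" clusters (e.g. packet size vs. inter-arrival time)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(8, 0.5, (30, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

The soft memberships in `U` are what make a granular boundary refinement possible: flows with low maximum membership are natural outlier candidates to exclude from a granule.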
BACKGROUND: The degree of obstruction plays an important role in decision-making for obstructive colorectal cancer (OCRC). The existing assessment still relies on the colorectal obstruction scoring system (CROSS), which is based on a comprehensive analysis of patients' complaints and eating conditions. The data collection relies on subjective descriptions and lacks objective parameters. Therefore, a scoring system for evaluating the computed tomography-based obstructive degree (CTOD) is urgently required for OCRC. AIM: To explore the relationship between CTOD and CROSS and to determine whether CTOD affects short-term and long-term prognosis. METHODS: A total of 173 patients were enrolled. CTOD was obtained using k-means clustering, the ratio of proximal to distal obstruction, and the proportion of nonparenchymal areas at the site of obstruction. CTOD was integrated with CROSS to analyze the effect of emergency intervention on complications. Short-term and long-term outcomes were compared between the groups. RESULTS: CTOD severe obstruction (CTOD grade 3) was an independent risk factor [odds ratio (OR) = 3.390, 95% confidence interval (CI): 1.340-8.570, P = 0.010] in multivariate analysis of short-term outcomes, while CROSS grade was not. In the CTOD-CROSS grade system, for the non-severe obstructive (CTOD 1-2 to CROSS 1-4) group, the complication rate of emergency interventions was significantly higher than that of non-emergency interventions (71.4% vs 41.8%, P = 0.040). The postoperative pneumonia rate was also higher with emergency intervention (35.7% vs 8.9%, P = 0.020). However, CTOD grade was not an independent risk factor for overall survival or progression-free survival. CONCLUSION: CTOD is useful in preoperative decision-making to avoid unnecessary emergency interventions and complications.
Bird species classification is not only a challenging topic in artificial intelligence but also a domain closely related to environmental protection and ecological research. Additionally, performing edge computing on low-end devices with small neural networks is an important research direction. In this paper, we use the EfficientNetV2B0 model for bird species classification, applying transfer learning on a dataset of 525 bird species. We also employ the BiRefNet model to remove backgrounds from images in the training set. The generated background-removed images are mixed with the original training set as a form of data augmentation. We intend these background-removed images to help the model focus on key features; by combining data augmentation with transfer learning, we trained a highly accurate and efficient bird species classification model. The training process is divided into a transfer learning stage and a fine-tuning stage. In the transfer learning stage, only the newly added custom layers are trained, while in the fine-tuning stage, all pre-trained layers except the batch normalization layers are fine-tuned. According to the experimental results, the proposed model not only has an advantage in size compared to other models but also outperforms them on various metrics. The training results show that the proposed model achieved an accuracy of 99.54% and a precision of 99.62%, demonstrating that it achieves both lightweight design and high accuracy. To confirm the credibility of the results, we use heatmaps to interpret the model; the heatmaps show that our model clearly highlights the image feature areas. In addition, we perform 10-fold cross-validation on the model to verify its credibility. Finally, this paper proposes a model with low training cost and high accuracy, making it suitable for deployment on edge computing devices to provide lighter and more convenient services.
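The two-stage schedule above (train only the new head, then unfreeze and fine-tune the backbone) is framework-agnostic. The tiny NumPy network below illustrates it without the EfficientNetV2B0/Keras machinery; the random "pre-trained" extractor and the toy binary task are stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary task

W1 = rng.normal(scale=0.5, size=(8, 16))       # stand-in "pre-trained" extractor
w2 = np.zeros(16)                              # newly added classification head

def forward(X, W1, w2):
    h = np.tanh(X @ W1)                        # backbone features
    return h, 1.0 / (1.0 + np.exp(-(h @ w2)))  # sigmoid head

# Stage 1, transfer learning: only the new head is updated, extractor frozen
for _ in range(500):
    h, p = forward(X, W1, w2)
    w2 -= 0.5 * h.T @ (p - y) / len(y)

# Stage 2, fine-tuning: unfreeze the extractor and update it jointly
for _ in range(500):
    h, p = forward(X, W1, w2)
    w2 -= 0.5 * h.T @ (p - y) / len(y)
    g = (p - y)[:, None] * w2[None, :] * (1.0 - h ** 2)   # backprop through tanh
    W1 -= 0.05 * X.T @ g / len(y)

acc = ((forward(X, W1, w2)[1] > 0.5) == (y > 0.5)).mean()
```

The smaller learning rate on `W1` in stage 2 mirrors standard fine-tuning practice: pre-trained weights should move gently so the features learned earlier are refined rather than overwritten.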
Optical and hybrid convolutional neural networks (CNNs) have recently become of increasing interest for achieving low-latency, low-power image classification and computer-vision tasks. However, implementing optical nonlinearity is challenging, and omitting the nonlinear layers in a standard CNN comes with a significant reduction in accuracy. We use knowledge distillation to compress a modified AlexNet to a single linear convolutional layer and an electronic backend (two fully connected layers). We obtain performance comparable to a purely electronic CNN with five convolutional layers and three fully connected layers. We implement the convolution optically by engineering the point spread function of an inverse-designed meta-optic. Using this hybrid approach, we estimate a reduction in multiply-accumulate operations from 17M in a conventional electronic modified AlexNet to only 86K in the hybrid compressed network enabled by the optical front end. This constitutes over two orders of magnitude of reduction in latency and power consumption. Furthermore, we experimentally demonstrate that the classification accuracy of the system exceeds 93% on the MNIST dataset of handwritten digits.
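The distillation objective that lets a shallow student mimic a deeper teacher can be sketched as the usual temperature-softened KL loss. This is the generic formulation; the paper's exact training recipe (temperature, loss weighting) is not stated in the abstract, so the values here are assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

A higher temperature exposes the teacher's "dark knowledge" in the small logits, which is what allows a single linear convolution plus a small electronic head to recover most of the deep network's accuracy.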
The feasibility of constructing shallow foundations on saturated sands remains uncertain. Seismic design standards simply stipulate that geotechnical investigations for a shallow foundation on such soils shall be conducted to mitigate the effects of the liquefaction hazard. This study investigates the seismic behavior of strip foundations on typical two-layered soil profiles: a natural loose sand layer supported by a dense sand layer. Coupled nonlinear dynamic analyses were conducted to calculate response parameters, including seismic settlement, the acceleration response on the ground surface, and excess pore pressure beneath strip foundations. A novel liquefaction potential index (LPI_(footing)), based on excess pore pressure ratios across a given region of the soil mass beneath footings, is introduced to classify liquefaction severity into three distinct levels: minor, moderate, and severe. To validate the proposed LPI_(footing), foundation settlement is evaluated for the different liquefaction potential classes. A classification tree model was grown to predict liquefaction susceptibility, utilizing various input variables, including earthquake intensity on the ground surface, foundation pressure, sand permeability, and top layer thickness. Moreover, a nonlinear regression function was established to map LPI_(footing) to these input predictors. The models were constructed using a substantial dataset comprising 13,824 excess pore pressure ratio time histories. The performance of the developed models was examined using various methods, including 10-fold cross-validation. The predictive capability of the tree was also validated against existing experimental studies. The results indicate that the classification tree is not only interpretable but also highly predictive, with a testing accuracy of 78.1%. The decision tree provides valuable insights for engineers assessing liquefaction potential beneath strip foundations.
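The tree-growing and 10-fold cross-validation workflow can be sketched with scikit-learn. The data below are synthetic stand-ins for the paper's predictors (surface intensity, foundation pressure, permeability, top-layer thickness), and the toy severity rule is an assumption made purely so the example runs end to end.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the study's four predictors, scaled to [0, 1):
# surface intensity, foundation pressure, permeability, top-layer thickness
X = rng.random((600, 4))

# Toy severity rule: stronger shaking plus a thicker loose layer -> more severe
# (0 = minor, 1 = moderate, 2 = severe)
severity = np.digitize(X[:, 0] + X[:, 3], [0.7, 1.3])

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, severity)
acc = cross_val_score(tree, X, severity, cv=10).mean()  # 10-fold CV as in the study
```

Capping `max_depth` keeps the tree small enough to read as a set of engineering rules, which is the interpretability the study highlights.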
AIM: To establish a computed tomography (CT)-morphological classification for hepatic alveolar echinococcosis. METHODS: The CT morphology of hepatic lesions in 228 patients with confirmed alveolar echinococcosis (AE), drawn from the Echinococcus Databank of the University Hospital of Ulm, was reviewed retrospectively. For this purpose, CT datasets of combined positron emission tomography (PET)-CT examinations were evaluated. The diagnosis of AE was made in patients with unequivocal seropositivity; positive histological findings following diagnostic puncture or partial resection of the liver; and/or findings typical of AE at ultrasonography, CT, magnetic resonance imaging, or PET-CT. The CT-morphological findings were grouped into the new classification scheme. RESULTS: Within the classification, a lesion was assigned to one of five "primary morphologies" as well as to one of six "patterns of calcification". Primary morphology and pattern of calcification are first considered separately and then combined, whereas primary morphology V is not further characterized by a pattern of calcification. Based on the five primary morphologies, further descriptive sub-criteria were appended to types I-III. An analysis of the calcification pattern in relation to the primary morphology revealed the exclusive association of central calcification with type IV primary morphology. Similarly, certain calcification patterns exhibited a clear predominance for other primary morphologies, which underscores the delimitation of the individual primary morphological types from each other. These relationships in terms of calcification patterns extend into the primary morphological sub-criteria, demonstrating the clear subordination of those criteria. CONCLUSION: The proposed CT-morphological classification (EMUC-CT) is intended to facilitate the recognition and interpretation of lesions in hepatic alveolar echinococcosis. This could help to better interpret different clinical courses and should improve the comparability of CT findings in scientific studies.
BACKGROUND: In previous studies, the celiomesenteric trunk (CMT) was narrowly defined as a hepato-gastro-spleno-mesenteric (HGSM) trunk, and other possible types were ignored. With the widespread use of multidetector computed tomography (MDCT) angiography, it is easy to collect a large sample of data on the arterial anatomy of the abdomen in daily radiological practice. A new classification system for CMT may be created based on its MDCT angiographic findings and variation patterns. AIM: To identify the spectrum and prevalence of CMT according to a new classification based on MDCT angiographic findings, and to discuss the probable embryological mechanisms underlying the CMT variants. METHODS: A retrospective study was carried out on 5580 abdominal MDCT angiography images. CMT was defined as a single common trunk arising from the aorta whose branches include the superior mesenteric artery and at least two major branches of the celiac trunk. Various types of CMT were investigated. RESULTS: Of the 5580 patients, 171 (3.06%) were identified as having CMT. According to the new definitions and classification, the CMT variants included five types: I, II, III, IV, and V, found in 96 (56.14%), 57 (33.33%), 4 (2.34%), 3 (1.75%), and 8 (4.68%) patients, respectively. The CMT variants were also classified as long type (106 patients, 61.99%) and short type (65 patients, 38.01%) based on the length of the single common trunk. Further CMT classification was based on the origin of the left gastric artery: type a (92 patients, 53.80%), type b (57 patients, 33.33%), type c (11 patients, 6.43%), and type d (8 patients, 4.68%). CONCLUSION: We systematically classified CMT variants according to our new classification system based on MDCT angiographic findings. Dislocation interruption, incomplete interruption, and persistence of the longitudinal anastomosis could all be embryological mechanisms of the various types of CMT variants.
BACKGROUND: The accurate classification of focal liver lesions (FLLs) is essential to properly guide treatment options and predict prognosis. Dynamic contrast-enhanced computed tomography (DCE-CT) is still the cornerstone of the exact classification of FLLs due to its noninvasive nature, high scanning speed, and high-density resolution. Since their recent development, convolutional neural network-based deep learning techniques have been recognized as having high potential for image recognition tasks. AIM: To develop and evaluate an automated multiphase convolutional dense network (MP-CDN) for classifying FLLs on multiphase CT. METHODS: A total of 517 FLLs scanned on a 320-detector CT scanner using a four-phase DCE-CT imaging protocol (including precontrast phase, arterial phase, portal venous phase, and delayed phase) from 2012 to 2017 were retrospectively enrolled. FLLs were classified into four categories: category A, hepatocellular carcinoma (HCC); category B, liver metastases; category C, benign non-inflammatory FLLs including hemangiomas, focal nodular hyperplasias, and adenomas; and category D, hepatic abscesses. Each category was split into a training set and a test set in an approximate 8:2 ratio. An MP-CDN classifier with a sequential input of the four-phase CT images was developed to automatically classify FLLs. The classification performance of the model was evaluated on the test set; the accuracy and specificity were calculated from the confusion matrix, and the area under the receiver operating characteristic curve (AUC) was calculated from the softmax probability output by the last layer of the MP-CDN. RESULTS: A total of 410 FLLs were used for training and 107 FLLs for testing. The mean classification accuracy on the test set was 81.3% (87/107). The accuracy/specificity for distinguishing each category from the others was 0.916/0.964, 0.925/0.905, 0.860/0.918, and 0.925/0.963 for HCC, metastases, benign non-inflammatory FLLs, and abscesses, respectively. The AUC (95% confidence interval) for differentiating each category from the others was 0.92 (0.837-0.992), 0.99 (0.967-1.00), 0.88 (0.795-0.955), and 0.96 (0.914-0.996) for HCC, metastases, benign non-inflammatory FLLs, and abscesses, respectively. CONCLUSION: The MP-CDN accurately classified FLLs detected on four-phase CT as HCC, metastases, benign non-inflammatory FLLs, and hepatic abscesses, and may assist radiologists in identifying the different types of FLLs.
Corona Virus Disease 2019 (COVID-19) has affected millions of people worldwide and caused more than 6.3 million deaths (World Health Organization, June 2022). Increasing attempts have been made to develop deep learning methods to diagnose COVID-19 based on computed tomography (CT) lung images. It is a challenge to reproduce and obtain CT lung data, because such data are not widely publicly available. This paper introduces a new generalized framework to segment and classify CT images and determine whether a patient tests positive or negative for COVID-19 based on lung CT images. In this work, several strategies are explored for the classification task. ResNet50 and VGG16 models are applied to classify CT lung images as COVID-19 positive or negative. In addition, VGG16 and ResNet50 combined with U-Net, one of the most widely used deep learning architectures for image segmentation, are employed to segment CT lung images before the classification process to increase system performance. Moreover, the image-size-dependent normalization technique (ISDNT) and the Wiener filter are utilized as preprocessing techniques for image enhancement and noise suppression. Additionally, transfer learning and data augmentation are performed to address the scarcity of COVID-19 CT lung images, so that over-fitting of the deep models can be avoided. The proposed frameworks, which comprise end-to-end VGG16, ResNet50, and U-Net with VGG16 or ResNet50, are applied to a dataset of COVID-19 lung CT images sourced from Kaggle. The classification results show that using the preprocessed CT lung images as input for U-Net hybridized with ResNet50 achieves the best performance. The proposed classification model achieves 98.98% accuracy (ACC), 98.87% area under the ROC curve (AUC), 98.89% sensitivity (Se), 97.99% precision (Pr), 97.88% F-score, and a computational time of 1.8974 seconds.
Tissue texture reflects the spatial distribution of contrasts of image voxel gray levels, i.e., the tissue heterogeneity, and has been recognized as an important biomarker in various clinical tasks. Spectral computed tomography (CT) is believed to be able to enrich tissue texture by providing different voxel contrast images using different X-ray energies. Therefore, this paper aims to address two related issues for clinical usage of spectral CT, especially photon counting CT (PCCT): (1) texture enhancement by spectral CT image reconstruction, and (2) spectral-energy-enriched tissue texture for improved lesion classification. For issue (1), we recently proposed a tissue-specific texture prior, in addition to a low-rank prior, for the individual energy-channel low-count image reconstruction problems in PCCT under Bayesian theory. Reconstruction results showed that the proposed method outperforms existing methods of total variation (TV), low-rank TV, and tensor dictionary learning in terms of not only preserving texture features but also suppressing image noise. For issue (2), this paper investigates three models to incorporate the texture enriched by PCCT, in accordance with three types of inputs: the spectral images themselves, the co-occurrence matrices (CMs) extracted from the spectral images, and the Haralick features (HFs) extracted from the CMs. Studies were performed on simulated photon counting data generated by applying an attenuation-energy response curve to traditional CT images from energy-integrating detectors. Classification results showed that the spectral CT enriched texture model can improve the area under the receiver operating characteristic curve (AUC) score by 7.3%, 0.42%, and 3.0% for the spectral images, CMs, and HFs, respectively, on the five-energy spectral data over the original single-energy data. The CM- and HF-inputs achieve the best AUCs of 0.934 and 0.927. This texture-themed study shows that incorporating clinically important prior information, such as tissue texture, into medical imaging, from upstream image reconstruction to downstream diagnosis, can benefit clinical tasks.
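The CM and HF input types can be illustrated with a minimal gray-level co-occurrence matrix and two classic Haralick descriptors. Only contrast and energy are computed here; the full Haralick feature set used in texture studies is larger, and the two toy "patches" are stand-ins for tissue regions.

```python
import numpy as np

def cooccurrence(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    cm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            cm[img[y, x], img[y + dy, x + dx]] += 1
    return cm / cm.sum()

def haralick(cm):
    """Two classic Haralick descriptors computed from a normalized GLCM."""
    i, j = np.indices(cm.shape)
    return {
        "contrast": float(np.sum(cm * (i - j) ** 2)),  # local gray-level variation
        "energy": float(np.sum(cm ** 2)),              # uniformity of the GLCM
    }

flat = np.zeros((8, 8), dtype=int)             # homogeneous "tissue" patch
checker = np.indices((8, 8)).sum(axis=0) % 2   # maximally textured patch
f_flat = haralick(cooccurrence(flat, levels=2))
f_check = haralick(cooccurrence(checker, levels=2))
```

In the multi-energy setting, one GLCM (and hence one feature vector) is computed per energy channel, which is how the spectral dimension enriches the texture description.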
We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) dataset. In the binary classification, the accuracy for digits 0 and 4 exceeds 98%. We then compare the three-class performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of the quantum deep neural network and the general quantum convolutional neural network.
With the rapid development and popularization of new-generation technologies such as cloud computing, big data, and artificial intelligence, the construction of smart grids has become more diversified. Accurate and rapid reading and classification of the electricity consumption of residential users can provide a more in-depth perception of the actual power consumption of residents, which is essential to ensure the normal operation of the power system and to support energy management and planning. Based on the distributed architecture of cloud computing, this paper designs an improved random forest method for residential electricity classification. It uses the out-of-bag error unique to random forests and combines it with the Drosophila algorithm to optimize the internal parameters of the random forest, thereby improving the performance of the algorithm. The method uses MapReduce to train the improved random forest model on a cloud computing platform, then uses the trained model to analyze a residential electricity consumption dataset, dividing all residents into 5 categories, and verifies the effectiveness and feasibility of the model through experiments.
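The out-of-bag (OOB) error that drives the parameter optimization comes free with bagging: each tree is evaluated on the samples it never saw. A minimal sketch with scikit-learn, where a plain grid over tree counts stands in for the Drosophila optimizer and `make_classification` stands in for residential load data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for residential load features with 5 consumption categories
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# The OOB score gives a free validation signal for tuning the forest,
# so no separate hold-out set is needed during the parameter search
best = None
for n_trees in (50, 100, 200):
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                random_state=0).fit(X, y)
    if best is None or rf.oob_score_ > best.oob_score_:
        best = rf
```

Because each candidate forest is independent, this search parallelizes naturally, which is what the MapReduce deployment in the paper exploits at cluster scale.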
Field computation, an emerging computation technique, has inspired enthusiasm in intelligence science research. A novel field computation model based on magnetic field theory is constructed. The proposed magnetic field computation (MFC) model consists of a field simulator, a non-derivative optimization algorithm, and an auxiliary data processing unit. The mathematical model is deduced, and it is proved that the MFC model is equivalent to a quadratic discriminant function. Furthermore, the finite element prototype is derived and the simulator is developed, combined with a particle swarm optimizer for the field configuration. Two benchmark classification experiments are studied numerically, demonstrating one notable advantage: fewer training samples are required, and better generalization can be achieved.
Computational intelligence (CI) is a group of nature-inspired computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. Moreover, unmanned aerial vehicles (UAVs) have become a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenge. In addition, deep learning (DL)-enabled image classification is useful for several applications such as land cover classification, smart buildings, etc. This paper proposes a novel metaheuristics-with-deep-learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. The encryption technique effectively encrypts the UAV images. Next, the image classification process involves an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The experimental validation of the MDLS-UAVIC approach is tested on a benchmark dataset, and the outcomes are examined under various measures. It achieved a high accuracy of 98%.
A right-hand motor-imagery-based brain-computer interface is proposed in this work. Such a system requires the identification of different brain states and their classification. Brain signals recorded by electroencephalography are naturally contaminated by various noises and interferences. Ocular artifact removal is performed by an automatic method, "Kmeans-ICA", which does not require a reference channel. This method starts by decomposing EEG signals into independent components; artifactual ones are then identified using K-means clustering, a non-supervised machine learning technique. After signal preprocessing, a brain-computer interface system is implemented; physiologically interpretable features based on wavelet coherence, the wavelet phase-locking value, and band power are computed and fed into a statistical test to check for a significant difference between relaxed and motor-imagery states. Features that pass the test are retained and used for classification. Leave-one-out cross-validation is performed to evaluate the performance of the classifier. Two types of classifiers are compared: linear discriminant analysis and a support vector machine. Using linear discriminant analysis, classification accuracy improved from 66% to 88.10% after ocular artifact removal with Kmeans-ICA. The proposed methodology outperformed state-of-the-art feature extraction methods, namely the mu-rhythm band power.
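The reference-free "Kmeans-ICA" idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the component statistics used here (kurtosis and peak amplitude, which are typically high for blink components) and the "higher mean kurtosis = artifact cluster" rule are assumptions for the sketch.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

def kmeans_ica_clean(eeg, n_components=None, random_state=0):
    """Sketch of reference-free ocular artifact removal:
    1) unmix EEG channels into independent components (ICA),
    2) cluster the components by simple statistics with KMeans,
    3) zero out the cluster with the most blink-like statistics,
    4) remix back to channel space.
    eeg: array of shape (n_samples, n_channels)."""
    ica = FastICA(n_components=n_components, random_state=random_state,
                  max_iter=1000)
    sources = ica.fit_transform(eeg)            # (n_samples, n_components)

    # Per-component statistics: excess kurtosis and peak amplitude
    # (both tend to be large for ocular artifacts).
    m = sources - sources.mean(axis=0)
    var = m.var(axis=0)
    kurt = (m ** 4).mean(axis=0) / (var ** 2 + 1e-12) - 3.0
    peak = np.abs(sources).max(axis=0)
    feats = np.column_stack([kurt, peak])

    km = KMeans(n_clusters=2, n_init=10, random_state=random_state).fit(feats)
    # Assume the artifact cluster is the one with the larger mean kurtosis.
    artifact = int(np.argmax([kurt[km.labels_ == c].mean() for c in (0, 1)]))

    cleaned = sources.copy()
    cleaned[:, km.labels_ == artifact] = 0.0    # drop artifactual components
    return ica.inverse_transform(cleaned)       # back to channel space
```

In practice one would validate the artifact cluster visually or against EOG-like templates rather than rely on kurtosis alone.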
Architectural distortion is an important ultrasonographic indicator of breast cancer. However, it is difficult for clinicians to determine whether a given lesion is malignant because such distortions can be subtle in ultrasonographic images. In this paper, we report on a study to develop a computerized scheme for the histological classification of masses with architectural distortions as a differential diagnosis aid. Our database consisted of 72 ultrasonographic images obtained from 47 patients whose masses had architectural distortions, including 51 malignant (35 invasive and 16 non-invasive carcinomas) and 21 benign masses. In the proposed method, the location of the masses and the area occupied by them were first determined by an experienced clinician. Fourteen objective features concerning masses with architectural distortions were then extracted automatically, taking into account subjective features commonly used by experienced clinicians to describe such masses. The k-nearest neighbors (k-NN) rule was finally used to distinguish three histological classifications. The proposed method yielded classification accuracy values of 91.4% (32/35) for invasive carcinoma, 75.0% (12/16) for non-invasive carcinoma, and 85.7% (18/21) for benign masses. The sensitivity and specificity values were 92.2% (47/51) and 85.7% (18/21), respectively. The positive predictive values (PPV) were 88.9% (32/36) for invasive carcinoma and 85.7% (12/14) for non-invasive carcinoma, whereas the negative predictive value (NPV) was 81.8% (18/22) for benign masses. Thus, the proposed method can aid the differential diagnosis of masses with architectural distortions in ultrasonographic images.
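The classification step described above, a k-NN rule over 14 extracted features with leave-one-out evaluation on a small clinical dataset, can be sketched as follows. The synthetic feature matrix, class counts, and k=5 are stand-ins, not the paper's data or tuning.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical stand-in data: 14 objective features per mass, three classes
# (0 = invasive carcinoma, 1 = non-invasive carcinoma, 2 = benign).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(24, 14)) for c in range(3)])
y = np.repeat([0, 1, 2], 24)

# k-NN rule evaluated with leave-one-out cross-validation, as is common
# for small clinical datasets.
knn = KNeighborsClassifier(n_neighbors=5)
pred = cross_val_predict(knn, X, y, cv=LeaveOneOut())
accuracy = (pred == y).mean()
```

From `pred` one can then tabulate per-class accuracy, sensitivity, specificity, PPV, and NPV exactly as reported in the abstract.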
Development of computational agent organizations or "societies" has become the dominant computing paradigm in the arena of distributed artificial intelligence, and many foreseeable future applications need agent organizations in which diversified agents cooperate in a distributed manner, forming teams. In such scenarios, the agents need to know each other in order to facilitate interaction. Moreover, agents in such an environment are not statically defined in advance; they can adaptively enter and leave an organization. This begs the question of how agents locate each other in order to cooperate in achieving organizational goals. Locating agents is quite a challenging task, especially in organizations that involve a large number of agents and where resource availability is intermittent. The authors explore an approach based on the self-organizing map (SOM), which serves as a clustering method in light of the knowledge gathered about the various agents. The approach begins by categorizing agents using a selected set of agent properties. These categories are used to derive various ranks and a distance matrix. The SOM algorithm uses this matrix as input to obtain clusters of agents. These clusters reduce the search space, resulting in a relatively short agent search time.
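The SOM clustering step named above can be illustrated with a minimal plain-numpy self-organizing map. The grid size, learning-rate and neighborhood schedules, and the idea of feeding one feature row per agent are illustrative assumptions; the paper's rank/distance-matrix construction is not reproduced.

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map on a 2-D neuron grid (illustrative).
    data: (n, d) matrix, e.g. one row per agent built from its properties."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            theta = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * theta[:, None] * (x - weights)
    return weights

def som_assign(data, weights):
    """Map each sample to its best-matching unit, i.e. its cluster."""
    return np.array([int(np.argmin(((weights - x) ** 2).sum(axis=1)))
                     for x in data])
```

An agent query then searches only within the cluster of its best-matching unit, which is how the clusters shorten search time.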
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) (Project Nos. RS-2024-00438551, 30%; 2022-11220701, 30%; 2021-0-01816, 30%) and by a National Research Foundation of Korea (NRF) grant funded by the Korean Government (Project No. RS2023-00208460, 10%).
Abstract: The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against swiftly evolving malware threats and are constrained by resource availability. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve malware detection and classification performance in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a deoxyribonucleic acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments conducted on the Edge-IIoTset dataset showed that GENOME achieves high classification performance with models such as random forest and logistic regression while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. This study emphasizes the potential of GENOME to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations. By offering a security solution that is both efficient and scalable, GENOME provides comprehensive protection for edge computing environments.
Funding: Supported by the Ongoing Research Funding Program (ORF-2025-488), King Saud University, Riyadh, Saudi Arabia.
Abstract: Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose Wasserstein Generative Adversarial Network Variational Density Estimation (WGAN-VDE), a computationally efficient density-aware adversarial resampling framework that enhances minority-class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, which ensures that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework ensures the generation of well-separated and diverse synthetic samples, improving class separability and reducing redundancy. Experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results on the F1-score, accuracy, G-mean, and AUC metrics. These results establish the proposed method as an effective and robust computational approach suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
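The density-based selection idea, keeping synthetic minority samples that fall in regions where the majority class is sparse, can be sketched in plain numpy. This is a simplified stand-in for WGAN-VDE's selection step only (the adversarial generator is omitted), and the kernel bandwidth and keep ratio are illustrative assumptions.

```python
import numpy as np

def gaussian_kde_density(points, queries, bandwidth=0.5):
    """Plain-numpy Gaussian kernel density estimate at the query points."""
    d = queries[:, None, :] - points[None, :, :]
    k = np.exp(-(d ** 2).sum(-1) / (2 * bandwidth ** 2))
    return k.mean(axis=1)

def density_aware_select(synthetic, majority, keep_ratio=0.5, bandwidth=0.5):
    """Rank candidate synthetic minority samples by the estimated
    majority-class density at their location and keep the fraction lying
    in the sparsest regions, so new samples avoid high-overlap areas."""
    dens = gaussian_kde_density(majority, synthetic, bandwidth)
    n_keep = max(1, int(keep_ratio * len(synthetic)))
    keep = np.argsort(dens)[:n_keep]   # lowest majority density first
    return synthetic[keep]
```

Any generator (SMOTE-style interpolation or a trained WGAN) can supply the `synthetic` candidates; the selection step is what steers them away from overlap regions.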
Abstract: There is widespread agreement that lung cancer is one of the deadliest types of cancer, affecting both women and men. As a result, detecting lung cancer at an early stage is crucial for creating an accurate treatment plan and forecasting the patient's response to the adopted treatment. For this reason, the development of convolutional neural networks (CNNs) for the task of lung cancer classification has recently attracted attention. CNNs have great potential, but they need a lot of training data and struggle with input alterations. To address these limitations of CNNs, a novel machine-learning architecture, the capsule network, has been presented, and it has the potential to transform areas of deep learning. Capsule networks, which are the focus of this work, are interesting because they can withstand rotation and affine translation with relatively little training data. This research optimizes the performance of CapsNets by designing a new architecture that allows them to perform better on the lung cancer classification challenge. The findings demonstrate that the proposed capsule network method outperforms CNNs on this task. CapsNet with a single convolution layer and 32 features (CN-1-32), CapsNet with a single convolution layer and 64 features (CN-1-64), and CapsNet with a double convolution layer and 64 features (CN-2-64) are the three capsule networks developed in this research for lung cancer classification. Lung nodules, both benign and malignant, are classified by these networks from CT images. The LIDC-IDRI database was utilized to assess the performance of the networks. Based on the testing results, the CN-2-64 network performed best of the three, with a specificity of 98.37%, a sensitivity of 97.47%, and an accuracy of 97.92%.
Funding: Supported in part by the Shandong Provincial Natural Science Foundation under Grant ZR2021QF008, in part by the National Natural Science Foundation of China under Grant 62072351, in part by the open research project of ZheJiang Lab under Grant 2021PD0AB01, and in part by the 111 Project under Grant B16037.
Abstract: Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: an inability to characterize traffic that exhibits great dispersion, an inability to classify traffic with multi-level features, and degradation due to limited training traffic size. To address these problems, this paper proposes a traffic-granularity-based cryptographic traffic classification method called the Granular Classifier (GC). A novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic, considering the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format for traffic is presented based on granular computing, named Traffic Granules (TG), to accurately describe traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary that excludes outliers. Based on TG, the GC is constructed to perform traffic classification on multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic data. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with a limited amount of training traffic and maintains accurate classification under dynamic network conditions.
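The base algorithm that CCFCM constrains is standard fuzzy C-means, which can be sketched as follows. The cardinality constraint itself is specific to the paper and is not reproduced here; this is only the unconstrained FCM iteration (fuzzifier m = 2 assumed).

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy C-means. Returns (centers, membership U of shape
    (n, c)); each row of U gives a flow's soft membership in the c clusters
    and sums to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        # Centers are the membership-weighted means.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

CCFCM would additionally constrain how many flows each cluster may absorb; the soft memberships above are what such a constraint reweights.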
Funding: Supported by the Youth Foundation of the Fujian Provincial Health Commission, No. 2021QNA014, and the Construction Project of the Fujian Province Minimally Invasive Medical Center, No. [2021]76.
Abstract: BACKGROUND The degree of obstruction plays an important role in decision-making for obstructive colorectal cancer (OCRC). Existing assessment still relies on the colorectal obstruction scoring system (CROSS), which is based on a comprehensive analysis of patients' complaints and eating conditions. Its data collection relies on subjective descriptions and lacks objective parameters. Therefore, a scoring system for evaluating the computed-tomography-based obstructive degree (CTOD) is urgently required for OCRC. AIM To explore the relationship between CTOD and CROSS and to determine whether CTOD affects short-term and long-term prognosis. METHODS A total of 173 patients were enrolled. CTOD was obtained using k-means clustering, the ratio of proximal to distal obstruction, and the proportion of nonparenchymal areas at the site of obstruction. CTOD was integrated with CROSS to analyze the effect of emergency intervention on complications. Short-term and long-term outcomes were compared between the groups. RESULTS CTOD severe obstruction (CTOD grade 3) was an independent risk factor [odds ratio (OR) = 3.390, 95% confidence interval (CI): 1.340-8.570, P = 0.010] in the multivariate analysis of short-term outcomes, while CROSS grade was not. In the CTOD-CROSS grade system, for the non-severe obstructive (CTOD 1-2 with CROSS 1-4) group, the complication rate of emergency interventions was significantly higher than that of non-emergency interventions (71.4% vs 41.8%, P = 0.040). The postoperative pneumonia rate was also higher with emergency intervention (35.7% vs 8.9%, P = 0.020). However, CTOD grade was not an independent risk factor for overall survival or progression-free survival. CONCLUSION CTOD was useful in preoperative decision-making to avoid unnecessary emergency interventions and complications.
Abstract: Bird species classification is not only a challenging topic in artificial intelligence but also a domain closely related to environmental protection and ecological research. Additionally, performing edge computing on low-end devices with small neural networks is an important research direction. In this paper, we use the EfficientNetV2B0 model for bird species classification, applying transfer learning on a dataset of 525 bird species. We also employ the BiRefNet model to remove backgrounds from images in the training set. The generated background-removed images are mixed with the original training set as a form of data augmentation. We intend these background-removed images to help the model focus on key features, and by combining data augmentation with transfer learning, we train a highly accurate and efficient bird species classification model. The training process is divided into a transfer learning stage and a fine-tuning stage. In the transfer learning stage, only the newly added custom layers are trained, while in the fine-tuning stage, all pre-trained layers except the batch normalization layers are fine-tuned. According to the experimental results, the proposed model not only has a size advantage over other models but also outperforms them on various metrics. The training results show that the proposed model achieved an accuracy of 99.54% and a precision of 99.62%, demonstrating that it achieves both lightweight design and high accuracy. To confirm the credibility of the results, we use heatmaps to interpret the model; the heatmaps show that our model clearly highlights the image feature areas. In addition, we perform 10-fold cross-validation to verify the model's credibility. Finally, this paper proposes a model with low training cost and high accuracy, making it suitable for deployment on edge computing devices to provide lighter and more convenient services.
Funding: Supported by the National Science Foundation (Grant Nos. NSF-ECCS-2127235 and EFRI-BRAID-2223495). Part of this work was conducted at the Washington Nanofabrication Facility/Molecular Analysis Facility, a National Nanotechnology Coordinated Infrastructure (NNCI) site at the University of Washington, with partial support from the National Science Foundation (Grant Nos. NNCI-1542101 and NNCI-2025489).
Abstract: Optical and hybrid convolutional neural networks (CNNs) have recently become of increasing interest for achieving low-latency, low-power image classification and computer-vision tasks. However, implementing optical nonlinearity is challenging, and omitting the nonlinear layers of a standard CNN comes with a significant reduction in accuracy. We use knowledge distillation to compress a modified AlexNet to a single linear convolutional layer and an electronic backend (two fully connected layers). We obtain performance comparable to a purely electronic CNN with five convolutional layers and three fully connected layers. We implement the convolution optically by engineering the point spread function of an inverse-designed meta-optic. Using this hybrid approach, we estimate a reduction in multiply-accumulate operations from 17M in a conventional electronic modified AlexNet to only 86K in the hybrid compressed network enabled by the optical front end. This constitutes more than two orders of magnitude of reduction in latency and power consumption. Furthermore, we experimentally demonstrate that the classification accuracy of the system exceeds 93% on the MNIST dataset of handwritten digits.
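Convolution by a point spread function is exactly what an imaging system does in the Fourier domain, which is why the optical front end gets the convolutional layer "for free". A minimal numpy model of that operation (not the inverse-designed optic itself):

```python
import numpy as np

def psf_convolve(image, psf):
    """Circular 2-D convolution of an image with a point spread function,
    computed via the FFT: multiplication by the PSF's transfer function in
    the frequency domain models the linear optical layer."""
    H = np.fft.rfft2(psf, s=image.shape)
    return np.fft.irfft2(np.fft.rfft2(image) * H, s=image.shape)
```

Note this is circular convolution; zero-padding both arrays before the FFT gives linear convolution when edge wrap-around matters.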
Abstract: The feasibility of constructing shallow foundations on saturated sands remains uncertain. Seismic design standards simply stipulate that geotechnical investigations for a shallow foundation on such soils shall be conducted to mitigate the effects of the liquefaction hazard. This study investigates the seismic behavior of strip foundations on typical two-layered soil profiles: a natural loose sand layer supported by a dense sand layer. Coupled nonlinear dynamic analyses were conducted to calculate response parameters, including seismic settlement, the acceleration response at the ground surface, and excess pore pressure beneath strip foundations. A novel liquefaction potential index (LPI_(footing)), based on excess pore pressure ratios across a given region of the soil mass beneath footings, is introduced to classify liquefaction severity into three distinct levels: minor, moderate, and severe. To validate the proposed LPI_(footing), the foundation settlement is evaluated for the different liquefaction potential classes. A classification tree model was grown to predict liquefaction susceptibility from various input variables, including earthquake intensity at the ground surface, foundation pressure, sand permeability, and top-layer thickness. Moreover, a nonlinear regression function was established to map LPI_(footing) onto these input predictors. The models were constructed using a substantial dataset comprising 13,824 excess pore pressure ratio time histories, and their performance was examined using various methods, including 10-fold cross-validation. The predictive capability of the tree was also validated against existing experimental studies. The results indicate that the classification tree is not only interpretable but also highly predictive, with a testing accuracy of 78.1%. The decision tree provides valuable insights for engineers assessing liquefaction potential beneath strip foundations.
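The shape of an index like LPI_(footing) can be sketched as a weighted average of excess pore pressure ratios over the monitored region, mapped to a severity class. The abstract does not give the paper's formula, so both the aggregation and the 0.3/0.7 thresholds below are purely illustrative assumptions.

```python
import numpy as np

def lpi_footing(ru_field, weights=None):
    """Illustrative liquefaction potential index for the soil region beneath
    a footing: a (weighted) average of excess pore pressure ratios r_u at the
    monitored points, mapped to minor/moderate/severe. Thresholds are
    hypothetical, not the paper's calibrated values."""
    ru = np.asarray(ru_field, dtype=float)
    w = np.ones_like(ru) if weights is None else np.asarray(weights, float)
    index = float((w * ru).sum() / w.sum())
    if index < 0.3:
        severity = "minor"
    elif index < 0.7:
        severity = "moderate"
    else:
        severity = "severe"
    return index, severity
```

Weights could, for instance, emphasize points closer to the footing, where pore pressure build-up matters most for settlement.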
Abstract: AIM: To establish a computed tomography (CT)-morphological classification for hepatic alveolar echinococcosis. METHODS: The CT morphology of hepatic lesions in 228 patients with confirmed alveolar echinococcosis (AE) drawn from the Echinococcus Databank of the University Hospital of Ulm was reviewed retrospectively. For this purpose, CT datasets of combined positron emission tomography (PET)-CT examinations were evaluated. The diagnosis of AE was made in patients with unequivocal seropositivity; positive histological findings following diagnostic puncture or partial resection of the liver; and/or findings typical of AE on ultrasonography, CT, magnetic resonance imaging, or PET-CT. The CT-morphological findings were grouped into the new classification scheme. RESULTS: Within the classification, a lesion is assigned to one of five "primary morphologies" as well as to one of six "patterns of calcification". Primary morphology and pattern of calcification are first considered separately and then combined, except that primary morphology V is not further characterized by a pattern of calcification. Based on the five primary morphologies, further descriptive sub-criteria were appended to types I-III. An analysis of the calcification pattern in relation to the primary morphology revealed the exclusive association of central calcification with type IV primary morphology. Similarly, certain calcification patterns exhibited a clear predominance for other primary morphologies, which underscores the delimitation of the individual primary morphological types from one another. These relationships between calcification patterns extend into the primary morphological sub-criteria, demonstrating the clear subordination of those criteria. CONCLUSION: The proposed CT-morphological classification (EMUC-CT) is intended to facilitate the recognition and interpretation of lesions in hepatic alveolar echinococcosis. This could help to interpret different clinical courses better and should assist, in the context of scientific studies, in improving the comparability of CT findings.
Funding: Supported by the National Natural Science Foundation of China, No. 81671943.
Abstract: BACKGROUND In previous studies, the celiomesenteric trunk (CMT) was narrowly defined as a hepato-gastro-spleno-mesenteric (HGSM) trunk, and other possible types were ignored. With the widespread use of multidetector computed tomography (MDCT) angiography, it is easy to collect a large sample of data on the arterial anatomy of the abdomen in daily radiological practice. A new classification system for CMT may be created based on its MDCT angiographic findings and variation patterns. AIM To identify the spectrum and prevalence of CMT according to a new classification based on MDCT angiographic findings, and to discuss the probable embryological mechanisms explaining the CMT variants. METHODS A retrospective study was carried out on 5580 abdominal MDCT angiography images. CMT was defined as a single common trunk arising from the aorta whose branches include the superior mesenteric artery and at least two major branches of the celiac trunk. Various types of CMT were investigated. RESULTS Of the 5580 patients, 171 (3.06%) were identified as having CMT. According to the new definitions and classification, the CMT variants included five types: Ⅰ, Ⅱ, Ⅲ, Ⅳ, and Ⅴ, found in 96 (56.14%), 57 (33.33%), 4 (2.34%), 3 (1.75%), and 8 (4.68%) patients, respectively. The CMT variants were also classified as long type (106 patients, 61.99%) or short type (65 patients, 38.01%) based on the length of the single common trunk. Further classification was based on the origin of the left gastric artery: type a (92 patients, 53.80%), type b (57 patients, 33.33%), type c (11 patients, 6.43%), and type d (8 patients, 4.68%). CONCLUSION We systematically classified CMT variants according to our new classification system based on MDCT angiographic findings. Dislocation interruption, incomplete interruption, and persistence of the longitudinal anastomosis could all be embryological mechanisms of the various types of CMT variants.
Funding: Supported by the National Natural Science Foundation of China, No. 91959118; the Science and Technology Program of Guangzhou, China, No. 201704020016; the SKY Radiology Department International Medical Research Foundation of China, No. Z-2014-07-1912-15; and the Clinical Research Foundation of the 3rd Affiliated Hospital of Sun Yat-Sen University, No. YHJH201901.
Abstract: BACKGROUND The accurate classification of focal liver lesions (FLLs) is essential to properly guide treatment options and predict prognosis. Dynamic contrast-enhanced computed tomography (DCE-CT) is still the cornerstone of the exact classification of FLLs due to its noninvasive nature, high scanning speed, and high-density resolution. Since their recent development, convolutional neural network-based deep learning techniques have been recognized to have high potential for image recognition tasks. AIM To develop and evaluate an automated multiphase convolutional dense network (MP-CDN) for classifying FLLs on multiphase CT. METHODS A total of 517 FLLs scanned on a 320-detector CT scanner using a four-phase DCE-CT imaging protocol (precontrast phase, arterial phase, portal venous phase, and delayed phase) from 2012 to 2017 were retrospectively enrolled. FLLs were classified into four categories: category A, hepatocellular carcinoma (HCC); category B, liver metastases; category C, benign non-inflammatory FLLs including hemangiomas, focal nodular hyperplasias, and adenomas; and category D, hepatic abscesses. Each category was split into a training set and a test set in an approximate 8:2 ratio. An MP-CDN classifier with a sequential input of the four-phase CT images was developed to automatically classify FLLs. The classification performance of the model was evaluated on the test set; the accuracy and specificity were calculated from the confusion matrix, and the area under the receiver operating characteristic curve (AUC) was calculated from the SoftMax probability output by the last layer of the MP-CDN. RESULTS A total of 410 FLLs were used for training and 107 for testing. The mean classification accuracy on the test set was 81.3% (87/107). The accuracy/specificity of distinguishing each category from the others were 0.916/0.964, 0.925/0.905, 0.860/0.918, and 0.925/0.963 for HCC, metastases, benign non-inflammatory FLLs, and abscesses, respectively. The AUC (95% confidence interval) for differentiating each category from the others was 0.92 (0.837-0.992), 0.99 (0.967-1.00), 0.88 (0.795-0.955), and 0.96 (0.914-0.996), respectively. CONCLUSION The MP-CDN accurately classified FLLs detected on four-phase CT as HCC, metastases, benign non-inflammatory FLLs, and hepatic abscesses and may assist radiologists in identifying the different types of FLLs.
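The per-category accuracy and specificity quoted above come from treating each class one-vs.-rest in the confusion matrix. A small numpy helper shows the computation (the example matrix is synthetic, not the paper's data):

```python
import numpy as np

def one_vs_rest_metrics(cm):
    """Per-class sensitivity, specificity, and accuracy from a confusion
    matrix (rows = true class, columns = predicted class), computed
    one-vs.-rest for each class."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)                 # correctly predicted as this class
    fn = cm.sum(axis=1) - tp         # this class predicted as something else
    fp = cm.sum(axis=0) - tp         # other classes predicted as this class
    tn = total - tp - fn - fp        # everything else
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / total,
    }
```

The same decomposition underlies the ROC/AUC per category: the SoftMax probability for one class is scored against the binary "this class vs. the rest" label.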
Abstract: Corona Virus Disease 2019 (COVID-19) has affected millions of people worldwide and caused more than 6.3 million deaths (World Health Organization, June 2022). Increased attempts have been made to develop deep learning methods to diagnose COVID-19 based on computed tomography (CT) lung images. Reproducing and obtaining CT lung data is a challenge because it is not publicly available. This paper introduces a new generalized framework to segment and classify CT images and determine whether a patient tests positive or negative for COVID-19 based on lung CT images. In this work, many different strategies are explored for the classification task. ResNet50 and VGG16 models are applied to classify CT lung images as COVID-19 positive or negative. Also, VGG16 and ResNet50 combined with U-Net, one of the most used deep learning architectures for image segmentation, are employed to segment CT lung images before the classification process to increase system performance. Moreover, the image-size-dependent normalization technique (ISDNT) and the Wiener filter are utilized as preprocessing techniques for image enhancement and noise suppression. Additionally, transfer learning and data augmentation are performed to address the scarcity of COVID-19 CT lung images, so that over-fitting of the deep models can be avoided. The proposed frameworks, comprising end-to-end VGG16, ResNet50, and U-Net with VGG16 or ResNet50, are applied to a dataset of COVID-19 lung CT images sourced from Kaggle. The classification results show that using the preprocessed CT lung images as the input for U-Net hybridized with ResNet50 achieves the best performance. The proposed classification model achieves 98.98% accuracy (ACC), 98.87% area under the ROC curve (AUC), 98.89% sensitivity (Se), 97.99% precision (Pr), 97.88% F-score, and a computational time of 1.8974 seconds.
Funding: This work was partially supported by the NIH/NCI, No. CA206171.
Abstract: Tissue texture reflects the spatial distribution of contrasts of image voxel gray levels, i.e., the tissue heterogeneity, and has been recognized as an important biomarker in various clinical tasks. Spectral computed tomography (CT) is believed to be able to enrich tissue texture by providing different voxel-contrast images using different X-ray energies. Therefore, this paper aims to address two related issues for the clinical usage of spectral CT, especially photon counting CT (PCCT): (1) texture enhancement by spectral CT image reconstruction, and (2) spectral-energy-enriched tissue texture for improved lesion classification. For issue (1), we recently proposed a tissue-specific texture prior, in addition to a low-rank prior, for the individual energy-channel low-count image reconstruction problems in PCCT under Bayesian theory. Reconstruction results showed that the proposed method outperforms existing methods of total variation (TV), low-rank TV, and tensor dictionary learning in terms of not only preserving texture features but also suppressing image noise. For issue (2), this paper investigates three models to incorporate the texture enriched by PCCT, in accordance with three types of inputs: the spectral images themselves, the co-occurrence matrices (CMs) extracted from the spectral images, and the Haralick features (HF) extracted from the CMs. Studies were performed on simulated photon counting data obtained by introducing an attenuation-energy response curve to traditional CT images from energy-integrating detectors. Classification results showed that the spectral-CT-enriched texture model can improve the area under the receiver operating characteristic curve (AUC) score by 7.3%, 0.42%, and 3.0% for the spectral images, CMs, and HFs, respectively, on the five-energy spectral data over the original single-energy data. The CM and HF inputs achieve the best AUCs of 0.934 and 0.927. This texture-themed study shows the insight that incorporating clinically important prior information, e.g., the tissue texture in this paper, into medical imaging, such as upstream image reconstruction and downstream diagnosis, can benefit clinical tasks.
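The CM and HF inputs named above are standard texture constructs: a gray-level co-occurrence matrix counts how often gray-level pairs occur at a fixed offset, and Haralick features are scalar statistics of that matrix. A plain-numpy sketch for one offset and two classic features (the offset, quantization to 8 levels, and the chosen features are illustrative, not the paper's configuration):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to
    a probability table (a 'CM' input)."""
    q = np.minimum((img * levels / (img.max() + 1e-12)).astype(int),
                   levels - 1)                       # quantize gray levels
    cm = np.zeros((levels, levels))
    h, w = q.shape
    # Paired pixels: each position and its (dx, dy)-shifted neighbor.
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(cm, (src.ravel(), dst.ravel()), 1)
    return cm / cm.sum()

def haralick_contrast_energy(cm):
    """Two classic Haralick features (example 'HF' inputs) from a
    normalized co-occurrence matrix."""
    i, j = np.indices(cm.shape)
    contrast = ((i - j) ** 2 * cm).sum()   # large when neighbors differ
    energy = (cm ** 2).sum()               # large for uniform texture
    return contrast, energy
```

For spectral CT, one such matrix (and feature vector) is computed per energy channel, which is what "energy-enriched" texture refers to.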
Funding: Supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) dataset. In the binary classification, the accuracy for digits 0 and 4 exceeds 98%. We then compare the three-class performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of a quantum deep neural network and a general quantum convolutional neural network.
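The bilinear-interpolation resizing step that precedes quantum encoding is a classical operation and can be sketched in plain numpy (the quantum parts of the pipeline are outside the scope of this sketch):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear-interpolation resizing of a grayscale image, e.g. shrinking
    an MNIST digit to a size whose pixels fit the quantum register."""
    in_h, in_w = img.shape
    # Map each output pixel back to fractional input coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four surrounding input pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

For INEQR-style encodings the target size is typically a power of two per side, so that pixel positions map onto basis states of the position register.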
Funding: This work was partially supported by the National Natural Science Foundation of China (61876089).
Abstract: With the rapid development and popularization of new-generation technologies such as cloud computing, big data, and artificial intelligence, the construction of smart grids has become more diversified. Accurate, quick reading and classification of residential electricity consumption can provide a more in-depth perception of residents' actual power consumption, which is essential to ensure the normal operation of the power system and to support energy management and planning. Based on the distributed architecture of cloud computing, this paper designs an improved random forest method for residential electricity classification. It uses the random forest's unique out-of-bag error and combines it with the fruit fly (Drosophila) optimization algorithm to tune the internal parameters of the random forest, thereby improving the algorithm's performance. The method uses MapReduce to train the improved random forest model on a cloud computing platform, then uses the trained model to analyze a residential electricity consumption dataset, dividing all residents into five categories, and verifies the model's effectiveness and feasibility through experiments.
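The out-of-bag (OOB) error makes a convenient tuning signal because it needs no held-out set. A minimal scikit-learn sketch, where a plain candidate sweep stands in for the fruit fly optimizer and synthetic five-class data stands in for the electricity-consumption dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Five-class stand-in for the residential electricity-consumption data.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=0)

# Use the OOB error as the fitness when tuning forest parameters;
# the paper uses a fruit fly optimizer, a grid sweep is shown here.
best = None
for n_trees in (50, 100, 150):
    for max_feats in (2, 3, 4):
        rf = RandomForestClassifier(n_estimators=n_trees,
                                    max_features=max_feats,
                                    oob_score=True, bootstrap=True,
                                    random_state=0).fit(X, y)
        oob_error = 1.0 - rf.oob_score_
        if best is None or oob_error < best[0]:
            best = (oob_error, n_trees, max_feats, rf)

oob_error, n_trees, max_feats, model = best
```

In the cloud setting, each MapReduce worker would grow a subset of the trees and the OOB estimates would be aggregated at the reducer.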
Funding: Supported by the National Natural Science Foundation of China (60903005) and the National Basic Research Program of China (973 Program) (2012CB821206).
Abstract: Field computation, an emerging computation technique, has inspired enthusiasm in intelligence science research. A novel field computation model based on magnetic field theory is constructed. The proposed magnetic field computation (MFC) model consists of a field simulator, a non-derivative optimization algorithm and an auxiliary data processing unit. The mathematical model is deduced, and it is proved that the MFC model is equivalent to a quadratic discriminant function. Furthermore, the finite element prototype is derived and the simulator is developed, combined with a particle swarm optimizer for the field configuration. Two benchmark classification experiments are studied numerically, and one notable advantage is demonstrated: fewer training samples are required and better generalization can be achieved.
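The quadratic discriminant function that the MFC model is proved equivalent to has the familiar per-class Gaussian form g_k(x) = -1/2 (x-mu_k)^T Sigma_k^{-1} (x-mu_k) - 1/2 log|Sigma_k| + log pi_k. A minimal NumPy version on synthetic two-class data (function names and data are illustrative, not the MFC simulator itself):

```python
import numpy as np

def fit_qdf(X, y):
    """Fit per-class Gaussians; the resulting discriminant is quadratic in x."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        cov = np.cov(Xk.T) + 1e-6 * np.eye(X.shape[1])  # regularized
        params[k] = (mu, np.linalg.inv(cov),
                     np.log(np.linalg.det(cov)),
                     np.log(len(Xk) / len(X)))
    return params

def predict_qdf(params, X):
    """Assign each sample to the class with the largest discriminant."""
    scores = []
    for mu, icov, logdet, logprior in params.values():
        d = X - mu
        g = (-0.5 * np.einsum('ij,jk,ik->i', d, icov, d)
             - 0.5 * logdet + logprior)
        scores.append(g)
    keys = np.array(list(params))
    return keys[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
acc = (predict_qdf(fit_qdf(X, y), X) == y).mean()
```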
Funding: Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number RI-44-0446.
Abstract: Computational intelligence (CI) is a group of nature-simulated computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. Besides, unmanned aerial vehicles (UAVs) have developed into a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenge. In addition, deep learning (DL)-enabled image classification is useful for several applications such as land cover classification, smart buildings, etc. This paper proposes a novel metaheuristics with deep learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt the images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. The encryption technique effectively encrypts the UAV images. Next, the image classification process involves an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The experimental validation of the MDLS-UAVIC approach is tested using a benchmark dataset, and the outcomes are examined on various measures. It achieved a high accuracy of 98%.
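The abstract does not specify the encryption technique, so as a shape-of-the-pipeline illustration only, here is a toy keystream XOR cipher over image bytes. This is not cryptographically secure and is not the paper's method; it only shows the encrypt/decrypt symmetry the first stage relies on.

```python
import hashlib
import numpy as np

def xor_cipher(img, key):
    """Toy SHA-256-seeded keystream XOR over image bytes (illustrative,
    NOT secure and NOT the MDLS-UAVIC encryption technique)."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], 'big')
    rng = np.random.default_rng(seed)
    stream = rng.integers(0, 256, size=img.size, dtype=np.uint8)
    return (img.ravel() ^ stream).reshape(img.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # dummy UAV image patch
enc = xor_cipher(img, b'uav-key')
dec = xor_cipher(enc, b'uav-key')   # XOR with the same stream inverts it
```

In the full model, `enc` would be what is stored or transmitted, and the Xception/RNN classification stage would operate after authorized decryption.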
Abstract: A right-hand motor imagery based brain-computer interface is proposed in this work. Such a system requires the identification of different brain states and their classification. Brain signals recorded by electroencephalography (EEG) are naturally contaminated by various noises and interferences. Ocular artifact removal is performed by implementing an automatic method, "Kmeans-ICA", which does not require a reference channel. This method starts by decomposing EEG signals into independent components; artifactual ones are then identified using Kmeans clustering, a non-supervised machine learning technique. After signal preprocessing, a brain-computer interface system is implemented; physiologically interpretable features extracting the wavelet coherence, the wavelet phase-locking value and band power are computed and introduced into a statistical test to check for a significant difference between relaxed and motor imagery states. Features which pass the test are retained and used for classification. Leave-one-out cross-validation is performed to evaluate the performance of the classifier. Two types of classifiers are compared: a linear discriminant analysis (LDA) and a support vector machine (SVM). Using linear discriminant analysis, classification accuracy improved from 66% to 88.10% after ocular artifact removal using Kmeans-ICA. The proposed methodology outperformed state-of-the-art feature extraction methods, namely, the mu-rhythm band power.
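The Kmeans-ICA idea, unmix the channels, summarize each independent component, and cluster the summaries so artifactual components separate without a reference channel, can be sketched with scikit-learn on synthetic signals. The kurtosis statistic and the blink-like source below are illustrative choices, not necessarily the paper's exact criteria.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Synthetic 4-channel "EEG": three neural-like sources plus one
# high-amplitude, sparse blink-like source (hypothetical data).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
sources = np.vstack([np.sin(2 * np.pi * 10 * t),
                     np.sin(2 * np.pi * 20 * t + 1.0),
                     0.5 * rng.normal(size=t.size),
                     8.0 * (np.sin(2 * np.pi * 0.3 * t) > 0.95)])
eeg = rng.normal(size=(4, 4)) @ sources          # mixed channel signals

# Decompose into independent components, then cluster a per-component
# statistic (kurtosis) so heavy-tailed, blink-like ICs split off.
ics = FastICA(n_components=4, random_state=0).fit_transform(eeg.T).T
kurt = ((ics - ics.mean(1, keepdims=True)) ** 4).mean(1) / ics.var(1) ** 2
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    kurt.reshape(-1, 1))
artifact_cluster = labels[np.argmax(kurt)]       # blink ICs are heavy-tailed
```

Components in the artifact cluster would be zeroed before re-mixing, yielding the cleaned EEG used for feature extraction.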
Abstract: Architectural distortion is an important ultrasonographic indicator of breast cancer. However, it is difficult for clinicians to determine whether a given lesion is malignant because such distortions can be subtle in ultrasonographic images. In this paper, we report on a study to develop a computerized scheme for the histological classification of masses with architectural distortions as a differential diagnosis aid. Our database consisted of 72 ultrasonographic images obtained from 47 patients whose masses had architectural distortions. This included 51 malignant (35 invasive and 16 non-invasive carcinomas) and 21 benign masses. In the proposed method, the location of the masses and the area occupied by them were first determined by an experienced clinician. Fourteen objective features concerning masses with architectural distortions were then extracted automatically, taking into account subjective features commonly used by experienced clinicians to describe such masses. The k-nearest neighbors (k-NN) rule was finally used to distinguish three histological classifications. The proposed method yielded classification accuracies of 91.4% (32/35) for invasive carcinoma, 75.0% (12/16) for non-invasive carcinoma, and 85.7% (18/21) for benign masses. The sensitivity and specificity were 92.2% (47/51) and 85.7% (18/21), respectively. The positive predictive values (PPV) were 88.9% (32/36) for invasive carcinoma and 85.7% (12/14) for non-invasive carcinoma, whereas the negative predictive value (NPV) was 81.8% (18/22) for benign masses. Thus, the proposed method can help the differential diagnosis of masses with architectural distortions in ultrasonographic images.
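The reported rates are mutually consistent; reconstructing the binary-level (malignant vs. benign) confusion counts from the abstract's own fractions:

```python
# Confusion counts taken directly from the abstract's fractions.
tp, fn = 47, 51 - 47   # malignant masses detected / missed
tn, fp = 18, 21 - 18   # benign masses kept / miscalled

sensitivity = tp / (tp + fn)   # 47/51
specificity = tn / (tn + fp)   # 18/21
ppv_invasive = 32 / 36         # correct among invasive-carcinoma calls
npv_benign = 18 / 22           # correct among benign calls
```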
Abstract: Development of computational agent organizations or "societies" has become the dominant computing paradigm in the arena of distributed artificial intelligence, and many foreseeable future applications need agent organizations in which diversified agents cooperate in a distributed manner, forming teams. In such scenarios, the agents need to know each other in order to facilitate interactions. Moreover, agents in such an environment are not statically defined in advance; they can adaptively enter and leave an organization. This raises the question of how agents locate each other in order to cooperate in achieving organizational goals. Locating agents is a quite challenging task, especially in organizations that involve a large number of agents and where resource availability is intermittent. The authors explore here an approach based on the self-organizing map (SOM), which serves as a clustering method in light of the knowledge gathered about various agents. The approach begins by categorizing agents using a selected set of agent properties. These categories are used to derive various ranks and a distance matrix. The SOM algorithm uses this matrix as input to obtain clusters of agents. These clusters reduce the search space, resulting in a relatively short agent search time.
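The clustering step can be sketched with a tiny NumPy SOM: each grid node holds a weight vector, and an agent's cluster is its best-matching node. The agent-property vectors, grid size and decay schedules below are illustrative, and the paper's rank/distance-matrix derivation is replaced here with raw property vectors.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map trained by online updates."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(r, c) for r in range(h) for c in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((nodes - x) ** 2).sum(1))   # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                        # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5            # shrinking neighborhood
        dist2 = ((coords - coords[bmu]) ** 2).sum(1)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))      # Gaussian neighborhood
        nodes += lr * nbh[:, None] * (x - nodes)
    return nodes

# Hypothetical agent-property vectors: two well-separated agent groups.
rng = np.random.default_rng(1)
agents = np.vstack([rng.normal(0, 0.3, (20, 3)),
                    rng.normal(3, 0.3, (20, 3))])
nodes = train_som(agents)
clusters = np.argmin(((agents[:, None] - nodes[None]) ** 2).sum(2), axis=1)
```

An agent lookup then only searches within the matching node's cluster rather than the whole organization, which is where the search-time reduction comes from.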