Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, the Synthetic Minority Over-sampling Technique (SMOTE) is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% for UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
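Below is a minimal sketch of the preprocessing, balancing, and stacking stages this abstract describes, assuming scikit-learn, imbalanced-learn, and xgboost; the data and the HHO feature-selection step are stand-ins (a real HHO run would return a reduced index set chosen by its fitness function).

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, QuantileTransformer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

X = np.random.rand(1000, 40)                 # stand-in features
y = np.random.randint(0, 5, 1000)            # stand-in multi-class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# RobustScaler tames outliers; QuantileTransformer reshapes skewed features.
scaler = RobustScaler().fit(X_train)
qt = QuantileTransformer(n_quantiles=100, output_distribution="normal",
                         random_state=42)
X_train = qt.fit_transform(scaler.transform(X_train))
X_test = qt.transform(scaler.transform(X_test))

# Placeholder for HHO feature selection: keep everything here.
selected = np.arange(X_train.shape[1])

# SMOTE is applied to the training split only, never to the test data.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train[:, selected], y_train)

# Stacked ensemble: XGBoost, SVM, and RF base learners, a logistic meta-learner.
stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(eval_metric="mlogloss")),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_bal, y_bal)
print(classification_report(y_test, stack.predict(X_test[:, selected])))
```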
AIM: To further improve the endoscopic detection of intestinal mucosa alterations due to celiac disease (CD). METHODS: We assessed a hybrid approach based on the integration of expert knowledge into the computer-based classification pipeline. A total of 2835 endoscopic images from the duodenum were recorded in 290 children using the modified immersion technique (MIT). These children underwent routine upper endoscopy for suspected CD or non-celiac upper abdominal symptoms between August 2008 and December 2014. Blinded to the clinical data and biopsy results, three medical experts visually classified each image as normal mucosa (Marsh-0) or villous atrophy (Marsh-3). The experts' decisions were further integrated into state-of-the-art texture recognition systems. Using the biopsy results as the reference standard, the classification accuracies of this hybrid approach were compared to the experts' diagnoses in 27 different settings. RESULTS: Compared to the experts' diagnoses, in 24 of 27 classification settings (consisting of three imaging modalities, three endoscopists, and three classification approaches), the best overall classification accuracies were obtained with the new hybrid approach. In 17 of 24 classification settings, the improvements achieved with the hybrid approach were statistically significant (P < 0.05). Using the hybrid approach, classification accuracies between 94% and 100% were obtained. Whereas the improvements are only moderate in the case of the most experienced expert, the results of the less experienced expert could be improved significantly in 17 out of 18 classification settings. Furthermore, the lowest classification accuracy, based on the combination of one database and one specific expert, could be improved from 80% to 95% (P < 0.001). CONCLUSION: The overall classification performance of medical experts, especially less experienced experts, can be boosted significantly by integrating expert knowledge into computer-aided diagnosis systems.
Early detection of lung cancer can help improve the survival rate of patients. Biomedical imaging tools such as computed tomography (CT) images are utilized for the proper identification and positioning of lung cancer. Recently developed deep learning (DL) models can be employed for the effective identification and classification of diseases. This article introduces a novel deep-learning-enabled computer-aided diagnosis (CAD) technique for lung cancer using biomedical CT images, named the DLCADLC-BCT technique. The proposed DLCADLC-BCT technique aims to detect and classify lung cancer using CT images. It initially uses the gray level co-occurrence matrix (GLCM) model for feature extraction. A long short-term memory (LSTM) model is then applied to classify the existence of lung cancer in the CT images. Moreover, the moth swarm optimization (MSO) algorithm is employed to optimally choose the hyperparameters of the LSTM model, such as learning rate, batch size, and epoch count. To demonstrate the improved classifier results of the DLCADLC-BCT approach, a set of simulations was executed on a benchmark dataset, and the outcomes exhibited the superiority of the DLCADLC-BCT technique over recent approaches.
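As a sketch of the GLCM feature-extraction step named above, the following uses scikit-image's graycomatrix/graycoprops; the LSTM classifier and MSO hyperparameter search are out of scope here, and the CT slice is simulated.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

ct_slice = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in CT slice

# Co-occurrence matrices at several pixel offsets and angles, then
# scalar texture statistics per (distance, angle) pair.
glcm = graycomatrix(ct_slice,
                    distances=[1, 2, 4],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity",
                                   "energy", "correlation")])
print(features.shape)  # 4 props x 3 distances x 4 angles = 48 features
```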
BACKGROUND: It was shown in previous studies that high-definition endoscopy, high-magnification endoscopy, and image enhancement technologies, such as chromoendoscopy and digital chromoendoscopy [narrow-band imaging (NBI), iScan], facilitate the detection and classification of colonic polyps during endoscopic sessions. However, there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps. In this work, we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging. AIM: To assess which endoscopic imaging modalities are best suited for the computer-assisted staging of colonic polyps. METHODS: In our experiments, we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions. For this purpose, we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database. The image databases were obtained using different imaging modalities. Two databases were obtained by high-definition endoscopy in combination with i-Scan technology (one with chromoendoscopy and one without chromoendoscopy). Three databases were obtained by high-magnification endoscopy (two using narrow-band imaging and one using chromoendoscopy). The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis. RESULTS: Generally, it is feature-dependent which imaging modalities achieve high results and which do not. For the high-definition image databases, we achieved overall classification rates of up to 79.2% with chromoendoscopy and 88.9% without chromoendoscopy. In the case of the database obtained by high-magnification chromoendoscopy, the classification rates were up to 81.4%. For the combination of high-magnification endoscopy with NBI, results of up to 97.4% for one database and up to 84% for the other were achieved. Non-neoplastic lesions were in general classified more accurately than neoplastic lesions. It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and partly contribute a stronger effect on the staging results than the imaging modality used. CONCLUSION: Chromoendoscopy has a negative impact on the results of the methods. NBI is better suited than chromoendoscopy. High-definition and high-magnification endoscopy are equally suited.
Limbal Stem Cell Deficiency (LSCD) is an eye disease that can cause corneal opacity and vascularization. In its advanced stage, it can lead to a degree of visual impairment. It involves a change in the semispherical shape of the cornea to a drooping, downward shape. LSCD is hard to diagnose at early stages. The color and texture of the cornea surface can provide significant information about a cornea affected by LSCD. Parameters such as shape and texture are crucial to differentiate a normal cornea from an LSCD cornea. Although several medical approaches exist, most of them require complicated procedures and medical devices. Therefore, in this paper, we pursued the development of an LSCD detection technique (LDT) utilizing image processing methods. Early diagnosis of LSCD is crucial for physicians to arrange effective treatment. In the proposed technique, we developed a method for LSCD detection utilizing frontal eye images. A dataset of 280 frontal and lateral eye images of LSCD and normal patients was used in this research. First, the cornea region of both frontal and lateral images is segmented, and the geometric features are extracted through an automated active contour model and a spline curve, while the texture features are extracted using a feature selection algorithm. The experimental results show that the combined geometric and texture features achieve an accuracy of 95.95%, a sensitivity of 97.91%, and a specificity of 94.05% with a random forest classifier of n = 40. As a result, this research developed a Limbal Stem Cell Deficiency detection system utilizing feature fusion and image processing techniques for frontal and lateral digital images of the eyes.
Proactive Semantic Interference (PSI) and failure to recover from PSI (frPSI) are novel constructs assessed by the LASSI-L. These measures are sensitive to cognitive changes in early Mild Cognitive Impairment (MCI) and preclinical AD determined by Aβ load using PET. The goal of this study was to compare a new computerized version of the LASSI-L (LASSI-Brief Computerized) to the standard paper-and-pencil version of the test. In this study, we examined 110 cognitively unimpaired (CU) older adults and 79 with amnestic MCI (aMCI) who were administered the paper-and-pencil form of the LASSI-L. Their performance was compared with 62 CU older adults and 52 aMCI participants examined using the LASSI-BC. After adjustment for covariates (degree of initial learning, sex, education, and language of evaluation), both the standard and computerized versions distinguished between aMCI and CU participants. The performance of CU and aMCI groups using either form was relatively commensurate. Importantly, an optimal combination of Cued B2 recall and Cued B1 intrusions on the LASSI-BC yielded an area under the ROC curve of 0.927, a sensitivity of 92.3%, and a specificity of 88.1%, relative to an area under the ROC curve of 0.815, a sensitivity of 72.5%, and a specificity of 79.1% obtained for the paper-and-pencil LASSI-L. Overall, the LASSI-BC was comparable, and in some ways superior, to the paper-and-pencil LASSI-L. Advantages of the LASSI-BC include a more standardized administration, suitability for remote assessment, and an automated scoring mechanism that can be verified by a built-in audio recording of responses.
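For illustration, a hedged sketch of how two test measures can be combined into a single discriminator and scored by ROC area, sensitivity, and specificity, as the abstract reports for Cued B2 recall and Cued B1 intrusions; all data here are simulated, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = np.r_[np.zeros(62), np.ones(52)]                    # 0 = CU, 1 = aMCI
b2_recall = np.r_[rng.normal(12, 2, 62), rng.normal(8, 2, 52)]
b1_intrusions = np.r_[rng.normal(2, 1, 62), rng.normal(5, 2, 52)]
X = np.c_[b2_recall, b1_intrusions]

# Logistic combination of the two measures, scored on the ROC curve.
clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, scores))

# Sensitivity/specificity at the Youden-optimal threshold.
fpr, tpr, thr = roc_curve(y, scores)
best = np.argmax(tpr - fpr)
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```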
Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application, vital to the planning and operation of energy systems and to enabling citizens' participation in the energy transition. This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons. Smart meter data are split between daily and hourly normalized time series to assess monthly, weekly, daily, and hourly seasonality patterns separately. The dimensionality reduction implicit in the splitting allows a direct approach to clustering raw daily energy time series data. The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and the entire population. For the first time, a step function approach is applied to reduce time series dimensionality. Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramer's V correlation factors and to identify statistically significant determinants of load shape in energy usage. In addition, a time series feature engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and corresponding clusters along four different axes: available Energy (E), Temporal patterns (T), Consistency (C), and Variability (V). The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise (SME) customers, identifying 4 daily and 6 hourly easy-to-interpret, well-defined clusters. The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results. Further research can test the scalability of the methodology to larger datasets from various customer segments (households and large commercial) and locations with different weather and socioeconomic conditions.
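A minimal sketch of the daily load-shape step described above: per-day normalization so clusters capture shape rather than magnitude, k-means grouping, and a crude step-function reduction; the SME data are simulated and the survey-based validation is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
daily = rng.random((325, 24))          # stand-in: 325 SMEs x 24 hourly reads

# Normalize each day by its total energy so only the shape remains.
shapes = daily / daily.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(shapes)
print(np.bincount(km.labels_))         # customers per daily-shape cluster

# A crude step-function reduction: average consecutive hours into
# 6 four-hour blocks, cutting dimensionality from 24 to 6.
blocks = shapes.reshape(325, 6, 4).mean(axis=2)
print(blocks.shape)
```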
This paper introduces a robust Distributed Denial-of-Service (DDoS) attack detection framework tailored for Software-Defined Networking (SDN) based Internet of Things (IoT) environments, built upon a novel, synthetic multi-vector dataset generated in a Mininet-Ryu testbed using real-time flow-based labeling. The proposed model is based on the XGBoost algorithm, optimized with Principal Component Analysis (PCA) for dimensionality reduction, and classifies attacks across critical IoT protocols including TCP, UDP, HTTP, MQTT, and CoAP. The model employs lightweight flow-level features extracted from OpenFlow statistics to ensure low computational overhead and fast processing. Performance was rigorously evaluated using key metrics, including Accuracy, Precision, Recall, F1-Score, False Alarm Rate (FAR), AUC-ROC, and Detection Time. Experimental results demonstrate the model's high performance, achieving an accuracy of 98.93% and a low FAR of 0.86%, with a rapid median detection time of 1.02 s. This efficiency validates its suitability for meeting critical Key Performance Indicators, such as latency and high throughput, necessary for time-sensitive SDN-IoT systems. Furthermore, the model's robustness and statistically significant outperformance of baseline models such as Random Forest, k-Nearest Neighbors, and Gradient Boosting Machine were validated through the Wilcoxon signed-rank test and confirmed via successful deployment in a real SDN testbed for live traffic detection and mitigation.
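A hedged sketch of the detector's core as described, PCA feeding XGBoost over flow-level features; the feature matrix is a stand-in, not the Mininet-Ryu dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

X = np.random.rand(5000, 20)            # stand-in OpenFlow flow statistics
y = np.random.randint(0, 2, 5000)       # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),        # PCA assumes comparable scales
    ("pca", PCA(n_components=0.95)),    # keep 95% of the variance
    ("xgb", XGBClassifier(n_estimators=200, max_depth=6,
                          eval_metric="logloss")),
])
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```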
Global security threats have motivated organizations to adopt robust and reliable security systems to ensure the safety of individuals and assets. Biometric authentication systems offer a strong solution. However, choosing the best security system requires a structured decision-making framework, especially in complex scenarios involving multiple criteria. To address this problem, we develop a novel quantum spherical fuzzy technique for order preference by similarity to ideal solution (QSF-TOPSIS) methodology, integrating quantum mechanics principles and fuzzy theory. The proposed approach enhances decision-making accuracy, handles uncertainty, and incorporates criteria relationships. Criteria weights are determined using spherical fuzzy sets, and alternatives are ranked through the QSF-TOPSIS framework. This comprehensive multi-criteria decision-making (MCDM) approach is applied to identify the optimal gate security system for an organization, considering critical factors such as accuracy, cost, and reliability. Additionally, the study compares the proposed approach with other established MCDM methods. The results confirm the alignment of rankings across these methods, demonstrating the robustness and reliability of the QSF-TOPSIS framework. The study identifies the infrared recognition and identification system (IRIS), with a score value of 0.5280, as the most effective and optimal security system among the evaluated alternatives. This research contributes to the growing literature on quantum-enhanced decision-making models and offers a practical framework for solving complex, real-world problems involving uncertainty and ambiguity.
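For orientation, a sketch of classical (crisp) TOPSIS, whose rank-by-closeness idea QSF-TOPSIS extends with quantum spherical fuzzy arithmetic; the alternatives, criteria, and weights below are invented.

```python
import numpy as np

# Rows: candidate security systems; columns: accuracy, cost, reliability.
M = np.array([[0.90, 50.0, 0.85],
              [0.95, 80.0, 0.90],
              [0.85, 30.0, 0.80]])
weights = np.array([0.5, 0.2, 0.3])
benefit = np.array([True, False, True])  # cost is "smaller is better"

# 1) vector-normalize, 2) weight, 3) find ideal / anti-ideal points.
R = M / np.linalg.norm(M, axis=0)
V = R * weights
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4) closeness coefficient: distance to anti-ideal over total distance.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("ranking (best first):", np.argsort(-closeness))
```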
The rapid increase in the number of Internet of Things (IoT) devices, coupled with a rise in sophisticated cyberattacks, demands robust intrusion detection systems. This study presents a holistic, intelligent intrusion detection system. It uses a combined method that integrates machine learning (ML) and deep learning (DL) techniques to improve the protection of contemporary information technology (IT) systems. Unlike traditional signature-based or single-model methods, this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification. This combination provides a more nuanced and adaptable defense. The research utilizes the NF-UQ-NIDS-v2 dataset, a recent, comprehensive benchmark for evaluating network intrusion detection systems (NIDS). Our methodological framework employs advanced artificial intelligence techniques. Specifically, we use ensemble learning algorithms (Random Forest, Gradient Boosting, AdaBoost, and XGBoost) for binary classification. Deep learning architectures are also employed to address the complexities of multi-class classification, allowing for fine-grained identification of intrusion types. To mitigate class imbalance, a common problem in multi-class intrusion detection that biases model performance, we use oversampling and data augmentation. These techniques ensure equitable class representation. The results demonstrate the efficacy of the proposed hybrid ML-DL system. It achieves significant improvements in intrusion detection accuracy and reliability. This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure-communication issues, privacy concerns, and the presence of malicious nodes. Existing machine learning and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND) model. It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
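A conceptual sketch of the federated averaging step at the heart of such a design: clusters train locally and only weights are shared, so raw patient records never leave the cluster. The blockchain layer and PhysioNet pipeline are out of scope, and the local model here is a plain logistic regression, an assumption for brevity.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One cluster's logistic-regression training on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return w

rng = np.random.default_rng(2)
clusters = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200))
            for _ in range(5)]                  # 5 hospitals' private shards

global_w = np.zeros(8)
for round_ in range(10):
    # Each round: broadcast global weights, train locally, average updates.
    updates = [local_update(global_w, X, y) for X, y in clusters]
    global_w = np.mean(updates, axis=0)
print(global_w)
```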
In today's digital world, the Internet of Things (IoT) plays an important role in both local and global economies due to its widespread adoption in different applications. This technology has the potential to offer several advantages over conventional technologies in the near future. However, the potential growth of this technology also attracts attention from hackers, which introduces new challenges for the research community that range from hardware and software security to user privacy and authentication. Therefore, we focus on a particular security concern associated with malware detection. The literature presents many countermeasures, but inconsistent results on identical datasets and algorithms raise concerns about model biases, training quality, and complexity. This highlights the need for an adaptive, real-time learning framework that can effectively mitigate malware threats in IoT applications. To address these challenges, (i) we propose an intelligent framework based on Two-step Deep Reinforcement Learning (TwStDRL) that is capable of learning and adapting in real time to counter malware threats in IoT applications. This framework uses exploration and exploitation during both the training and testing phases by storing results in a replay memory. The stored knowledge allows the model to effectively navigate the environment and maximize cumulative rewards. (ii) To demonstrate the superiority of the TwStDRL framework, we implement and evaluate several machine learning algorithms for comparative analysis, including Support Vector Machines (SVM), Multi-Layer Perceptron, Random Forests, and k-means Clustering. The selection of these algorithms is driven by the inconsistent results reported in the literature, which create doubt about their robustness and reliability in real-world IoT deployments. (iii) Finally, we provide a comprehensive evaluation to justify why the TwStDRL framework outperforms them in mitigating security threats. During the analysis, we noted that our proposed TwStDRL scheme achieves an average performance of 99.45% across accuracy, precision, recall, and F1-score, an absolute improvement of roughly 3% over existing malware-detection models.
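A small sketch of the replay-memory mechanism the TwStDRL framework relies on: transitions are stored and replayed in random mini-batches so the agent can reuse past experience. The agent, reward design, and malware environment are placeholders.

```python
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions drop off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        return list(zip(*batch))               # columns: states, actions, ...

memory = ReplayMemory()
# Hypothetical interaction loop: environment and agent are stand-ins.
for step in range(100):
    s, a, r, s2, done = [0.0] * 5, 0, 1.0, [0.1] * 5, False
    memory.push(s, a, r, s2, done)
if len(memory.buffer) >= 32:
    states, actions, rewards, next_states, dones = memory.sample(32)
```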
Environmental transition can potentially influence cardiovascular health. Investigating the relationship between such transitions and heart disease has important applications. This study uses federated learning (FL) in this context and investigates the link between climate change and heart disease. The dataset, containing environmental, meteorological, and health-related factors such as blood sugar, cholesterol, maximum heart rate, and fasting ECG, is used with machine learning models to identify hidden patterns and relationships. Algorithms such as federated learning, XGBoost, random forest, support vector classifier, extra tree classifier, k-nearest neighbor, and logistic regression are used. A framework for diagnosing heart disease is designed using FL along with the other models. Experiments involve discriminating healthy subjects from heart patients and achieve an accuracy of 94.03%. The proposed FL-based framework proves superior to existing techniques in terms of usability, dependability, and accuracy. This study paves the way for screening people for early heart disease detection and continuous monitoring in telemedicine and remote care. Personalized treatment can also be planned with customized therapies.
Soilcrete is a composite material of soil and cement that is highly valued in the construction industry. Accurate measurement of its mechanical properties is essential, but laboratory testing methods are expensive, time-consuming, and prone to inaccuracies. Machine learning (ML) algorithms provide a more efficient alternative for this purpose. Accordingly, after assessment with a statistical extraction method, ML algorithms including back-propagation neural network (BPNN), K-nearest neighbor (KNN), radial basis function (RBF), feed-forward neural network (FFNN), and support vector regression (SVR) were proposed in this study for predicting the uniaxial compressive strength (UCS) of soilcrete. The developed models were optimized using the gradient descent (GD) optimization technique throughout the analysis (direct optimization for the neural networks and indirect optimization, via their hyperparameters, for the other models). After laboratory analysis, data pre-processing, and data-processing analysis, a database of 600 soilcrete specimens was gathered, covering two different soil types (clay and limestone) and metakaolin as a mineral additive. 80% of the database was used for the training set and 20% for testing, considering eight input parameters: metakaolin content, soil type, superplasticizer content, water-to-binder ratio, shrinkage, binder, density, and ultrasonic velocity. The analysis showed that most algorithms performed well in the prediction, with BPNN, KNN, and RBF achieving higher accuracy than the others (R² = 0.95, 0.95, and 0.92, respectively). Based on this evaluation, all models show an acceptable accuracy rate in prediction (RMSE: BPNN = 0.11, FFNN = 0.24, KNN = 0.05, SVR = 0.06, RBF = 0.05; MAD: BPNN = 0.006, FFNN = 0.012, KNN = 0.008, SVR = 0.006, RBF = 0.009). The ML importance-ranking sensitivity analysis indicated that all input parameters influence the UCS of soilcrete, especially the water-to-binder ratio and density, which have the most impact.
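A hedged sketch of the regression comparison: KNN and SVR predicting a strength target and scored with R², RMSE, and mean absolute deviation, mirroring the reported metrics; the data are simulated, not the soilcrete database.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(3)
X = rng.random((600, 8))                            # 8 mix-design inputs
y = X @ rng.random(8) + rng.normal(0, 0.05, 600)    # stand-in UCS target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)

for name, model in [("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("SVR", SVR(C=10.0, epsilon=0.01))]:
    pipe = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, "R2:", r2_score(y_te, pred),
          "RMSE:", rmse, "MAD:", mean_absolute_error(y_te, pred))
```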
Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly. Providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images. Therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates a new class of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures increases through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is around 4 to 10 times faster than some well-known generative networks. In addition, the quality of the generated textures surpasses that of these networks. The proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset is created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150 × 150 pixels, organized into 600 classes. It has been uploaded to Kaggle and Google Drive. This dataset is called BigTex. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
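A hedged sketch in the spirit of the gradient-based generation described: starting from noise and ascending the gradient of one pretrained-CNN channel's mean activation. The layer and channel picks are arbitrary, and this is not the paper's exact procedure.

```python
import torch
import torchvision.models as models

# Frozen pretrained feature extractor; its kernels drive the synthesis.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

img = torch.rand(1, 3, 150, 150, requires_grad=True)   # noise seed, 150x150
opt = torch.optim.Adam([img], lr=0.05)
layer, channel = 10, 42                                 # hypothetical picks

for step in range(200):
    x = img
    for i, m in enumerate(vgg):
        x = m(x)
        if i == layer:
            break
    loss = -x[0, channel].mean()     # maximize the chosen channel's response
    opt.zero_grad()
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)            # keep pixels in a displayable range

texture = img.detach().squeeze(0)    # one synthetic 3x150x150 texture
```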
Image processing plays a vital role in various fields such as autonomous systems, healthcare, and cataloging, especially when integrated with deep learning (DL). It is crucial in medical diagnostics, including the early detection of diseases like chronic obstructive pulmonary disease (COPD), which claimed 3.2 million lives in 2015. COPD, a life-threatening condition often caused by prolonged exposure to lung irritants and smoking, progresses through stages. Early diagnosis through image processing can significantly improve survival rates. COPD encompasses chronic bronchitis (CB) and emphysema; CB is particularly prevalent among smokers and generally affects individuals between 50 and 70 years old. It damages the lungs' air sacs, reducing oxygen transport and causing symptoms like coughing and shortness of breath. Treatments such as beta-agonists and inhaled steroids are used to manage symptoms and prolong lung function. Moreover, COVID-19 poses an additional risk to individuals with CB due to its impact on the respiratory system. The proposed system utilizes convolutional neural networks (CNNs) to diagnose CB. In this system, the CNN extracts essential and significant features from X-ray modalities, which are then fed into the neural network. The network undergoes training to recognize patterns and make accurate predictions based on the learned features. By leveraging DL techniques, the system aims to enhance the precision and reliability of CB detection. Our research specifically focuses on a subset of 189 lung disease images, carefully selected for model evaluation. To further refine the training process, various data augmentation and noise removal techniques are implemented. These techniques significantly enhance the quality of the training data, improving the model's robustness and generalizability. As a result, the diagnostic accuracy has improved from 98.6% to 99.2%. This advancement not only validates the efficacy of our proposed model but also represents a significant improvement over the existing literature. It highlights the potential of CNN-based approaches in transforming medical diagnostics through refined image analysis, learning capabilities, and automated feature extraction.
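A minimal sketch of a CNN of the kind described for X-ray-based CB detection, with flip/rotation augmentation; the architecture, input size, and two-class setup are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentation of the kind described; applied per image inside a Dataset.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# A small CNN: two conv/pool stages, then a classifier head (CB vs. normal).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

x = torch.rand(4, 1, 224, 224)               # stand-in grayscale X-ray batch
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()                               # one illustrative training step
```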
Human Activity Recognition (HAR) in drone-captured videos has become popular because of interest in various fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions from such videos poses the following challenges: variations in human motion, complex backdrops, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges by working with drones' red-green-blue (RGB) videos. The first step in the proposed system involves partitioning videos into frames and then using bilateral filtering to improve the quality of object foregrounds while reducing background interference, before converting from RGB to grayscale images. The YOLO (You Only Look Once) algorithm detects and extracts humans from each frame, obtaining their skeletons for further processing. Extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and fed into a Neuro-Fuzzy Classifier (NFC) for activity classification. Real-world evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets substantiate the proposed system's superiority in accuracy over existing methods. In particular, the system obtains recognition rates of 93% for Drone-Action, 97% for UAV-Gesture, and 81% for Okutama-Action, demonstrating its reliability and ability to learn human activity from drone videos.
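A small sketch of two of the handcrafted skeleton features mentioned, joint angle and velocity, computed from 2-D keypoints as would be obtained from the detected skeletons; the keypoints and frame rate here are made up.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = (120, 80), (140, 130), (180, 150)
print(joint_angle(shoulder, elbow, wrist))   # elbow flexion angle

# Displacement/velocity features follow the same pattern across frames.
def velocity(p_prev, p_curr, dt=1 / 30):     # assuming 30-fps video
    return np.linalg.norm(np.asarray(p_curr) - np.asarray(p_prev)) / dt
```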
Challenges in land use and land cover (LULC) include rapid urbanization encroaching on agricultural land, leading to fragmentation and loss of natural habitats. However, the effects of urbanization on the LULC of different crop types have received less attention. This study assessed the impacts of LULC changes on agriculture and drought vulnerability in the Aguascalientes region, Mexico, from 1994 to 2024, and predicted the LULC in 2034 using remote sensing data, with the goal of informing sustainable land management and climate resilience strategies. Despite increasing urbanization and drought, the integration of satellite imagery and machine learning models in LULC analysis has been underutilized in this region. Using Landsat imagery, we assessed crop attributes through indices such as the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), normalized difference moisture index (NDMI), and vegetation condition index (VCI), alongside watershed delineation and spectral features. A random forest model was applied to classify LULC, providing insights into both historical and future trends. Results indicated a significant decline in vegetation cover (109.13 km²) from 1994 to 2024, accompanied by an increase in built-up land (75.11 km²) and bare land (67.13 km²). Projections suggest a further decline in vegetation cover (41.51 km²) and continued urban expansion by 2034. The study found that paddy crops exhibited the highest index values, while common bean and maize performed poorly. Drought analysis revealed that areas classified as mildly dry in 2004 had become severely dry by 2024, highlighting the increasing vulnerability of agriculture to climate change. The study concludes that sustainable land management, improved water resource practices, and advanced monitoring techniques are essential to mitigate the adverse effects of LULC changes on agricultural productivity and drought resilience in the area. These findings contribute to the understanding of how remote sensing can be effectively used for long-term agricultural planning and environmental sustainability.
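A sketch of the spectral indices listed, computed per pixel from Landsat reflectance bands supplied as NumPy arrays; band loading and the random forest classification step are omitted.

```python
import numpy as np

def safe_ratio(num, den):
    """Element-wise num/den that returns 0 where den is 0."""
    out = np.zeros_like(num, dtype=float)
    np.divide(num, den, out=out, where=den != 0)
    return out

def indices(green, red, nir, swir1):
    ndvi = safe_ratio(nir - red, nir + red)        # vegetation vigor
    ndwi = safe_ratio(green - nir, green + nir)    # open water (McFeeters)
    ndmi = safe_ratio(nir - swir1, nir + swir1)    # canopy moisture
    return ndvi, ndwi, ndmi

def vci(ndvi_stack):
    """Vegetation condition index: current NDVI rescaled by its
    multi-year per-pixel minimum and maximum (0-100)."""
    lo, hi = ndvi_stack.min(axis=0), ndvi_stack.max(axis=0)
    return 100 * safe_ratio(ndvi_stack[-1] - lo, hi - lo)

# Stand-in reflectance rasters; real use would read Landsat bands instead.
green, red, nir, swir1 = (np.random.rand(100, 100) for _ in range(4))
ndvi, ndwi, ndmi = indices(green, red, nir, swir1)
print(ndvi.mean(), ndwi.mean(), ndmi.mean())
```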
Remote sensing plays a pivotal role in environmental monitoring, disaster relief, and urban planning, where accurate scene classification of aerial images is essential. However, conventional convolutional neural networks (CNNs) struggle with long-range dependencies and preserving high-resolution features, limiting their effectiveness in complex aerial image analysis. To address these challenges, we propose a Hybrid HRNet-Swin Transformer model that synergizes the strengths of HRNet-W48 for high-resolution segmentation and the Swin Transformer for global feature extraction. This hybrid architecture ensures robust multi-scale feature fusion, capturing fine-grained details and broader contextual relationships in aerial imagery. Our methodology begins with preprocessing steps, including normalization, histogram equalization, and noise reduction, to enhance input data quality. The HRNet-W48 backbone maintains high-resolution feature maps throughout the network, enabling precise segmentation, while the Swin Transformer leverages hierarchical self-attention to model long-range dependencies efficiently. By integrating these components, our model achieves superior performance in segmentation and classification tasks compared to traditional CNNs and standalone transformer models. We evaluate our approach on two benchmark datasets: UC Merced and WHU-RS19. Experimental results demonstrate that the proposed hybrid model outperforms existing methods, achieving state-of-the-art accuracy while maintaining computational efficiency. Specifically, it excels in preserving fine spatial details and contextual understanding, critical for applications like land-use classification and disaster assessment.
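For intuition only, a toy sketch of the two-branch fusion idea: pooled features from a local (high-resolution) branch and a global-context branch are concatenated before classification. Both branches are tiny stand-ins, not HRNet-W48 or a Swin Transformer; the 19-class head loosely mirrors WHU-RS19.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Concatenate pooled features from a fine-detail branch and a
    coarser global branch, then classify the fused representation."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.local = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.glob = nn.Sequential(nn.Conv2d(3, 32, 7, stride=4), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(torch.cat([self.local(x), self.glob(x)], dim=1))

logits = TwoBranchFusion()(torch.rand(2, 3, 224, 224))
print(logits.shape)  # (2, 19)
```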
Inertial Sensor-based Daily Activity Recognition (IS-DAR) requires adaptable, data-efficient methods for effective multi-sensor use. This study presents an advanced detection system using body-worn sensors to accurately recognize activities. A structured pipeline enhances IS-DAR by applying signal preprocessing, feature extraction and optimization, followed by classification. Before segmentation, a Chebyshev filter removes noise, and Blackman windowing improves signal representation. Discriminative features, namely Gaussian Mixture Model (GMM) with Mel-Frequency Cepstral Coefficients (MFCC), spectral entropy, quaternion-based features, and Gammatone Cepstral Coefficients (GCC), are fused to expand the feature space. Unlike existing approaches, the proposed IS-DAR system uniquely integrates diverse handcrafted features using a novel fusion strategy combined with Bayesian-based optimization, enabling more accurate and generalized activity recognition. The key contribution lies in the joint optimization and fusion of features via Bayesian-based subset selection, resulting in a compact and highly discriminative feature representation. These features are then fed into a Convolutional Neural Network (CNN) to effectively detect spatial-temporal patterns in activity signals. Testing on two public datasets, IM-WSHA and ENABL3S, achieved accuracy levels of 93.0% and 92.0%, respectively. The integration of advanced feature extraction methods with fusion and optimization techniques significantly enhanced detection performance, surpassing traditional methods. The obtained results establish the effectiveness of the proposed IS-DAR system for deployment in real-world activity recognition applications.
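A sketch of the signal front end described: a Chebyshev low-pass filter for denoising followed by Blackman-windowed segmentation, using SciPy; the sampling rate, cutoff, order, and window length are illustrative choices.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

fs = 50.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)

# Type-I Chebyshev low-pass: order 4, 0.5 dB ripple, 5 Hz cutoff.
b, a = cheby1(N=4, rp=0.5, Wn=5.0, btype="low", fs=fs)
clean = filtfilt(b, a, accel)               # zero-phase filtering

# Slide a Blackman window over the stream: 2 s segments, 50% overlap.
win = np.blackman(100)
segments = [clean[s:s + 100] * win
            for s in range(0, clean.size - 100 + 1, 50)]
print(len(segments), segments[0].shape)
```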
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R104)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively.This study introduces an advanced,explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets,which reflects real-world network behavior through a blend of normal and diverse attack classes.The methodology begins with sophisticated data preprocessing,incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions,ensuring standardized and model-ready inputs.Critical dimensionality reduction is achieved via the Harris Hawks Optimization(HHO)algorithm—a nature-inspired metaheuristic modeled on hawks’hunting strategies.HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance.Following feature selection,the SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types.The stacked architecture is then employed,combining the strengths of XGBoost,SVM,and RF as base learners.This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers.The model was evaluated using standard classification metrics:precision,recall,F1-score,and overall accuracy.The best overall performance was recorded with an accuracy of 99.44%for UNSW-NB15,demonstrating the model’s effectiveness.After balancing,the model demonstrated a clear improvement in detecting the attacks.We tested the model on four datasets to show the effectiveness of the proposed approach and performed the ablation study to check the effect of each parameter.Also,the proposed model is computationaly efficient.To support transparency and trust in decision-making,explainable AI(XAI)techniques are incorporated that provides both global and local insight into feature contributions,and offers intuitive visualizations for individual predictions.This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
基金Supported by the Austrian Science Fund(FWF),No.KLI 429-B13 to Vécsei A
文摘AIM: To further improve the endoscopic detection of intestinal mucosa alterations due to celiac disease(CD).METHODS: We assessed a hybrid approach based on the integration of expert knowledge into the computerbased classification pipeline. A total of 2835 endoscopic images from the duodenum were recorded in 290 children using the modified immersion technique(MIT). These children underwent routine upper endoscopy for suspected CD or non-celiac upper abdominal symptoms between August 2008 and December 2014. Blinded to the clinical data and biopsy results, three medical experts visually classified each image as normal mucosa(Marsh-0) or villous atrophy(Marsh-3). The experts' decisions were further integrated into state-of-the-arttexture recognition systems. Using the biopsy results as the reference standard, the classification accuracies of this hybrid approach were compared to the experts' diagnoses in 27 different settings.RESULTS: Compared to the experts' diagnoses, in 24 of 27 classification settings(consisting of three imaging modalities, three endoscopists and three classification approaches), the best overall classification accuracies were obtained with the new hybrid approach. In 17 of 24 classification settings, the improvements achieved with the hybrid approach were statistically significant(P < 0.05). Using the hybrid approach classification accuracies between 94% and 100% were obtained. Whereas the improvements are only moderate in the case of the most experienced expert, the results of the less experienced expert could be improved significantly in 17 out of 18 classification settings. Furthermore, the lowest classification accuracy, based on the combination of one database and one specific expert, could be improved from 80% to 95%(P < 0.001).CONCLUSION: The overall classification performance of medical experts, especially less experienced experts, can be boosted significantly by integrating expert knowledge into computer-aided diagnosis systems.
基金The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR03).
文摘Early detection of lung cancer can help for improving the survival rate of the patients.Biomedical imaging tools such as computed tomography(CT)image was utilized to the proper identification and positioning of lung cancer.The recently developed deep learning(DL)models can be employed for the effectual identification and classification of diseases.This article introduces novel deep learning enabled CAD technique for lung cancer using biomedical CT image,named DLCADLC-BCT technique.The proposed DLCADLC-BCT technique intends for detecting and classifying lung cancer using CT images.The proposed DLCADLC-BCT technique initially uses gray level co-occurrence matrix(GLCM)model for feature extraction.Also,long short term memory(LSTM)model was applied for classifying the existence of lung cancer in the CT images.Moreover,moth swarm optimization(MSO)algorithm is employed to optimally choose the hyperparameters of the LSTM model such as learning rate,batch size,and epoch count.For demonstrating the improved classifier results of the DLCADLC-BCT approach,a set of simulations were executed on benchmark dataset and the outcomes exhibited the supremacy of the DLCADLC-BCT technique over the recent approaches.
文摘BACKGROUND It was shown in previous studies that high definition endoscopy,high magnification endoscopy and image enhancement technologies,such as chromoendoscopy and digital chromoendoscopy[narrow-band imaging(NBI),iScan]facilitate the detection and classification of colonic polyps during endoscopic sessions.However,there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps.In this work,we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging.AIM To assess which endoscopic imaging modalities are best suited for the computerassisted staging of colonic polyps.METHODS In our experiments,we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions.For this purpose,we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database.The image databases were obtained using different imaging modalities.Two databases were obtained by high-definition endoscopy in combination with i-Scan technology(one with chromoendoscopy and one without chromoendoscopy).Three databases were obtained by highmagnification endoscopy(two databases using narrow band imaging and one using chromoendoscopy).The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis.RESULTS Generally,it is feature-dependent which imaging modalities achieve high results and which do not.For the high-definition image databases,we achieved overall classification rates of up to 79.2%with chromoendoscopy and 88.9%without chromoendoscopy.In the case of the database obtained by high-magnification chromoendoscopy,the classification rates were up to 81.4%.For the combination of high-magnification endoscopy with NBI,results of up to 97.4%for one database and up to 84%for the other were achieved.Non-neoplastic lesions were classified more accurately in general than non-neoplastic lesions.It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and partly contribute to a stronger effect on the staging results than the used imaging modality.CONCLUSION Chromoendoscopy has a negative impact on the results of the methods.NBI is better suited than chromoendoscopy.High-definition and high-magnification endoscopy are equally suited.
基金funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.
文摘Limbal Stem Cell Deficiency(LSCD)is an eye disease that can cause corneal opacity and vascularization.In its advanced stage it can lead to a degree of visual impairment.It involves the changing in the semispherical shape of the cornea to a drooping shape to downwards direction.LSCD is hard to be diagnosed at early stages.The color and texture of the cornea surface can provide significant information about the cornea affected by LSCD.Parameters such as shape and texture are very crucial to differentiate normal from LSCD cornea.Although several medical approaches exist,most of them requires complicated procedure and medical devices.Therefore,in this paper,we pursued the development of a LSCD detection technique(LDT)utilizing image processing methods.Early diagnosis of LSCD is very crucial for physicians to arrange for effective treatment.In the proposed technique,we developed a method for LSCD detection utilizing frontal eye images.A dataset of 280 eye images of frontal and lateral LSCD and normal patients were used in this research.First,the cornea region of both frontal and lateral images is segmented,and the geometric features are extracted through the automated active contour model and the spline curve.While the texture features are extracted using the feature selection algorithm.The experimental results exhibited that the combined features of the geometric and texture will exhibit accuracy of 95.95%,sensitivity of 97.91% and specificity of 94.05% with the random forest classifier of n=40.As a result,this research developed a Limbal stem cell deficiency detection system utilizing features’fusion using image processing techniques for frontal and lateral digital images of the eyes.
文摘Proactive Semantic Interference (PSI) and failure to recover from PSI (frPSI), are novel constructs assessed by the LASSI-L. These measures are sensitive to cognitive changes in early Mild Cognitive Impairment (MCI) and preclinical AD determined by Aβ load using PET. The goal of this study was to compare a new computerized version of the LASSI-L (LASSI-Brief Computerized) to the standard paper-and-pencil version of the test. In this study, we examined 110 cognitively unimpaired (CU) older adults and 79 with amnestic MCI (aMCI) who were administered the paper-and-pencil form of the LASSI-L. Their performance was compared with 62 CU older adults and 52 aMCI participants examined using the LASSI-BC. After adjustment for covariates (degree of initial learning, sex, education, and language of evaluation) both the standard and computerized versions distinguished between aMCI and CU participants. The performance of CU and aMCI groups using either form was relatively commensurate. Importantly, an optimal combination of Cued B2 recall and Cued B1 intrusions on the LASSI-BC yielded an area under the ROC curve of .927, a sensitivity of 92.3% and specificity of 88.1%, relative to an area under the ROC curve of .815, a sensitivity of 72.5%, and a specificity of 79.1% obtained for the paper-and-pencil LASSI-L. Overall, the LASSI-BC was comparable, and in some ways, superior to the paper-and-pencil LASSI-L. Advantages of the LASSI-BC include a more standardized administration, suitability for remote assessment, and an automated scoring mechanism that can be verified by a built-in audio recording of responses.
基金supported by the Spanish Ministry of Science and Innovation under Projects PID2022-137680OB-C32 and PID2022-139187OB-I00.
文摘Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application to vital the planning and operation of energy systems and to enable citizens’participation in the energy transition.This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons.Smart meter data is split between daily and hourly normalized time series to assess monthly,weekly,daily,and hourly seasonality patterns separately.The dimensionality reduction implicit in the splitting allows a direct approach to clustering raw daily energy time series data.The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and the entire population.For the first time,a step function approach is applied to reduce time series dimensionality.Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramer’s V correlation factors and to identify statistically significant determinants of load-shape in energy usage.In addition,a time series features engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and corresponding clusters along four different axes:available Energy(E),Temporal patterns(T),Consistency(C),and Variability(V).The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise(SME)customers,identifying 4 daily and 6 hourly easy-to-interpret,well-defined clusters.The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results.Further research can test the scalability of the methodology to larger datasets from various customer segments(households and large commercial)and locations with different weather and socioeconomic conditions.
文摘This paper introduces a robust Distributed Denial-of-Service attack detection framework tailored for Software-Defined Networking based Internet of Things environments,built upon a novel,syntheticmulti-vector dataset generated in a Mininet-Ryu testbed using real-time flow-based labeling.The proposed model is based on the XGBoost algorithm,optimized with Principal Component Analysis for dimensionality reduction,utilizing lightweight flowlevel features extracted from Open Flow statistics to classify attacks across critical IoT protocols including TCP,UDP,HTTP,MQTT,and CoAP.The model employs lightweight flow-level features extracted from Open Flow statistics to ensure low computational overhead and fast processing.Performance was rigorously evaluated using key metrics,including Accuracy,Precision,Recall,F1-Score,False Alarm Rate,AUC-ROC,and Detection Time.Experimental results demonstrate the model’s high performance,achieving an accuracy of 98.93%and a low FAR of 0.86%,with a rapid median detection time of 1.02 s.This efficiency validates its superiority in meeting critical Key Performance Indicators,such as Latency and high Throughput,necessary for time-sensitive SDN-IoT systems.Furthermore,the model’s robustness and statistically significant outperformance against baseline models such as Random Forest,k-Nearest Neighbors,and Gradient Boosting Machine,validating through statistical tests using Wilcoxon signed-rank test and confirmed via successful deployment in a real SDN testbed for live traffic detection and mitigation.
文摘Global security threats have motivated organizations to adopt robust and reliable security systems to ensure the safety of individuals and assets.Biometric authentication systems offer a strong solution.However,choosing the best security system requires a structured decision-making framework,especially in complex scenarios involving multiple criteria.To address this problem,we develop a novel quantum spherical fuzzy technique for order preference by similarity to ideal solution(QSF-TOPSIS)methodology,integrating quantum mechanics principles and fuzzy theory.The proposed approach enhances decision-making accuracy,handles uncertainty,and incorporates criteria relationships.Criteria weights are determined using spherical fuzzy sets,and alternatives are ranked through the QSFTOPSIS framework.This comprehensive multi-criteria decision-making(MCDM)approach is applied to identify the optimal gate security system for an organization,considering critical factors such as accuracy,cost,and reliability.Additionally,the study compares the proposed approach with other established MCDM methods.The results confirm the alignment of rankings across these methods,demonstrating the robustness and reliability of the QSF-TOPSIS framework.The study identifies the infrared recognition and identification system(IRIS)as the most effective,with a score value of 0.5280 and optimal security system among the evaluated alternatives.This research contributes to the growing literature on quantum-enhanced decision-making models and offers a practical framework for solving complex,real-world problems involving uncertainty and ambiguity.
Abstract: The rapid increase in the number of Internet of Things (IoT) devices, coupled with a rise in sophisticated cyberattacks, demands robust intrusion detection systems. This study presents a holistic, intelligent intrusion detection system. It uses a combined method that integrates machine learning (ML) and deep learning (DL) techniques to improve the protection of contemporary information technology (IT) systems. Unlike traditional signature-based or single-model methods, this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification. This combination provides a more nuanced and adaptable defense. The research utilizes the NF-UQ-NIDS-v2 dataset, a recent, comprehensive benchmark for evaluating network intrusion detection systems (NIDS). Our methodological framework employs advanced artificial intelligence techniques. Specifically, we use ensemble learning algorithms (Random Forest, Gradient Boosting, AdaBoost, and XGBoost) for binary classification. Deep learning architectures are also employed to address the complexities of multi-class classification, allowing for fine-grained identification of intrusion types. To mitigate class imbalance, a common problem in multi-class intrusion detection that biases model performance, we use oversampling and data augmentation. These techniques ensure equitable class representation. The results demonstrate the efficacy of the proposed hybrid ML-DL system. It achieves significant improvements in intrusion detection accuracy and reliability. This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.
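A minimal sketch of the binary ensemble stage is shown below, using soft voting over three of the named learners (XGBoost is omitted to keep the sketch scikit-learn-only); the synthetic data and hyperparameters are illustrative assumptions.

```python
# Sketch: soft-voting ensemble for the binary attack/benign stage.
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 15))                    # toy NetFlow-style features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)       # toy attack/benign labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("gb", GradientBoostingClassifier()),
                ("ada", AdaBoostClassifier())],
    voting="soft",                                 # average predicted probabilities
)
vote.fit(X_tr, y_tr)
print("binary accuracy:", vote.score(X_te, y_te))
```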
Funding: This work was funded by Northern Border University, Arar, KSA, under project number “NBU-FFR-2025-3555-07”.
Abstract: Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as secure communication issues, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND). It trains models locally within healthcare clusters, sharing only model updates instead of patient data, preserving privacy and improving accuracy. Cloud and edge computing enhance the model’s scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
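The local-training and update-sharing loop at the heart of this design can be sketched as a plain FedAvg round over a toy logistic model; the blockchain anchoring of updates and the PhysioNet features are abstracted away, and all names and data below are illustrative.

```python
# Sketch: FedAvg-style rounds where clusters share only model weights.
import numpy as np

def local_fit(X, y, w, lr=0.1, epochs=50):
    # a few logistic-regression gradient steps from the current global weights
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
clusters = []
for _ in range(4):                                  # four healthcare clusters
    X = rng.normal(size=(200, 5))
    y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)
    clusters.append((X, y))

w_global = np.zeros(5)
for rnd in range(10):                               # federated rounds
    updates = [local_fit(X, y, w_global.copy()) for X, y in clusters]
    w_global = np.mean(updates, axis=0)             # FedAvg: average the local updates

acc = np.mean([((X @ w_global > 0) == y.astype(bool)).mean() for X, y in clusters])
print("round-10 average local accuracy:", round(float(acc), 3))
```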
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: In today’s digital world, the Internet of Things (IoT) plays an important role in both local and global economies due to its widespread adoption in different applications. This technology has the potential to offer several advantages over conventional technologies in the near future. However, its potential growth also attracts attention from hackers, which introduces new challenges for the research community, ranging from hardware and software security to user privacy and authentication. We therefore focus on a particular security concern: malware detection. The literature presents many countermeasures, but inconsistent results on identical datasets and algorithms raise concerns about model biases, training quality, and complexity. This highlights the need for an adaptive, real-time learning framework that can effectively mitigate malware threats in IoT applications. To address these challenges, (i) we propose an intelligent framework based on Two-step Deep Reinforcement Learning (TwStDRL) that is capable of learning and adapting in real time to counter malware threats in IoT applications. The framework balances exploration and exploitation during both the training and testing phases by storing results in a replay memory; this stored knowledge allows the model to effectively navigate the environment and maximize cumulative rewards. (ii) To demonstrate the superiority of the TwStDRL framework, we implement and evaluate several machine learning algorithms for comparative analysis, including Support Vector Machines (SVM), Multi-Layer Perceptron, Random Forests, and k-means Clustering. The selection of these algorithms is driven by the inconsistent results reported in the literature, which cast doubt on their robustness and reliability in real-world IoT deployments. (iii) Finally, we provide a comprehensive evaluation to justify why the TwStDRL framework outperforms them in mitigating security threats. During the analysis, we noted that the proposed TwStDRL scheme achieves an average performance of 99.45% across accuracy, precision, recall, and F1-score, an absolute improvement of roughly 3% over existing malware-detection models.
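The replay-memory idea can be illustrated with a deliberately small tabular sketch: an agent learns to flag or allow toy states while replaying stored transitions. The real TwStDRL agent’s deep networks, state encoding, and two-step structure are simplified away; everything below is an illustrative assumption.

```python
# Sketch: epsilon-greedy learning with a replay memory on a toy flag/allow task.
import random
from collections import deque

import numpy as np

random.seed(0); np.random.seed(0)
n_states, n_actions = 10, 2                 # actions: 0 = allow, 1 = flag as malware
Q = np.zeros((n_states, n_actions))
memory = deque(maxlen=1000)                 # replay memory of past transitions
malicious = np.random.rand(n_states) < 0.4  # which (toy) states are actually malicious

def reward(s, a):
    # +1 for a correct decision, -1 otherwise
    return 1.0 if (a == 1) == malicious[s] else -1.0

eps, lr = 0.2, 0.1
for step in range(5000):
    s = random.randrange(n_states)
    a = random.randrange(n_actions) if random.random() < eps else int(Q[s].argmax())
    memory.append((s, a, reward(s, a)))     # store the experience
    for s_, a_, r_ in random.sample(memory, min(32, len(memory))):
        Q[s_, a_] += lr * (r_ - Q[s_, a_])  # replayed one-step value update

policy = Q.argmax(axis=1)
print("policy matches ground truth:", (policy == malicious.astype(int)).mean())
```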
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Environmental transition can potentially influence cardiovascular health. Investigating the relationship between such transition and heart disease has important applications. This study uses federated learning (FL) in this context and investigates the link between climate change and heart disease. A dataset containing environmental, meteorological, and health-related factors such as blood sugar, cholesterol, maximum heart rate, and fasting ECG is used with machine learning models to identify hidden patterns and relationships. Algorithms such as federated learning, XGBoost, random forest, support vector classifier, extra tree classifier, k-nearest neighbor, and logistic regression are used. A framework for diagnosing heart disease is designed using FL along with the other models. Experiments involve discriminating healthy subjects from heart patients and achieve an accuracy of 94.03%. The proposed FL-based framework proves superior to existing techniques in terms of usability, dependability, and accuracy. This study paves the way for screening people for early heart disease detection and for continuous monitoring in telemedicine and remote care. Personalized treatment can also be planned with customized therapies.
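A minimal sketch of the centralized baseline comparison (the non-federated part of the study) is shown below; the synthetic feature matrix stands in for the clinical and environmental variables, and the model settings are illustrative.

```python
# Sketch: cross-validated comparison of the named baseline classifiers.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 13))                     # 13 toy tabular clinical features
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=900) > 0).astype(int)

models = {"RF": RandomForestClassifier(), "ET": ExtraTreesClassifier(),
          "SVC": SVC(), "KNN": KNeighborsClassifier(),
          "LR": LogisticRegression(max_iter=1000)}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5).mean().round(3))
```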
Funding: The authors acknowledge the support of Prince Sultan University for paying the Article Processing Charge (APC) of this publication. Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R300).
Abstract: Soilcrete is a composite material of soil and cement that is highly valued in the construction industry. Accurate measurement of its mechanical properties is essential, but laboratory testing methods are expensive, time-consuming, and prone to inaccuracies. Machine learning (ML) algorithms provide a more efficient alternative for this purpose, so after assessment with a statistical extraction method, ML algorithms including back-propagation neural network (BPNN), K-nearest neighbor (KNN), radial basis function (RBF), feed-forward neural networks (FFNN), and support vector regression (SVR) were proposed in this study for predicting the uniaxial compressive strength (UCS) of soilcrete. The developed models were optimized using gradient descent (GD) throughout the analysis (direct optimization for the neural networks and indirect optimization, via their hyperparameters, for the other models). After laboratory analysis, data preprocessing, and data-processing analysis, a database of 600 soilcrete specimens was gathered, covering two different soil types (clay and limestone) and metakaolin as a mineral additive. 80% of the database was used for the training set and 20% for testing, considering eight input parameters: metakaolin content, soil type, superplasticizer content, water-to-binder ratio, shrinkage, binder, density, and ultrasonic velocity. The analysis showed that most algorithms performed well in the prediction, with BPNN, KNN, and RBF having higher accuracy than the others (R² = 0.95, 0.95, and 0.92, respectively). Based on this evaluation, all models show an acceptable accuracy rate in prediction (RMSE: BPNN = 0.11, FFNN = 0.24, KNN = 0.05, SVR = 0.06, RBF = 0.05; MAD: BPNN = 0.006, FFNN = 0.012, KNN = 0.008, SVR = 0.006, RBF = 0.009). The ML importance ranking-sensitivity analysis indicated that all input parameters influence the UCS of soilcrete, with the water-to-binder ratio and density having the most impact.
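To make the regression setup concrete, the sketch below trains two of the named models (KNN and SVR) on synthetic stand-ins for the eight inputs and reports RMSE on a held-out 20% split; the data generation and hyperparameters are illustrative assumptions, not the laboratory database.

```python
# Sketch: 80/20 train-test regression of UCS from eight mix/soil descriptors.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(600, 8))                         # toy stand-ins for the 8 inputs
ucs = 5 + 3 * X[:, 3] - 2 * X[:, 6] + 0.2 * rng.normal(size=600)  # toy UCS target

X_tr, X_te, y_tr, y_te = train_test_split(X, ucs, test_size=0.2, random_state=0)
for name, model in [("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("SVR", make_pipeline(StandardScaler(), SVR(C=10.0)))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(name, "RMSE:", round(rmse, 3))
```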
Funding: Supported via funding from Prince Sattam bin Abdulaziz University (PSAU/2025/R/1446), Princess Nourah bint Abdulrahman University (PNURSP2025R300), and Prince Sultan University.
Abstract: Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly, and providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images. Therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates new classes of texture. It is possible to rapidly generate new texture classes using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures is increased through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is faster than some well-known generative networks by around 4 to 10 times. In addition, the quality of the generated textures surpasses that of these networks. The proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset was created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels, organized into 600 classes. It has been uploaded to the Kaggle site and Google Drive and is called BigTex. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
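The core gradient-based idea can be sketched as activation maximization against a pre-trained CNN: start from noise and ascend the gradient of one convolutional kernel’s mean response. The VGG16 backbone, the layer and kernel indices, and the step count below are illustrative assumptions (not necessarily the paper’s network), and the snippet downloads ImageNet weights on first run.

```python
# Sketch: generating a 150x150 texture by maximizing one CNN kernel's activation.
import torch
from torchvision.models import vgg16

cnn = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)                  # optimize the image, not the network

layer, kernel = 10, 42                       # which conv layer / which filter to excite
img = torch.rand(1, 3, 150, 150, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    x = img
    for i, m in enumerate(cnn):              # forward only up to the chosen layer
        x = m(x)
        if i == layer:
            break
    loss = -x[0, kernel].mean()              # ascend the kernel's mean activation
    opt.zero_grad()
    loss.backward()
    opt.step()

texture = img.detach().clamp(0, 1)[0]        # one new synthetic texture
print(texture.shape)                         # torch.Size([3, 150, 150])
```

Choosing a different layer/kernel pair yields a visually distinct texture, which is how one pre-trained network can seed many new classes.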
Abstract: Image processing plays a vital role in various fields such as autonomous systems, healthcare, and cataloging, especially when integrated with deep learning (DL). It is crucial in medical diagnostics, including the early detection of diseases like chronic obstructive pulmonary disease (COPD), which claimed 3.2 million lives in 2015. COPD, a life-threatening condition often caused by prolonged exposure to lung irritants and smoking, progresses through stages. Early diagnosis through image processing can significantly improve survival rates. COPD encompasses chronic bronchitis (CB) and emphysema; CB is particularly prevalent among smokers and generally affects individuals between 50 and 70 years old. It damages the lungs’ air sacs, reducing oxygen transport and causing symptoms like coughing and shortness of breath. Treatments such as beta-agonists and inhaled steroids are used to manage symptoms and prolong lung function. Moreover, COVID-19 poses an additional risk to individuals with CB due to its impact on the respiratory system. The proposed system utilizes convolutional neural networks (CNNs) to diagnose CB. In this system, the CNN extracts essential and significant features from X-ray modalities, which are then fed into the neural network. The network undergoes training to recognize patterns and make accurate predictions based on the learned features. By leveraging DL techniques, the system aims to enhance the precision and reliability of CB detection. Our research specifically focuses on a subset of 189 lung disease images, carefully selected for model evaluation. To further refine the training process, various data augmentation and noise removal techniques are implemented. These techniques significantly enhance the quality of the training data, improving the model’s robustness and generalizability. As a result, the diagnostic accuracy has improved from 98.6% to 99.2%. This advancement not only validates the efficacy of our proposed model but also represents a significant improvement over the existing literature. It highlights the potential of CNN-based approaches in transforming medical diagnostics through refined image analysis, learning capabilities, and automated feature extraction.
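A minimal sketch of such a CNN classifier is shown below, assuming grayscale X-rays resized to 128×128 and binary labels (CB vs. normal); the architecture and toy batch are illustrative, not the paper’s exact network.

```python
# Sketch: a small CNN for binary chest X-ray classification.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
    nn.Linear(64, 2),                                             # CB vs. normal
)

xray = torch.randn(8, 1, 128, 128)           # a toy batch standing in for X-rays
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(xray), labels)
loss.backward()                              # gradients for one training step
print("logits shape:", model(xray).shape, "loss:", float(loss))
```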
Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen, and by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R348), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Human Activity Recognition (HAR) in drone-captured videos has become popular because of interest in various fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions from such videos poses the following challenges: variations in human motion, complex backdrops, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges by working with drones’ red-green-blue (RGB) videos. The first step in the proposed system involves partitioning videos into frames and then using bilateral filtering to improve the quality of object foregrounds while reducing background interference, before converting from RGB to grayscale images. The YOLO (You Only Look Once) algorithm detects and extracts humans from each frame, obtaining their skeletons for further processing. The extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and utilized in a Neuro-Fuzzy Classifier (NFC) for activity classification. Real-world evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets substantiate the proposed system’s superiority in accuracy rates over existing methods. In particular, the system obtains recognition rates of 93% for Drone-Action, 97% for UAV-Gesture, and 81% for Okutama-Action, demonstrating the system’s reliability and ability to learn human activity from drone videos.
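One of the skeleton features, the joint angle, reduces to simple vector geometry over three keypoints; the sketch below uses invented 2-D coordinates standing in for YOLO-derived skeleton joints.

```python
# Sketch: joint angle at a keypoint from two adjacent skeleton segments.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b formed by segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = (0.0, 0.0), (1.0, -1.0), (2.0, 0.0)   # toy keypoints
print("elbow angle:", joint_angle(shoulder, elbow, wrist))     # ~90 degrees
```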
Funding: Supported by the Deanship of Research and Graduate Studies at King Khalid University (RGP2/287/46), the Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2025R733), the Princess Nourah bint Abdulrahman University Research Supporting Project (RSPD2025R787), and King Saud University, Saudi Arabia.
Abstract: Challenges in land use and land cover (LULC) include rapid urbanization encroaching on agricultural land, leading to fragmentation and loss of natural habitats. However, the effects of urbanization on the LULC of different crop types have received less attention. This study assessed the impacts of LULC changes on agriculture and drought vulnerability in the Aguascalientes region, Mexico, from 1994 to 2024, and predicted the LULC in 2034 using remote sensing data, with the goals of supporting sustainable land management and climate resilience strategies. Despite increasing urbanization and drought, the integration of satellite imagery and machine learning models in LULC analysis has been underutilized in this region. Using Landsat imagery, we assessed crop attributes through indices such as the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), normalized difference moisture index (NDMI), and vegetation condition index (VCI), alongside watershed delineation and spectral features. A random forest model was applied to classify LULC, providing insights into both historical and future trends. Results indicated a significant decline in vegetation cover (109.13 km²) from 1994 to 2024, accompanied by an increase in built-up land (75.11 km²) and bare land (67.13 km²). Projections suggested a further decline in vegetation cover (41.51 km²) and continued urban land expansion by 2034. The study found that paddy crops exhibited the highest values, while common bean and maize performed poorly. Drought analysis revealed that mildly dry areas in 2004 became severely dry by 2024, highlighting the increasing vulnerability of agriculture to climate change. The study concludes that sustainable land management, improved water resource practices, and advanced monitoring techniques are essential to mitigate the adverse effects of LULC changes on agricultural productivity and drought resilience in the area. These findings contribute to the understanding of how remote sensing can be effectively used for long-term agricultural planning and environmental sustainability.
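As a concrete illustration of the band arithmetic behind these indices, the sketch below computes NDVI and NDMI from toy reflectance rasters; the arrays and value ranges are illustrative stand-ins for calibrated Landsat surface reflectance.

```python
# Sketch: per-pixel vegetation and moisture indices from band rasters.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.40, size=(100, 100))   # toy surface reflectance rasters
nir = rng.uniform(0.20, 0.60, size=(100, 100))
swir = rng.uniform(0.10, 0.50, size=(100, 100))

ndvi = (nir - red) / (nir + red)                 # vegetation vigor, in [-1, 1]
ndmi = (nir - swir) / (nir + swir)               # canopy moisture content

print("mean NDVI:", round(float(ndvi.mean()), 3))
print("mean NDMI:", round(float(ndmi.mean()), 3))
```

Stacking such index rasters per pixel produces the feature vectors that a random forest classifier can then map to LULC classes.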
Funding: Supported by the ITP (Institute of Information & Communications Technology Planning & Evaluation)-ICAN (ICT Challenge and Advanced Network of HRD) grant (ITP-2025-RS-2022-00156326, 33) funded by the Korea government (Ministry of Science and ICT); by the Deanship of Research and Graduate Studies at King Khalid University through the Large Group Project under grant number (RGP2/568/45); and by the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, through Project Number “NBU-FFR-2025-231-03”.
Abstract: Remote sensing plays a pivotal role in environmental monitoring, disaster relief, and urban planning, where accurate scene classification of aerial images is essential. However, conventional convolutional neural networks (CNNs) struggle with long-range dependencies and preserving high-resolution features, limiting their effectiveness in complex aerial image analysis. To address these challenges, we propose a Hybrid HRNet-Swin Transformer model that synergizes the strengths of HRNet-W48 for high-resolution segmentation and the Swin Transformer for global feature extraction. This hybrid architecture ensures robust multi-scale feature fusion, capturing fine-grained details and broader contextual relationships in aerial imagery. Our methodology begins with preprocessing steps, including normalization, histogram equalization, and noise reduction, to enhance input data quality. The HRNet-W48 backbone maintains high-resolution feature maps throughout the network, enabling precise segmentation, while the Swin Transformer leverages hierarchical self-attention to model long-range dependencies efficiently. By integrating these components, our model achieves superior performance in segmentation and classification tasks compared to traditional CNNs and standalone transformer models. We evaluate our approach on two benchmark datasets: UC Merced and WHU-RS19. Experimental results demonstrate that the proposed hybrid model outperforms existing methods, achieving state-of-the-art accuracy while maintaining computational efficiency. Specifically, it excels in preserving fine spatial details and contextual understanding, critical for applications like land-use classification and disaster assessment.
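The two-branch fusion idea can be sketched with a small stand-in model: a convolutional branch keeps local spatial detail while a transformer encoder adds global context, and the two feature sets are concatenated before the classification head. Both branches below are simplified placeholders for HRNet-W48 and Swin, and the 19-class head merely echoes WHU-RS19.

```python
# Sketch: toy two-branch (local conv + global attention) scene classifier.
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, n_classes=19):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(8))      # local branch
        enc = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
        self.attn = nn.TransformerEncoder(enc, num_layers=2)    # global branch
        self.head = nn.Linear(32 * 8 * 8 + 32, n_classes)

    def forward(self, x):
        f = self.conv(x)                                        # (B, 32, 8, 8)
        tokens = f.flatten(2).transpose(1, 2)                   # (B, 64, 32) patch tokens
        g = self.attn(tokens).mean(dim=1)                       # pooled global context
        return self.head(torch.cat([f.flatten(1), g], dim=1))   # fuse local + global

print(HybridClassifier()(torch.randn(2, 3, 64, 64)).shape)      # -> torch.Size([2, 19])
```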
Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen. The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number (RGP.2/568/45), and to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through Project Number “NBU-FFR-2025-231-04”.
Abstract: Inertial Sensor-based Daily Activity Recognition (IS-DAR) requires adaptable, data-efficient methods for effective multi-sensor use. This study presents an advanced detection system using body-worn sensors to accurately recognize activities. A structured pipeline enhances IS-DAR by applying signal preprocessing and feature extraction and optimization, followed by classification. Before segmentation, a Chebyshev filter removes noise, and Blackman windowing improves signal representation. Discriminative features, namely a Gaussian Mixture Model (GMM) with Mel-Frequency Cepstral Coefficients (MFCC), spectral entropy, quaternion-based features, and Gammatone Cepstral Coefficients (GCC), are fused to expand the feature space. Unlike existing approaches, the proposed IS-DAR system uniquely integrates diverse handcrafted features using a novel fusion strategy combined with Bayesian-based optimization, enabling more accurate and generalized activity recognition. The key contribution lies in the joint optimization and fusion of features via Bayesian-based subset selection, resulting in a compact and highly discriminative feature representation. These features are then fed into a Convolutional Neural Network (CNN) to effectively detect spatial-temporal patterns in activity signals. Testing on two public datasets, IM-WSHA and ENABL3S, achieved accuracy levels of 93.0% and 92.0%, respectively. The integration of advanced feature extraction methods with fusion and optimization techniques significantly enhanced detection performance, surpassing traditional methods. The obtained results establish the effectiveness of the proposed IS-DAR system for deployment in real-world activity recognition applications.
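A minimal sketch of the signal front end is given below: a Chebyshev type-I low-pass filter denoises a toy inertial signal, and a Blackman window shapes one segment for feature extraction; the sampling rate, cutoff, and filter order are illustrative assumptions.

```python
# Sketch: Chebyshev denoising and Blackman windowing of an inertial signal.
import numpy as np
from scipy.signal import cheby1, filtfilt

fs = 50.0                                        # Hz, a typical body-worn IMU rate
t = np.arange(0, 4, 1 / fs)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

b, a = cheby1(N=4, rp=0.5, Wn=5 / (fs / 2), btype="low")   # 5 Hz cutoff, 0.5 dB ripple
clean = filtfilt(b, a, accel)                    # zero-phase low-pass filtering

window = clean[:128] * np.blackman(128)          # Blackman-windowed segment
print("segment ready for feature extraction:", window.shape)
```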