Face liveness detection is essential for securing biometric authentication systems against spoofing attacks, including printed photos, replay videos, and 3D masks. This study systematically evaluates pre-trained CNN models—DenseNet201, VGG16, InceptionV3, ResNet50, VGG19, MobileNetV2, Xception, and InceptionResNetV2—leveraging transfer learning and fine-tuning to enhance liveness detection performance. The models were trained and tested on the NUAA and Replay-Attack datasets, with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability. Performance was evaluated using accuracy, precision, recall, FAR, FRR, HTER, and specialized spoof detection metrics (APCER, NPCER, ACER). Fine-tuning significantly improved detection accuracy, with DenseNet201 achieving the highest performance (98.5% on NUAA, 97.71% on Replay-Attack), while MobileNetV2 proved the most efficient model for real-time applications (latency: 15 ms; memory usage: 45 MB; energy consumption: 30 mJ). A statistical significance analysis (paired t-tests, confidence intervals) validated these improvements. Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures, with DenseNet201 achieving 86.4% accuracy on Replay-Attack when trained on NUAA, demonstrating robust feature extraction and adaptability. In contrast, ResNet50 showed lower generalization capability, struggling with dataset variability and complex spoofing attacks. These findings suggest that MobileNetV2 is well suited for low-power applications, while DenseNet201 is ideal for high-security environments requiring superior accuracy. This research provides a framework for improving real-time face liveness detection, enhancing biometric security, and guiding future advancements in AI-driven anti-spoofing techniques.
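The error metrics listed above are straightforward to compute from binary decisions. The sketch below is not the paper's code; it assumes a 1 = live / 0 = spoof labeling, under which, for a single fixed threshold, APCER coincides with FAR and NPCER with FRR:

```python
def liveness_metrics(labels, preds):
    """Compute FAR, FRR, HTER and APCER, NPCER, ACER from binary decisions.

    labels/preds: 1 = live (bona fide), 0 = spoof (attack presentation).
    """
    attacks = [(l, p) for l, p in zip(labels, preds) if l == 0]
    bona_fide = [(l, p) for l, p in zip(labels, preds) if l == 1]
    # FAR / APCER: fraction of spoof samples wrongly accepted as live
    far = sum(1 for _, p in attacks if p == 1) / len(attacks)
    # FRR / NPCER: fraction of live samples wrongly rejected as spoof
    frr = sum(1 for _, p in bona_fide if p == 0) / len(bona_fide)
    hter = (far + frr) / 2  # half total error rate
    acer = (far + frr) / 2  # ACER = (APCER + NPCER) / 2, identical here
    return {"FAR": far, "FRR": frr, "HTER": hter,
            "APCER": far, "NPCER": frr, "ACER": acer}

# Example: 4 spoof samples (one accepted), 4 live samples (one rejected)
m = liveness_metrics(labels=[0, 0, 0, 0, 1, 1, 1, 1],
                     preds=[1, 0, 0, 0, 1, 1, 1, 0])
print(m["FAR"], m["FRR"], m["HTER"])  # 0.25 0.25 0.25
```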
Background: Erythrodermic psoriasis (EP) is a rare, severe variant of psoriasis characterized by widespread erythema, scaling, and systemic complications. Despite advances in systemic treatments, the management of EP remains challenging, particularly in patients with comorbidities or contraindications to standard therapies. Objectives: To evaluate the effectiveness of ozonated water as an adjunctive treatment for EP, delivered using a patented robotic therapy system designed for hygiene and infection prevention in non-self-sufficient patients. Methods: We report the case of a 90-year-old male patient with acute EP who received daily skin treatments with ozonated water in conjunction with supportive care, including rehydration and antibiotics. The intervention was facilitated by the robotic system “COPERNICO Surveillance & Prevention,” which ensured standardized hygiene practices and clinical documentation. Results: Within one week of treatment, the patient showed complete desquamation of necrotic skin, resolution of erythema, and significant metabolic recovery. Fever subsided, renal function improved, and the patient was discharged in stable condition. Follow-up confirmed sustained clinical improvement, and no adverse events were reported. Conclusions: Ozonated water demonstrated efficacy in alleviating the dermatological and systemic manifestations of EP in a high-risk elderly patient. This case highlights the potential of ozone therapy as a safe, cost-effective adjunctive treatment for EP and underscores the utility of robotic systems in managing complex dermatological conditions. Further research is warranted to validate these findings in larger cohorts.
Adaptive robust secure frameworks play a vital role in implementing the intelligent automation and decentralized decision making of Industry 5.0. Latency, privacy risks, and the complexity of industrial networks have hindered traditional cloud-based learning systems. To overcome these challenges, we propose EdgeGuard-IoT, a 6G edge intelligence framework that enhances the cybersecurity and operational resilience of the smart grid by integrating Secure Federated Learning (SFL) and Adaptive Anomaly Detection (AAD) at the edge. With 6G ultra-reliable low-latency communication (URLLC), artificial intelligence-based network orchestration, and massive machine-type communication (mMTC), EdgeGuard-IoT brings real-time, distributed intelligence to the edge, mitigating risks in data transmission and enhancing privacy. Through a hierarchical federated learning framework, EdgeGuard-IoT enables edge devices to collaboratively train models without revealing sensitive grid data, which is crucial in the smart grid, where real-time power anomaly detection and decentralized energy management are essential. The adaptive anomaly detection mechanism, driven by hybrid AI models, immediately raises an alert when grid stability and strength are threatened by cyber threats, faults, or energy distribution anomalies, keeping the grid stable and resilient. The proposed framework also adopts blockchain-based security measures and zero-trust authentication techniques to reduce the risks of adversarial attacks and model poisoning during federated learning. Extensive simulations and deployment in real-world smart-grid case studies show that EdgeGuard-IoT achieves superior detection accuracy, response time, and scalability at much lower communication overhead. This research pioneers a 6G-driven federated intelligence model designed for secure, self-optimizing, and resilient Industry 5.0 ecosystems, paving the way for next-generation autonomous smart grids and industrial cyber-physical systems.
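The collaborative-training step can be illustrated with the classic federated-averaging rule: each edge device trains locally and only parameter vectors are aggregated, weighted by local sample counts. This is a generic sketch, not EdgeGuard-IoT's actual aggregation protocol, and the client weights and sizes are made up:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client model weights without
    sharing raw grid data, weighting each client by its local sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * n / total for w, n in zip(client_weights, client_sizes))
            for i in range(n_params)]

# Three edge devices with 2-parameter local models
global_w = fed_avg(
    client_weights=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    client_sizes=[10, 10, 20],
)
print(global_w)  # [3.5, 4.5]
```

The device holding more data (20 samples) pulls the global model toward its parameters, which is the intended behavior of size-weighted aggregation.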
Skin cancer is among the most common malignancies worldwide, but its mortality burden is largely driven by aggressive subtypes such as melanoma, with outcomes varying across regions and healthcare settings. These variations emphasize the importance of reliable diagnostic technologies that support clinicians in detecting skin malignancies with higher accuracy. Traditional diagnostic methods often rely on subjective visual assessments, which can lead to misdiagnosis. This study addresses these challenges by developing HybridFusionNet, a novel model that integrates Convolutional Neural Networks (CNN) with 1D feature extraction techniques to enhance diagnostic accuracy. Utilizing two extensive datasets, BCN20000 and HAM10000, the methodology includes data preprocessing, application of the Synthetic Minority Oversampling Technique combined with Edited Nearest Neighbors (SMOTEENN) for data balancing, and optimization of feature selection using the Tree-based Pipeline Optimization Tool (TPOT). The results demonstrate significant performance improvements over traditional CNN models, achieving an accuracy of 0.9693 on the BCN20000 dataset and 0.9909 on the HAM10000 dataset. The HybridFusionNet model not only outperforms conventional methods but also effectively addresses class imbalance. To enhance transparency, it integrates post-hoc explanation techniques such as LIME, which highlight the features influencing predictions. These findings highlight the potential of HybridFusionNet to support real-world applications, including physician-assist systems, teledermatology, and large-scale skin cancer screening programs. By improving diagnostic efficiency and enabling access to expert-level analysis, the model may enhance patient outcomes and foster greater trust in artificial intelligence (AI)-assisted clinical decision-making.
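SMOTEENN's oversampling half can be sketched in plain Python: synthesize minority-class samples by interpolating between a sample and one of its nearest minority neighbours. This is a toy illustration of the idea, not the imbalanced-learn implementation, and the Edited Nearest Neighbours cleaning step is omitted:

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a sample
    and one of its k nearest neighbours (the core idea behind SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance, excluding x
        neighbours = sorted((m for m in minority if m != x),
                            key=lambda m: sum((a - b) ** 2
                                              for a, b in zip(x, m)))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random point along the segment x -> nb
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_like_oversample(minority, n_new=2)
# Each synthetic point lies on a segment between two real minority samples
print(new_points)
```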
Parkinson’s disease (PD) is a progressive neurodegenerative disorder characterized by tremors, rigidity, and decreased movement. PD poses risks to individuals’ lives and independence. Early detection of PD is essential because it allows timely intervention, which can slow disease progression and improve outcomes. Manual diagnosis of PD is problematic because it is difficult to capture the subtle patterns and changes that help diagnose PD. In addition, subjectivity and the shortage of doctors relative to the number of patients constitute an obstacle to early diagnosis. Artificial intelligence (AI) techniques, especially deep and automated learning models, provide promising solutions to address deficiencies in manual diagnosis. This study develops robust systems for PD diagnosis by analyzing handwritten helical and wave graphical images. Handwritten graphic images of the PD dataset are enhanced using two overlapping filters, the average filter and the Laplacian filter, to improve image quality and highlight essential features. The enhanced images are segmented to isolate regions of interest (ROIs) from the rest of the image using a gradient vector flow (GVF) algorithm, which ensures that features are extracted from only relevant regions. The segmented ROIs are fed into convolutional neural network (CNN) models, namely DenseNet169, MobileNet, and VGG16, to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis. Fine and deep feature maps extracted from individual CNN models are combined into fused feature vectors for the DenseNet169-MobileNet, MobileNet-VGG16, DenseNet169-VGG16, and DenseNet169-MobileNet-VGG16 models. This fusion technique aims to combine complementary and robust features from several models, which improves the extracted features. Two feature selection algorithms are considered to remove redundancy and weak correlations within the combined feature set: Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS). These algorithms identify and retain the most strongly correlated features while eliminating redundant and weakly correlated ones, thus optimizing the features to improve system performance. The fused and enhanced feature vectors are fed into two powerful classifiers, XGBoost and random forest (RF), for accurate classification and differentiation between individuals with PD and healthy controls. The proposed hybrid systems show superior performance: the RF classifier using the combined features from the DenseNet169-MobileNet-VGG16 models with the ACO feature selection method achieved outstanding results, with an area under the curve (AUC) of 99%, sensitivity of 99.6%, 99.3% accuracy, 99.35% accuracy, and 99.65% specificity.
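The fuse-then-select pipeline described above can be sketched as concatenation followed by score-based pruning. The feature values, importance scores, and top_k below are hypothetical, and the scoring is a simple stand-in for what ACO or MESbS would actually compute:

```python
def fuse_and_select(feature_maps, scores, top_k):
    """Concatenate per-model feature vectors, then keep the top_k features
    ranked by an importance score (a stand-in for ACO / MESbS selection)."""
    fused = [f for fmap in feature_maps for f in fmap]  # fusion = concatenation
    ranked = sorted(range(len(fused)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:top_k])                       # preserve original order
    return [fused[i] for i in keep]

# Hypothetical 2-feature maps from three CNN backbones
densenet, mobilenet, vgg = [0.1, 0.9], [0.4, 0.2], [0.7, 0.3]
scores = [0.05, 0.80, 0.10, 0.60, 0.90, 0.20]  # assumed importance scores
selected = fuse_and_select([densenet, mobilenet, vgg], scores, top_k=3)
print(selected)  # [0.9, 0.2, 0.7] — the three highest-scored features, in order
```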
The implementation of Countermeasure Techniques (CTs) in Network-On-Chip (NoC) based Multiprocessor System-On-Chip (MPSoC) routers against the Flooding Denial-of-Service Attack (F-DoSA) falls under Multi-Criteria Decision-Making (MCDM) due to three main concerns: traffic variations, multiple traffic-feature-based evaluation criteria, and the prioritization of NoC routers as alternatives. In this study, we propose a comprehensive evaluation of various NoC traffic features to identify the most efficient routers under F-DoSA scenarios. Consequently, an MCDM approach is essential to address these emerging challenges. Because recent MCDM approaches suffer from issues such as uncertainty, this study utilizes Fuzzy-Weighted Zero-Inconsistency (FWZIC) to estimate the criteria weight values and the Fuzzy Decision by Opinion Score Method (FDOSM) to rank the routers, both extended with single-valued neutrosophic fuzzy sets (SvN-FWZIC and SvN-FDOSM) to overcome ambiguity. The results obtained using the SvN-FWZIC method indicate that the max packet count has the highest importance among the evaluated criteria, with a weighted score of 0.1946. In contrast, the hop count is identified as the least significant criterion, with a weighted score of 0.1090. The remaining criteria fall within a range of intermediate importance, with enqueue time scoring 0.1845, packet count decremented and traversal index scoring 0.1262, packet count incremented scoring 0.1124, and packet count index scoring 0.1472. In terms of ranking, SvN-FDOSM has two approaches: individual and group. Both the individual and group ranking processes show that Router 4 is the most effective router, while Router 3 ranks lowest under F-DoSA. The sensitivity analysis shows high ranking stability across all 10 scenarios. This approach offers essential feedback for making proper decisions in the design of countermeasure techniques in the domain of NoC-based MPSoC.
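Given the SvN-FWZIC weights reported above, a plain weighted-sum ranking illustrates how criterion weights turn per-router measurements into a preference order. This is a simplification of the fuzzy SvN-FDOSM ranking step, and the normalized router scores below are hypothetical:

```python
# Criteria weights as reported for SvN-FWZIC (they sum to ~1.0)
WEIGHTS = {
    "max_packet_count": 0.1946,
    "enqueue_time": 0.1845,
    "packet_count_index": 0.1472,
    "packet_count_decremented": 0.1262,
    "traversal_index": 0.1262,
    "packet_count_incremented": 0.1124,
    "hop_count": 0.1090,
}

def rank_routers(router_scores):
    """Rank routers by the weighted sum of normalized criterion scores
    (a crisp stand-in for the fuzzy FDOSM opinion-score ranking)."""
    totals = {r: sum(WEIGHTS[c] * v for c, v in crits.items())
              for r, crits in router_scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical normalized scores (1.0 = best) for two routers
routers = {
    "Router 4": {c: 0.9 for c in WEIGHTS},
    "Router 3": {c: 0.2 for c in WEIGHTS},
}
print(rank_routers(routers))  # ['Router 4', 'Router 3']
```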
Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This approach may overlook the differences and potential value of various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, Abstract Syntax Tree (AST), Code Dependency Network (CDN), and code static quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated on ten PROMISE defect repository projects, and performance is assessed with three metrics: F1, area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.
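Two of the three evaluation metrics can be computed directly from a confusion matrix. The counts below are illustrative, not from the PROMISE experiments:

```python
import math

def f1_and_mcc(tp, fp, fn, tn):
    """Compute F1 and the Matthews correlation coefficient from confusion-
    matrix counts, two of the metrics used to evaluate the defect predictor."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC balances all four cells, so it stays informative on skewed data
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return f1, mcc

f1, mcc = f1_and_mcc(tp=40, fp=10, fn=10, tn=40)
print(round(f1, 2), round(mcc, 2))  # 0.8 0.6
```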
It is a common observation that whenever patients arrive at the front desk of a hospital, outpatient clinic, or other health-associated center, they must first queue up in a line and wait to fill in a registration form to get admitted. The long waiting time without any status updates is the most common complaint and a persistent concern for health officials. In this paper, UrNext, a location-aware mobile-based solution using Bluetooth low-energy (BLE) technology, is presented to solve the problem. Recently, a technology-oriented method, the Internet of Things (IoT), has been gaining popularity in helping to solve some of the healthcare sector’s problems. The implementation of this solution can be illustrated through a simple example of a patient arriving at a clinic for a consultation. Instead of having to wait in long lines, that patient will be greeted automatically and receive a push notification confirming admittance along with an estimated waiting time for the consultation session. This will not only provide patients with a sense of freedom but also reduce the uncertainty levels that are generally observed, thus saving both time and money. This work aims to improve clinics’ quality of service, organize queues, and minimize waiting times, improving patients’ comfort while reducing the burden on nurses and receptionists. The results demonstrate that the presented system is successful in its performance and helps achieve a pleasant and conducive clinic visitation process with higher productivity.
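A minimal sketch of the arrival notification such a system could push, assuming the waiting-time estimate is simply queue position times an average consultation length. The 15-minute default and the message format are assumptions for illustration, not details from the paper:

```python
def estimated_wait(position_in_queue, avg_consult_minutes=15):
    """Estimate waiting time as patients ahead x average consultation length.
    The 15-minute average is an assumed parameter."""
    return position_in_queue * avg_consult_minutes

def greeting(patient, position):
    """Build the push-notification text sent on automatic check-in."""
    wait = estimated_wait(position)
    return f"Welcome {patient}: you are number {position + 1}, ~{wait} min wait."

print(greeting("A. Patient", position=2))
```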
Speech recognition systems have become a unique member of the human-computer interaction (HCI) family. Speech is one of the most naturally developed human abilities, and speech signal processing opens up a transparent and hands-free computation experience. This paper presents a retrospective yet modern approach to the world of speech recognition systems. The development journey of Automatic Speech Recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing speech recognition systems is presented, along with a brief discussion of various modern-day developments and applications in this domain. This review aims to summarize the field and provide a starting point for those entering the vast field of speech signal processing. Since speech recognition has vast potential in industries like telecommunication, emotion recognition, healthcare, etc., this review will be helpful to researchers who aim to explore more applications that society can quickly adopt in future years of evolution.
Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analyzing generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study in this paper to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
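Rule post-processing of the kind THAPE performs typically starts from standard interestingness measures. The sketch below computes support, confidence, and lift for a rule over a hypothetical basket dataset; it is not the paper's data or code:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift for a rule antecedent -> consequent.
    Post-processing engines prune or rank rules on measures like these."""
    n = len(transactions)
    has_a = sum(1 for t in transactions if antecedent <= t)
    has_both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    has_c = sum(1 for t in transactions if consequent <= t)
    support = has_both / n                 # how often the rule applies at all
    confidence = has_both / has_a          # P(consequent | antecedent)
    lift = confidence / (has_c / n)        # > 1 means a positive association
    return support, confidence, lift

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
s, c, l = rule_metrics(baskets, {"bread"}, {"milk"})
print(s, round(c, 3), round(l, 3))  # 0.5 0.667 0.889
```

Here lift is below 1, so a post-processor would flag "bread → milk" as a weak (slightly negative) association despite its 0.5 support.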
The user’s intent to seek online information has been an active area of research in user profiling. User profiling considers user characteristics, behaviors, activities, and preferences to sketch user intentions, interests, and motivations. Determining user characteristics can help capture implicit and explicit preferences and intentions for effective user-centric and customized content presentation. The user’s complete online experience in seeking information is a blend of activities such as searching, verifying, and sharing it on social platforms. However, a combination of multiple behaviors in profiling users has yet to be considered. This research takes a novel approach and explores user intent types based on multidimensional online behavior in information acquisition. It examines information search, verification, and dissemination behavior and identifies diverse types of users based on their online engagement using machine learning. The research proposes a generic user profile template that explains user characteristics based on internet experience and uses it as ground truth for data annotation. User feedback on online behavior and practices was collected using a survey method. The participants include both males and females of different ages and from different occupation sectors. The collected data is subject to feature engineering, and the significant features are presented to unsupervised machine learning methods to identify user intent classes or profiles and their characteristics. Different techniques are evaluated, and the K-Means clustering method successfully generates five user groups exhibiting different user characteristics, with an average silhouette of 0.36 and a distortion score of 1136. Feature averages are computed to identify user intent type characteristics. The user intent classes are then further generalized to create a user intent template with an inter-rater reliability of 75%. This research successfully extracts different user types based on their preferences in online content, platforms, criteria, and frequency. The study also validates the proposed template on user feedback data through an inter-rater agreement process using an external human rater.
Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge the research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include: investigating the performance of various classifiers, such as Gaussian Naïve Bayes, K Nearest Neighbors (KNN), Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset’s characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study’s outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware. Additionally, insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
Big data and information and communication technologies can be important to the effectiveness of smart cities. With growing attention to smart city sustainability, the development of data-driven smart cities has recently gained attention as a vital technology for addressing sustainability problems. Real-time monitoring of pollution allows local authorities to analyze the present traffic condition of cities and make decisions. Air pollution in particular is a major environmental problem in smart city environments. Deep learning (DL) approaches have rapidly expanded and penetrated almost every domain, including air pollution forecasting. Therefore, this article develops a new Coot Optimization Algorithm with an Ensemble Deep Learning based Air Pollution Prediction (COAEDL-APP) system for sustainable smart cities. The proposed COAEDL-APP algorithm accurately forecasts air quality in the sustainable smart city environment. To achieve this, the COAEDL-APP technique initially performs a linear scaling normalization (LSN) approach to pre-process the input data. For air quality prediction, an ensemble of three DL models is involved, namely an autoencoder (AE), long short-term memory (LSTM), and a deep belief network (DBN). Furthermore, a COA-based hyperparameter tuning procedure is designed to adjust the hyperparameter values of the DL models. The simulation outcome of the COAEDL-APP algorithm was tested on an air quality database, and the outcomes demonstrate the improved performance of the COAEDL-APP algorithm over other existing systems, with a maximum accuracy of 98.34%.
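The LSN pre-processing step is ordinary min-max scaling. The sketch below assumes LSN maps each input series linearly onto a target interval before it reaches the AE/LSTM/DBN ensemble; the pollutant readings are hypothetical:

```python
def linear_scaling_normalize(values, lo=0.0, hi=1.0):
    """Linear scaling normalization (LSN): map raw readings onto [lo, hi]."""
    v_min, v_max = min(values), max(values)
    span = v_max - v_min
    return [lo + (v - v_min) * (hi - lo) / span for v in values]

pm25 = [12.0, 60.0, 36.0, 84.0]        # hypothetical PM2.5 readings
scaled = linear_scaling_normalize(pm25)
print(scaled)                           # min maps to 0.0, max to 1.0
```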
Software-Defined Networking (SDN) represents a significant paradigm shift in network architecture, separating network logic from the underlying forwarding devices to enhance flexibility and centralize deployment. Concurrently, the Internet of Things (IoT) connects numerous devices to the Internet, enabling autonomous interactions with minimal human intervention. However, implementing and managing an SDN-IoT system is inherently complex, particularly for those with limited resources, as the dynamic and distributed nature of IoT infrastructures creates security and privacy challenges during SDN integration. The findings of this study underscore the primary security and privacy challenges across the application, control, and data planes. A comprehensive review evaluates the root causes of these challenges and the defense techniques employed in prior works to establish sufficient secrecy and privacy protection. Recent investigations have explored cutting-edge methods, such as leveraging blockchain for transaction recording to enhance security and privacy, along with applying machine learning and deep learning approaches to identify and mitigate the impacts of Denial of Service (DoS) and Distributed DoS (DDoS) attacks. Moreover, the analysis indicates that encryption and hashing techniques are prevalent in the data plane, whereas access control and certificate authorization are prominently considered in the control plane, and authentication is commonly employed within the application plane. Additionally, this paper outlines future directions, offering insights into potential strategies and technological advancements aimed at fostering a more secure and privacy-conscious SDN-based IoT ecosystem.
Cancer frequently develops resistance to the majority of chemotherapy treatments. This study aimed to examine the synergistic cytotoxic and antitumor effects of the SGLT2 inhibitors Canagliflozin (CAN), Dapagliflozin (DAP), and Empagliflozin (EMP) combined with Doxorubicin (DOX), using in vitro experimentation. The combination of CAN+DOX was found to greatly enhance the cytotoxic effects of doxorubicin (DOX) in MCF-7 cells. Interestingly, cancer cells exhibit an increased demand for glucose and ATP to support their growth. Notably, when these medications were combined with DOX, there was a considerable inhibition of glucose consumption, as well as reductions in intracellular ATP and lactate levels. Moreover, this effect was found to be dependent on the dosages of the drugs. In addition to effectively inhibiting the cell cycle, the combination of CAN+DOX induces substantial modifications in both cell cycle and apoptotic gene expression. This work represents the initial report on the beneficial impact of the SGLT2 inhibitor medications CAN, DAP, and EMP on responsiveness to the anticancer properties of DOX. The underlying molecular mechanisms potentially involve suppression of SGLT2 function.
Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high pixel intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM)-based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture and vessel-based features provides an effective ML model for aiding diagnosis. Model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four different pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals varying natural pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as the mean, standard deviation, skewness, and percentiles computed on the filtered images. Notably, the study achieves a considerable improvement in categorizing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the implemented methodology.
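The histogram-based features named above (mean, standard deviation, skewness, percentile) can be computed directly from pixel intensities. A plain-Python sketch on a toy intensity list; the nearest-rank percentile indexing is an implementation choice of this sketch, not necessarily the paper's:

```python
import math

def histogram_stats(pixels, pct=90):
    """Mean, standard deviation, skewness, and a percentile of pixel
    intensities — histogram-based features for CXR categorization."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)  # population std
    skew = sum(((p - mean) / std) ** 3 for p in pixels) / n
    ordered = sorted(pixels)
    idx = min(n - 1, round(pct / 100 * (n - 1)))  # nearest-rank percentile
    return mean, std, skew, ordered[idx]

# One bright outlier (e.g. an opacity) skews the toy distribution rightward
mean, std, skew, p90 = histogram_stats([10, 20, 20, 30, 120])
print(round(mean, 1), round(std, 1), round(skew, 2), p90)  # 40.0 40.5 1.41 120
```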
Serial remote sensing images offer a valuable means of tracking the evolutionary changes and growth of a specific geographical area over time. Although the original images may provide limited insights, they harbor considerable potential for identifying clusters and patterns. The aggregation of these serial remote sensing images (SRSI) becomes increasingly viable as distinct patterns emerge in diverse scenarios, such as suburbanization, the expansion of native flora, and agricultural activities. We propose a novel method for extracting sequential patterns by combining Ant Colony Optimization (ACO) and Empirical Mode Decomposition (EMD). This integration of EMD and ACO techniques proves remarkably effective in identifying the most significant characteristic features within serial remote sensing images, guided by specific criteria. Our findings highlight a substantial improvement in the efficiency of sequential pattern mining through this hybrid method, which integrates EMD and ACO for feature selection. This study demonstrates the potential of the methodology, particularly in the realms of urbanization, native vegetation expansion, and agricultural activities.
Vehicular ad hoc networks (VANETs) provide intelligent navigation and efficient route management, resulting in time savings and cost reductions in the transportation sector. However, the exchange of beacons and messages over public channels among vehicles and roadside units renders these networks vulnerable to numerous attacks and privacy violations. To address these challenges, several privacy and security preservation protocols based on blockchain and public key cryptography have been proposed recently. However, most of these schemes are limited by long execution times and massive communication costs, which make them inefficient for on-board units (OBUs). Additionally, some of them are still susceptible to many attacks. As such, this study presents a novel protocol based on the fusion of elliptic curve cryptography (ECC) and bilinear pairing (BP) operations. The formal security analysis is accomplished using Burrows–Abadi–Needham (BAN) logic, demonstrating that our scheme is verifiably secure. The informal security assessment also shows that the scheme provides salient security features, such as non-repudiation, anonymity, and unlinkability. Moreover, the scheme is shown to be resilient against attacks such as packet replays, forgeries, message falsifications, and impersonations. From the performance perspective, the protocol yields a 37.88% reduction in communication overheads and a 44.44% improvement in the supported security features. Therefore, the proposed scheme can be deployed in VANETs to provide robust security at low overheads.
Background: Personal hygiene in non-self-sufficient patients is essential to prevent the proliferation and spread of bacteria from one patient to another, both through inanimate objects (fomites) and directly through healthcare workers. The first 1000 bed hygiene treatments performed by the collaborative robot “COPERNICO Surveillance & Prevention” in 229 non-self-sufficient patients were analyzed. Materials and Methods: A total of 229 patients were included: 215 came from emergency contexts or home, and 14 from long-term care facilities. The presence of sepsis, venous or urinary catheters, non-invasive ventilation, bedsores, clinical condition at discharge, and treatment sessions performed were recorded. All patients were hospitalized in the Geriatrics, Medicine, and Pneumology departments. The system is able to collect and process data in real time. Results: Seventy-one patients with community-acquired sepsis and fourteen with healthcare-associated infections were treated; sixty-two had pressure ulcers. The analysis of the first 1000 treatments shows the healing of almost all sepsis cases, positive evolution of pressure ulcers, and hospital stays comparable to those of the entire group of 1008 patients hospitalized in the same period. There was no onset of side effects or complications. Conclusions: Although the healthcare setting is not among those at greatest risk of infections, the clinical efficacy, along with excellent evaluations from patients, family members, and healthcare personnel and the absence of side effects and complications, makes the system exceptionally manageable and user-friendly for non-self-sufficient patients.
Algorithms for steganography are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information embedding methods use images with little information or low payload, but contemporary research aims to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting its superiority over state-of-the-art (SOTA) methods.
Funding: Funded by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, and by the Ongoing Research Funding Program (ORF-2025-14), King Saud University, Riyadh, Saudi Arabia, under Project ORF-2025-.
Abstract: Face liveness detection is essential for securing biometric authentication systems against spoofing attacks, including printed photos, replay videos, and 3D masks. This study systematically evaluates pre-trained CNN models (DenseNet201, VGG16, InceptionV3, ResNet50, VGG19, MobileNetV2, Xception, and InceptionResNetV2), leveraging transfer learning and fine-tuning to enhance liveness detection performance. The models were trained and tested on the NUAA and Replay-Attack datasets, with cross-dataset generalization validated on SiW-MV2 to assess real-world adaptability. Performance was evaluated using accuracy, precision, recall, FAR, FRR, HTER, and specialized spoof detection metrics (APCER, NPCER, ACER). Fine-tuning significantly improved detection accuracy, with DenseNet201 achieving the highest performance (98.5% on NUAA, 97.71% on Replay-Attack), while MobileNetV2 proved the most efficient model for real-time applications (latency: 15 ms, memory usage: 45 MB, energy consumption: 30 mJ). A statistical significance analysis (paired t-tests, confidence intervals) validated these improvements. Cross-dataset experiments identified DenseNet201 and MobileNetV2 as the most generalizable architectures, with DenseNet201 achieving 86.4% accuracy on Replay-Attack when trained on NUAA, demonstrating robust feature extraction and adaptability. In contrast, ResNet50 showed lower generalization capability, struggling with dataset variability and complex spoofing attacks. These findings suggest that MobileNetV2 is well suited for low-power applications, while DenseNet201 is ideal for high-security environments requiring superior accuracy. This research provides a framework for improving real-time face liveness detection, enhancing biometric security, and guiding future advancements in AI-driven anti-spoofing techniques.
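The aggregate error metrics used in this evaluation combine per-class error rates in a fixed way; a small sketch with hypothetical counts (the datasets' actual tallies are not reproduced here):

```python
# Hedged sketch of the spoof-detection error metrics named in the abstract.
# All counts below are invented for illustration.

def far(false_accepts, total_attacks):
    """False Acceptance Rate: spoof attempts wrongly accepted as live."""
    return false_accepts / total_attacks

def frr(false_rejects, total_genuine):
    """False Rejection Rate: genuine faces wrongly rejected."""
    return false_rejects / total_genuine

def hter(far_value, frr_value):
    """Half Total Error Rate: the mean of FAR and FRR."""
    return (far_value + frr_value) / 2

def acer(apcer, npcer):
    """Average Classification Error Rate: the mean of APCER and NPCER."""
    return (apcer + npcer) / 2

# Hypothetical evaluation: 1000 attack and 1000 genuine presentations.
f = far(30, 1000)   # 3% of spoofs accepted
r = frr(10, 1000)   # 1% of genuine faces rejected
```

A low HTER requires both error rates to be low simultaneously, which is why it is preferred over raw accuracy on imbalanced live/spoof splits.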
Abstract: Background: Erythrodermic psoriasis (EP) is a rare, severe variant of psoriasis characterized by widespread erythema, scaling, and systemic complications. Despite advances in systemic treatments, the management of EP remains challenging, particularly in patients with comorbidities or contraindications to standard therapies. Objectives: To evaluate the effectiveness of ozonated water as an adjunctive treatment for EP, delivered using a patented robotic therapy system designed for hygiene and infection prevention in non-self-sufficient patients. Methods: We report the case of a 90-year-old male patient with acute EP who received daily skin treatments with ozonated water in conjunction with supportive care, including rehydration and antibiotics. The intervention was facilitated by the robotic system “COPERNICO Surveillance & Prevention,” which ensured standardized hygiene practices and clinical documentation. Results: Within one week of treatment, the patient showed complete desquamation of necrotic skin, resolution of erythema, and significant metabolic recovery. Fever subsided, renal function improved, and the patient was discharged in stable condition. Follow-up confirmed sustained clinical improvement, and no adverse events were reported. Conclusions: Ozonated water demonstrated efficacy in alleviating the dermatological and systemic manifestations of EP in a high-risk elderly patient. This case highlights the potential of ozone therapy as a safe, cost-effective adjunctive treatment for EP and underscores the utility of robotic systems in managing complex dermatological conditions. Further research is warranted to validate these findings in larger cohorts.
Funding: Supported by the Department of Information Technology, University of Tabuk, Tabuk 71491, Saudi Arabia.
Abstract: An adaptive, robust, and secure framework plays a vital role in implementing the intelligent automation and decentralized decision-making of Industry 5.0. Latency, privacy risks, and the complexity of industrial networks have hindered traditional cloud-based learning systems. To overcome these challenges, we propose EdgeGuard-IoT, a 6G edge intelligence framework that enhances the cybersecurity and operational resilience of the smart grid by integrating Secure Federated Learning (SFL) and Adaptive Anomaly Detection (AAD) at the edge. With 6G ultra-reliable low-latency communication (URLLC), AI-based network orchestration, and massive machine-type communication (mMTC), EdgeGuard-IoT brings real-time, distributed intelligence to the edge, mitigates data transmission risks, and enhances privacy. Through a hierarchical federated learning framework, EdgeGuard-IoT enables edge devices to collaboratively train models without revealing sensitive grid data, which is crucial for real-time power anomaly detection and decentralized energy management in the smart grid. The adaptive anomaly detection mechanism, driven by hybrid AI models, immediately raises an alert when grid stability is threatened by cyber threats, faults, or abnormal energy distribution, thereby keeping the grid stable and resilient. The proposed framework also adopts blockchain-based security measures and zero-trust authentication techniques to reduce the risks of adversarial attacks and model poisoning during federated learning. EdgeGuard-IoT shows superior detection accuracy, response time, and scalability at much reduced communication overhead in extensive simulations and real-world smart grid case studies. This research pioneers a 6G-driven federated intelligence model designed for secure, self-optimizing, and resilient Industry 5.0 ecosystems, paving the way for next-generation autonomous smart grids and industrial cyber-physical systems.
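The hierarchical federated learning described in this abstract rests on an aggregation step at the server. A minimal sketch in the FedAvg style is shown below; this is an assumption, since the abstract does not name the aggregation rule, and the two-parameter client models are hypothetical:

```python
# Hedged sketch of federated averaging: edge devices send model weights,
# and the aggregator averages them weighted by local sample counts, so no
# raw grid data ever leaves a device.

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Hypothetical round: two edge devices with 2-parameter models.
clients = [[0.2, 0.4], [0.6, 0.8]]
sizes = [100, 300]          # local training-sample counts
global_w = federated_average(clients, sizes)
```

The client holding more samples pulls the global model toward its local weights, which is the core privacy-preserving trade-off of the scheme.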
Abstract: Skin cancer is among the most common malignancies worldwide, but its mortality burden is largely driven by aggressive subtypes such as melanoma, with outcomes varying across regions and healthcare settings. These variations emphasize the importance of reliable diagnostic technologies that support clinicians in detecting skin malignancies with higher accuracy. Traditional diagnostic methods often rely on subjective visual assessments, which can lead to misdiagnosis. This study addresses these challenges by developing HybridFusionNet, a novel model that integrates Convolutional Neural Networks (CNN) with 1D feature extraction techniques to enhance diagnostic accuracy. Utilizing two extensive datasets, BCN20000 and HAM10000, the methodology includes data preprocessing, application of the Synthetic Minority Oversampling Technique combined with Edited Nearest Neighbors (SMOTEENN) for data balancing, and optimization of feature selection using the Tree-based Pipeline Optimization Tool (TPOT). The results demonstrate significant performance improvements over traditional CNN models, achieving an accuracy of 0.9693 on the BCN20000 dataset and 0.9909 on the HAM10000 dataset. The HybridFusionNet model not only outperforms conventional methods but also effectively addresses class imbalance. To enhance transparency, it integrates post-hoc explanation techniques such as LIME, which highlight the features influencing predictions. These findings highlight the potential of HybridFusionNet to support real-world applications, including physician-assist systems, teledermatology, and large-scale skin cancer screening programs. By improving diagnostic efficiency and enabling access to expert-level analysis, the model may enhance patient outcomes and foster greater trust in artificial intelligence (AI)-assisted clinical decision-making.
Abstract: Parkinson’s disease (PD) is a progressive neurodegenerative disorder characterized by tremors, rigidity, and decreased movement. PD poses risks to individuals’ lives and independence. Early detection of PD is essential because it allows timely intervention, which can slow disease progression and improve outcomes. Manual diagnosis of PD is problematic because it is difficult to capture the subtle patterns and changes that help diagnose the disease. In addition, subjectivity and the shortage of doctors relative to the number of patients are obstacles to early diagnosis. Artificial intelligence (AI) techniques, especially deep and automated learning models, provide promising solutions to address the deficiencies of manual diagnosis. This study develops robust systems for PD diagnosis by analyzing handwritten spiral and wave graphical images. Handwritten graphic images from the PD dataset are enhanced using two overlapping filters, the average filter and the Laplacian filter, to improve image quality and highlight essential features. The enhanced images are segmented to isolate regions of interest (ROIs) from the rest of the image using a gradient vector flow (GVF) algorithm, which ensures that features are extracted only from relevant regions. The segmented ROIs are fed into convolutional neural network (CNN) models, namely DenseNet169, MobileNet, and VGG16, to extract fine and deep feature maps that capture complex patterns and representations relevant to PD diagnosis. The fine and deep feature maps extracted from the individual CNN models are combined into fused feature vectors for the DenseNet169-MobileNet, MobileNet-VGG16, DenseNet169-VGG16, and DenseNet169-MobileNet-VGG16 models. This fusion technique aims to combine complementary and robust features from several models, improving the extracted features. Two feature selection algorithms are considered to remove redundancy and weak correlations within the combined feature set: Ant Colony Optimization (ACO) and Maximum Entropy Score-based Selection (MESbS). These algorithms identify and retain the most strongly correlated features while eliminating redundant and weakly correlated ones, thus optimizing the features to improve system performance. The fused and enhanced feature vectors are fed into two powerful classifiers, XGBoost and random forest (RF), for accurate classification and differentiation between individuals with PD and healthy controls. The proposed hybrid systems show superior performance: the RF classifier using the combined features from the DenseNet169-MobileNet-VGG16 models with the ACO feature selection method achieved outstanding results, with an area under the curve (AUC) of 99%, sensitivity of 99.6%, 99.3% accuracy, 99.35% accuracy, and 99.65% specificity.
Abstract: The implementation of countermeasure techniques (CTs) in Network-on-Chip (NoC)-based Multiprocessor System-on-Chip (MPSoC) routers against the Flooding Denial-of-Service Attack (F-DoSA) falls under Multi-Criteria Decision-Making (MCDM) due to three main concerns: traffic variations, multiple traffic-feature-based evaluation criteria, and the prioritization of NoC routers as alternatives. In this study, we propose a comprehensive evaluation of various NoC traffic features to identify the most efficient routers under F-DoSA scenarios. Consequently, an MCDM approach is essential to address these emerging challenges. Since recent MCDM approaches have issues such as uncertainty, this study utilizes Fuzzy-Weighted Zero-Inconsistency (FWZIC) to estimate the criteria weight values and the Fuzzy Decision by Opinion Score Method (FDOSM) to rank the routers, both extended with single-valued neutrosophic fuzzy sets (named SvN-FWZIC and SvN-FDOSM) to overcome the ambiguity. The results obtained using the SvN-FWZIC method indicate that the max packet count has the highest importance among the evaluated criteria, with a weight of 0.1946. In contrast, the hop count is identified as the least significant criterion, with a weight of 0.1090. The remaining criteria fall within a range of intermediate importance: enqueue time scores 0.1845, packet count decremented and traversal index score 0.1262 each, packet count incremented scores 0.1124, and packet count index scores 0.1472. In terms of ranking, SvN-FDOSM has two approaches: individual and group. Both ranking processes show that Router 4 is the most effective router, while Router 3 is the weakest router under F-DoSA. The sensitivity analysis shows high ranking stability across all 10 scenarios. This approach offers essential feedback for proper decision-making in the design of countermeasure techniques for NoC-based MPSoCs.
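Using the SvN-FWZIC weights reported in this abstract, the final ranking step can be sketched as a plain weighted sum; note that the per-router normalized scores below are hypothetical, and the actual FDOSM opinion-score aggregation is more involved than this sketch:

```python
# Illustrative weighted-sum MCDM ranking with the criteria weights reported
# in the abstract. Router scores are invented for demonstration.

WEIGHTS = {
    "max_packet_count": 0.1946,
    "enqueue_time": 0.1845,
    "packet_count_index": 0.1472,
    "packet_count_decremented": 0.1262,
    "traversal_index": 0.1262,
    "packet_count_incremented": 0.1124,
    "hop_count": 0.1090,
}

def weighted_score(normalized_scores):
    """Weighted sum over all criteria (scores assumed normalized to [0, 1])."""
    return sum(WEIGHTS[c] * normalized_scores[c] for c in WEIGHTS)

routers = {  # hypothetical normalized per-criterion scores
    "Router 3": {c: 0.4 for c in WEIGHTS},
    "Router 4": {c: 0.9 for c in WEIGHTS},
}
ranking = sorted(routers, key=lambda r: weighted_score(routers[r]), reverse=True)
```

A quick sanity check on any MCDM weight vector is that the weights sum to 1 (here they do, up to rounding of the published values).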
Abstract: Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This approach may overlook the differences and potential value of various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, the Abstract Syntax Tree (AST), Code Dependency Network (CDN), and static code quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated using ten PROMISE defect repository projects, and performance is assessed with three metrics: F1, area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.
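Two of the three evaluation metrics named in this abstract (F1 and MCC) can be computed directly from binary confusion-matrix counts; the counts in the sketch below are hypothetical, and AUC is omitted because it requires ranked prediction scores rather than hard labels:

```python
# Hedged sketch of F1 and the Matthews correlation coefficient for binary
# defect prediction. Example counts are invented for illustration.
import math

def f1(tp, fp, fn):
    """F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient, in [-1, 1]; 0 means chance level."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical outcome on 100 modules: 30 TP, 55 TN, 10 FP, 5 FN.
f1_score = f1(30, 10, 5)
mcc_score = mcc(30, 55, 10, 5)
```

MCC is often preferred alongside F1 because it accounts for true negatives and stays informative on imbalanced defect datasets.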
Funding: The author extends her appreciation to the Deanship of Scientific Research at King Saud University for funding this work through the Undergraduate Research Support Program, Project no. (URSP-3-18-89).
Abstract: It is a common observation that whenever patients arrive at the front desk of a hospital, outpatient clinic, or other health-associated center, they must first queue up in a line and wait to fill in a registration form to be admitted. The long waiting time without any status updates is the most common complaint and a concern for health officials. In this paper, UrNext, a location-aware mobile-based solution using Bluetooth Low Energy (BLE) technology, is presented to solve the problem. Recently, a technology-oriented approach, the Internet of Things (IoT), has been gaining popularity in helping to solve some of the healthcare sector’s problems. The implementation of this solution can be illustrated through a simple example: when a patient arrives at a clinic for a consultation, instead of having to wait in a long line, that patient is greeted automatically and receives a push notification of admittance along with an estimated waiting time for the consultation session. This not only provides patients with a sense of freedom but also reduces the uncertainty levels that are generally observed, thus saving both time and money. This work aims to improve clinics’ quality of service, organize queues, and minimize waiting times, leading to patients’ comfort while reducing the burden on nurses and receptionists. The results demonstrate that the presented system is successful in its performance and helps achieve a pleasant and conducive clinic visitation process with higher productivity.
Abstract: Speech recognition systems have become a distinctive family of human-computer interaction (HCI) technologies. Speech is one of the most naturally developed human abilities, and speech signal processing opens up a transparent and hands-free computation experience. This paper presents a retrospective yet modern approach to the world of speech recognition systems. The development journey of Automatic Speech Recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing speech recognition systems is presented, along with a brief discussion of various modern-day developments and applications in this domain. This review aims to summarize the field and provide a starting point for those entering the vast field of speech signal processing. Since speech recognition has vast potential in industries such as telecommunication, emotion recognition, and healthcare, this review should be helpful to researchers who aim to explore further applications that society can readily adopt in future years of evolution.
Abstract: Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, we aim to enhance the quality of analysis of the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits retailers' specific goals, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
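The rule-quality measures that such post-processing typically filters on (support, confidence, lift) can be sketched on a toy transaction set; the transactions below are invented, and THAPE's actual descriptive-predictive pipeline is considerably richer than these three measures:

```python
# Minimal sketch of standard association-rule quality measures.
# Transactions are hypothetical market-basket sets.

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """P(consequent | antecedent) estimated from the transactions."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

def lift(antecedent, consequent, transactions):
    """Confidence relative to the consequent's base rate; >1 means positive association."""
    return (confidence(antecedent, consequent, transactions)
            / support(consequent, transactions))

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
```

A post-processor would drop, for example, any rule whose lift is at or below 1, since such a rule adds nothing over the consequent's base rate.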
Abstract: The user’s intent to seek online information has been an active area of research in user profiling. User profiling considers user characteristics, behaviors, activities, and preferences to sketch user intentions, interests, and motivations. Determining user characteristics can help capture implicit and explicit preferences and intentions for effective user-centric and customized content presentation. The user’s complete online experience in seeking information is a blend of activities such as searching, verifying, and sharing it on social platforms. However, a combination of multiple behaviors has not previously been considered in profiling users. This research takes a novel approach and explores user intent types based on multidimensional online behavior in information acquisition. It examines information search, verification, and dissemination behavior and identifies diverse types of users based on their online engagement using machine learning. The research proposes a generic user profile template that explains user characteristics based on internet experience and uses it as ground truth for data annotation. User feedback on online behavior and practices was collected using a survey method. Participants included males and females of different ages and from different occupation sectors. The collected data is subjected to feature engineering, and the significant features are presented to unsupervised machine learning methods to identify user intent classes or profiles and their characteristics. Different techniques are evaluated, and the K-Means clustering method successfully generates five user groups exhibiting different characteristics, with an average silhouette of 0.36 and a distortion score of 1136. Feature averages are computed to identify the characteristics of each user intent type. The user intent classes are then further generalized to create a user intent template with an inter-rater reliability of 75%. This research successfully extracts different user types based on their preferences in online content, platforms, criteria, and frequency. The study also validates the proposed template on user feedback data through an inter-rater agreement process using an external human rater.
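The silhouette measure used above to assess the K-Means clusters can be illustrated for a single sample; the 1D feature values below are invented, and the sample's own point is excluded from its cluster list:

```python
# Hedged sketch of the per-sample silhouette coefficient: compare the mean
# distance to the sample's own cluster (a) against the mean distance to the
# nearest other cluster (b). Values near 1 indicate a well-placed sample.

def silhouette(point, own_cluster, other_clusters):
    """Silhouette for one sample, using 1D absolute distances for simplicity."""
    a = sum(abs(point - p) for p in own_cluster) / len(own_cluster)
    b = min(sum(abs(point - p) for p in cluster) / len(cluster)
            for cluster in other_clusters)
    return (b - a) / max(a, b)

# Hypothetical sample at 1.0 in a tight cluster, far from the other group.
s = silhouette(1.0, own_cluster=[1.2, 0.8], other_clusters=[[5.0, 5.5]])
```

The study's reported average of 0.36 corresponds to moderately separated clusters; averaging this per-sample score over all samples yields that figure.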
Funding: This research work is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R411), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection, and understanding their relative effectiveness can inform the selection of optimal detection methods and improve overall security measures. This study bridges that research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include: investigating the performance of classifiers such as Gaussian Naïve Bayes, K-Nearest Neighbors (KNN), Stochastic Gradient Descent Classifier (SGDC), and Decision Tree in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of the different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset’s characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes the various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware. Additionally, insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
Funding: Funded by the Deanship of Scientific Research (DSR), King Abdulaziz University (KAU), Jeddah, Saudi Arabia, under Grant No. (IFPIP: 631-612-1443).
Abstract: Big data and information and communication technologies can be important to the effectiveness of smart cities. With growing attention to smart city sustainability, data-driven smart cities have recently gained attention as a vital technology for addressing sustainability problems. Real-time monitoring of pollution allows local authorities to analyze the current traffic condition of cities and make decisions. Air pollution is a major environmental problem in smart city environments. Deep learning (DL) approaches have quickly grown in influence and penetrated almost every domain, including air pollution forecasting. Therefore, this article develops a new Coot Optimization Algorithm with Ensemble Deep Learning based Air Pollution Prediction (COAEDL-APP) system for sustainable smart cities. The projected COAEDL-APP algorithm accurately forecasts air quality in the sustainable smart city environment. To achieve this, the COAEDL-APP technique initially performs linear scaling normalization (LSN) to pre-process the input data. For air quality prediction, an ensemble of three DL models is involved, namely an autoencoder (AE), long short-term memory (LSTM), and a deep belief network (DBN). Furthermore, a COA-based hyperparameter tuning procedure is designed to adjust the hyperparameter values of the DL models. The simulation outcome of the COAEDL-APP algorithm was tested on an air quality database, and the outcomes demonstrated improved performance over other existing systems, with a maximum accuracy of 98.34%.
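Assuming LSN denotes standard min-max scaling (the abstract does not spell out the formula), the pre-processing step can be sketched as follows; the pollutant readings are invented for illustration:

```python
# Hedged sketch of linear scaling normalization: map each feature column
# to [0, 1] so that differently scaled pollutant readings are comparable
# before being fed to the DL ensemble.

def linear_scale(values):
    """Min-max scale a feature column to [0, 1]; constant columns map to 0.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

pm25 = [12.0, 35.0, 150.0, 58.0]   # hypothetical PM2.5 readings
scaled = linear_scale(pm25)
```

In deployment, the training-set minimum and maximum would be stored and reused to scale incoming readings, so that the model always sees consistently normalized inputs.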
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 62341208) and the Natural Science Foundation of Zhejiang Province (Grant Nos. LY23F020006 and LR23F020001). Moreover, it has been supported by Islamic Azad University under Grant No. 133713281361.
Abstract: Software-Defined Networking (SDN) represents a significant paradigm shift in network architecture, separating network logic from the underlying forwarding devices to enhance flexibility and centralize deployment. Concurrently, the Internet of Things (IoT) connects numerous devices to the Internet, enabling autonomous interactions with minimal human intervention. However, implementing and managing an SDN-IoT system is inherently complex, particularly for those with limited resources, as the dynamic and distributed nature of IoT infrastructures creates security and privacy challenges during SDN integration. The findings of this study underscore the primary security and privacy challenges across the application, control, and data planes. A comprehensive review evaluates the root causes of these challenges and the defense techniques employed in prior works to establish sufficient secrecy and privacy protection. Recent investigations have explored cutting-edge methods, such as leveraging blockchain for transaction recording to enhance security and privacy, along with applying machine learning and deep learning approaches to identify and mitigate the impacts of Denial of Service (DoS) and Distributed DoS (DDoS) attacks. Moreover, the analysis indicates that encryption and hashing techniques are prevalent in the data plane, whereas access control and certificate authorization are prominently considered in the control plane, and authentication is commonly employed within the application plane. Additionally, this paper outlines future directions, offering insights into potential strategies and technological advancements aimed at fostering a more secure and privacy-conscious SDN-based IoT ecosystem.
Funding: Funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. KEP-1-166-41. The authors therefore acknowledge DSR with thanks for its technical and financial support.
Abstract: Cancer frequently develops resistance to the majority of chemotherapy treatments. This study aimed to examine the synergistic cytotoxic and antitumor effects of the SGLT2 inhibitors Canagliflozin (CAN), Dapagliflozin (DAP), and Empagliflozin (EMP) combined with Doxorubicin (DOX), using in vitro experimentation. The combination of CAN+DOX was found to greatly enhance the cytotoxic effects of doxorubicin (DOX) in MCF-7 cells. Interestingly, it was shown that cancer cells exhibit an increased demand for glucose and ATP to support their growth. Notably, when these medications were combined with DOX, there was a considerable inhibition of glucose consumption, as well as reductions in intracellular ATP and lactate levels. Moreover, this effect was found to be dependent on the dosages of the drugs. In addition to effectively inhibiting the cell cycle, the combination of CAN+DOX induces substantial modifications in both cell-cycle and apoptotic gene expression. This work represents the initial report on the beneficial impact of the SGLT2 inhibitor medications CAN, DAP, and EMP on responsiveness to the anticancer properties of DOX. The underlying molecular mechanisms potentially involve suppression of SGLT2 function.
Abstract: Manual investigation of chest radiography (CXR) images by physicians is crucial for effective decision-making in COVID-19 diagnosis. However, the high demand during the pandemic necessitates auxiliary help through image analysis and machine learning techniques. This study presents a multi-threshold-based segmentation technique to probe high-pixel-intensity regions in CXR images of various pathologies, including normal cases. Texture information is extracted using gray-level co-occurrence matrix (GLCM)-based features, while vessel-like features are obtained using Frangi, Sato, and Meijering filters. Machine learning models employing Decision Tree (DT) and Random Forest (RF) approaches are designed to categorize CXR images into common lung infections, lung opacity (LO), COVID-19, and viral pneumonia (VP). The results demonstrate that the fusion of texture and vessel-based features provides an effective ML model for aiding diagnosis. The model validation using performance measures, including an accuracy of approximately 91.8% with an RF-based classifier, supports the usefulness of the feature set and classifier model in categorizing the four different pathologies. Furthermore, the study investigates the importance of the devised features in identifying the underlying pathology and incorporates histogram-based analysis. This analysis reveals varying natural pixel distributions in CXR images belonging to the normal, COVID-19, LO, and VP groups, motivating the incorporation of additional features such as the mean, standard deviation, skewness, and percentiles computed on the filtered images. Notably, the study achieves a considerable improvement in distinguishing COVID-19 from LO, with a true positive rate of 97%, further substantiating the effectiveness of the implemented methodology.
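As an illustration of the texture step, a gray-level co-occurrence matrix for a tiny image can be computed by hand. This minimal sketch uses a single horizontal offset of one pixel with no symmetry or normalization options, so it only approximates what a library such as scikit-image's `graycomatrix` provides; the 4×4 "image" is invented for the example:

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for offset (0, 1): count horizontal
    pairs (i, j) where gray level i sits immediately left of gray level j."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

# Toy image with 4 gray levels
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
for row in glcm(img, 4):
    print(row)
```

GLCM features such as contrast or homogeneity are then scalar statistics over this matrix, which is what feeds the DT and RF classifiers alongside the vessel-filter responses.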
Abstract: Serial remote sensing images offer a valuable means of tracking the evolutionary changes and growth of a specific geographical area over time. Although the original images may provide limited insights, they harbor considerable potential for identifying clusters and patterns. The aggregation of these serial remote sensing images (SRSI) becomes increasingly viable as distinct patterns emerge in diverse scenarios, such as suburbanization, the expansion of native flora, and agricultural activities. We propose a novel method for extracting sequential patterns by combining Ant Colony Optimization (ACO) and Empirical Mode Decomposition (EMD). This integration of EMD and ACO techniques proves remarkably effective in identifying the most significant characteristic features within serial remote sensing images, guided by specific criteria. Our findings highlight a substantial improvement in the efficiency of sequential pattern mining through this hybrid method, which seamlessly integrates EMD and ACO for feature selection. This study demonstrates the potential of the methodology, particularly in the realms of urbanization, native vegetation expansion, and agricultural activities.
Funding: Supported by the Teaching Reform Project of Shenzhen University of Technology under Grant No. 20231016.
Abstract: Vehicular ad hoc networks (VANETs) provide intelligent navigation and efficient route management, resulting in time savings and cost reductions in the transportation sector. However, the exchange of beacons and messages over public channels among vehicles and roadside units renders these networks vulnerable to numerous attacks and privacy violations. To address these challenges, several privacy and security preservation protocols based on blockchain and public key cryptography have been proposed recently. However, most of these schemes are limited by long execution times and massive communication costs, which make them inefficient for on-board units (OBUs). Additionally, some of them are still susceptible to many attacks. As such, this study presents a novel protocol based on the fusion of elliptic curve cryptography (ECC) and bilinear pairing (BP) operations. The formal security analysis is accomplished using Burrows–Abadi–Needham (BAN) logic, demonstrating that our scheme is verifiably secure. The proposed scheme’s informal security assessment also shows that it provides salient security features, such as non-repudiation, anonymity, and unlinkability. Moreover, the scheme is shown to be resilient against attacks such as packet replays, forgeries, message falsifications, and impersonations. From the performance perspective, this protocol yields a 37.88% reduction in communication overheads and a 44.44% improvement in the supported security features. Therefore, the proposed scheme can be deployed in VANETs to provide robust security at low overheads.
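A communication-overhead figure like the 37.88% above is a relative reduction against a baseline's total message size. A sketch of the computation, where the byte counts are hypothetical placeholders rather than the paper's actual message sizes:

```python
def percent_reduction(baseline, proposed):
    """Relative reduction of `proposed` with respect to `baseline`, in percent."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical per-round communication costs in bytes
baseline_bytes = 1320   # e.g., sum of message sizes in a comparison scheme
proposed_bytes = 820    # e.g., sum of message sizes in the proposed scheme
print(f"{percent_reduction(baseline_bytes, proposed_bytes):.2f}% reduction")
```

The same formula applied to the count of supported security features (e.g., 9 versus a baseline that misses 4) would yield the 44.44% improvement style of figure reported above.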
Abstract: Background: Personal hygiene in non-self-sufficient patients is essential to prevent the proliferation and spread of bacteria from one patient to another, both through inanimate objects (fomites) and directly through healthcare workers. The first 1000 bed-hygiene treatments performed by the collaborative robot “COPERNICO Surveillance & Prevention” in 229 non-self-sufficient patients were analyzed. Materials and Methods: A total of 229 patients were included: 215 patients came from emergency contexts or home, and 14 from long-term care facilities; the presence of sepsis, venous or urinary catheters, non-invasive ventilation, bedsores, clinical condition at discharge, and treatment sessions performed were recorded. All patients were hospitalized in the Geriatrics, Medicine and Pneumology departments. The system is able to collect and process data in real time. Results: Seventy-one patients with community-acquired sepsis and fourteen with healthcare-associated infections were treated; sixty-two had pressure ulcers. The analysis of the first 1000 treatments shows the healing of almost all sepsis cases, positive evolution of pressure ulcers, and hospital stays comparable to those of the entire group of 1008 patients hospitalized in the same period. There was no onset of side effects or complications. Conclusions: Although the healthcare setting is not among those at greatest risk of infections, the clinical efficacy, along with excellent evaluations from patients, family members, and healthcare personnel and the absence of side effects and complications, makes the system exceptionally manageable and user-friendly for non-self-sufficient patients.
Funding: Financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number R.G.P.2/549/44.
Abstract: Steganography algorithms are methods of hiding data within media files. Several machine learning architectures have recently been presented to improve stego-image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information-embedding methods may use images with little information or a low payload, but the goal of most contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or a cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, exhibiting its superiority over state-of-the-art methods.
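At prediction time, the cover/stego decision of a trained SVM reduces to the sign of its decision function. A minimal sketch with a linear kernel, where the weights, bias, and feature vectors are invented for illustration (a real pipeline would learn them from the Curvelet-based feature vectors):

```python
def svm_decision(weights, bias, features):
    """Linear SVM decision value: w . x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def classify(weights, bias, features):
    """Label an image 'stego' if the decision value is positive, else 'cover'."""
    return "stego" if svm_decision(weights, bias, features) > 0 else "cover"

w = [0.7, -0.2, 1.1]   # hypothetical learned weights over 3 Curvelet features
b = -0.5               # hypothetical bias
print(classify(w, b, [1.0, 0.5, 0.2]))  # decision is about 0.7 - 0.1 + 0.22 - 0.5 = 0.32
print(classify(w, b, [0.1, 0.9, 0.1]))  # decision is about 0.07 - 0.18 + 0.11 - 0.5 = -0.5
```

Kernelized SVMs generalize this by replacing the dot product with a kernel over the support vectors, which is typically how such steganalysis classifiers handle non-linearly separable payload classes.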