As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme’s efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
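The additive-homomorphic aggregation step can be sketched as follows. The abstract does not name a specific scheme, so this toy uses the Paillier cryptosystem, with insecurely small illustrative primes, to show how an aggregator can sum encrypted meter readings without seeing any individual value.

```python
# Toy Paillier-based aggregation of meter readings (illustrative parameters only;
# real deployments need large primes and a vetted crypto library).
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):  # toy primes, NOT secure key sizes
    n = p * q
    g = n + 1
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    L = (pow(g, lam, n * n) - 1) // n
    mu = pow(L, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
readings = [17, 42, 8, 23]                 # per-household meter readings
ciphers = [encrypt(pub, m) for m in readings]
agg = 1
for c in ciphers:                          # aggregator multiplies ciphertexts...
    agg = (agg * c) % (pub[0] ** 2)
total = decrypt(pub, priv, agg)            # ...which decrypts to the SUM
print(total)                               # 90, no single reading is revealed
```

The key property used is that multiplying Paillier ciphertexts adds their plaintexts, so the aggregator never needs the secret key.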
The explosive expansion of Internet of Things (IoT) systems has increased the imperative for strong and robust cybersecurity solutions, especially to curtail Distributed Denial of Service (DDoS) attacks, which can cripple critical infrastructure. The framework presented in the current paper is a new hybrid scheme that combines deep learning-based traffic classification with blockchain-enabled mitigation to provide intelligent, decentralized, and real-time DDoS countermeasures in an IoT network. The proposed model uses a Convolutional Neural Network (CNN) architecture to extract deep features, fuses them with statistical features, and trains traditional machine-learning algorithms on the combined representation, which yields higher detection accuracy than statistical features alone. A permissioned blockchain records threat cases immutably and automatically executes mitigation measures through smart contracts, providing transparency and resilience. When tested on two benchmark datasets, BoT-IoT and IoT-23, the framework obtains a maximum F1-score of 97.5% and only a 1.8% false positive rate, which compares favorably to other solutions in both effectiveness and response time. Our findings support the feasibility of our method as an extensible and secure paradigm for next-generation IoT security, with particular utility in mission-critical or resource-constrained settings. The work is a substantial milestone toward autonomous and trustworthy mitigation of DDoS attacks through intelligent learning and decentralized enforcement.
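The feature-fusion idea can be sketched as follows; the "deep" extractor here is a trivial mock standing in for the CNN, and all feature choices, reference flows, and thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch of deep + statistical feature fusion for flow classification.
# mock_deep_features stands in for a CNN embedding; everything is illustrative.
from statistics import mean, stdev

def statistical_features(pkt_sizes):
    # simple flow statistics: mean/std/max packet size and packet count
    return [mean(pkt_sizes), stdev(pkt_sizes), max(pkt_sizes), len(pkt_sizes)]

def mock_deep_features(pkt_sizes):
    # trivial stand-in for a learned CNN embedding of the raw flow
    return [sum(pkt_sizes) % 7, sum(s * s for s in pkt_sizes) % 11]

def fused_vector(pkt_sizes):
    # concatenation is the fusion step: deep features + statistical features
    return mock_deep_features(pkt_sizes) + statistical_features(pkt_sizes)

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# toy reference flows: steady benign traffic vs. a DDoS-like burst of tiny packets
benign = fused_vector([512, 498, 530, 505])
attack = fused_vector([64, 64, 64, 64, 64, 64, 64, 64])

def classify(flow):
    # nearest-centroid stand-in for the traditional ML classifier
    v = fused_vector(flow)
    return "ddos" if euclidean(v, attack) < euclidean(v, benign) else "benign"

print(classify([64, 64, 64, 64, 64, 64, 64, 63]))  # ddos
```

In the paper's pipeline the fused vector would instead feed a trained classifier; the point here is only that fusion is a concatenation of two feature views of the same flow.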
As an essential tool for quantitative analysis of lower limb coordination, optical motion capture systems with marker-based encoding still suffer from inefficiency, high costs, spatial constraints, and the requirement for multiple markers. While 3D pose estimation algorithms combined with ordinary cameras offer an alternative, their accuracy often deteriorates under significant body occlusion. To address the challenge of insufficient 3D pose estimation precision in occluded scenarios, which hinders the quantitative analysis of athletes' lower-limb coordination, this paper proposes a multimodal training framework integrating spatiotemporal dependency networks with text-semantic guidance. Compared to traditional optical motion capture systems, this work achieves low-cost, high-precision motion parameter acquisition through the following innovations: (1) a spatiotemporal dependency attention module is designed to establish dynamic spatiotemporal correlation graphs via cross-frame joint semantic matching, effectively resolving the feature fragmentation issue in existing methods; (2) a noise-suppressed multi-scale temporal module is proposed, leveraging KL divergence-based information gain analysis for progressive feature filtering in long-range dependencies, reducing errors by 1.91 mm compared to conventional temporal convolutions; (3) a text-pose contrastive learning paradigm is introduced for the first time, in which BERT-encoded action descriptions are aligned with geometric pose features, significantly enhancing robustness under severe occlusion (50% joint invisibility). On the Human3.6M dataset, the proposed method achieves an MPJPE of 56.21 mm under Protocol 1, outperforming the state-of-the-art baseline MHFormer by 3.3%. Extensive ablation studies on Human3.6M demonstrate the individual contributions of the core modules: the spatiotemporal dependency module and the noise-suppressed multi-scale temporal module reduce MPJPE by 0.30 and 0.34 mm, respectively, while the multimodal training strategy further decreases MPJPE by 0.6 mm through text-skeleton contrastive learning. Comparative experiments involving 16 athletes show that the sagittal plane coupling angle measurements of the hip-ankle joints differ by less than 1.2° from those obtained via traditional optical systems (two one-sided t-tests, p < 0.05), validating real-world reliability. This study provides an AI-powered analytical solution for competitive sports training, serving as a viable alternative to specialized equipment.
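The MPJPE figures quoted above are mean per-joint position errors; for reference, the metric itself is simply the average Euclidean distance (in mm) between predicted and ground-truth 3D joint positions:

```python
# MPJPE (Mean Per-Joint Position Error), the standard 3D pose metric.
def mpjpe(pred, gt):
    # pred, gt: lists of (x, y, z) joint coordinates in mm
    assert len(pred) == len(gt)
    total = 0.0
    for (px, py, pz), (gx, gy, gz) in zip(pred, gt):
        total += ((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2) ** 0.5
    return total / len(pred)

gt   = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
pred = [(3.0, 4.0, 0.0), (100.0, 0.0, 5.0)]
print(mpjpe(pred, gt))  # (5.0 + 5.0) / 2 = 5.0 mm
```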
RESTful APIs have been adopted as the standard way of developing web services, allowing for smooth communication between clients and servers. Their simplicity, scalability, and compatibility have made them crucial to modern web environments. However, the increased adoption of RESTful APIs has simultaneously exposed these interfaces to significant security threats that jeopardize the availability, confidentiality, and integrity of web services. This survey focuses exclusively on RESTful APIs, providing an in-depth perspective distinct from studies addressing other API types such as GraphQL or SOAP. We highlight concrete threats, such as injection attacks and insecure direct object references (IDOR), to illustrate the evolving risk landscape. Our work systematically reviews state-of-the-art detection methods, including static code analysis and penetration testing, and proposes a novel taxonomy that categorizes vulnerabilities such as authentication and authorization issues. Unlike existing taxonomies focused on general web or network-level threats, our taxonomy emphasizes API-specific design flaws and operational dependencies, offering a more granular and actionable framework for RESTful API security. By critically assessing current detection methodologies and identifying key research gaps, we offer a structured framework that advances the understanding and mitigation of RESTful API vulnerabilities. Ultimately, this work aims to drive significant advancements in API security, thereby enhancing the resilience of web services against evolving cyber threats.
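As a concrete illustration of the IDOR class of flaw mentioned above (the data model and handler names are hypothetical, not from the survey):

```python
# IDOR in miniature: the vulnerable handler trusts the client-supplied record id;
# the fixed handler adds an object-level ownership check.
RECORDS = {1: {"owner": "alice", "body": "alice's invoice"},
           2: {"owner": "bob",   "body": "bob's invoice"}}

def get_record_vulnerable(user, record_id):
    # e.g. GET /api/records/<id> with no ownership check: any authenticated
    # user can enumerate ids and read other users' records
    return RECORDS[record_id]["body"]

def get_record_fixed(user, record_id):
    rec = RECORDS.get(record_id)
    if rec is None or rec["owner"] != user:
        return None  # a real API would return 403/404 here
    return rec["body"]

print(get_record_vulnerable("alice", 2))  # leaks bob's invoice
print(get_record_fixed("alice", 2))       # None
```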
Situated in the southwestern Pacific, the Tonga-Kermadec subduction zone is separated into two parts by the Louisville Ridge Seamount Chain (LRSC): the Tonga subduction zone and the Kermadec subduction zone. Known for its vigorous volcanic activity, frequent large earthquakes, rapid plate subduction, and distinctive subducting plate morphology, this subduction zone provides valuable insights into subduction structures, dynamics, and associated geohazards. This study compiles geological and geophysical datasets in this region, including seismicity, focal mechanisms, seismic reflection and refraction profiles, and seismic tomography, to understand the relationship between lithospheric structures of the subduction system and associated seismic and volcanic activity. Our analysis suggests that variations in overlying sediment thickness, subduction rate, and subduction angle significantly influence the lithospheric deformation processes within the Tonga-Kermadec subduction system. Furthermore, these factors contribute to the notable differences in seismicity and volcanism observed between the Tonga subduction zone and the Kermadec subduction zone. This study enhances our understanding of plate tectonics by providing insights into the interplay between subduction dynamics and lithospheric deformation, which is crucial for analyzing geological and geophysical behaviors in similar subduction environments.
This paper presents a high-security medical image encryption method that leverages a novel and robust sine-cosine map. The map demonstrates remarkable chaotic dynamics over a wide range of parameters. We employ nonlinear analytical tools to thoroughly investigate the dynamics of the chaotic map, which allows us to select optimal parameter configurations for the encryption process. Our findings indicate that the proposed sine-cosine map is capable of generating a rich variety of chaotic attractors, an essential characteristic for effective encryption. The encryption technique is based on bit-plane decomposition, wherein a plain image is divided into distinct bit planes. These planes are organized into two matrices: one containing the most significant bit planes and the other housing the least significant ones. The subsequent phases of chaotic confusion and diffusion utilize these matrices to enhance security. An auxiliary matrix is then generated, comprising the combined bit planes that yield the final encrypted image. Experimental results demonstrate that our proposed technique achieves a commendable level of security for safeguarding sensitive patient information in medical images. Image quality is evaluated using the Structural Similarity Index (SSIM), yielding values close to zero for encrypted images and approaching one for decrypted images. Additionally, the entropy values of the encrypted images are near 8, with a Number of Pixel Change Rate (NPCR) and Unified Average Change Intensity (UACI) exceeding 99.50% and 33%, respectively. Furthermore, quantitative assessments of occlusion attacks, along with comparisons to leading algorithms, validate the integrity and efficacy of our medical image encryption approach.
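The bit-plane decomposition step can be sketched as follows; the split into most- and least-significant plane groups is a minimal lossless version of the two matrices described above, without the chaotic confusion and diffusion stages.

```python
# Bit-plane decomposition of an 8-bit grayscale image: each pixel is split into
# 8 binary planes; planes 4-7 form the "most significant" group and planes 0-3
# the "least significant" one. The split is lossless, as the round trip shows.
def to_bit_planes(image):
    planes = [[[(px >> b) & 1 for px in row] for row in image] for b in range(8)]
    return planes[4:], planes[:4]   # (MSB planes, LSB planes)

def from_bit_planes(msb, lsb):
    planes = lsb + msb              # planes[b] holds bit b of every pixel
    h, w = len(planes[0]), len(planes[0][0])
    return [[sum(planes[b][i][j] << b for b in range(8)) for j in range(w)]
            for i in range(h)]

img = [[0, 255], [170, 85]]         # tiny 2x2 test image
msb, lsb = to_bit_planes(img)
print(from_bit_planes(msb, lsb))    # [[0, 255], [170, 85]], lossless round trip
```

In the full scheme, the chaotic map would permute and diffuse these plane matrices before recombination; here only the decomposition itself is shown.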
Environmental transition can potentially influence cardiovascular health, and investigating the relationship between such transition and heart disease has important applications. This study uses federated learning (FL) in this context and investigates the link between climate change and heart disease. A dataset containing environmental, meteorological, and health-related factors such as blood sugar, cholesterol, maximum heart rate, and fasting ECG is used with machine learning models to identify hidden patterns and relationships. Algorithms such as federated learning, XGBoost, random forest, support vector classifier, extra tree classifier, k-nearest neighbor, and logistic regression are used. A framework for diagnosing heart disease is designed using FL along with the other models. Experiments involve discriminating healthy subjects from heart patients and obtain an accuracy of 94.03%. The proposed FL-based framework proves superior to existing techniques in terms of usability, dependability, and accuracy. This study paves the way for screening people for early heart disease detection and continuous monitoring in telemedicine and remote care. Personalized treatment can also be planned with customized therapies.
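A minimal sketch of the federated idea underlying such a framework, assuming a FedAvg-style protocol (the exact aggregation rule is not specified in the abstract): each site takes a local gradient step of logistic regression on its own patients, and only model weights, never raw health records, are averaged by the server. Data and step size are illustrative assumptions.

```python
# FedAvg sketch: local logistic-regression steps at each "hospital", then
# server-side weight averaging. All data here is synthetic and illustrative.
import math

def local_step(w, data, lr=0.1):
    # one gradient-descent step of logistic regression on the site's data
    g = [0.0] * len(w)
    for x, y in data:
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i, xi in enumerate(x):
            g[i] += (p - y) * xi
    return [wi - lr * gi / len(data) for wi, gi in zip(w, g)]

def fedavg_round(w, sites):
    locals_ = [local_step(w, d) for d in sites]              # runs at each site
    return [sum(ws) / len(locals_) for ws in zip(*locals_)]  # server averages

sites = [[((1.0, 0.2), 0), ((1.0, 0.9), 1)],   # site A: (bias, risk factor), label
         [((1.0, 0.1), 0), ((1.0, 0.8), 1)]]   # site B
w = [0.0, 0.0]
for _ in range(200):
    w = fedavg_round(w, sites)

# after training, a higher risk-factor value yields a higher predicted risk
risk = lambda x: 1 / (1 + math.exp(-(w[0] + w[1] * x)))
print(risk(0.9) > risk(0.1))  # True
```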
In the context of an increasingly severe cybersecurity landscape and the growing complexity of offensive and defensive techniques, Zero Trust Networks (ZTN) have emerged as a widely recognized technology. Zero Trust not only addresses the shortcomings of traditional perimeter security models but also consistently follows the fundamental principle of“never trust,always verify.”Initially proposed by John Kindervag in 2010 and subsequently promoted by Google, the Zero Trust model has become a key approach to addressing the ever-growing security threats in complex network environments. This paper systematically compares the current mainstream cybersecurity models, thoroughly explores the advantages and limitations of the Zero Trust model, and provides an in-depth review of its components and key technologies. Additionally, it analyzes the latest research achievements in the application of Zero Trust technology across various fields, including network security, 6G networks, the Internet of Things (IoT), and cloud computing, in the context of specific use cases. The paper also discusses the innovative contributions of the Zero Trust model in these fields and the challenges it faces, and proposes corresponding solutions and future research directions.
Urdu, a prominent subcontinental language, serves as a versatile means of communication. However, its handwritten expressions present challenges for optical character recognition (OCR). While various OCR techniques have been proposed, most of them focus on recognizing printed Urdu characters and digits. To the best of our knowledge, very little research has focused solely on pure handwritten Urdu recognition, and the results of such proposed methods are often inadequate. In this study, we introduce a novel approach to recognizing Urdu handwritten digits and characters using Convolutional Neural Networks (CNN). Our proposed method utilizes convolutional layers to extract important features from input images and classifies them using fully connected layers, enabling efficient and accurate detection of Urdu handwritten digits and characters. We implemented the proposed technique on a large publicly available dataset of Urdu handwritten digits and characters. The findings demonstrate that the CNN model achieves an accuracy of 98.30% and an F1 score of 88.6%, indicating its effectiveness in detecting and classifying Urdu handwritten digits and characters. These results have far-reaching implications for various applications, including document analysis, text recognition, and language understanding, which have previously been unexplored in the context of Urdu handwriting data. This work lays a solid foundation for future research and development in Urdu language detection and processing, opening up new opportunities for advancement in this field.
The increase in the number of people using the Internet leads to increased cyberattack opportunities. Advanced Persistent Threats, or APTs, are among the most dangerous targeted cyberattacks. APT attacks utilize various advanced tools and techniques for attacking targets with specific goals. Even countries with advanced technologies, like the US, Russia, the UK, and India, are susceptible to this targeted attack. APT is a sophisticated attack that involves multiple stages and specific strategies. Moreover, the TTPs (Tactics, Techniques, and Procedures) involved in an APT attack are commonly new and developed by the attacker to evade the security system. However, APTs are generally implemented in multiple stages; if one of the stages is detected, a defense mechanism can be applied for subsequent stages, leading to the failure of the entire APT attack. Detection at an early stage of an APT and prediction of the next step in the APT kill chain are ongoing challenges. This survey paper provides knowledge about APT attacks and their essential steps, followed by case studies of known APT attacks, which give clear information about the APT attack process. Later sections highlight the various detection methods defined by different researchers, along with the limitations of that work. Data used in this article come from the various annual reports published by security experts and blogs, and from information released by the enterprise networks targeted by the attacks.
The brain is a complex network system in which a large number of neurons are widely connected to each other and transmit signals to one another. The memory characteristic of memristors makes them suitable for simulating neuronal synapses with plasticity. In this paper, a memristor is used to simulate a synapse, a discrete small-world neuronal network is constructed based on Rulkov neurons, and its dynamical behavior is explored. We explore the influence of system parameters on the dynamical behaviors of the discrete small-world network; the system shows a variety of firing patterns, such as spiking firing and triangular burst firing, when the neuronal parameter α is changed. The results of a numerical simulation based on MATLAB show that the network topology can affect the synchronous firing behavior of the neuronal network: the higher the reconnection probability and the number of nearest neighbors, the more significant the synchronization state of the neurons. In addition, increasing the coupling strength of the memristor synapses promotes synchronization performance. The results of this paper can boost research into complex neuronal networks coupled with memristor synapses and further promote the development of neuroscience.
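For reference, the Rulkov map used as the node model can be sketched as follows; the parameter values are common illustrative choices, not necessarily those used in the paper, and no memristive coupling is included in this single-neuron sketch.

```python
# The Rulkov map: a two-dimensional discrete neuron model with a fast
# (membrane) variable x and a slow variable y; alpha shapes the firing pattern.
def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=0.001):
    x_new = alpha / (1 + x * x) + y   # fast subsystem
    y_new = y - mu * (x - sigma)      # slow drift
    return x_new, y_new

x, y = -1.0, -3.0
trace = []
for _ in range(2000):
    x, y = rulkov_step(x, y)
    trace.append(x)

# the trajectory stays bounded and makes spike-like excursions across zero
print(max(trace) > 0 > min(trace))  # True
```

In the paper's network, many such units would be coupled through memristive synapse terms on a small-world graph; this sketch shows only the isolated node dynamics.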
The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained 98% accuracy under Gaussian noise and a rotation accuracy surpassing 99%.
Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, resource-poor languages like Urdu remain a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Persian, Pashto, Turkish, Punjabi, Saraiki, and more. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Subsequent to this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, achieving a significantly higher accuracy of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
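The quoted evaluation metrics can be computed per class from confusion-matrix counts; a quick sketch with illustrative counts (not the paper's figures):

```python
# Precision, recall, and F1 for one class of a multi-label task,
# computed from true positives (tp), false positives (fp), false negatives (fn).
def prf1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f = prf1(tp=80, fp=20, fn=10)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.889 0.842
```

Macro-averaging these per-class values over the five labels gives the overall scores typically reported.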
Amid the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) stands as a conglomerate of physical servers whose performance can be hindered by bottlenecks within the realm of proliferating CC services. A linchpin in CC's performance, the Cloud Service Broker (CSB), orchestrates DC selection. Failure to adroitly route user requests to suitable DCs transforms the CSB into a bottleneck, endangering service quality. To tackle this, deploying an efficient CSB policy becomes imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Amidst numerous CSB policies, their implementation grapples with challenges like costs and availability. This article undertakes a holistic review of diverse CSB policies while concurrently surveying the predicaments confronted by current policies. The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development. Additionally, it extensively clarifies the various DC selection methodologies employed in CC, enriching practitioners and researchers alike. Employing synthetic analysis, the article systematically assesses and compares myriad DC selection techniques. These analytical insights equip decision-makers with a pragmatic framework to discern the apt technique for their needs. In summary, this review underscores the paramount importance of adept CSB policies in DC selection, highlighting the imperative role of efficient CSB policies in optimizing CC performance. By emphasizing the significance of these policies and their modeling implications, the article contributes to both the general modeling discourse and its practical applications in the CC domain.
Breast cancer remains a significant global health challenge, necessitating effective early detection and prognosis to enhance patient outcomes. Current diagnostic methods, including mammography and MRI, suffer from limitations such as uncertainty and imprecise data, leading to late-stage diagnoses. To address this, various expert systems have been developed, but many rely on type-1 fuzzy logic and lack mobile-based applications for data collection and feedback to healthcare practitioners. This research investigates the development of an Enhanced Mobile-based Fuzzy Expert System (EMFES) for breast cancer pre-growth prognosis. The study explores the use of type-2 fuzzy logic to enhance accuracy and model uncertainty effectively. Additionally, it evaluates the advantages of employing the Python programming language over Java for implementation and considers specific risk factors for data collection. The research aims to dynamically generate fuzzy rules, adapting to evolving breast cancer research and patient data. Key research questions focus on the comparative effectiveness of type-2 fuzzy logic, the handling of uncertainty and imprecise data, the integration of mobile-based features, the choice of programming language, and the creation of dynamic fuzzy rules. Furthermore, the study examines the differences between the Mamdani Inference System and the Sugeno Fuzzy Inference method and explores challenges and opportunities in deploying the EMFES on mobile devices. The research identifies a critical gap in existing breast cancer diagnostic systems, emphasizing the need for a comprehensive, mobile-enabled, and adaptable solution, and develops an EMFES that leverages type-2 fuzzy logic, the Sugeno Inference Algorithm, Python programming, and dynamic fuzzy rule generation. This study seeks to enhance early breast cancer detection and ultimately reduce breast cancer-related mortality.
This study explores the area of Author Profiling (AP) and its importance in several industries, including forensics, security, marketing, and education. A key component of AP is the extraction of useful information from text, with an emphasis on the writers' ages and genders. To improve the accuracy of AP tasks, the study develops an ensemble model dubbed ABMRF that combines AdaBoostM1 (ABM1) and Random Forest (RF). The work uses an extensive technique that involves text-message dataset preprocessing, model training, and assessment. To evaluate their effectiveness in classifying age and gender, several machine learning (ML) algorithms are compared, including Composite Hypercube on Random Projection (CHIRP), Decision Trees (J48), Naïve Bayes (NB), K-Nearest Neighbor, AdaBoostM1, NB-Updatable, RF, and ABMRF. The findings demonstrate that ABMRF regularly beats the competition, with a gender classification accuracy of 71.14% and an age classification accuracy of 54.29%. Additional metrics like precision, recall, F-measure, Matthews Correlation Coefficient (MCC), and accuracy support ABMRF's outstanding performance in age and gender profiling tasks. This study demonstrates the usefulness of ABMRF as an ensemble model for author profiling and highlights its possible uses in marketing, law enforcement, and education. The results emphasize the effectiveness of ensemble approaches in enhancing author profiling task accuracy, particularly for age and gender identification.
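The voting-ensemble idea behind ABMRF can be sketched as follows; the two base classifiers are trivial mocks standing in for the trained AdaBoostM1 and Random Forest models, and every rule and threshold is an illustrative assumption, not a learned value.

```python
# Soft-voting ensemble sketch for the age-profiling task: average the
# probability estimates of two base classifiers and threshold the result.
def clf_a(text):
    # stand-in for AdaBoostM1: probability the author is an adult
    return 0.8 if "lol" not in text else 0.2

def clf_b(text):
    # stand-in for Random Forest, keyed on message length
    return 0.7 if len(text) > 40 else 0.35

def ensemble(text, threshold=0.5):
    score = (clf_a(text) + clf_b(text)) / 2   # soft (average) voting
    return "adult" if score >= threshold else "teen"

print(ensemble("Please confirm the meeting schedule for tomorrow."))  # adult
print(ensemble("lol brb"))                                            # teen
```

The real ensemble would average calibrated probabilities from trained models; the structure of the combination step is what the sketch illustrates.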
Gliomas are aggressive brain tumors known for their heterogeneity, unclear borders, and diverse locations on Magnetic Resonance Imaging (MRI) scans. These factors present significant challenges for MRI-based segmentation, a crucial step for effective treatment planning and monitoring of glioma progression. This study proposes a novel deep learning framework, ResNet Multi-Head Attention U-Net (ResMHA-Net), to address these challenges and enhance glioma segmentation accuracy. ResMHA-Net leverages the strengths of both residual blocks from the ResNet architecture and multi-head attention mechanisms. This powerful combination empowers the network to prioritize informative regions within the 3D MRI data and capture long-range dependencies. By doing so, ResMHA-Net effectively segments intricate glioma sub-regions and reduces the impact of uncertain tumor boundaries. We rigorously trained and validated ResMHA-Net on the BraTS 2018, 2019, 2020, and 2021 datasets. Notably, ResMHA-Net achieved superior segmentation accuracy on the BraTS 2021 dataset compared to the previous years, demonstrating its remarkable adaptability and robustness across diverse datasets. Furthermore, we collected the predicted masks obtained from three datasets to enhance survival prediction, effectively augmenting the dataset size. Radiomic features were then extracted from these predicted masks and, along with clinical data, were used to train a novel ensemble learning-based machine learning model for survival prediction. This model employs a voting mechanism that aggregates predictions from multiple models, leading to significant improvements over existing methods. This ensemble approach capitalizes on the strengths of various models, resulting in more accurate and reliable predictions for patient survival. Importantly, we achieved an accuracy of 73% for overall survival (OS) prediction.
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on the bug identification problem. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is transferred to other datasets; and an ensemble method, which is utilized to explore the increase in performance from combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. It reveals that an amalgam of techniques such as those used in this study, namely feature selection, transfer learning, and ensemble methods, proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
Over the past few years,the application and usage of Machine Learning(ML)techniques have increased exponentially due to continuously increasing the size of data and computing capacity.Despite the popularity of ML tech...Over the past few years,the application and usage of Machine Learning(ML)techniques have increased exponentially due to continuously increasing the size of data and computing capacity.Despite the popularity of ML techniques,only a few research studies have focused on the application of ML especially supervised learning techniques in Requirement Engineering(RE)activities to solve the problems that occur in RE activities.The authors focus on the systematic mapping of past work to investigate those studies that focused on the application of supervised learning techniques in RE activities between the period of 2002–2023.The authors aim to investigate the research trends,main RE activities,ML algorithms,and data sources that were studied during this period.Forty-five research studies were selected based on our exclusion and inclusion criteria.The results show that the scientific community used 57 algorithms.Among those algorithms,researchers mostly used the five following ML algorithms in RE activities:Decision Tree,Support Vector Machine,Naïve Bayes,K-nearest neighbour Classifier,and Random Forest.The results show that researchers used these algorithms in eight major RE activities.Those activities are requirements analysis,failure prediction,effort estimation,quality,traceability,business rules identification,content classification,and detection of problems in requirements written in natural language.Our selected research studies used 32 private and 41 public data sources.The most popular data sources that were detected in selected studies are the Metric Data Programme from NASA,Predictor Models in Software Engineering,and iTrust Electronic Health Care System.展开更多
Funding: supported by the National Key R&D Program of China (No. 2023YFB2703700); the National Natural Science Foundation of China (Nos. U21A20465, 62302457, 62402444, 62172292); the Fundamental Research Funds of Zhejiang Sci-Tech University (Nos. 23222092-Y, 22222266-Y); the Program for Leading Innovative Research Team of Zhejiang Province (No. 2023R01001); the Zhejiang Provincial Natural Science Foundation of China (Nos. LQ24F020008, LQ24F020012); the Foundation of State Key Laboratory of Public Big Data (No. [2022]417); and the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (No. 2023C01119).
Abstract: As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges for data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
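The additive homomorphism at the heart of such privacy-preserving aggregation can be illustrated with a minimal Paillier-style sketch (toy, insecure demo primes; the paper's actual cryptosystem and parameters are not specified here). The aggregator multiplies ciphertexts and recovers only the sum, never individual readings.

```python
import random
from math import gcd

# Minimal Paillier cryptosystem sketch (tiny demo primes -- NOT secure).
def keygen(p=293, q=433):
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)  # valid because g = n+1 gives L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pk, sk = keygen()
readings = [17, 25, 8]                       # per-household meter readings
ciphers = [encrypt(pk, m) for m in readings]
agg = 1
for c in ciphers:
    agg = (agg * c) % (pk[0] ** 2)           # ciphertext product = plaintext sum
print(decrypt(pk, sk, agg))                  # 50
```

The aggregator needs only the public key to combine ciphertexts; decryption of the total happens at the control center holding the secret key.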
Abstract: The explosive expansion of Internet of Things (IoT) systems has made strong, robust cybersecurity solutions imperative, especially for curtailing Distributed Denial of Service (DDoS) attacks, which can cripple critical infrastructure. The framework presented in this paper is a new hybrid scheme that combines deep learning-based traffic classification with blockchain-enabled mitigation to deliver intelligent, decentralized, real-time DDoS countermeasures in an IoT network. The proposed model uses a Convolutional Neural Network (CNN) architecture to extract deep features, fuses them with statistical features, and trains traditional machine-learning algorithms on the fused representation, yielding more accurate detection than statistical features alone. A permissioned blockchain records threat cases immutably and automatically executes mitigation measures through smart contracts, providing transparency and resilience. When tested on two datasets, BoT-IoT and IoT-23, the framework obtains a maximum F1-score of 97.5% with only a 1.8% false positive rate, which compares favorably to other solutions in both effectiveness and response time. Our findings support the feasibility of this method as an extensible and secure paradigm for next-generation IoT security, with particular utility in mission-critical or resource-constrained settings. The work is a substantial milestone toward autonomous and trustworthy DDoS mitigation through intelligent learning and decentralized enforcement.
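The fusion step described above, concatenating deep features with statistical features before a traditional learner, can be sketched as follows (random stand-in features and a tiny hand-rolled logistic-regression head; the actual CNN backbone and classifiers are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two feature sources (hypothetical shapes): deep features
# as produced by a CNN backbone, and statistical features (e.g., packet-rate
# mean/variance) computed per traffic flow.
deep_feats = rng.normal(size=(200, 16))
stat_feats = rng.normal(size=(200, 4))
labels = (deep_feats[:, 0] + stat_feats[:, 0] > 0).astype(float)

X = np.hstack([deep_feats, stat_feats])   # feature-level fusion
X = (X - X.mean(0)) / (X.std(0) + 1e-8)   # normalize before the classical learner

# Tiny logistic-regression "traditional ML" head trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - labels) / len(labels)
    b -= 0.5 * (p - labels).mean()

acc = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == labels).mean()
print(f"fused-feature training accuracy: {acc:.2f}")
```

In the paper's pipeline the same fused matrix would instead feed off-the-shelf classifiers; the point here is only that fusion is plain column-wise concatenation followed by normalization.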
Funding: supported by the Major Sports Research Projects of Jiangsu Provincial Sports Bureau in 2022 (No. ST221101).
Abstract: As an essential tool for quantitative analysis of lower-limb coordination, optical motion capture systems with marker-based encoding still suffer from inefficiency, high costs, spatial constraints, and the requirement for multiple markers. While 3D pose estimation algorithms combined with ordinary cameras offer an alternative, their accuracy often deteriorates under significant body occlusion. To address the challenge of insufficient 3D pose estimation precision in occluded scenarios, which hinders the quantitative analysis of athletes' lower-limb coordination, this paper proposes a multimodal training framework integrating spatiotemporal dependency networks with text-semantic guidance. Compared to traditional optical motion capture systems, this work achieves low-cost, high-precision motion parameter acquisition through the following innovations: (1) a spatiotemporal dependency attention module is designed to establish dynamic spatiotemporal correlation graphs via cross-frame joint semantic matching, effectively resolving the feature fragmentation issue in existing methods; (2) a noise-suppressed multi-scale temporal module is proposed, leveraging KL divergence-based information gain analysis for progressive feature filtering in long-range dependencies, reducing errors by 1.91 mm compared to conventional temporal convolutions; (3) a text-pose contrastive learning paradigm is introduced for the first time, where BERT-generated action descriptions align semantic and geometric features, significantly enhancing robustness under severe occlusion (50% joint invisibility). On the Human3.6M dataset, the proposed method achieves an MPJPE of 56.21 mm under Protocol 1, outperforming the state-of-the-art baseline MHFormer by 3.3%. Extensive ablation studies on Human3.6M demonstrate the individual contributions of the core modules: the spatiotemporal dependency module and the noise-suppressed multi-scale temporal module reduce MPJPE by 0.30 and 0.34 mm, respectively, while the multimodal training strategy further decreases MPJPE by 0.6 mm through text-skeleton contrastive learning. Comparative experiments involving 16 athletes show that sagittal-plane coupling angle measurements of the hip-ankle joints differ by less than 1.2° from those obtained via traditional optical systems (two one-sided t-tests, p<0.05), validating real-world reliability. This study provides an AI-powered analytical solution for competitive sports training, serving as a viable alternative to specialized equipment.
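Innovation (2), KL divergence-based information gain for feature filtering, can be illustrated with a hedged sketch (hypothetical feature layout, window sizes, and histogram binning; the paper's exact scoring is not specified here). A channel whose recent distribution diverges most from its long-range distribution carries the most temporal information gain:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for two histograms, normalized to probability vectors.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def kl_feature_filter(feats, keep=2, bins=16):
    # feats: (T, C) temporal feature sequence (hypothetical layout).
    # Score each channel by the KL gain between its recent-window and
    # full-sequence histograms; keep the most informative channels.
    T, C = feats.shape
    scores = []
    for c in range(C):
        short = np.histogram(feats[-T // 4:, c], bins=bins, range=(0, 1))[0] + 1.0
        full = np.histogram(feats[:, c], bins=bins, range=(0, 1))[0] + 1.0
        scores.append(kl_divergence(short, full))
    return np.sort(np.argsort(scores)[::-1][:keep])

rng = np.random.default_rng(1)
T = 200
feats = rng.uniform(size=(T, 4))
feats[-T // 4:, 0] = rng.uniform(0.8, 1.0, size=T // 4)  # channel 0 drifts late
print(kl_feature_filter(feats, keep=1))  # channel 0 has the largest KL gain
```

This is only the scoring idea; in the paper the filtering is applied progressively inside a multi-scale temporal module rather than as a one-shot histogram test.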
Abstract: RESTful APIs have been adopted as the standard way of developing web services, allowing for smooth communication between clients and servers. Their simplicity, scalability, and compatibility have made them crucial to modern web environments. However, the increased adoption of RESTful APIs has simultaneously exposed these interfaces to significant security threats that jeopardize the availability, confidentiality, and integrity of web services. This survey focuses exclusively on RESTful APIs, providing an in-depth perspective distinct from studies addressing other API types such as GraphQL or SOAP. We highlight concrete threats, such as injection attacks and insecure direct object references (IDOR), to illustrate the evolving risk landscape. Our work systematically reviews state-of-the-art detection methods, including static code analysis and penetration testing, and proposes a novel taxonomy that categorizes vulnerabilities such as authentication and authorization issues. Unlike existing taxonomies focused on general web or network-level threats, our taxonomy emphasizes API-specific design flaws and operational dependencies, offering a more granular and actionable framework for RESTful API security. By critically assessing current detection methodologies and identifying key research gaps, we offer a structured framework that advances the understanding and mitigation of RESTful API vulnerabilities. Ultimately, this work aims to drive significant advancements in API security, thereby enhancing the resilience of web services against evolving cyber threats.
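The IDOR class of flaw mentioned above comes down to trusting a client-supplied object identifier. A minimal sketch (hypothetical record store and handler names) contrasts a vulnerable handler with one that enforces ownership server-side:

```python
# Toy stand-in for a database behind a /records/<id> REST endpoint.
RECORDS = {
    101: {"owner": "alice", "body": "alice's data"},
    102: {"owner": "bob", "body": "bob's data"},
}

def get_record_vulnerable(record_id, session_user):
    # No ownership check: any authenticated user can read any row simply
    # by incrementing the id in the URL -- the classic IDOR pattern.
    return RECORDS[record_id]["body"]

def get_record_fixed(record_id, session_user):
    # Authorization is checked against server-side state, never the client.
    rec = RECORDS.get(record_id)
    if rec is None or rec["owner"] != session_user:
        return None  # a real API would return 403 or 404 here
    return rec["body"]

print(get_record_vulnerable(102, "alice"))  # leaks bob's data
print(get_record_fixed(102, "alice"))       # None
```

Static analyzers and penetration-testing tools surveyed in the paper look for exactly this missing check between the authenticated principal and the requested resource.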
Funding: supported by Special Projects in Universities' Key Fields of Guangdong Province (No. 2023ZDZX3017); the 2022 Tertiary Education Scientific Research Project of Guangzhou Municipal Education Bureau (No. 202234607); the Guangdong Basic and Applied Basic Research Foundation (No. 2025A1515012983); and the National Natural Science Foundation of China (Nos. 52371059 and 52101358).
Abstract: Situated in the southwestern Pacific, the Tonga-Kermadec subduction zone is separated into two parts by the Louisville Ridge Seamount Chain (LRSC), i.e., the Tonga subduction zone and the Kermadec subduction zone. Known for its vigorous volcanic activity, frequent large earthquakes, rapid plate subduction, and distinctive subducting plate morphology, this subduction zone provides valuable insights into its structures, dynamics, and associated geohazards. This study compiles geological and geophysical datasets in this region, including seismicity, focal mechanisms, seismic reflection and refraction profiles, and seismic tomography, to understand the relationship between lithospheric structures of the subduction system and associated seismic and volcanic activity. Our analysis suggests that variations in overlying sediment thickness, subduction rate, and subduction angle significantly influence the lithospheric deformation processes within the Tonga-Kermadec subduction system. Furthermore, these factors contribute to the notable differences in seismicity and volcanism observed between the Tonga subduction zone and the Kermadec subduction zone. This study enhances our understanding of plate tectonics by providing insights into the interplay between subduction dynamics and lithospheric deformation, which are crucial for analyzing geological and geophysical behaviors in similar subduction environments.
Abstract: This paper presents a high-security medical image encryption method that leverages a novel and robust sine-cosine map. The map demonstrates remarkable chaotic dynamics over a wide range of parameters. We employ nonlinear analytical tools to thoroughly investigate the dynamics of the chaotic map, which allows us to select optimal parameter configurations for the encryption process. Our findings indicate that the proposed sine-cosine map is capable of generating a rich variety of chaotic attractors, an essential characteristic for effective encryption. The encryption technique is based on bit-plane decomposition, wherein a plain image is divided into distinct bit planes. These planes are organized into two matrices: one containing the most significant bit planes and the other housing the least significant ones. The subsequent phases of chaotic confusion and diffusion utilize these matrices to enhance security. An auxiliary matrix is then generated, comprising the combined bit planes that yield the final encrypted image. Experimental results demonstrate that our proposed technique achieves a commendable level of security for safeguarding sensitive patient information in medical images. Image quality is evaluated using the Structural Similarity Index (SSIM), yielding values close to zero for encrypted images and approaching one for decrypted images. Additionally, the entropy values of the encrypted images are near 8, with a Number of Pixel Change Rate (NPCR) and Unified Average Change Intensity (UACI) exceeding 99.50% and 33%, respectively. Furthermore, quantitative assessments of occlusion attacks, along with comparisons to leading algorithms, validate the integrity and efficacy of our medical image encryption approach.
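A hedged sketch of the confusion-diffusion pipeline follows. The map form, seed, and parameters below are illustrative assumptions (the paper's actual sine-cosine map is not reproduced), and the bit-plane split is collapsed to pixel-level XOR for brevity:

```python
import numpy as np

def sine_cosine_map(x0, a, b, n):
    # Assumed illustrative form x_{k+1} = (a*sin(pi*x_k) + b*cos(pi*x_k)) mod 1;
    # the paper's actual map may differ.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = (a * np.sin(np.pi * x) + b * np.cos(np.pi * x)) % 1.0
        xs[i] = x
    return xs

def encrypt(img, key=(0.37, 3.9, 2.7)):
    seq = sine_cosine_map(key[0], key[1], key[2], img.size)
    stream = (seq * 256).astype(np.uint8).reshape(img.shape)
    perm = np.argsort(seq)                  # confusion: chaotic pixel permutation
    flat = img.flatten()[perm]
    return flat.reshape(img.shape) ^ stream, perm  # diffusion: XOR keystream

def decrypt(cipher, perm, key=(0.37, 3.9, 2.7)):
    seq = sine_cosine_map(key[0], key[1], key[2], cipher.size)
    stream = (seq * 256).astype(np.uint8).reshape(cipher.shape)
    flat = (cipher ^ stream).flatten()
    out = np.empty_like(flat)
    out[perm] = flat                        # invert the permutation
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 "medical image"
cipher, perm = encrypt(img)
print(np.array_equal(decrypt(cipher, perm), img))  # True
```

In the paper, confusion and diffusion operate on the MSB/LSB bit-plane matrices rather than whole pixels, but the permute-then-XOR structure is the same.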
Funding: funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Environmental transition can potentially influence cardiovascular health. Investigating the relationship between such transition and heart disease has important applications. This study uses federated learning (FL) in this context and investigates the link between climate change and heart disease. The dataset, containing environmental, meteorological, and health-related factors like blood sugar, cholesterol, maximum heart rate, and fasting ECG, is used with machine learning models to identify hidden patterns and relationships. Algorithms such as federated learning, XGBoost, random forest, support vector classifier, extra tree classifier, k-nearest neighbor, and logistic regression are used. A framework for diagnosing heart disease is designed using FL along with the other models. Experiments discriminating healthy subjects from heart patients achieve an accuracy of 94.03%. The proposed FL-based framework proves superior to existing techniques in terms of usability, dependability, and accuracy. This study paves the way for screening people for early heart disease detection and continuous monitoring in telemedicine and remote care. Personalized treatment can also be planned with customized therapies.
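The FedAvg-style loop underlying such an FL framework can be sketched as follows (synthetic "clinic" data and a hand-rolled logistic-regression client; the paper's actual models, dataset, and aggregation details are not reproduced). Only model weights, never patient records, leave each client:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_train(w, X, y, lr=0.1, epochs=20):
    # One client's local logistic-regression update on its private data.
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three hypothetical clinics, each holding private patient features.
d = 5
true_w = rng.normal(size=d)
clients = []
for _ in range(3):
    X = rng.normal(size=(120, d))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(d)
for _ in range(10):                          # FedAvg communication rounds
    local = [local_train(w_global, X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)        # server averages weights only

X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
acc = (((X_all @ w_global) > 0) == y_all).mean()
print(f"federated accuracy: {acc:.2f}")
```

Real deployments weight the average by client dataset size and add secure aggregation; both are omitted here for clarity.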
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62473146, 62072249 and 62072056); the National Science Foundation of Hunan Province (Grant No. 2024JJ3017); the Hunan Provincial Key Research and Development Program (Grant No. 2022GK2019); and the Researchers Supporting Project Number (RSP2024R509), King Saud University, Riyadh, Saudi Arabia.
Abstract: In the context of an increasingly severe cybersecurity landscape and the growing complexity of offensive and defensive techniques, Zero Trust Networks (ZTN) have emerged as a widely recognized technology. Zero Trust not only addresses the shortcomings of traditional perimeter security models but also consistently follows the fundamental principle of "never trust, always verify." Initially proposed by John Kindervag in 2010 and subsequently promoted by Google, the Zero Trust model has become a key approach to addressing the ever-growing security threats in complex network environments. This paper systematically compares the current mainstream cybersecurity models, thoroughly explores the advantages and limitations of the Zero Trust model, and provides an in-depth review of its components and key technologies. Additionally, it analyzes the latest research achievements in the application of Zero Trust technology across various fields, including network security, 6G networks, the Internet of Things (IoT), and cloud computing, in the context of specific use cases. The paper also discusses the innovative contributions of the Zero Trust model in these fields and the challenges it faces, and proposes corresponding solutions and future research directions.
Abstract: Urdu, a prominent subcontinental language, serves as a versatile means of communication. However, its handwritten expressions present challenges for optical character recognition (OCR). While various OCR techniques have been proposed, most of them focus on recognizing printed Urdu characters and digits. To the best of our knowledge, very little research has focused solely on recognition of pure Urdu handwriting, and the results of such proposed methods are often inadequate. In this study, we introduce a novel approach to recognizing Urdu handwritten digits and characters using Convolutional Neural Networks (CNN). Our proposed method utilizes convolutional layers to extract important features from input images and classifies them using fully connected layers, enabling efficient and accurate detection of Urdu handwritten digits and characters. We implemented the proposed technique on a large publicly available dataset of Urdu handwritten digits and characters. The findings demonstrate that the CNN model achieves an accuracy of 98.30% and an F1 score of 88.6%, indicating its effectiveness in detecting and classifying Urdu handwritten digits and characters. These results have far-reaching implications for various applications, including document analysis, text recognition, and language understanding, which have previously been unexplored in the context of Urdu handwriting data. This work lays a solid foundation for future research and development in Urdu language detection and processing, opening up new opportunities for advancement in this field.
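The conv-then-classify pipeline described above rests on convolutional layers extracting local features before fully connected layers classify them. A minimal NumPy forward pass (toy 8x8 glyph and a hypothetical vertical-edge kernel; no trained weights) illustrates the feature-extraction step:

```python
import numpy as np

def conv2d(img, kernel):
    # Valid-mode 2D "convolution" (really cross-correlation, as in CNN libraries).
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def maxpool2(x):
    # 2x2 max pooling; odd trailing rows/columns are trimmed.
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

# A vertical stroke in a toy 8x8 glyph; the edge kernel fires along it.
img = np.zeros((8, 8))
img[:, 3] = 1.0
edge = np.array([[1.0, -1.0], [1.0, -1.0]])
feat = maxpool2(relu(conv2d(img, edge)))   # conv -> ReLU -> pool feature map
print(feat.shape, feat.max())
```

In the trained network, many such kernels are learned from data and their pooled responses are flattened into the fully connected classifier.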
Abstract: The increase in the number of people using the Internet leads to increased cyberattack opportunities. Advanced Persistent Threats, or APTs, are among the most dangerous targeted cyberattacks. APT attacks utilize various advanced tools and techniques for attacking targets with specific goals. Even countries with advanced technologies, like the US, Russia, the UK, and India, are susceptible to this targeted attack. An APT is a sophisticated attack that involves multiple stages and specific strategies. Moreover, the TTPs (Tactics, Techniques, and Procedures) involved in an APT attack are commonly new and developed by the attacker to evade the security system. However, APTs are generally implemented in multiple stages; if one of the stages is detected, we may apply a defense mechanism for subsequent stages, leading to the failure of the entire APT attack. Detection at an early stage of an APT and prediction of the next step in the APT kill chain are ongoing challenges. This survey paper provides knowledge about APT attacks and their essential steps, followed by case studies of known APT attacks, which give clear information about the APT attack process. Later sections highlight the various detection methods defined by different researchers, along with the limitations of that work. Data used in this article come from the various annual reports published by security experts and blogs, and from information released by the enterprise networks targeted by the attacks.
Funding: supported by the Key Projects of Hunan Provincial Department of Education (Grant No. 23A0133), the Natural Science Foundation of Hunan Province (Grant No. 2022JJ30572), and the National Natural Science Foundation of China (Grant No. 62171401).
Abstract: The brain is a complex network system in which a large number of neurons are widely connected to each other and transmit signals to one another. The memory characteristic of memristors makes them suitable for simulating neuronal synapses with plasticity. In this paper, a memristor is used to simulate a synapse, a discrete small-world neuronal network is constructed based on Rulkov neurons, and its dynamical behavior is explored. We explore the influence of system parameters on the dynamical behaviors of the discrete small-world network; the system shows a variety of firing patterns, such as spiking firing and triangular burst firing, when the neuronal parameter α is changed. The results of a numerical simulation based on MATLAB show that the network topology can affect the synchronous firing behavior of the neuronal network: the higher the reconnection probability and the number of nearest neighbors, the more significant the synchronization state of the neurons. In addition, increasing the coupling strength of the memristor synapses promotes synchronization performance. The results of this paper can boost research into complex neuronal networks coupled with memristor synapses and further promote the development of neuroscience.
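The setup described above can be sketched in a few lines. The Rulkov map is given in its standard two-variable form; the memristor synapse is replaced here by plain diffusive coupling and the Watts-Strogatz parameters are illustrative, so this is a structural sketch of the small-world network, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(7)

def watts_strogatz(N, k, p):
    # Ring of N nodes, each linked to k neighbours per side, rewired w.p. p.
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for j in range(1, k + 1):
            A[i, (i + j) % N] = A[(i + j) % N, i] = True
    for i in range(N):
        for j in range(1, k + 1):
            if rng.random() < p:
                t = rng.integers(N)
                if t != i and not A[i, t]:
                    A[i, (i + j) % N] = A[(i + j) % N, i] = False
                    A[i, t] = A[t, i] = True
    return A

def simulate(N=50, steps=2000, alpha=4.3, mu=0.001, sigma=-1.1, eps=0.05):
    # Rulkov map: x' = alpha/(1+x^2) + y (fast),  y' = y - mu*(x - sigma) (slow).
    A = watts_strogatz(N, k=2, p=0.1)
    deg = A.sum(1)
    x = rng.uniform(-1, 1, N)
    y = np.full(N, -3.0)
    for _ in range(steps):
        coupling = eps * (A @ x - deg * x)   # sum over neighbours of (x_j - x_i)
        x_new = alpha / (1 + x * x) + y + coupling
        y = y - mu * (x - sigma)
        x = x_new
    return x.std()  # small spread of fast variables indicates synchronization

print(f"final spread of fast variables: {simulate():.3f}")
```

Sweeping `p`, `k`, and `eps` in this sketch mirrors the paper's experiments on rewiring probability, neighbourhood size, and synaptic coupling strength.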
Abstract: The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM) network, it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and image transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by distributing processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained 98% accuracy under Gaussian noise and a rotation accuracy surpassing 99%.
Abstract: Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, resource-poor languages like Urdu remain a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu, Turkish, Punjabi, Saraiki, and more. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Subsequent to this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, attaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
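The evaluation metrics named above are straightforward to compute per class; a minimal sketch over toy multi-class labels (the label strings are hypothetical):

```python
import numpy as np

def prf1(y_true, y_pred, positive):
    # Per-class precision, recall, and F1 in one-vs-rest form, as used to
    # compare the CNN and RNN sentiment models.
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = np.array(["pos", "neg", "pos", "neu", "pos", "neg"])
y_pred = np.array(["pos", "pos", "pos", "neu", "neg", "neg"])
p, r, f = prf1(y_true, y_pred, "pos")
print(p, r, f)  # 0.666..., 0.666..., 0.666...
```

Macro-averaging these per-class scores over all five sentiment labels gives a single figure comparable across models.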
Abstract: Amid the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) stands as a conglomerate of physical servers whose performance can be hindered by bottlenecks within the realm of proliferating CC services. A linchpin in CC performance, the Cloud Service Broker (CSB) orchestrates DC selection. Failure to adroitly route user requests to suitable DCs transforms the CSB into a bottleneck, endangering service quality. To tackle this, deploying an efficient CSB policy becomes imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Amid numerous CSB policies, their implementation grapples with challenges like costs and availability. This article undertakes a holistic review of diverse CSB policies, concurrently surveying the predicaments confronted by current policies. The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development. Additionally, it extensively clarifies various DC selection methodologies employed in CC, enriching practitioners and researchers alike. Employing synthetic analysis, the article systematically assesses and compares myriad DC selection techniques. These analytical insights equip decision-makers with a pragmatic framework to discern the apt technique for their needs. In summary, this discourse underscores the paramount importance of adept CSB policies in DC selection, highlighting the imperative role of efficient CSB policies in optimizing CC performance. By emphasizing the significance of these policies and their modeling implications, the article contributes to both the general modeling discourse and its practical applications in the CC domain.
Abstract: Breast cancer remains a significant global health challenge, necessitating effective early detection and prognosis to enhance patient outcomes. Current diagnostic methods, including mammography and MRI, suffer from limitations such as uncertainty and imprecise data, leading to late-stage diagnoses. To address this, various expert systems have been developed, but many rely on type-1 fuzzy logic and lack mobile-based applications for data collection and feedback to healthcare practitioners. This research investigates the development of an Enhanced Mobile-based Fuzzy Expert System (EMFES) for breast cancer pre-growth prognosis. The study explores the use of type-2 fuzzy logic to enhance accuracy and model uncertainty effectively. Additionally, it evaluates the advantages of employing the Python programming language over Java for implementation and considers specific risk factors for data collection. The research aims to dynamically generate fuzzy rules, adapting to evolving breast cancer research and patient data. Key research questions focus on the comparative effectiveness of type-2 fuzzy logic, the handling of uncertainty and imprecise data, the integration of mobile-based features, the choice of programming language, and the creation of dynamic fuzzy rules. Furthermore, the study examines the differences between the Mamdani Inference System and the Sugeno Fuzzy Inference method and explores challenges and opportunities in deploying the EMFES on mobile devices. The research identifies a critical gap in existing breast cancer diagnostic systems, emphasizing the need for a comprehensive, mobile-enabled, and adaptable solution by developing an EMFES that leverages type-2 fuzzy logic, the Sugeno Inference Algorithm, Python programming, and dynamic fuzzy rule generation. This study seeks to enhance early breast cancer detection and ultimately reduce breast cancer-related mortality.
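A hedged sketch of interval type-2 inference with Sugeno-style crisp consequents follows. The membership functions, blur width, rule outputs, and the crude type-reduction step are all illustrative assumptions; a production EMFES would use Karnik-Mendel type reduction and clinically derived rules:

```python
def tri(x, a, b, c):
    # Type-1 triangular membership value at x.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, a, b, c, blur=0.1):
    # Interval type-2 set: the footprint of uncertainty is modelled by
    # shifting a type-1 triangle into lower/upper bounds (a common
    # simplification, not the only construction).
    m = tri(x, a, b, c)
    return max(0.0, m - blur), min(1.0, m + blur)

def sugeno_it2(x):
    # Two illustrative rules on a normalized "risk factor" x in [0, 1]:
    #   R1: IF x is Low  THEN risk = 0.2
    #   R2: IF x is High THEN risk = 0.9
    low_l, low_u = it2_membership(x, -0.5, 0.0, 0.7)
    high_l, high_u = it2_membership(x, 0.3, 1.0, 1.5)

    def wavg(w1, w2):
        # Sugeno weighted average of the crisp rule consequents.
        return (w1 * 0.2 + w2 * 0.9) / (w1 + w2) if w1 + w2 else 0.0

    # Crude type reduction: evaluate at the lower and upper firing levels
    # and average (Karnik-Mendel is the standard, more careful procedure).
    return 0.5 * (wavg(low_l, high_l) + wavg(low_u, high_u))

print(round(sugeno_it2(0.8), 3))  # high input -> risk near the "High" consequent
```

The point of the interval (lower/upper) firing levels is exactly the type-2 advantage discussed above: uncertainty in the membership grades themselves is carried through inference instead of being collapsed early.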
Abstract: This study explores the area of Author Profiling (AP) and its importance in several industries, including forensics, security, marketing, and education. A key component of AP is the extraction of useful information from text, with an emphasis on the writers' ages and genders. To improve the accuracy of AP tasks, the study develops an ensemble model dubbed ABMRF that combines AdaBoostM1 (ABM1) and Random Forest (RF). The work uses an extensive technique that involves text-message dataset pretreatment, model training, and assessment. Several machine learning (ML) algorithms are compared on age and gender classification, including Composite Hypercube on Random Projection (CHIRP), Decision Trees (J48), Naïve Bayes (NB), K-Nearest Neighbor, AdaBoostM1, NB-Updatable, RF, and ABMRF. The findings demonstrate that ABMRF regularly beats the competition, with a gender classification accuracy of 71.14% and an age classification accuracy of 54.29%. Additional metrics like precision, recall, F-measure, Matthews Correlation Coefficient (MCC), and accuracy support ABMRF's outstanding performance in age and gender profiling tasks. This study demonstrates the usefulness of ABMRF as an ensemble model for author profiling and highlights its possible uses in marketing, law enforcement, and education. The results emphasize the effectiveness of ensemble approaches in enhancing author profiling task accuracy, particularly when it comes to age and gender identification.
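The combining step of such an ensemble can be sketched as a simple majority vote over per-classifier predictions (the label arrays below are hypothetical toy outputs; ABMRF itself combines AdaBoostM1 and RF internally, and a third voter is added here only so ties cannot occur):

```python
import numpy as np
from collections import Counter

def majority_vote(predictions):
    # predictions: list of per-classifier label arrays of equal length.
    # Ties resolve to the label seen first in voting order.
    votes = np.array(predictions)
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

# Hypothetical outputs of AdaBoostM1, Random Forest, and Naive Bayes on
# five test messages (gender labels "M"/"F").
abm1 = np.array(["M", "F", "M", "M", "F"])
rf   = np.array(["M", "F", "F", "M", "M"])
nb   = np.array(["M", "M", "F", "M", "F"])
print(majority_vote([abm1, rf, nb]))  # ['M' 'F' 'F' 'M' 'F']
```

Weighted voting (e.g., by each base learner's validation accuracy) is a common refinement of this scheme.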
Funding: supported by the Deanship of Research and Graduate Studies at King Khalid University through a Large Research Project under grant number RGP2/254/45.
Abstract: Gliomas are aggressive brain tumors known for their heterogeneity, unclear borders, and diverse locations on Magnetic Resonance Imaging (MRI) scans. These factors present significant challenges for MRI-based segmentation, a crucial step for effective treatment planning and monitoring of glioma progression. This study proposes a novel deep learning framework, ResNet Multi-Head Attention U-Net (ResMHA-Net), to address these challenges and enhance glioma segmentation accuracy. ResMHA-Net leverages the strengths of both residual blocks from the ResNet architecture and multi-head attention mechanisms. This powerful combination empowers the network to prioritize informative regions within the 3D MRI data and capture long-range dependencies. By doing so, ResMHA-Net effectively segments intricate glioma sub-regions and reduces the impact of uncertain tumor boundaries. We rigorously trained and validated ResMHA-Net on the BraTS 2018, 2019, 2020 and 2021 datasets. Notably, ResMHA-Net achieved superior segmentation accuracy on the BraTS 2021 dataset compared to the previous years, demonstrating its remarkable adaptability and robustness across diverse datasets. Furthermore, we collected the predicted masks obtained from three datasets to enhance survival prediction, effectively augmenting the dataset size. Radiomic features were then extracted from these predicted masks and, along with clinical data, were used to train a novel ensemble learning-based machine learning model for survival prediction. This model employs a voting mechanism aggregating predictions from multiple models, leading to significant improvements over existing methods. This ensemble approach capitalizes on the strengths of various models, resulting in more accurate and reliable predictions for patient survival. Importantly, we achieved an impressive accuracy of 73% for overall survival (OS) prediction.
Funding: this research is funded by Researchers Supporting Project Number (RSPD2024R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning carries over to other datasets; and an ensemble method, which is utilized to explore the increase in performance from combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The study reveals that an amalgam of techniques such as those used here, namely feature selection, transfer learning, and ensemble methods, proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
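The AUC-ROC values used for comparison can be computed directly from classifier scores via the Mann-Whitney formulation; a minimal sketch over toy bug-prediction scores (the data below are hypothetical):

```python
import numpy as np

def auc_roc(y_true, scores):
    # AUC equals the probability that a randomly chosen positive (buggy
    # module) is scored above a randomly chosen negative; ties get half credit.
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

y = np.array([1, 1, 0, 0, 1, 0])                 # 1 = buggy module
s = np.array([0.9, 0.7, 0.6, 0.2, 0.8, 0.4])     # ensemble probability scores
print(auc_roc(y, s))  # 1.0 -- every buggy module outscores every clean one
```

Unlike raw accuracy, this metric is threshold-free and robust to the class imbalance typical of the NASA and Promise defect datasets, which is why the study reports it.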
Funding: Research Center of the College of Computer and Information Sciences, King Saud University (Grant/Award Number: RSPD2024R947).
Abstract: Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, to Requirement Engineering (RE) activities to solve the problems that occur in those activities. The authors focus on a systematic mapping of past work to investigate studies that applied supervised learning techniques to RE activities between 2002 and 2023. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. Forty-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers most often used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-Nearest Neighbour Classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. Our selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.