Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with images, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and dataset imbalance to ensure fair learning. The results outperformed the state of the art: a Regularized Neural Network achieved 97.6% accuracy on the behavioral data and 98.2% on the multimodal data. Other models also performed well, with accuracies consistently above 96%. We also applied SHAP and LIME to the behavioral dataset for model explainability.
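The abstract names SHAP and LIME for explainability. The core idea behind a LIME-style explanation can be sketched independently of any library: perturb the instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients act as local feature importances. The black-box function below is a synthetic stand-in, not the authors' ensemble.

```python
import numpy as np

def lime_style_explanation(predict_fn, x, n_samples=2000, width=1.0, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    Perturbs x with Gaussian noise, weights samples by an RBF proximity
    kernel, and solves weighted least squares; the resulting coefficients
    approximate each feature's local influence on the prediction.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(X)
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))           # proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

# Toy black box: a linear "risk score", so the recovered weights are known.
black_box = lambda X: 3.0 * X[:, 0] - 2.0 * X[:, 1]
weights = lime_style_explanation(black_box, np.array([0.5, 0.5]))
print(weights)  # close to [3, -2] for this linear black box
```

Because the toy model is linear, the surrogate recovers its weights almost exactly; for a real ensemble the coefficients would only hold in a neighbourhood of the explained instance.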
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and too deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
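The superpixel-to-graph step can be illustrated in plain numpy: given a label map such as SLIC would produce, nodes are superpixel labels and an edge joins any two labels whose pixels touch. This is only a sketch of the graph-construction idea; the actual pipeline uses SLIC segmentation and EfficientNet-B0 node features.

```python
import numpy as np

def region_adjacency_edges(labels):
    """Build the edge set of a region-adjacency graph from a 2D label map.

    Two superpixels are adjacent if any of their pixels are 4-neighbours.
    """
    edges = set()
    # compare each pixel with its right neighbour
    a, b = labels[:, :-1], labels[:, 1:]
    for u, v in zip(a.ravel(), b.ravel()):
        if u != v:
            edges.add((int(min(u, v)), int(max(u, v))))
    # compare each pixel with the neighbour below
    a, b = labels[:-1, :], labels[1:, :]
    for u, v in zip(a.ravel(), b.ravel()):
        if u != v:
            edges.add((int(min(u, v)), int(max(u, v))))
    return sorted(edges)

# A 2x4 toy "segmentation" with three regions: 0 touches 1, 1 touches 2.
toy = np.array([[0, 0, 1, 1],
                [0, 0, 2, 2]])
print(region_adjacency_edges(toy))  # [(0, 1), (0, 2), (1, 2)]
```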
Wireless technologies and the Internet of Things (IoT) are being extensively utilized for advanced development in traditional communication systems. This evolution lowers the cost of the extensive use of sensors, changing the way devices interact and communicate in dynamic and uncertain situations. Such a constantly evolving environment presents enormous challenges to preserving a secure and lightweight IoT system, motivating the design of effective and trusted routing to support sustainable smart cities. This research study proposes a Genetic Algorithm sentiment-enhanced secured optimization model, which combines big data analytics and analysis rules to evaluate user feedback. Sentiment analysis is utilized to assess the perception of network performance, allowing the classification of device behavior as positive, neutral, or negative. By integrating sentiment-driven insights, the IoT network adjusts its system configurations to enhance performance using network behavior in terms of latency, reliability, fault tolerance, and sentiment score. According to the analysis, the proposed model categorizes device behavior as positive, neutral, or negative, facilitating real-time monitoring for crucial applications. Experimental results revealed a significant improvement in threat prevention and network efficiency, demonstrating the model's resilience for real-time IoT applications.
Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a path for developing robust, next-generation XAI frameworks. Solution: This review provides a systematic examination of XAI techniques (e.g., SHAP, LIME, Grad-CAM) and their applications in intrusion detection, malware analysis, and fraud prevention. It critically evaluates the security risks posed by XAI, including model inversion and explanation-guided evasion attacks, and assesses corresponding defense strategies such as adversarially robust training, differential privacy, and secure-XAI deployment patterns. Contribution: The primary contributions of this work are: (1) a comparative analysis of XAI methods tailored for cybersecurity contexts; (2) an identification of the critical trade-off between model interpretability and security robustness; (3) a synthesis of defense mechanisms to mitigate XAI-specific vulnerabilities; and (4) a forward-looking perspective proposing future research directions, including quantum-safe XAI, hybrid neuro-symbolic models, and the integration of XAI into Zero Trust Architectures. This review serves as a foundational resource for developing transparent, trustworthy, and resilient AI-driven cybersecurity systems.
Emerging technologies and the Internet of Things (IoT) are integrating for the growth and development of heterogeneous networks. These systems provide real-time devices to end users to deliver dynamic services and improve human lives. Most existing approaches aim to improve energy efficiency and ensure reliable routing; however, trustworthiness and network scalability remain significant research challenges. In this research work, we introduce an AI-enabled Software-Defined Network (SDN)-driven framework to provide secure communication, trusted behavior, and effective route maintenance. By considering multiple parameters in the forwarder selection process, the proposed framework enhances network stability and optimizes decision-making. In addition, a blockchain consensus algorithm combined with the intelligence of the SDN controller gives the proposed framework robust authentication and a verifiable process for data blocks. Ultimately, only trusted devices are selected for routing, and malicious threats are prevented as data is forwarded to the cloud system. Extensive experimental analysis demonstrated that the proposed framework significantly improved energy consumption by 48%, packet loss by 49%, response time by 46%, and data transfer rate by 45% compared with existing techniques.
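The abstract says multiple parameters inform forwarder selection but does not list them. A generic weighted multi-criteria score illustrates the idea; the metric names and weights below are illustrative assumptions, not values from the paper.

```python
def select_forwarder(candidates, weights=None):
    """Pick the next-hop forwarder by a weighted multi-criteria score.

    Each candidate is a dict of normalised metrics in [0, 1], where higher
    is better (invert latency-like metrics before calling). The weights
    are illustrative placeholders, not parameters from the paper.
    """
    weights = weights or {"trust": 0.4, "energy": 0.3, "link_quality": 0.3}
    def score(c):
        return sum(weights[k] * c[k] for k in weights)
    return max(candidates, key=score)

candidates = [
    {"id": "n1", "trust": 0.9, "energy": 0.5, "link_quality": 0.6},
    {"id": "n2", "trust": 0.6, "energy": 0.9, "link_quality": 0.8},
    {"id": "n3", "trust": 0.2, "energy": 0.9, "link_quality": 0.9},
]
best = select_forwarder(candidates)
print(best["id"])  # n2: the best trust/energy/link-quality trade-off
```

Weighting trust highest reflects the framework's emphasis on selecting only trusted devices for routing.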
The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities. They handle time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors to facilitate the operations of crucial healthcare services. Recently, many interactive applications have been proposed, including integrating intelligent systems to handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overheads for maintaining devices' connectivity. Data integrity and trust are another vital consideration for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model was evaluated in a simulation-based environment and showed exceptional improvement in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead compared with related solutions.
With the rising demand for data access, network service providers face the challenge of growing capital and operating costs while at the same time enhancing network capacity and meeting the increased demand for access. To increase the efficacy of the Software Defined Network (SDN) and Network Function Virtualization (NFV) framework, we need to eradicate network security configuration errors that may create vulnerabilities, affect overall efficiency, reduce network performance, and increase maintenance cost. Existing frameworks lack security, and computer systems face abnormalities that call for proactive recognition and mitigation methods to keep the system operational. The fundamental concept behind SDN-NFV is the move from dedicated resource execution to a programming-based structure. This research centers on the combination of SDN and NFV for rational decision making to control and monitor traffic in a virtualized environment. The combination is often seen as an extra burden on resource usage in a heterogeneous network environment, but it also provides a solution for critical problems, especially massive network traffic issues. Attacks have been expanding day by day and are hard to recognize and protect against with conventional methods. To overcome these issues, an autonomous system is needed to recognize and characterize abnormal network traffic behavior. Four types of assaults, namely HTTP Flood, UDP Flood, Smurf Flood, and SiDDoS Flood, are considered in the identified dataset to optimize the stability of the SDN-NFV environment and security management, through several machine learning based characterization techniques such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Logistic Regression (LR), and Isolation Forest (IF). Python is used for simulation purposes, including several valuable utilities such as the mine package and the open-source Python ML libraries Scikit-learn, NumPy, SciPy, and Matplotlib. Several Flood assaults and Structured Query Language (SQL) injection anomalies are validated and effectively identified through the anticipated procedure. The classification results are promising and show that overall accuracy lies between 87% and 95% for the SVM, LR, KNN, and IF classifiers in determining whether network traffic is normal or anomalous in the SDN-NFV environment.
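The classifiers named in the abstract come from Scikit-learn; to show the underlying idea without that dependency, here is a minimal k-nearest-neighbours classifier in plain numpy distinguishing normal from flood-like flow records. The feature values are synthetic illustrations, not data from the identified dataset.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test row by majority vote of its k nearest neighbours."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)    # Euclidean distances
        votes = y_train[np.argsort(d)[:k]]         # labels of the k closest
        preds.append(np.bincount(votes).argmax())  # majority vote
    return np.array(preds)

# Synthetic flow features: [packets/s, mean packet size]; label 1 = flood-like.
X_train = np.array([[10, 500], [12, 480], [11, 520],   # normal traffic
                    [900, 60], [950, 64], [880, 58]])  # flood-like traffic
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[11, 510], [920, 61]])
print(knn_predict(X_train, y_train, X_test))  # [0 1]
```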
Clinical image processing plays a significant role in healthcare systems and is currently a widely used methodology. In carcinogenic diseases, time is crucial; thus, an image's accurate analysis can help treat disease at an early stage. Ductal carcinoma in situ (DCIS) and lobular carcinoma in situ (LCIS) are common types of malignancies that affect both women and men. The number of cases of DCIS and LCIS has increased every year since 2002, while it still takes a considerable amount of time to recommend a controlling technique. Image processing is a powerful technique to analyze preprocessed images to retrieve useful information by using some remarkable processing operations. In this paper, we used a dataset from the Mammographic Image Analysis Society and MATLAB 2019b software from MathWorks to simulate and extract our results. In this proposed study, mammograms are primarily used to diagnose, more precisely, the breast's tumor component. The detection of DCIS and LCIS on breast mammograms begins by preprocessing the images using contrast-limited adaptive histogram equalization. The tumor portions of the resulting images are then isolated by a segmentation process, such as threshold detection. Furthermore, morphological operations, such as erosion and dilation, are applied to the images; then gray-level co-occurrence matrix texture features, Haralick texture features, and shape features are extracted from the regions of interest. For classification, a support vector machine (SVM) classifier is used to categorize normal and abnormal patterns. Finally, an adaptive neuro-fuzzy inference system (ANFIS) is deployed to remove the fuzziness caused by overlapping pattern features within the images, and the exact categorization of prior patterns is obtained through the SVM. Early detection of DCIS and LCIS can save lives and help physicians and surgeons diagnose and treat these diseases. Substantial results are obtained through a cubic support vector machine (CSVM), showing 98.95% and 98.01% accuracies for normal and abnormal mammograms, respectively. Through ANFIS, promising mean square error (MSE) results of 0.01866, 0.18397, and 0.19640 were obtained for DCIS and LCIS differentiation during the training, testing, and checking phases.
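The gray-level co-occurrence matrix and Haralick-style features mentioned above can be sketched in numpy for one offset. This is a minimal illustration of the contrast feature for the horizontal neighbour offset (0, 1), not the full feature set or MATLAB implementation used in the study.

```python
import numpy as np

def glcm(image, levels):
    """Gray-level co-occurrence matrix for the horizontal offset (0, 1)."""
    m = np.zeros((levels, levels))
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()  # normalise counts to joint probabilities

def contrast(p):
    """Haralick contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# Tiny 3-level image: mostly uniform, with a few gray-level transitions.
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 2]])
p = glcm(img, levels=3)
print(round(contrast(p), 4))  # 0.3333 (2 of 6 pixel pairs differ by 1 level)
```

Higher contrast values indicate more abrupt local intensity changes, which is why such texture features help separate tumor regions from normal tissue.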
COVID-19 is a pandemic that has affected nearly every country in the world. At present, sustainable development in the area of public health is considered vital to securing a promising and prosperous future for humans. However, widespread diseases, such as COVID-19, create numerous challenges to this goal, and some of those challenges are not yet defined. In this study, a Shallow Single-Layer Perceptron Neural Network (SSLPNN) and Gaussian Process Regression (GPR) model were used for the classification and prediction of confirmed COVID-19 cases in five geographically distributed regions of Asia with diverse settings and environmental conditions: namely, China, South Korea, Japan, Saudi Arabia, and Pakistan. Significant environmental and non-environmental features were taken as the input dataset, and confirmed COVID-19 cases were taken as the output dataset. A correlation analysis was done to identify patterns in the cases related to fluctuations in the associated variables. The results of this study established that the population and air quality index of a region had a statistically significant influence on the cases, whereas age and the human development index had a negative influence. The proposed SSLPNN-based classification model performed well when predicting the classes of confirmed cases. During training, the binary classification model was highly accurate, with a Root Mean Square Error (RMSE) of 0.91. Likewise, the results of the regression analysis using the GPR technique with a Matern 5/2 kernel were highly accurate (RMSE = 0.95239) when predicting the number of confirmed COVID-19 cases in an area. Dynamic management occupies a core place in studies on the sustainable development of public health, but it depends on proactive strategies based on statistically verified approaches, such as Artificial Intelligence (AI). In this study, an SSLPNN model was trained to fit public-health-associated data into an appropriate class, allowing GPR to predict the number of confirmed COVID-19 cases in an area based on the given values of selected parameters. Therefore, this tool can help authorities in different ecological settings effectively manage COVID-19.
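Gaussian process regression with the Matern 5/2 kernel, as used in the study, can be sketched in a few lines of numpy. The length scale, noise level, and toy case counts below are arbitrary illustrations, not the study's fitted hyperparameters or data.

```python
import numpy as np

def matern52(a, b, length=1.0):
    """Matern 5/2 kernel between two sets of 1-D inputs."""
    r = np.abs(a[:, None] - b[None, :]) / length
    return (1 + np.sqrt(5) * r + 5 * r**2 / 3) * np.exp(-np.sqrt(5) * r)

def gpr_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean of a zero-mean GP with a Matern 5/2 kernel."""
    K = matern52(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = matern52(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy series: confirmed case counts on days 0..4, queried at day 2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([5.0, 9.0, 20.0, 38.0, 65.0])
pred = gpr_predict(x, y, np.array([2.0]))[0]
print(round(pred, 2))  # close to the training value 20
```

With a near-zero noise term, the posterior mean interpolates the training points; predictions between or beyond the observed days would smoothly follow the kernel's assumptions.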
Component-based software development is rapidly introducing numerous new paradigms and possibilities to deliver highly customized software in a distributed environment. Among communication, teamwork, and coordination problems in global software development, the detection of faults is seen as the key challenge; thus, there is a need to ensure the reliability of component-based application requirements. Existing distributed fault-detection approaches, applied to components tracked from various sources, fail to keep track of the large number of components drawn from different locations. In this study, we propose an approach for fault detection from component-based system requirements using fuzzy logic and historical information during acceptance testing. The approach identifies error-prone components for test case extraction and prioritizes test cases to validate components in acceptance testing. For the evaluation, we conducted an empirical study, and the results showed that the proposed approach significantly outperforms conventional procedures, i.e., the requirement criteria and communication coverage criteria without irrelevancy and redundancy, in component selection and acceptance testing. The F-measures of the proposed approach reflect accurate component selection, and fault identification in components was higher with the proposed approach (more than 80 percent) than with the requirement criteria and code coverage criteria procedures (less than 80 percent). Similarly, the rate of fault detection with the proposed approach increased to 92.80 percent, compared with less than 80 percent for existing methods. The proposed approach will provide a comprehensive guideline and roadmap for practitioners and researchers.
An IoT-based wireless sensor network (WSN) comprises many small sensors that collect data and share it with central repositories. These sensors are battery-driven, resource-constrained devices that consume most of their energy in sensing or collecting data and transmitting it. During data sharing, security is an important concern in such networks, as they are prone to many threats, of which the deadliest is the wormhole attack. These attacks are launched without acquiring vital information about the network, and they highly compromise its communication, security, and performance. In an IoT-based network environment, mitigation becomes more challenging because of the low resource availability of the sensing devices. We performed an extensive literature study of existing techniques against the wormhole attack and categorised them according to their methodology; this analysis motivated our research. In this paper, we developed the ESWI technique for detecting the wormhole attack while improving performance and security. The algorithm has been designed to be simple and uncomplicated to avoid overheads and energy drainage in its operation. Simulation results show that our technique achieves competitive detection rate and packet delivery ratio, along with increased throughput, decreased end-to-end delay, and much-reduced energy consumption.
The Internet of Things (IoT) is gaining attention because of its broad applicability, especially by integrating smart devices for massive communication during sensing tasks. IoT-assisted Wireless Sensor Networks (WSN) are suitable for various applications such as industrial monitoring, agriculture, and transportation. In this regard, routing is challenging: an efficient path must be found using smart devices for transmitting packets towards big data repositories while ensuring efficient energy utilization. This paper presents the Robust Cluster Based Routing Protocol (RCBRP) to identify routing paths where less energy is consumed, so as to enhance the network lifespan. The scheme is presented in six phases to explore flow and communication. We propose two algorithms: (i) an energy-efficient clustering and routing algorithm and (ii) a distance and energy consumption calculation algorithm. The scheme consumes less energy and balances the load by clustering the smart devices. Our work is validated through extensive simulation using Matlab. The results elucidate the dominance of the proposed scheme over its counterparts in terms of energy consumption, the number of packets received at the base station (BS), and the number of active and dead nodes. In the future, we shall consider edge computing to analyze the performance of robust clustering.
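The abstract's distance and energy consumption calculation is not spelled out; a plausible sketch is the widely used first-order radio model, in which transmission energy grows with d^2 below a crossover distance and d^4 beyond it. The constants are textbook illustrations, not parameters from the RCBRP paper.

```python
# First-order radio model constants (illustrative textbook values,
# not parameters taken from the RCBRP paper).
E_ELEC = 50e-9       # J/bit consumed by transceiver electronics
EPS_FS = 10e-12      # J/bit/m^2 free-space amplifier energy
EPS_MP = 0.0013e-12  # J/bit/m^4 multipath amplifier energy
D0 = (EPS_FS / EPS_MP) ** 0.5  # crossover distance (about 87.7 m)

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d metres."""
    if d < D0:
        return bits * (E_ELEC + EPS_FS * d**2)   # free-space regime
    return bits * (E_ELEC + EPS_MP * d**4)       # multipath regime

def rx_energy(bits):
    """Energy to receive `bits` (electronics only)."""
    return bits * E_ELEC

# A 4000-bit packet sent 50 m costs 0.3 mJ to transmit, 0.2 mJ to receive.
print(tx_energy(4000, 50), rx_energy(4000))
```

Because transmission cost grows quadratically (or quartically) with distance, clustering devices so that most hops are short is what saves energy in cluster-based routing.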
The most valuable resource on the planet is no longer oil, but data. Transmitting this data securely over the internet is another challenge that comes with its ever-increasing value. To transmit sensitive information securely, researchers are combining robust cryptography and steganographic approaches. The objective of this research is to introduce a more secure method of video steganography by using Deoxyribonucleic acid (DNA) for embedding encrypted data and an intelligent frame selection algorithm to improve video imperceptibility. In the previous approach, DNA was used only for frame selection; if this DNA is compromised, the frames with the hidden, unencrypted data are exposed. Moreover, the frames selected in this way were random, with no consideration of their contents, and hiding data in them introduces visible artifacts in the video. In the proposed approach, rather than using DNA for frame selection, we create a fake DNA out of our data and then embed it in a video file on intelligently selected frames called complex frames. Using chaotic maps and linear congruential generators, a unique pixel set is selected each time from the identified complex frames only, and the encrypted data is embedded in these random locations. Experimental results demonstrate that the proposed technique shows minimal degradation of the steganographic video, reducing the very first chance of visual detection. Further, the selection of complex frames for embedding and the creation of a fake DNA, as proposed in this research, yield higher peak signal-to-noise ratio (PSNR) and reduced mean squared error (MSE) values, indicating improved results. The proposed methodology has been implemented in Matlab.
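The combination of a chaotic map with a linear congruential generator for picking unique embedding positions can be sketched as follows. The logistic-map parameter, LCG constants, and mixing step are illustrative assumptions; the paper's Matlab implementation may differ.

```python
def pixel_positions(n, width, height, x0=0.7, lcg_seed=12345):
    """Derive n distinct pixel coordinates in a width x height frame from a
    logistic chaotic map combined with a linear congruential generator.

    Map and LCG parameters are illustrative, not those of the paper.
    """
    x, s = x0, lcg_seed
    chosen, out = set(), []
    while len(out) < n:
        x = 3.99 * x * (1 - x)                  # logistic map, chaotic regime
        s = (1103515245 * s + 12345) % 2**31    # glibc-style LCG step
        idx = (int(x * width * height) ^ s) % (width * height)
        if idx not in chosen:                   # keep positions unique
            chosen.add(idx)
            out.append((idx % width, idx // width))
    return out

pos = pixel_positions(5, 64, 64)
print(pos)  # deterministic for the same seeds; all inside the 64x64 frame
```

Both sender and receiver can regenerate the identical position sequence from the shared seeds, which is what makes keyed pseudo-random embedding locations recoverable without transmitting them.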
Internet of Things (IoT) devices work mainly over wireless media, requiring Intrusion Detection System (IDS) solutions that leverage 802.11 header information for intrusion detection. Wireless-specific traffic features with high information gain are primarily found in the data link layer, rather than in the application layer as in wired networks. This survey investigates some of the complexities and challenges in deploying wireless IDS in terms of data collection methods, IDS techniques, IDS placement strategies, and traffic data analysis techniques. The paper's main finding highlights the lack of available network traces for training modern machine-learning models against IoT-specific intrusions. Specifically, the Knowledge Discovery in Databases (KDD) Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes, and several guidelines are proposed to future-proof traffic capture methods in the wireless network (WN). The paper starts with a review of various intrusion detection techniques, data collection methods, and placement methods. Its main goal is to study the design challenges of deploying an intrusion detection system in a wireless environment, which is not as straightforward as in a wired network because of the architectural complexities. The paper therefore reviews traditional wired intrusion detection deployment methods, discusses how these techniques could be adopted in the wireless environment, and highlights the corresponding design challenges. The main wireless environments to consider are Wireless Sensor Networks (WSN), Mobile Ad Hoc Networks (MANET), and IoT, as these are future trends and many attacks have targeted these networks, making it crucial to design an IDS specifically for wireless networks.
Security is critical to the success of software, particularly in today's fast-paced, technology-driven environment. It ensures that data, code, and services maintain their CIA (Confidentiality, Integrity, and Availability). This is only possible if security is taken into account at all stages of the SDLC (Software Development Life Cycle). Various approaches to software quality have been developed, such as CMMI (Capability Maturity Model Integration); however, there exists no explicit solution for incorporating security into all phases of the SDLC. One of the major causes of pervasive vulnerabilities is a failure to prioritize security. Even the most proactive companies use the “patch and penetrate” strategy, in which security is assessed once the job is completed. Increased cost, time overrun, failure to integrate testing and input in the SDLC, usage of third-party tools and components, and lack of knowledge are all reasons for not paying attention to security during the SDLC, despite the fact that secure software development is essential for business continuity and survival in today's ICT world. There is a need to implement best practices in the SDLC to address security at all levels. To fill this gap, we provide a detailed overview of secure software development practices while taking care of project costs and deadlines. We propose a secure SDLC framework based on the identified practices, which integrates the best security practices into the various SDLC phases. A mathematical model is used to validate the proposed framework. A case study and findings show that the proposed system aids in the integration of security best practices into the overall SDLC, resulting in more secure applications.
In recent years, web security has been viewed in the context of securing the web application layer from attacks by unauthorized users. The vulnerabilities existing in the web application layer have been attributed either to using an inappropriate software development model to guide the development process, or to using a software development model that does not consider security as a key factor. Therefore, this systematic literature review (SLR) is conducted to investigate the various security vulnerabilities affecting the web application layer, the security approaches or techniques used in the process, the stages of software development in which the approaches or techniques are emphasized, and the tools and mechanisms used to detect vulnerabilities. The study extracted 519 publications from respectable scientific sources, i.e., IEEE Computer Society, ACM Digital Library, Science Direct, and Springer Link. After a detailed review process, only 56 key primary studies were considered for this review based on defined inclusion and exclusion criteria. From the review, it appears that no single software development model is regarded as a standard or preferred choice for web application development. In our SLR, we performed a deep analysis of web application security vulnerability detection methods, which helped us identify the scope of the SLR for comprehensive investigation in future research. Further, considering the OWASP Top 10 web application vulnerabilities published in 2012, we will attempt to categorize the accessible vulnerabilities. OWASP is a major source for constructing and validating web security processes and standards.
Software testing is a critical phase, as misconceptions about ambiguities in the requirements during specification affect the testing process. Therefore, it is difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. Due to these challenges, fault detection capability decreases, and there arises a need to improve the testing process based on changes in the requirements specification. In this research, we have developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. We then compute the similarity of requirements through case-based reasoning, which predicts the requirements for reuse, restricted to error-prone requirements. Afterward, the apriori algorithm maps out requirement frequency to select relevant test cases, based on frequently reused (or never reused) test cases, to increase the fault detection rate. Furthermore, the proposed model was evaluated by conducting experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes to achieve higher user satisfaction. The model improves the redundancy and irrelevancy of requirements by more than 90% compared to other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
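The apriori-based step above can be illustrated with a rough pure-Python sketch. The requirement history, minimum support value, and test-case mapping below are hypothetical stand-ins for the paper's data, not its actual algorithm: frequencies of requirements (and requirement pairs) across past change sets are counted, and test cases covering frequently reused requirements are kept.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style frequency count of requirements and requirement pairs.
    Returns itemsets whose support (fraction of transactions) meets min_support."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for r in t:
            counts[(r,)] += 1
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    return {items: c / n for items, c in counts.items() if c / n >= min_support}

# Hypothetical history: requirements exercised in past change sets.
history = [{"R1", "R2"}, {"R1", "R2", "R3"}, {"R1", "R4"}, {"R2", "R3"}]
frequent = frequent_itemsets(history, min_support=0.5)

# Hypothetical test cases mapped to the requirements they cover;
# keep only those that hit a frequently reused requirement.
test_cases = {"TC1": {"R1"}, "TC2": {"R4"}, "TC3": {"R2", "R3"}}
frequent_reqs = {r for items in frequent for r in items}
selected = [tc for tc, reqs in test_cases.items() if reqs & frequent_reqs]
```

Here TC2 is dropped because R4 falls below the support threshold, which mirrors the idea of restricting testing effort to frequently reused requirements.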
COVID-19 is a novel coronavirus disease that was declared a global pandemic in 2019. It affects the whole world through person-to-person transmission. The virus spreads via droplets from coughs and sneezes, which quickly settle on surfaces; therefore, anyone breathing in the vicinity of a COVID-19 patient can easily be affected. Currently, vaccines for the disease are under clinical investigation at different pharmaceutical companies. Until now, multiple medical companies have delivered health monitoring kits. A wireless body area network (WBAN), however, is a healthcare system that consists of nano-sensors used to detect the real-time health condition of the patient. The proposed approach aims to fill the gap between recent technology trends and healthcare infrastructure. If a COVID-19-affected patient is monitored through WBAN sensors and the network, a physician or doctor can guide the patient at the right time with the correct possible decision. This scenario helps the community maintain social distancing and avoids an unpleasant environment for hospitalized patients. Herein, a Monte Carlo algorithm-guided protocol is developed to probe a secured cipher output. The security cipher helps avoid wireless network issues like packet loss, network attacks, network interference, and routing problems. The Monte Carlo-based COVID-19 detection technique gives 90% better results in terms of time complexity, performance, and efficiency. Results indicate that the Monte Carlo-based COVID-19 detection technique with an edge computing approach is robust in terms of time complexity, performance, and efficiency and is thus advocated as a significant application for lessening hospital expenses.
Machine learning is a technique for analyzing data that aids the construction of mathematical models. Because of the growth of the Internet of Things (IoT) and wearable sensor devices, gesture interfaces are becoming a more natural and expedient human-machine interaction method. This type of artificial intelligence, which requires minimal or no direct human intervention in decision-making, is predicated on the ability of intelligent systems to self-train and detect patterns. The rise of touch-free applications and the number of deaf people have increased the significance of hand gesture recognition. Potential applications of hand gesture recognition research span from online gaming to surgical robotics. The location of the hands, the alignment of the fingers, and the hand-to-body posture are the fundamental components of hierarchical emotions in gestures. In the field of gesture recognition, linguistic gestures may be difficult to distinguish from nonsensical motions. In this scenario, it may be difficult to overcome segmentation uncertainty caused by accidental hand motions or trembling. When users perform the same dynamic gesture, hand shapes and speeds vary between users, and even across repetitions by the same user. A machine-learning-based Gesture Recognition Framework (ML-GRF) for recognizing the beginning and end of a gesture sequence in a continuous stream of data is suggested to solve the problem of distinguishing meaningful dynamic gestures from scattered, unintentional motion. We recommend a similarity-matching-based gesture classification approach to reduce the overall computing cost associated with identifying actions, and we show how an efficient feature extraction method can reduce thousands of single-gesture measurements to four-binary-digit gesture codes. The findings from the simulation support the reported accuracy, precision, gesture recognition, sensitivity, and efficiency rates: the ML-GRF achieved an accuracy rate of 98.97%, a precision rate of 97.65%, a gesture recognition rate of 98.04%, a sensitivity rate of 96.99%, and an efficiency rate of 95.12%.
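The idea of collapsing a gesture trace into a four-binary-digit code and classifying it by similarity matching could look roughly like the sketch below. The quarter-segment encoding, the threshold, and the template names are illustrative assumptions, not the paper's actual feature extraction:

```python
def gesture_code(samples, threshold=0.0):
    """Collapse a 1-D motion trace into a 4-bit code: split the trace into
    four segments and record whether each segment's mean exceeds threshold."""
    n = len(samples)
    bits = []
    for i in range(4):
        seg = samples[i * n // 4:(i + 1) * n // 4]
        bits.append(1 if sum(seg) / len(seg) > threshold else 0)
    return tuple(bits)

def match(code, templates):
    """Similarity matching: return the template gesture whose code has the
    smallest Hamming distance to the observed code."""
    return min(templates, key=lambda g: sum(a != b for a, b in zip(code, templates[g])))

# Hypothetical 4-bit templates for two gestures.
templates = {"swipe_right": (0, 0, 1, 1), "swipe_left": (1, 1, 0, 0)}
code = gesture_code([-0.2, -0.1, -0.3, -0.2, 0.4, 0.5, 0.6, 0.3])
```

Matching against 4-bit codes instead of raw traces is what makes the classification cost essentially constant per template.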
The emergence of Industry 4.0 stems from research that has received a great deal of attention in the last few decades. Consequently, there has been a huge paradigm shift in the manufacturing and production sectors. However, this poses a challenge for cybersecurity and highlights the need to address the possible threats targeting the various pillars of Industry 4.0. Before providing a concrete solution, however, certain aspects need to be researched, for instance, cybersecurity threats and privacy issues in the industry. To fill this gap, this paper discusses potential solutions to cybersecurity issues targeting this industry and highlights, in detail, the consequences of possible attacks and countermeasures. In particular, the focus of the paper is on investigating the possible cyber-attacks targeting the four layers of the Industrial Internet of Things (IIoT), one of the key pillars of Industry 4.0. Based on a detailed review of existing literature, in this study we have identified possible cyber threats, their consequences, and countermeasures. Further, we have provided a comprehensive framework based on an analysis of cybersecurity and privacy challenges. The suggested framework provides for a deeper understanding of the current state of cybersecurity and sets out directions for future research and applications.
Funding: This work was funded by the King Salman Center for Disability Research through Research Group No. KSRG-2024-050.
Abstract: Artificial Intelligence (AI) is changing healthcare by helping with diagnosis. However, for doctors to trust AI tools, they need to be both accurate and easy to understand. In this study, we created a new machine learning system for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning. For this, we combined several different models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful framework. We used two different types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset with images, audio, and physiological information. The datasets were carefully preprocessed for missing values, redundant features, and dataset imbalance to ensure fair learning. The results outperformed the state of the art: a Regularized Neural Network achieved 97.6% accuracy on the behavioral data and 98.2% on the multimodal data. Other models also did well, with accuracies consistently above 96%. We also used SHAP and LIME on the behavioral dataset for model explainability.
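A minimal sketch of combining several model families into one framework, using scikit-learn only. GradientBoostingClassifier stands in for XGBoost, the synthetic data stands in for the behavioral dataset, and all hyperparameters are illustrative assumptions rather than the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a behavioral screening dataset
# (features would be questionnaire items in the real setting).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # stand-in for XGBoost
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), alpha=1e-2,  # L2-regularized
                             max_iter=1000, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
```

Soft voting averages each model's class probabilities, which is one common way to fuse tree ensembles with a neural network; explainers such as SHAP or LIME can then be applied to the fitted estimators.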
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2025-02-01296.
Abstract: Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and too deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
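The superpixels-to-graph step can be sketched in plain NumPy. The toy label map below stands in for a SLIC output, and 4-connectivity is an assumption about how "spatial continuity" is encoded as edges:

```python
import numpy as np

def region_adjacency_edges(labels):
    """Build region-adjacency edges from a superpixel label map:
    two superpixels are connected if their pixels touch (4-connectivity)."""
    edges = set()
    h, w = labels.shape
    for dy, dx in ((0, 1), (1, 0)):  # compare each pixel with right/down neighbour
        a = labels[:h - dy, :w - dx]
        b = labels[dy:, dx:]
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return sorted(edges)

# Toy 4x4 label map with three "superpixels" (a real SLIC map has hundreds).
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 1, 1],
                   [2, 2, 2, 2]])
edges = region_adjacency_edges(labels)
```

In the full pipeline, each node would additionally carry a 1280-dimensional EfficientNet-B0 embedding as its attribute vector before the graph is passed to the GAT.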
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-01011.
Abstract: Wireless technologies and the Internet of Things (IoT) are being extensively utilized for advanced development of traditional communication systems. This evolution lowers the cost of the extensive use of sensors, changing the way devices interact and communicate in dynamic and uncertain situations. Such a constantly evolving environment presents enormous challenges to preserving a secure and lightweight IoT system, and therefore motivates the design of effective and trusted routing to support sustainable smart cities. This research study proposes a Genetic Algorithm sentiment-enhanced secured optimization model, which combines big data analytics and analysis rules to evaluate user feedback. Sentiment analysis is utilized to assess the perception of network performance, allowing the classification of device behavior as positive, neutral, or negative. By integrating sentiment-driven insights, the IoT network adjusts the system configuration to enhance performance based on network behavior in terms of latency, reliability, fault tolerance, and sentiment score. According to the analysis, the proposed model categorizes device behavior as positive, neutral, or negative, facilitating real-time monitoring for crucial applications. Experimental results revealed a significant improvement under the proposed model in threat prevention and network efficiency, demonstrating its resilience for real-time IoT applications.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2025-02-01395.
Abstract: Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the "black box" nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a path for developing robust, next-generation XAI frameworks. Solution: This review provides a systematic examination of XAI techniques (e.g., SHAP, LIME, Grad-CAM) and their applications in intrusion detection, malware analysis, and fraud prevention. It critically evaluates the security risks posed by XAI, including model inversion and explanation-guided evasion attacks, and assesses corresponding defense strategies such as adversarially robust training, differential privacy, and secure-XAI deployment patterns. Contribution: The primary contributions of this work are: (1) a comparative analysis of XAI methods tailored for cybersecurity contexts; (2) an identification of the critical trade-off between model interpretability and security robustness; (3) a synthesis of defense mechanisms to mitigate XAI-specific vulnerabilities; and (4) a forward-looking perspective proposing future research directions, including quantum-safe XAI, hybrid neuro-symbolic models, and the integration of XAI into Zero Trust Architectures. This review serves as a foundational resource for developing transparent, trustworthy, and resilient AI-driven cybersecurity systems.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2025-02-01669. The authors acknowledge financial support from the Deanship of Graduate Studies and Scientific Research at Jouf University.
Abstract: Emerging technologies and the Internet of Things (IoT) are integrating to support the growth and development of heterogeneous networks. These systems provide real-time devices to end users to deliver dynamic services and improve human lives. Most existing approaches aim to improve energy efficiency and ensure reliable routing; however, trustworthiness and network scalability remain significant research challenges. In this research work, we introduce an AI-enabled Software-Defined Network (SDN)-driven framework to provide secure communication, trusted behavior, and effective route maintenance. By considering multiple parameters in the forwarder selection process, the proposed framework enhances network stability and optimizes decision-making. In addition, a blockchain consensus algorithm and the intelligence of the SDN controller enable robust authentication and a verifiable process for data blocks. Ultimately, only trusted devices are selected for routing, and malicious threats are prevented as data is forwarded to the cloud system. Extensive experimental analysis demonstrated that the proposed framework significantly reduced energy consumption by 48%, packet loss by 49%, and response time by 46%, and improved the data transfer rate by 45% compared with existing techniques.
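Multi-parameter forwarder selection of this kind can be sketched as a weighted score over candidate devices. The parameters, weights, and trust threshold below are illustrative assumptions, not the paper's actual scoring function:

```python
def forwarder_score(node, weights):
    """Weighted multi-parameter score for forwarder selection:
    higher residual energy and trust are better; higher link delay is worse."""
    return (weights["energy"] * node["energy"]
            + weights["trust"] * node["trust"]
            - weights["delay"] * node["delay"])

# Hypothetical weights and candidate metrics, all normalized to [0, 1].
weights = {"energy": 0.4, "trust": 0.4, "delay": 0.2}
candidates = {
    "n1": {"energy": 0.9, "trust": 0.8, "delay": 0.3},
    "n2": {"energy": 0.6, "trust": 0.95, "delay": 0.1},
    "n3": {"energy": 0.8, "trust": 0.3, "delay": 0.05},  # low trust: filtered out
}

# Only trusted devices are eligible; the best-scoring one forwards the data.
trusted = {k: v for k, v in candidates.items() if v["trust"] >= 0.5}
best = max(trusted, key=lambda k: forwarder_score(trusted[k], weights))
```

Filtering on trust before scoring mirrors the framework's rule that only trusted devices are considered for routing at all.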
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-02090.
Abstract: The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities. They handle time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors to facilitate the operations of crucial healthcare services. Recently, many interactive applications have been proposed, including intelligent systems that handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overheads for maintaining device connectivity. Data integrity and trust are another vital consideration for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model was evaluated in a simulation-based environment and showed exceptional improvement in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
Abstract: With the rising demand for data access, network service providers face the challenge of growing capital and operating costs while at the same time enhancing network capacity to meet the increased demand for access. To increase the efficacy of the Software Defined Network (SDN) and Network Function Virtualization (NFV) framework, we need to eradicate network security configuration errors that may create vulnerabilities, affect overall efficiency, reduce network performance, and increase maintenance cost. Existing frameworks lack security, and computer systems face occasional abnormalities, which prompts the need for recognition and mitigation methods that proactively keep the system in an operational state. The fundamental concept behind SDN-NFV is the move from specific resource execution to a programming-based structure. This research is about combining SDN and NFV for rational decision-making to control and monitor traffic in a virtualized environment. The combination is often seen as an extra burden in terms of resource usage in a heterogeneous network environment, but it also provides a solution for critical problems, especially massive network traffic issues. Attacks have been expanding step by step, so it is hard to recognize and protect against them with conventional methods. To overcome these issues, there must be an autonomous system to recognize and characterize any abnormal conduct in the network traffic. Only four types of assaults, namely HTTP Flood, UDP Flood, Smurf Flood, and SiDDoS Flood, are considered in the identified dataset, to optimize the stability of the SDN-NFV environment and security management, through several machine-learning-based characterization techniques: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Logistic Regression (LR), and Isolation Forest (IF). Python is used for simulation purposes, including several valuable utilities like the mine package and the open-source Python ML libraries Scikit-learn, NumPy, SciPy, and Matplotlib. A few Flood assaults and Structured Query Language (SQL) injection anomalies are validated and effectively identified through the anticipated procedure. The classification results are promising and show that overall accuracy lies between 87% and 95% for the SVM, LR, KNN, and IF classifiers when determining whether network traffic is normal or anomalous in the SDN-NFV environment.
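A condensed Scikit-learn sketch of this characterization step is shown below. Synthetic data stands in for the flood-attack dataset, default hyperparameters are used, and the resulting accuracies are from the toy data, not the paper's 87%-95% figures:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for flow features (packet rate, byte counts, ...).
X, y = make_classification(n_samples=600, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Supervised classifiers: normal vs. anomalous traffic.
scores = {}
for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                  ("LR", LogisticRegression(max_iter=1000))]:
    model = make_pipeline(StandardScaler(), clf)  # scale features, then classify
    model.fit(X_tr, y_tr)
    scores[name] = model.score(X_te, y_te)

# Isolation Forest is unsupervised: it flags outlying flows without labels.
iso = IsolationForest(random_state=1).fit(X_tr)
flags = iso.predict(X_te)  # +1 = normal, -1 = anomalous
```

Note the asymmetry the abstract glosses over: SVM, KNN, and LR need labeled attack traffic, while Isolation Forest only needs (mostly) normal traffic to train.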
Abstract: Clinical image processing plays a significant role in healthcare systems and is currently a widely used methodology. In carcinogenic diseases, time is crucial; thus, accurate analysis of an image can help treat the disease at an early stage. Ductal carcinoma in situ (DCIS) and lobular carcinoma in situ (LCIS) are common types of malignancies that affect both women and men. The number of cases of DCIS and LCIS has increased every year since 2002, while it still takes a considerable amount of time to recommend a controlling technique. Image processing is a powerful technique to analyze preprocessed images and retrieve useful information through a series of processing operations. In this paper, we used a dataset from the Mammographic Image Analysis Society and MATLAB 2019b software from MathWorks to simulate and extract our results. In this proposed study, mammograms are primarily used to diagnose, more precisely, the tumor component of the breast. The detection of DCIS and LCIS on breast mammograms is done by preprocessing the images using contrast-limited adaptive histogram equalization. The tumor portions of the resulting images are then isolated by a segmentation process, such as threshold detection. Furthermore, morphological operations, such as erosion and dilation, are applied to the images; then gray-level co-occurrence matrix texture features, Haralick texture features, and shape features are extracted from the regions of interest. For classification purposes, a support vector machine (SVM) classifier is used to categorize normal and abnormal patterns. Finally, the adaptive neuro-fuzzy inference system (ANFIS) is deployed to remove the fuzziness due to overlapping features of patterns within the images, and the exact categorization of prior patterns is obtained through the SVM. Early detection of DCIS and LCIS can save lives and help physicians and surgeons diagnose and treat these diseases. Substantial results are obtained through the cubic support vector machine (CSVM), showing 98.95% and 98.01% accuracies for normal and abnormal mammograms, respectively. Through ANFIS, promising mean square error (MSE) values of 0.01866, 0.18397, and 0.19640 were obtained for DCIS and LCIS differentiation during the training, testing, and checking phases.
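A gray-level co-occurrence matrix and one Haralick feature (contrast) can be computed directly in NumPy. This toy sketch assumes a single horizontal pixel offset and a 4-level image, whereas the study's feature set (multiple offsets, more Haralick statistics, shape features) is much richer:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy):
    m[i, j] counts how often level i is followed by level j at that offset."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            m[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return m

def contrast(m):
    """Haralick contrast: (i - j)^2 weighted by the normalized co-occurrence counts."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())

# Toy 4-level "region of interest".
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
m = glcm(img, levels=4)
```

Low contrast indicates smooth texture; tumorous regions typically show different co-occurrence statistics from healthy tissue, which is what the SVM then learns to separate.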
Abstract: COVID-19 is a pandemic that has affected nearly every country in the world. At present, sustainable development in the area of public health is considered vital to securing a promising and prosperous future for humans. However, widespread diseases such as COVID-19 create numerous challenges to this goal, and some of those challenges are not yet defined. In this study, a Shallow Single-Layer Perceptron Neural Network (SSLPNN) and a Gaussian Process Regression (GPR) model were used for the classification and prediction of confirmed COVID-19 cases in five geographically distributed regions of Asia with diverse settings and environmental conditions: namely, China, South Korea, Japan, Saudi Arabia, and Pakistan. Significant environmental and non-environmental features were taken as the input dataset, and confirmed COVID-19 cases were taken as the output dataset. A correlation analysis was done to identify patterns in the cases related to fluctuations in the associated variables. The results of this study established that the population and air quality index of a region had a statistically significant influence on the cases, whereas age and the human development index had a negative influence. The proposed SSLPNN-based classification model performed well when predicting the classes of confirmed cases. During training, the binary classification model was highly accurate, with a Root Mean Square Error (RMSE) of 0.91. Likewise, the results of the regression analysis using the GPR technique with a Matern 5/2 kernel were highly accurate (RMSE = 0.95239) when predicting the number of confirmed COVID-19 cases in an area. Dynamic management occupies a core place in studies on the sustainable development of public health, but it depends on proactive strategies based on statistically verified approaches like Artificial Intelligence (AI). In this study, an SSLPNN model has been trained to fit public-health-associated data into an appropriate class, allowing GPR to predict the number of confirmed COVID-19 cases in an area based on the given values of selected parameters. Therefore, this tool can help authorities in different ecological settings effectively manage COVID-19.
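The GPR step with a Matern 5/2 kernel can be sketched with scikit-learn. The one-dimensional synthetic data below stands in for the environmental features, and the noise level and `alpha` are illustrative assumptions, not the study's settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical data: one environmental feature vs. a noisy target signal.
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 40).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# Matern with nu=2.5 is the "Matern 5/2" kernel named in the study.
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.01, normalize_y=True)
gpr.fit(X, y)
pred, std = gpr.predict(X, return_std=True)  # std gives per-point uncertainty
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

A practical advantage of GPR here is the `std` output: authorities get an uncertainty band around each predicted case count, not just a point estimate.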
Funding: Taif University Researchers Supporting Project No. (TURSP-2020/10), Taif University, Taif, Saudi Arabia.
Abstract: Component-based software development is rapidly introducing numerous new paradigms and possibilities to deliver highly customized software in a distributed environment. Among other communication, teamwork, and coordination problems in global software development, the detection of faults is seen as the key challenge. Thus, there is a need to ensure the reliability of component-based application requirements. Existing distributed fault-detection approaches track components from various sources but fail to keep track of the large number of components from different locations. In this study, we propose an approach for fault detection from component-based system requirements using fuzzy logic and historical information during acceptance testing. This approach identifies error-prone components for test case extraction and prioritizes test cases to validate components in acceptance testing. For the evaluation, we used an empirical study, and the results depicted that the proposed approach significantly outperforms in component selection and acceptance testing. In comparison to conventional procedures, i.e., requirement criteria and communication coverage criteria, the approach without irrelevancy and redundancy successfully outperforms the other procedures. Consequently, the F-measures of the proposed approach reflect accurate selection of components, and fault identification in components using the proposed approach was higher (i.e., more than 80 percent) than with the requirement criteria and code coverage criteria procedures (i.e., less than 80 percent). Similarly, the rate of fault detection in the proposed approach increases to 92.80 percent, compared to less than 80 percent for existing methods. The proposed approach provides a comprehensive guideline and roadmap for practitioners and researchers.
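One way fuzzy scoring of error-prone components might look is sketched below. The triangular membership functions, the input metrics (historical faults and change rate), and the component names are all hypothetical illustrations, not the paper's actual rule base:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def error_proneness(fault_history, change_rate):
    """Fuzzy score combining historical faults and recent change rate
    (both normalized to [0, 1]); min acts as the fuzzy AND of the two rules."""
    high_faults = triangular(fault_history, 0.3, 1.0, 1.7)
    high_change = triangular(change_rate, 0.3, 1.0, 1.7)
    return min(high_faults, high_change)

# Hypothetical components with (fault_history, change_rate) metrics.
components = {"auth": (0.9, 0.8), "ui": (0.2, 0.6), "report": (0.7, 0.4)}
ranked = sorted(components, key=lambda c: error_proneness(*components[c]), reverse=True)
```

The ranking drives test-case extraction: acceptance-test effort is spent first on the components the fuzzy rules judge most error-prone.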
Abstract: An IoT-based wireless sensor network (WSN) comprises many small sensors that collect data and share it with central repositories. These sensors are battery-driven, resource-restrained devices that consume most of their energy in sensing or collecting data and transmitting it. During data sharing, security is an important concern in such networks as they are prone to many threats, of which the deadliest is the wormhole attack. These attacks are launched without acquiring vital information about the network, and they highly compromise its communication, security, and performance. In the IoT-based network environment, mitigation becomes more challenging because of the low resource availability of the sensing devices. We have performed an extensive literature study of the existing techniques against the wormhole attack and categorised them according to their methodology. The analysis of the literature has motivated our research. In this paper, we developed the ESWI technique for detecting the wormhole attack while improving performance and security. This algorithm has been designed to be simple and less complicated to avoid overheads and the drainage of energy in its operation. The simulation results of our technique show competitive results for detection rate and packet delivery ratio. It also gives increased throughput, decreased end-to-end delay, and much-reduced energy consumption.
Abstract: The Internet of Things (IoT) is gaining attention because of its broad applicability, especially through integrating smart devices for massive communication during sensing tasks. IoT-assisted Wireless Sensor Networks (WSN) are suitable for various applications like industrial monitoring, agriculture, and transportation. In this regard, routing is challenging: an efficient path must be found using smart devices for transmitting packets towards big data repositories while ensuring efficient energy utilization. This paper presents the Robust Cluster Based Routing Protocol (RCBRP) to identify routing paths where less energy is consumed, enhancing the network lifespan. The scheme is presented in six phases to explore flow and communication. We propose two algorithms: (i) an energy-efficient clustering and routing algorithm and (ii) a distance and energy consumption calculation algorithm. The scheme consumes less energy and balances the load by clustering the smart devices. Our work is validated through extensive simulation using Matlab. The results elucidate the dominance of the proposed scheme over its counterparts in terms of energy consumption, the number of packets received at the base station, and the number of active and dead nodes. In the future, we shall consider edge computing to analyze the performance of robust clustering.
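A distance and energy consumption calculation of this kind can be illustrated with the standard first-order radio model. The constants below are common textbook values assumed for illustration, not taken from the RCBRP paper:

```python
import math

# First-order radio model constants (typical textbook values, assumed here).
E_ELEC = 50e-9     # J/bit consumed by transmitter/receiver electronics
EPS_AMP = 100e-12  # J/bit/m^2 consumed by the transmit amplifier

def distance(a, b):
    """Euclidean distance between two node coordinates."""
    return math.dist(a, b)

def tx_energy(k_bits, d):
    """Energy to transmit k bits over distance d (free-space d^2 model)."""
    return E_ELEC * k_bits + EPS_AMP * k_bits * d ** 2

def rx_energy(k_bits):
    """Energy to receive k bits (electronics only)."""
    return E_ELEC * k_bits

# One 4000-bit packet over a 50 m hop: sender transmits, cluster head receives.
d = distance((0, 0), (30, 40))
e_hop = tx_energy(4000, d) + rx_energy(4000)
```

Because the amplifier term grows with d squared, clustering pays off: many short intra-cluster hops plus one long cluster-head hop can cost less than every node transmitting directly to the base station.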
Funding: Taif University Researchers Supporting Project No. (TURSP-2020/98), Taif University, Taif, Saudi Arabia.
Abstract: The most valuable resource on the planet is no longer oil, but data. Transmitting this data securely over the internet is another challenge that comes with its ever-increasing value. In order to transmit sensitive information securely, researchers are combining robust cryptographic and steganographic approaches. The objective of this research is to introduce a more secure method of video steganography by using deoxyribonucleic acid (DNA) for embedding encrypted data and an intelligent frame selection algorithm to improve video imperceptibility. In the previous approach, DNA was used only for frame selection; if this DNA is compromised, the frames with the hidden, unencrypted data are exposed. Moreover, the frames selected in this way were random frames, with no consideration of their contents, and hiding data this way introduces visible artifacts in the video. In the proposed approach, rather than using DNA for frame selection, we create a fake DNA out of the data itself and then embed it in the video file on intelligently selected frames, called complex frames. Using chaotic maps and linear congruential generators, a unique pixel set is selected each time, only from the identified complex frames, and the encrypted data is embedded in these random locations. Experimental results demonstrate that the proposed technique shows minimal degradation of the steganographic video, reducing the very first chance of visual surveillance. Further, the selection of complex frames for embedding and the creation of a fake DNA, as proposed in this research, yield a higher peak signal-to-noise ratio (PSNR) and reduced mean squared error (MSE) values, indicating improved results. The proposed methodology has been implemented in Matlab.
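A minimal sketch of linear-congruential-generator-driven unique pixel selection is shown below. The LCG constants are the well-known Numerical Recipes values; the frame size and the skip-duplicates policy are illustrative assumptions, not the paper's exact scheme (which also involves chaotic maps):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator (Numerical Recipes constants)."""
    while True:
        seed = (a * seed + c) % m
        yield seed

def select_pixels(frame_w, frame_h, count, seed):
    """Pick `count` distinct embedding positions in a frame using an LCG;
    duplicates are skipped so every position is used at most once."""
    gen = lcg(seed)
    chosen, seen = [], set()
    while len(chosen) < count:
        idx = next(gen) % (frame_w * frame_h)
        if idx not in seen:
            seen.add(idx)
            chosen.append((idx % frame_w, idx // frame_w))
    return chosen

# Sender and receiver sharing the seed derive the same pixel set.
pixels = select_pixels(64, 64, 100, seed=42)
```

Determinism is the point: the extractor only needs the shared seed to regenerate the exact embedding positions, so no pixel map has to travel with the video.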
Funding: The authors acknowledge Jouf University, Saudi Arabia, for its funding support.
Abstract: Internet of Things (IoT) devices work mainly over wireless mediums, requiring Intrusion Detection System (IDS) solutions that leverage 802.11 header information for intrusion detection. Wireless-specific traffic features with high information gain are primarily found in the data link layer rather than the application layer, as in wired networks. This survey investigates some of the complexities and challenges in deploying wireless IDS in terms of data collection methods, IDS techniques, IDS placement strategies, and traffic data analysis techniques. This paper's main finding highlights the lack of available network traces for training modern machine-learning models against IoT-specific intrusions. Specifically, the Knowledge Discovery in Databases (KDD) Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes, and several guidelines are proposed to future-proof traffic capture methods in wireless networks (WN). The paper starts with a review of various intrusion detection techniques, data collection methods, and placement methods. The main goal of this paper is to study the design challenges of deploying an intrusion detection system in a wireless environment. Such deployment is not as straightforward as in the wired network environment due to architectural complexities. This paper therefore reviews traditional wired intrusion detection deployment methods, discusses how these techniques could be adopted in the wireless environment, and highlights the design challenges there. The main wireless environments to examine are Wireless Sensor Networks (WSN), Mobile Ad Hoc Networks (MANET), and IoT, as these are future trends and many attacks have targeted these networks. It is therefore crucial to design an IDS specifically targeted at wireless networks.
Abstract: Security is critical to the success of software, particularly in today's fast-paced, technology-driven environment. It ensures that data, code, and services maintain their CIA (Confidentiality, Integrity, and Availability). This is only possible if security is taken into account at all stages of the SDLC (Software Development Life Cycle). Various approaches to software quality have been developed, such as CMMI (Capability Maturity Model Integration). However, there exists no explicit solution for incorporating security into all phases of the SDLC. One of the major causes of pervasive vulnerabilities is a failure to prioritize security. Even the most proactive companies use the "patch and penetrate" strategy, in which security is assessed once the job is completed. Increased cost, time overrun, failure to integrate testing and input into the SDLC, usage of third-party tools and components, and lack of knowledge are all reasons for not paying attention to security during the SDLC, despite the fact that secure software development is essential for business continuity and survival in today's ICT world. There is a need to implement best practices in the SDLC to address security at all levels. To fill this gap, we have provided a detailed overview of secure software development practices while taking care of project costs and deadlines. We propose a secure SDLC framework based on the identified practices, which integrates the best security practices into the various SDLC phases. A mathematical model is used to validate the proposed framework. A case study and its findings show that the proposed system aids in the integration of security best practices into the overall SDLC, resulting in more secure applications.
Abstract: In recent years, web security has been viewed in the context of securing the web application layer from attacks by unauthorized users. The vulnerabilities in the web application layer have been attributed either to the use of an inappropriate software development model to guide the development process, or to the use of a model that does not treat security as a key factor. This systematic literature review (SLR) therefore investigates the security vulnerabilities affecting the web application layer, the security approaches or techniques used to address them, the stages of software development in which those approaches are emphasized, and the tools and mechanisms used to detect vulnerabilities. The study extracted 519 publications from reputable scientific sources: the IEEE Computer Society, the ACM Digital Library, Science Direct, and Springer Link. After a detailed review process, only 56 key primary studies were retained, based on defined inclusion and exclusion criteria. From the review, it appears that no single software product is regarded as a standard or preferred choice for web application development. In our SLR, we performed a deep analysis of web application security vulnerability detection methods, which helped us identify the scope for comprehensive investigation in future research. Further, considering the OWASP Top 10 web application vulnerabilities of 2012, we attempt to categorize the accessible vulnerabilities; OWASP is a major source for constructing and validating web security processes and standards.
Abstract: Software testing is a critical phase, and misconceptions about ambiguities in the requirements during specification affect the testing process, making it difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. These challenges reduce fault detection capability and create a need to improve the testing process based on changes in the requirements specification. In this research, we developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. We then compute the similarity of requirements through case-based reasoning, which predicts requirements for reuse and restricts attention to error-prone requirements. Afterward, the apriori algorithm maps out requirement frequency to select relevant test cases, based on whether test cases are frequently reused or not, to increase the fault detection rate. The proposed model was evaluated through experiments. The results showed that semantic analysis reduced requirement redundancy and irrelevancy and correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes. The model improves requirement redundancy and irrelevancy by more than 90% compared with other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage, and thus provides guidelines for practitioners and researchers. In the future, we will provide a working prototype of this model as a proof of concept.
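The abstract above describes computing requirement similarity via case-based reasoning. As a minimal sketch only (token-level Jaccard similarity, a simplification of the semantic analysis described, with hypothetical requirement texts), the retrieval step might look like:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two requirement texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical stored case vs. an incoming changed requirement.
stored = "the system shall encrypt user passwords at rest"
incoming = "the system shall encrypt passwords during storage"
print(round(jaccard(stored, incoming), 2))  # 0.5
```

A case-based reasoner would retrieve the stored requirement (and its mapped test cases) whenever the similarity exceeds a chosen threshold, which is how previously used test cases get predicted for reuse.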
Funding: Taif University Researchers Supporting Project number (TURSP-2020/73).
Abstract: COVID-19 is a novel coronavirus disease that was declared a global pandemic in 2019. It spreads worldwide through person-to-person contact, via droplets from coughing and sneezing that quickly settle on surfaces, so anyone breathing in the vicinity of a COVID-19 patient can easily be infected. At the time of writing, vaccines for the disease were under clinical investigation at different pharmaceutical companies, and multiple medical companies had delivered health monitoring kits. A wireless body area network (WBAN) is a healthcare system consisting of nano-sensors used to detect the real-time health condition of a patient. The proposed approach aims to fill the gap between recent technology trends and healthcare infrastructure: if a COVID-19 patient is monitored through WBAN sensors and the network, a physician can guide the patient at the right time with the correct decision. This scenario helps the community maintain social distancing and avoids an unpleasant environment for hospitalized patients. Herein, a Monte Carlo algorithm-guided protocol is developed to probe a secured cipher output. The security cipher helps avoid wireless network issues such as packet loss, network attacks, network interference, and routing problems. The Monte Carlo based COVID-19 detection technique gives 90% better results in terms of time complexity, performance, and efficiency. The results indicate that this technique, combined with edge computing, is robust in time complexity, performance, and efficiency, and is therefore advocated as a significant application for reducing hospital expenses.
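The abstract does not detail its Monte Carlo protocol, so the following is a generic illustration only of the Monte Carlo idea it invokes (the function, parameters, and scenario are hypothetical): estimating a wireless packet-loss rate by repeated random trials.

```python
import random

def simulate_packet_loss(trials: int, loss_prob: float, seed: int = 42) -> float:
    """Monte Carlo estimate of a packet-loss rate: repeat random
    transmission trials and average the observed losses."""
    rng = random.Random(seed)
    losses = sum(1 for _ in range(trials) if rng.random() < loss_prob)
    return losses / trials

# With enough trials the estimate converges toward the true probability.
estimate = simulate_packet_loss(trials=100_000, loss_prob=0.1)
print(round(estimate, 2))
```

The same sampling pattern underlies any Monte Carlo guided protocol: draw many randomized trials, then act on the aggregate statistic rather than a single observation.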
Abstract: Machine learning is a technique for analyzing data that aids the construction of mathematical models. With the growth of the Internet of Things (IoT) and wearable sensor devices, gesture interfaces are becoming a more natural and expedient human-machine interaction method. This type of artificial intelligence, which requires minimal or no direct human intervention in decision-making, is predicated on the ability of intelligent systems to self-train and detect patterns. The rise of touch-free applications and the number of deaf people have increased the significance of hand gesture recognition, whose potential applications span from online gaming to surgical robotics. The location of the hands, the alignment of the fingers, and the hand-to-body posture are the fundamental components of hierarchical emotions in gestures. In gesture recognition, linguistic gestures may be difficult to distinguish from nonsensical motions, and it may be difficult to overcome segmentation uncertainty caused by accidental hand motions or trembling. When users perform the same dynamic gesture, the hand shapes and speeds vary between users, and even across repetitions by the same user. A Machine-Learning-based Gesture Recognition Framework (ML-GRF) for recognizing the beginning and end of a gesture sequence in a continuous data stream is suggested to solve the problem of distinguishing meaningful dynamic gestures from scattered generation. We recommend a similarity-matching-based gesture classification approach to reduce the overall computing cost of identifying actions, and we show how an efficient feature extraction method can reduce thousands of items of single-gesture information to four-binary-digit gesture codes. The simulation findings support the reported accuracy, precision, gesture recognition, sensitivity, and efficiency rates: ML-GRF achieved an accuracy rate of 98.97%, a precision rate of 97.65%, a gesture recognition rate of 98.04%, a sensitivity rate of 96.99%, and an efficiency rate of 95.12%.
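The abstract above describes compressing each gesture into a four-binary-digit code and classifying by similarity matching. A minimal sketch of such matching (the codes, gesture names, and the use of Hamming distance are illustrative assumptions, not the paper's actual scheme) is:

```python
def hamming(a: str, b: str) -> int:
    """Number of differing bits between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

def classify(code: str, templates: dict) -> str:
    """Return the gesture whose stored code is nearest in Hamming distance."""
    return min(templates, key=lambda g: hamming(code, templates[g]))

# Hypothetical 4-bit gesture codes produced by the feature extraction step.
templates = {"swipe_left": "0000", "swipe_right": "0011",
             "pinch": "1100", "wave": "1111"}
print(classify("1100", templates))  # pinch
```

Matching a 4-bit code against a small template table costs a handful of bit comparisons per gesture, which is the kind of computational saving the abstract attributes to the compressed gesture codes.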
Funding: The authors acknowledge Jouf University, Saudi Arabia, for its funding support.
Abstract: The emergence of Industry 4.0 stems from research that has received a great deal of attention in the last few decades, and it has produced a huge paradigm shift in the manufacturing and production sectors. However, this poses a challenge for cybersecurity and highlights the need to address the possible threats targeting the various pillars of Industry 4.0. Before a concrete solution can be provided, certain aspects need to be researched, for instance, the cybersecurity threats and privacy issues in this industry. To fill this gap, this paper discusses potential solutions to the cybersecurity threats targeting this industry and details the consequences of possible attacks and their countermeasures. In particular, the paper investigates possible cyber-attacks targeting the four layers of the IIoT, one of the key pillars of Industry 4.0. Based on a detailed review of the existing literature, we identify possible cyber threats, their consequences, and countermeasures, and we provide a comprehensive framework based on an analysis of cybersecurity and privacy challenges. The suggested framework affords a deeper understanding of the current state of cybersecurity and sets out directions for future research and applications.