Measuring software quality requires software engineers to understand the system's quality attributes and their measurements. A quality attribute is a qualitative property; however, software measurement requires a quantitative counterpart, which is not considered during the development of most software systems. Many research studies have investigated different approaches for measuring software quality, but without practical approaches to quantify and measure quality attributes. This paper proposes a software quality measurement model, based on a software interconnection model, to measure the quality of software components and the overall quality of the software system. Unlike most of the existing approaches, the proposed approach can be applied at the early stages of software development, to different architectural design models, and at different levels of system decomposition. This article introduces a software measurement model that uses a heuristic normalization of the software's internal quality attributes, i.e., coupling and cohesion, for software quality measurement. In this model, the quality of a software component is measured based on its internal strength and the coupling it exhibits with other components. The proposed model was evaluated with nine software engineering teams that agreed to participate in the experiment during the development of their different software systems. The experiments have shown that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit, which degrades their quality and the overall quality of the software system. The introduced model can help in understanding the quality of a software design. In addition, it identifies the locations in a software design that exhibit unnecessary couplings that degrade the quality of the software system, so that they can be eliminated.
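The measurement idea lends itself to a compact illustration. Below is a minimal Python sketch of the heuristic described above; the exact normalization and aggregation are assumptions made for illustration, not the authors' published equations:

```python
# Assumed heuristic: a component's quality is its internal strength
# (cohesion, normalized to [0, 1]) reduced by the coupling it exhibits
# with other components; system quality aggregates component scores.

def component_quality(cohesion: float, couplings: list[float]) -> float:
    """Internal strength degraded by the component's couplings."""
    penalty = min(sum(couplings), cohesion)  # quality cannot drop below 0
    return cohesion - penalty

def system_quality(components: list[tuple[float, list[float]]]) -> float:
    """Overall quality as the mean of component qualities (assumed)."""
    return sum(component_quality(c, k) for c, k in components) / len(components)

# Two highly cohesive components with an unnecessary mutual coupling:
design = [(0.9, [0.3]), (0.8, [0.3])]
print(system_quality(design))  # 0.55 -- the coupling degrades both components
```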
Our dependence on software in every aspect of our lives has exceeded the level that was expected in the past. We have now reached a point where we rely heavily on technology, and it has made life much easier than before. The rapid increase of technology adoption in the different aspects of life has made technology affordable and has led to an even stronger adoption in society. As technology advances, almost every kind of system is now connected to the network: infrastructure, automobiles, airplanes, chemical factories, power stations, and many other systems that are business- and mission-critical. Because of our high dependency on technology in most, if not all, aspects of life, a system failure is considered to be very critical and might result in harming the surrounding environment or putting human life at risk. We apply our conceptual framework to the integration of security and safety by creating a SaS (Safety and Security) domain model. Furthermore, we demonstrate that it is possible to use the goal-oriented KAOS (Knowledge Acquisition in automated Specification) language in threat and hazard analysis to cover both the safety and security domains, making their outputs, or artifacts, well-structured and comprehensive, which results in dependability due to the comprehensiveness of the analysis. The conceptual framework can thereby act as an interface for active interactions in risk and hazard management in terms of universal coverage, finding solutions for differences and contradictions, which can be overcome by integrating the safety and security domains and using a unified system analysis technique (KAOS) that results in analysis centrality. For validation, we chose the Systems-Theoretic Accident Model and Processes (STAMP) approach and its modelling languages, namely System-Theoretic Process Analysis for safety (STPA) on the safety side and System-Theoretic Process Analysis for Security (STPA-sec) on the security side, as the basis of an experiment compared against what was done in SaS. The concepts of the SaS domain model were applied to the STAMP approach using the same example, RemoteSurgery.
Software engineering has been taught at many institutions as an individual course for many years. Recently, many higher education institutions have begun to offer a BSc degree in Software Engineering. Software engineers are required, especially at small enterprises, to play many roles, sometimes simultaneously. Besides technical and managerial skills, software engineers should have additional intellectual skills such as domain-specific abstract thinking. Therefore, the software engineering curriculum should help students build and improve these skills to meet labor market needs. This study aims to explore the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking. We also explore the role of the course project in improving their domain-specific abstract thinking. The study results show that most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly about a specific domain. However, this finding is influenced by the students' lack of comprehension of some modeling and design aspects (e.g., generalization). We believe that such aspects should be introduced to students at early levels of the software engineering curriculum, which will certainly improve their ability to think abstractly about a specific domain.
Chronic diseases, or NCDs (noncommunicable diseases), constitute a major global health challenge, causing millions of deaths and imposing substantial economic burdens annually. This paper introduces the Health Score, a comprehensive framework for assessing chronic disease risk by integrating diverse determinants of health, including social, economic, environmental, behavioral, treatment, cultural, and natural factors. The Health Score, ranging from 0 to 850, quantifies individual and population-level health risks while identifying protective factors through a structured methodology that supports targeted interventions at individual, corporate, and community scales. The paper highlights the rising prevalence of chronic diseases in the United States, projecting that nearly half of the population will be affected by 2030, alongside a global economic burden expected to reach trillions of dollars. Existing surveillance tools, such as the CDS (Chronic Disease Score) and CDIs (Chronic Disease Indicators), are examined for their roles in monitoring health disparities. The Health Score advances a holistic, proactive approach, emphasizing lifestyle modifications, equitable healthcare access, economic opportunities, social support, nature exposure, cultural awareness, and community engagement. By elucidating the complex interplay of health determinants, this framework equips stakeholders with actionable insights to implement effective prevention strategies, ultimately fostering healthier, more resilient populations.
This paper presents 3RVAV (Three-Round Voting with Advanced Validation), a novel Byzantine Fault Tolerant consensus protocol combining Proof-of-Stake with a multi-phase voting mechanism. The protocol introduces three layers of randomized committee voting with distinct participant roles (Validators, Delegators, and Users), achieving (4/5)-threshold approval per round through a verifiable random function (VRF)-based selection process. Our security analysis demonstrates that 3RVAV provides 1 − (1 − s/n)^(3k) resistance to Sybil attacks with n participants and stake s, while maintaining O(kn log n) communication complexity. Experimental simulations show 3247 TPS throughput with 4-second finality, representing a 5.8× improvement over Algorand's committee-based approach. The proposed protocol achieves approximately 4.2-second finality, demonstrating low latency while maintaining strong consistency and resilience. The protocol also introduces a novel punishment matrix incorporating both stake slashing and probabilistic blacklisting, and proves that honest participation is a Nash equilibrium under rational-actor assumptions.
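As a quick numerical check, the Sybil-resistance expression quoted above can be evaluated directly; the sketch below (parameter values are illustrative, not the paper's) also encodes the per-round 4/5 approval threshold:

```python
# Evaluate 1 - (1 - s/n)^(3k): the bound the abstract states for Sybil
# resistance with stake s, n participants, and k committee draws per round.

def sybil_resistance(s: float, n: float, k: int) -> float:
    return 1.0 - (1.0 - s / n) ** (3 * k)

def round_approved(votes_for: int, committee_size: int) -> bool:
    # Each of the three voting rounds requires (4/5)-threshold approval.
    return 5 * votes_for >= 4 * committee_size

print(f"{sybil_resistance(1, 100, 10):.4f}")  # s/n = 0.01, k = 10 -> 0.2603
print(round_approved(80, 100))                # True: exactly meets 4/5
```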
This paper introduces a novel lightweight colour image encryption algorithm, specifically designed for resource-constrained environments such as Internet of Things (IoT) devices. As IoT systems become increasingly prevalent, secure and efficient data transmission becomes crucial. The proposed algorithm addresses this need by offering a robust yet resource-efficient solution for image encryption. Traditional image encryption relies on confusion and diffusion steps. These stages are generally implemented linearly, but this work introduces a new RSP (Random Strip Peeling) algorithm for the confusion step, which disrupts linearity in the lightweight category by using two different sequences generated by the 1D Tent Map with varying initial conditions. The diffusion stage then employs an XOR matrix generated by the Logistic Map. Evaluation metrics such as entropy analysis, key sensitivity, resistance to statistical and differential attacks, and robustness analysis demonstrate that the proposed algorithm is lightweight, robust, and efficient. The proposed encryption scheme achieved average metric values of 99.6056 for NPCR, 33.4397 for UACI, and 7.9914 for information entropy on the SIPI image dataset. It also exhibits a time complexity of O(2×M×N) for an image of size M×N.
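The two chaotic generators named above are standard one-dimensional maps, and their roles in the confusion/diffusion split can be sketched compactly. The following Python sketch reduces the RSP strip-peeling step to a tent-map-driven permutation and uses a logistic-map byte matrix for XOR diffusion; the parameter values and the simplified confusion logic are assumptions, not the paper's implementation:

```python
import numpy as np

def tent_sequence(x0: float, mu: float, n: int) -> np.ndarray:
    """1D Tent Map: x -> mu*x if x < 0.5 else mu*(1 - x)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs[i] = x
    return xs

def logistic_xor_matrix(x0: float, r: float, shape: tuple) -> np.ndarray:
    """Logistic Map x -> r*x*(1-x), quantized to a byte matrix for XOR."""
    n, x = shape[0] * shape[1], x0
    xs = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8).reshape(shape)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)         # stand-in channel
perm = np.argsort(tent_sequence(0.37, 1.99, img.size))          # confusion order
confused = img.flatten()[perm].reshape(img.shape)               # permute pixels
cipher = confused ^ logistic_xor_matrix(0.61, 3.99, img.shape)  # XOR diffusion
```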
Liquefaction is one of the prominent factors leading to damage to soil and structures. In this study, the relationship between liquefaction potential and soil parameters is determined by applying feature importance methods to the Random Forest (RF), Logistic Regression (LR), Multilayer Perceptron (MLP), Support Vector Machine (SVM), and eXtreme Gradient Boosting (XGBoost) algorithms. The feature importance methods consist of permutation and Shapley Additive exPlanations (SHAP) importances, along with each model's built-in feature importance method if it exists. These approaches incorporate an extensive dataset of geotechnical parameters, historical liquefaction events, and soil properties. The feature set comprises 18 parameters gathered from 161 field cases. The algorithms are used to determine the optimum-performance feature set. Compared to other approaches, the study assesses how well these algorithms predict soil liquefaction potential. Early findings show that the algorithms perform well, demonstrating their capacity to identify non-linear connections and improve prediction accuracy. Among the feature set, σv (psf), MSF, CSRσv, FC%, Vs*,40ft (fps), and N1,60,CS have the highest deterministic power on the result. The study's contribution is that, in the absence of extensive data for liquefaction assessment, the proposed method estimates the liquefaction potential using five parameters with promising accuracy.
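The permutation-importance step described above has a direct counterpart in scikit-learn. The sketch below uses placeholder random data in the study's dimensions (18 parameters, 161 cases); it illustrates the procedure, not the study's actual dataset or results:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X = np.random.rand(161, 18)              # stand-in for 18 soil parameters
y = np.random.randint(0, 2, 161)         # liquefied / not liquefied labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure the drop in score it causes.
result = permutation_importance(rf, X_te, y_te, n_repeats=30, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most influential first
```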
The counterflow burner is a combustion device used for research on combustion. Utilizing deep convolutional models to identify the combustion state of a counterflow burner through visible flame images facilitates the optimization of the combustion process and enhances combustion efficiency. Among existing deep convolutional models, InceptionNeXt is a deep learning architecture that integrates the ideas of the Inception series and ConvNeXt. It has garnered significant attention for its computational efficiency, remarkable model accuracy, and exceptional feature extraction capabilities. However, since this model still has limitations in the combustion state recognition task, we propose a Triple-Scale Multi-Stage InceptionNeXt (TSMS-InceptionNeXt) combustion state recognition method based on feature extraction optimization. First, to address the InceptionNeXt model's limited ability to capture dynamic features in flame images, we introduce Triplet Attention, which applies attention to the width, height, and Red Green Blue (RGB) dimensions of the flame images to enhance the model's ability to capture dynamic features. Second, to address the issue of key information loss in the Inception deep convolution layers, we propose a Similarity-based Feature Concentration (SimC) mechanism to enhance the model's capability to concentrate on critical features. Next, to address the model's insufficient receptive field, we propose a Multi-Scale Dilated Channel Parallel Integration (MDCPI) mechanism to enhance the model's ability to extract multi-scale contextual information. Finally, to address the issue of the model's Multi-Layer Perceptron Head (MlpHead) neglecting channel interactions, we propose a Channel Shuffle-Guided Channel-Spatial Attention (ShuffleCS) mechanism, which integrates information from different channels to further enhance the representational power of the input features. To validate the effectiveness of the method, experiments are conducted on a counterflow burner flame visible-light image dataset. The experimental results show that the TSMS-InceptionNeXt model achieved an accuracy of 85.71% on the dataset, improving on the baseline model by 2.38%. It achieved accuracy improvements of 10.47%, 4.76%, 11.19%, and 9.28% over the Reparameterized Visual Geometry Group (RepVGG), Squeeze-enhanced Axial Transformer (SeaFormer), Simplified Graph Transformers (SGFormer), and VanillaNet models, respectively, effectively enhancing the recognition performance for combustion states in counterflow burners.
Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
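The feature-fusion idea described above can be sketched with standard libraries: TF-IDF vectors, averaged Word2Vec embeddings, and BERT [CLS] embeddings concatenated per post. Model choices and dimensions below are assumptions for illustration; the paper's exact fusion architecture may differ:

```python
import numpy as np
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec
from transformers import AutoTokenizer, AutoModel

posts = ["an example social media post", "another annotated post"]

# Lexical features: TF-IDF.
tfidf = TfidfVectorizer().fit_transform(posts).toarray()

# Semantic features: mean of Word2Vec token vectors per post.
w2v = Word2Vec([p.split() for p in posts], vector_size=100, min_count=1)
w2v_feats = np.stack([w2v.wv[p.split()].mean(axis=0) for p in posts])

# Contextual features: BERT [CLS] embedding per post.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(posts, padding=True, return_tensors="pt")
    cls = bert(**enc).last_hidden_state[:, 0, :].numpy()

# Fused representation: one row per post, fed to a multi-class classifier.
fused = np.hstack([tfidf, w2v_feats, cls])
```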
In the context of an increasingly severe cybersecurity landscape and the growing complexity of offensive and defensive techniques, Zero Trust Networks (ZTN) have emerged as a widely recognized technology. Zero Trust not only addresses the shortcomings of traditional perimeter security models but also consistently follows the fundamental principle of "never trust, always verify." Initially proposed by John Kindervag in 2010 and subsequently promoted by Google, the Zero Trust model has become a key approach to addressing the ever-growing security threats in complex network environments. This paper systematically compares the current mainstream cybersecurity models, thoroughly explores the advantages and limitations of the Zero Trust model, and provides an in-depth review of its components and key technologies. Additionally, it analyzes the latest research achievements in the application of Zero Trust technology across various fields, including network security, 6G networks, the Internet of Things (IoT), and cloud computing, in the context of specific use cases. The paper also discusses the innovative contributions of the Zero Trust model in these fields and the challenges it faces, and proposes corresponding solutions and future research directions.
In this study, the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based, deep-learning (LSTM), and ensemble-learning (LightGBM) models. These models were trained with four different feature sets, and their performances were evaluated in terms of accuracy and F-measure metrics. While the first experiments directly used each stock's own features as the model inputs, the second experiments utilized stock features reduced through Variational AutoEncoders (VAE). In the last experiments, in order to grasp the effects of the other banking stocks on individual stock performance, the features belonging to the other stocks were also given as inputs to the models. Other stocks' features were combined with both the own features (named allstock_own) and the VAE-reduced features (named allstock_VAE), and the expanded dimensions of the feature sets were reduced by Recursive Feature Elimination. While the highest success rate, 0.685, was achieved with allstock_own and the LSTM-with-attention model, the combination of allstock_VAE and the LSTM-with-attention model obtained an accuracy rate of 0.675. Although the classification results achieved with both feature types were close, allstock_VAE achieved these results using nearly 16.67% fewer features than allstock_own. When all experimental results were examined, it was found that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features. It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained with the own stock features.
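The dimensionality-reduction step named above, Recursive Feature Elimination, can be sketched in a few lines with scikit-learn; the matrix shapes below are placeholders, not the study's actual feature counts:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X = np.random.rand(500, 80)        # stand-in: combined all-stock feature matrix
y = np.random.randint(0, 2, 500)   # hourly direction labels: up / down

# RFE repeatedly fits the estimator and drops the weakest features.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=20)
X_reduced = selector.fit_transform(X, y)   # keeps the 20 strongest features
```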
One of the most complex tasks for computer-aided diagnosis (intelligent decision support systems) is the segmentation of lesions. Thus, this study proposes a new fully automated method for the segmentation of ovarian and breast ultrasound images. The main contribution of this research is the development of a novel Viola–James model capable of segmenting ultrasound images of breast and ovarian cancer cases. In addition, it proposes an approach that can efficiently generate regions of interest (ROI) and new features that can be used in characterizing lesion boundaries. This study uses two databases for training and testing the proposed segmentation approach. The breast cancer database contains 250 images, while that of the ovarian tumor contains 100 images obtained from several hospitals in Iraq. Results of the experiments showed that the proposed approach performs better than other segmentation methods used for segmenting breast and ovarian ultrasound images. Compared with the other existing techniques, the segmentation result of the proposed system was 78.8% on the breast cancer data set and 79.2% on the ovarian tumor data set. In the classification results, we achieved 95.43% accuracy, 92.20% sensitivity, and 97.5% specificity on the breast cancer data set. For the ovarian tumor data set, we achieved 94.84% accuracy, 96.96% sensitivity, and 90.32% specificity.
The quick spread of the Coronavirus Disease (COVID-19) infection around the world is considered a real danger for global health. The biological structure and symptoms of COVID-19 are similar to those of other viral chest maladies, which makes it challenging to improve approaches for efficient identification of COVID-19 disease. In this study, an automatic prediction of COVID-19 is proposed to automatically discriminate between healthy and COVID-19-infected subjects in X-ray images, using two successful modern families of methods: traditional machine learning (e.g., artificial neural network (ANN), support vector machine (SVM) with linear and radial basis function (RBF) kernels, k-nearest neighbor (k-NN), Decision Tree (DT), and CN2 rule inducer techniques) and deep learning models (e.g., MobileNets V2, ResNet50, GoogleNet, DarkNet, and Xception). A large X-ray dataset, namely COVID-19 vs. Normal (400 healthy cases and 400 COVID-19 cases), has been created and developed. To the best of our knowledge, it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases. Based on the results obtained from the experiments, it can be concluded that all the models performed well; the deep learning models achieved the optimum accuracy of 98.8% with the ResNet50 model. In comparison, among the traditional machine learning techniques, the SVM demonstrated the best result with an accuracy of 95%, and the RBF kernel reached an accuracy of 94%, for the prediction of coronavirus disease 2019.
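As an illustration of how the strongest reported model is typically applied, the sketch below re-heads an ImageNet-pretrained ResNet50 for the binary COVID-19 vs. Normal task; the training loop, preprocessing, and hyperparameters are omitted, and this setup is an assumption, not the paper's exact pipeline:

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace the classifier head
# with a two-way output: healthy vs. COVID-19.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)
# Fine-tune on the X-ray dataset with a standard cross-entropy objective.
```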
Recommendation services have become an essential and active research topic for researchers nowadays. Social data such as reviews play an important role in the recommendation of products. Improvements have been achieved by deep learning approaches for capturing user and product information from short texts. However, such previously used approaches do not fairly and efficiently incorporate users' preferences and product characteristics. The proposed novel Hybrid Deep Collaborative Filtering (HDCF) model combines deep learning capabilities and deep interaction modeling with high performance for true recommendations. To overcome the cold-start problem, a new overall rating is generated by aggregating the Deep Multivariate Rating (DMR: votes, likes, stars, and sentiment scores of reviews) from different external data sources, because different sites have different rating scores for the same product, which confuses users trying to decide whether a product is truly popular or not. The proposed novel HDCF model consists of four major modules, namely User Product Attention, Deep Collaborative Filtering, Neural Sentiment Classifier, and Deep Multivariate Rating (UPA-DCF + NSC + DMR), to solve the addressed problems. Experimental results demonstrate that our novel model outperforms the state of the art on the IMDb, Yelp2013, and Yelp2014 datasets for the true top-n recommendation of products, using HDCF to increase the accuracy, confidence, and trust of recommendation services.
The Internet of Vehicles (IoV) is a networking paradigm related to the intercommunication of vehicles using a network. In a dynamic network, one of the key challenges in IoV is traffic management under an increasing number of vehicles to avoid congestion. Therefore, optimal path selection to route traffic between the origin and destination is vital. This research proposes a realistic strategy to reduce traffic-management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access. First, this work proposes a novel use of the Ant Colony Optimization (ACO) algorithm and formulates the path-planning optimization problem as an Integer Linear Program (ILP). This integrates a future-estimation metric to predict the future arrivals of vehicles when searching for the optimal routes. Considering the mobile nature of IoV, fuzzy logic is used for congestion-level estimation along with the ACO to determine the optimal path. The model results indicate that the suggested scheme outperforms the existing state-of-the-art methods by identifying the shortest and most cost-effective path. Thus, this work strongly supports its use in applications with stringent Quality of Service (QoS) requirements for vehicles.
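At the core of any ACO-based path search is the pheromone-weighted edge-selection rule, which the sketch below illustrates; folding the fuzzy congestion estimate into the per-edge cost is an assumption made here for illustration, not the paper's exact formulation:

```python
import numpy as np

def next_hop_probs(tau: np.ndarray, cost: np.ndarray,
                   alpha: float = 1.0, beta: float = 2.0) -> np.ndarray:
    """Standard ACO rule: p(i,j) proportional to tau(i,j)^alpha * eta(i,j)^beta,
    with heuristic eta = 1/cost (cheap, uncongested edges preferred)."""
    eta = 1.0 / cost
    weights = (tau ** alpha) * (eta ** beta)
    return weights / weights.sum()

tau = np.array([1.0, 1.5, 0.8])    # pheromone on three candidate edges
cost = np.array([4.0, 2.0, 6.0])   # congestion-adjusted edge costs (fuzzy output)
print(next_hop_probs(tau, cost))   # the cheap, well-reinforced edge dominates
```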
In pursuit of enhancing the energy efficiency and operational lifespan of Wireless Sensor Networks (WSNs), this paper delves into the domain of energy-efficient routing protocols. In WSNs, the limited energy resources of Sensor Nodes (SNs) are a big challenge for ensuring their efficient and reliable operation. WSN data gathering involves the utilization of a mobile sink (MS) to mitigate the energy consumption problem through periodic network traversal. The MS strategy minimizes energy consumption and latency by visiting the fewest nodes or predetermined locations, called rendezvous points (RPs), instead of all cluster heads (CHs); CHs subsequently transmit packets to neighboring RPs. The unique contribution of this study is determining the shortest path to reach the RPs, as the MS concept has emerged as a promising solution to the energy consumption problem in WSNs caused by multi-hop data collection with static sinks. In this study, we propose two novel hybrid algorithms, namely "Reduced k-means based on Artificial Neural Network" (RkM-ANN) and "Delay Bound Reduced k-means with ANN" (DBRkM-ANN), for designing a fast, efficient, and proficient MS path based on rendezvous points (RPs). The first algorithm optimizes the MS's latency, while the second considers the design of delay-bound paths, characterized by the number of paths whose delay exceeds the bound for the MS. Both methods use a weight function and k-means clustering to choose RPs in a way that maximizes efficiency and guarantees network-wide coverage. In addition, a method of using MS scheduling for efficient data collection is provided. Extensive simulations and comparisons to several existing algorithms have shown the effectiveness of the suggested methodologies over a wide range of performance indicators.
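The RP-selection idea underlying both algorithms can be sketched with plain k-means: cluster the cluster-head positions and snap each centroid to the nearest CH as a rendezvous point. The weight function and the ANN-based path refinement are omitted here, and the coordinates are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

ch_positions = np.random.rand(40, 2) * 100   # stand-in cluster-head coordinates
k = 6                                        # number of rendezvous points

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(ch_positions)

# Snap each centroid to the nearest actual CH: these become the RPs
# the mobile sink will visit on its tour.
rps = np.array([
    ch_positions[np.argmin(np.linalg.norm(ch_positions - c, axis=1))]
    for c in km.cluster_centers_
])
```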
The overgrowth of weeds growing along with the primary crop in the fields reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have used herbicides. Herbicide application is effective but causes environmental and health concerns. Hence, Precision Agriculture (PA) suggests the variable spraying of herbicides so that herbicide chemicals do not affect the primary plants. Motivated by the gap above, we propose in this paper a Deep Learning (DL) based model for detecting weeds in eggplant (brinjal) crops. The key objective of this study is to detect plant and non-plant (weed) parts in crop images. With the help of object detection, the precise location of weeds in images can be determined. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. A combined approach of classification and object detection is applied in the proposed model. A Convolutional Neural Network (CNN) model is used to classify weed and non-weed images; DL models are then applied for object detection. We have compared DL models based on accuracy, memory usage, and Intersection over Union (IoU). ResNet-18, YOLOv3, CenterNet, and Faster RCNN are used in the proposed work. CenterNet outperforms all other models in terms of accuracy, i.e., 88%. Compared to the other models, YOLOv3 is the least memory-intensive, utilizing 4.78 GB to evaluate the data.
The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to advance in production and lower the risk of diseases. In this era of rapid globalization, the use of information technology has increased. A sensing system is needed to detect rice diseases using Artificial Intelligence (AI), which is being adopted in all fields of medical and plant sciences to access and measure the accuracy of results and detection while lowering the risk of diseases. The Deep Neural Network (DNN) is a technique that can help detect disease present on a rice leaf, as DNNs are also considered a state-of-the-art solution for image detection using sensing nodes. Further in this paper, the adoption of a mixed-method approach, the Deep Convolutional Neural Network (Deep CNN), has assisted the research in increasing the effectiveness of the proposed method. The Deep CNN, a class of deep-learning neural networks, is popular and widely used in the field of image recognition. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After the image acquisition and preprocessing process, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, due to the model's inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata, which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectory of the DRL system by abstracting the state to generate probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under the key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action, and new action are encapsulated as monitors to guide the DRL system to obtain more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
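The trajectory-level abstraction the framework relies on can be illustrated compactly: group states into abstract units, count how often each (unit, action) pair precedes an adverse outcome, and flag the worst pair for monitoring. The sketch below condenses the probabilistic-automaton construction and reverse BFS into a frequency ranking; the rounding-based state abstraction is an assumption for illustration, not the paper's construction:

```python
from collections import Counter

def abstract(state) -> tuple:
    """Assumed state-abstraction function: coarse rounding of each feature."""
    return tuple(round(x, 1) for x in state)

def worst_pdmu_action(trajectories):
    """trajectories: list of (steps, result) pairs, where steps is a
    state-action sequence and result is 'success' or 'failure'."""
    bad, total = Counter(), Counter()
    for steps, result in trajectories:
        for state, action in steps:
            key = (abstract(state), action)
            total[key] += 1
            if result == "failure":
                bad[key] += 1
    # The (PDMU, action) pair most strongly associated with adverse outcomes
    # becomes the candidate for a runtime monitor that substitutes its action.
    return max(total, key=lambda k: bad[k] / total[k])
```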
文摘Measuring software quality requires software engineers to understand the system’s quality attributes and their measurements.The quality attribute is a qualitative property;however,the quantitative feature is needed for software measurement,which is not considered during the development of most software systems.Many research studies have investigated different approaches for measuring software quality,but with no practical approaches to quantify and measure quality attributes.This paper proposes a software quality measurement model,based on a software interconnection model,to measure the quality of software components and the overall quality of the software system.Unlike most of the existing approaches,the proposed approach can be applied at the early stages of software development,to different architectural design models,and at different levels of system decomposition.This article introduces a software measurement model that uses a heuristic normalization of the software’s internal quality attributes,i.e.,coupling and cohesion,for software quality measurement.In this model,the quality of a software component is measured based on its internal strength and the coupling it exhibits with other component(s).The proposed model has been experimented with nine software engineering teams that have agreed to participate in the experiment during the development of their different software systems.The experiments have shown that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit,which degrades their quality and the overall quality of the software system.The introduced model can help in understanding the quality of software design.In addition,it identifies the locations in software design that exhibit unnecessary couplings that degrade the quality of the software systems,which can be eliminated.
文摘Our dependability on software in every aspect of our lives has exceeded the level that was expected in the past. We have now reached a point where we are currently stuck with technology, and it made life much easier than before. The rapid increase of technology adoption in the different aspects of life has made technology affordable and has led to an even stronger adoption in the society. As technology advances, almost every kind of technology is now connected to the network like infrastructure, automobiles, airplanes, chemical factories, power stations, and many other systems that are business and mission critical. Because of our high dependency on technology in most, if not all, aspects of life, a system failure is considered to be very critical and might result in harming the surrounding environment or put human life at risk. We apply our conceptual framework to integration between security and safety by creating a SaS (Safety and Security) domain model. Furthermore, it demonstrates that it is possible to use goal-oriented KAOS (Knowledge Acquisition in automated Specification) language in threat and hazard analysis to cover both safety and security domains making their outputs, or artifacts, well-structured and comprehensive, which results in dependability due to the comprehensiveness of the analysis. The conceptual framework can thereby act as an interface for active interactions in risk and hazard management in terms of universal coverage, finding solutions for differences and contradictions which can be overcome by integrating the safety and security domains and using a unified system analysis technique (KAOS) that will result in analysis centrality. For validation we chose the Systems-Theoretic Accident Model and Processes (STAMP) approach and its modelling language, namely System-Theoretic Process Analysis for safety (STPA), on the safety side and System-Theoretic Process Analysis for Security (STPA-sec) on the security side in order to be the base of the experiment in comparison to what was done in SaS. The concepts of SaS domain model were applied on STAMP approach using the same example @RemoteSurgery.
文摘Software engineering has been taught at many institutions as individual course for many years. Recently, many higher education institutions offer a BSc degree in Software Engineering. Software engineers are required, especially at the small enterprises, to play many roles, and sometimes simultaneously. Beside the technical and managerial skills, software engineers should have additional intellectual skills such as domain-specific abstract thinking. Therefore, software engineering curriculum should help the students to build and improve their skills to meet the labor market needs. This study aims to explore the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking. Also, we explore the role of the course project in improving their domain-specific abstract thinking. The study results have shown that, most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly on specific domain. However, this finding is influenced by the students’ lack of the comprehension of some modeling and design aspects (e.g., generalization). We believe that, such aspects should be introduced to the students at early levels of software engineering curriculum, which certainly will improve their ability to think abstractly on specific domain.
文摘Chronic diseases,or NCDs(noncommunicable diseases),constitute a major global health challenge,causing millions of deaths and imposing substantial economic burdens annually.This paper introduces the Health Score,a comprehensive framework for assessing chronic disease risk by integrating diverse determinants of health,including social,economic,environmental,behavioral,treatment,culture,and nature factors.The Health Score,ranging from 0 to 850,quantifies indivi dual and population-level health risks while identifying protective factors through a structured methodology that supports targeted interventions at individual,corporate,and community scales.The paper highlights the rising prevalence of chronic diseases in the United States,projecting that nearly half of the population will be affected by 2030,alongside a global economic burden expected to reach trillions of dollars.Existing surveillance tools,such as the CDS(Chronic Disease Score)and CDIs(Chronic Disease Indicators),are examined for their roles in monitoring health disparities.The Health Score advances a holistic,proactive approach,emphasizing lifestyle modifications,equitable healthcare access,economic opportunities,social support,nature exposure,cu ltural awareness,and community engagement.By elucidating the complex interplay of health determinants,this framework equips stakeholders with actionable insights to implement effective prevention strategies,ultimately fostering healthier,more resi lient populations.
文摘This paper presents 3RVAV(Three-Round Voting with Advanced Validation),a novel Byzantine Fault Tolerant consensus protocol combining Proof-of-Stake with a multi-phase voting mechanism.The protocol introduces three layers of randomized committee voting with distinct participant roles(Validators,Delegators,and Users),achieving(4/5)-threshold approval per round through a verifiable random function(VRF)-based selection process.Our security analysis demonstrates 3RVAV provides 1−(1−s/n)^(3k) resistance to Sybil attacks with n participants and stake s,while maintaining O(kn log n)communication complexity.Experimental simulations show 3247 TPS throughput with 4-s finality,representing a 5.8×improvement over Algorand’s committee-based approach.The proposed protocol achieves approximately 4.2-s finality,demonstrating low latency while maintaining strong consistency and resilience.The protocol introduces a novel punishment matrix incorporating both stake slashing and probabilistic blacklisting,proving a Nash equilibrium for honest participation under rational actor assumptions.
基金Türkiye Bilimsel ve Teknolojik Arastırma Kurumu。
文摘This paper introduces a novel lightweight colour image encryption algorithm,specifically designed for resource-constrained environments such as Internet of Things(IoT)devices.As IoT systems become increasingly prevalent,secure and efficient data transmission becomes crucial.The proposed algorithm addresses this need by offering a robust yet resource-efficient solution for image encryption.Traditional image encryption relies on confusion and diffusion steps.These stages are generally implemented linearly,but this work introduces a new RSP(Random Strip Peeling)algorithm for the confusion step,which disrupts linearity in the lightweight category by using two different sequences generated by the 1D Tent Map with varying initial conditions.The diffusion stage then employs an XOR matrix generated by the Logistic Map.Different evaluation metrics,such as entropy analysis,key sensitivity,statistical and differential attacks resistance,and robustness analysis demonstrate the proposed algorithm's lightweight,robust,and efficient.The proposed encryption scheme achieved average metric values of 99.6056 for NPCR,33.4397 for UACI,and 7.9914 for information entropy in the SIPI image dataset.It also exhibits a time complexity of O(2×M×N)for an image of size M×N.
文摘Liquefaction is one of the prominent factors leading to damage to soil and structures.In this study,the rela-tionship between liquefaction potential and soil parameters is determined by applying feature importance methods to Random Forest(RF),Logistic Regression(LR),Multilayer Perceptron(MLP),Support Vector Machine(SVM)and eXtreme Gradient Boosting(XGBoost)algorithms.Feature importance methods consist of permuta-tion and Shapley Additive exPlanations(SHAP)importances along with the used model’s built-in feature importance method if it exists.These suggested approaches incorporate an extensive dataset of geotechnical parameters,historical liquefaction events,and soil properties.The feature set comprises 18 parameters that are gathered from 161 field cases.Algorithms are used to determine the optimum performance feature set.Compared to other approaches,the study assesses how well these algorithms predict soil liquefaction potential.Early findings show that the algorithms perform well,demonstrating their capacity to identify non-linear connections and improve prediction accuracy.Among the feature set,σ,v(psf),MSF,CSRσ,v,FC%,Vs*,40f t(f ps)and N1,60,CS are the ones that have the highest deterministic power on the result.The study’s contribution is that,in the absence of extensive data for liquefaction assessment,the proposed method estimates the liquefaction potential using five parameters with promising accuracy.
文摘The counterflow burner is a combustion device used for research on combustion.By utilizing deep convolutional models to identify the combustion state of a counter flow burner through visible flame images,it facilitates the optimization of the combustion process and enhances combustion efficiency.Among existing deep convolutional models,InceptionNeXt is a deep learning architecture that integrates the ideas of the Inception series and ConvNeXt.It has garnered significant attention for its computational efficiency,remarkable model accuracy,and exceptional feature extraction capabilities.However,since this model still has limitations in the combustion state recognition task,we propose a Triple-Scale Multi-Stage InceptionNeXt(TSMS-InceptionNeXt)combustion state recognitionmethod based on feature extraction optimization.First,to address the InceptionNeXt model’s limited ability to capture dynamic features in flame images,we introduce Triplet Attention,which applies attention to the width,height,and Red Green Blue(RGB)dimensions of the flame images to enhance its ability to model dynamic features.Secondly,to address the issue of key information loss in the Inception deep convolution layers,we propose a Similarity-based Feature Concentration(SimC)mechanism to enhance the model’s capability to concentrate on critical features.Next,to address the insufficient receptive field of the model,we propose a Multi-Scale Dilated Channel Parallel Integration(MDCPI)mechanism to enhance the model’s ability to extract multi-scale contextual information.Finally,to address the issue of the model’s Multi-Layer Perceptron Head(MlpHead)neglecting channel interactions,we propose a Channel Shuffle-Guided Channel-Spatial Attention(ShuffleCS)mechanism,which integrates information from different channels to further enhance the representational power of the input features.To validate the effectiveness of the method,experiments are conducted on the counterflow burner flame visible light image dataset.The experimental results show that the TSMS-InceptionNeXt model achieved an accuracy of 85.71%on the dataset,improving by 2.38%over the baseline model and outperforming the baseline model’s performance.It achieved accuracy improvements of 10.47%,4.76%,11.19%,and 9.28%compared to the Reparameterized Visual Geometry Group(RepVGG),Squeeze-erunhanced Axial Transoformer(SeaFormer),Simplified Graph Transformers(SGFormer),and VanillaNet models,respectively,effectively enhancing the recognition performance for combustion states in counterflow burners.
基金funded by Scientific Research Deanship at University of Hail-Saudi Arabia through Project Number RG-23092.
文摘Cyberbullying on social media poses significant psychological risks,yet most detection systems over-simplify the task by focusing on binary classification,ignoring nuanced categories like passive-aggressive remarks or indirect slurs.To address this gap,we propose a hybrid framework combining Term Frequency-Inverse Document Frequency(TF-IDF),word-to-vector(Word2Vec),and Bidirectional Encoder Representations from Transformers(BERT)based models for multi-class cyberbullying detection.Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships,fused with BERT’s contextual embeddings to capture syntactic and semantic complexities.We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories:age,ethnicity,gender,religion,and indirect aggression.Among BERT variants tested,BERT Base Un-Cased achieved the highest performance with 93%accuracy(standard deviation across±1%5-fold cross-validation)and an average AUC of 0.96,outperforming standalone TF-IDF(78%)and Word2Vec(82%)models.Notably,it achieved near-perfect AUC scores(0.99)for age and ethnicity-based bullying.A comparative analysis with state-of-the-art benchmarks,including Generative Pre-trained Transformer 2(GPT-2)and Text-to-Text Transfer Transformer(T5)models highlights BERT’s superiority in handling ambiguous language.This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification,offering a scalable solution for moderating nuanced harmful content.
基金supported by the National Natural Science Foundation of China(Grants Nos.62473146,62072249 and 62072056)the National Science Foundation of Hunan Province(Grant No.2024JJ3017)+1 种基金the Hunan Provincial Key Research and Development Program(Grant No.2022GK2019)by the Researchers Supporting Project Number(RSP2024R509),King Saud University,Riyadh,Saudi Arabia.
文摘In the context of an increasingly severe cybersecurity landscape and the growing complexity of offensive and defen-sive techniques,Zero Trust Networks(ZTN)have emerged as a widely recognized technology.Zero Trust not only addresses the shortcomings of traditional perimeter security models but also consistently follows the fundamental principle of“never trust,always verify.”Initially proposed by John Cortez in 2010 and subsequently promoted by Google,the Zero Trust model has become a key approach to addressing the ever-growing security threats in complex network environments.This paper systematically compares the current mainstream cybersecurity models,thoroughly explores the advantages and limitations of the Zero Trust model,and provides an in-depth review of its components and key technologies.Additionally,it analyzes the latest research achievements in the application of Zero Trust technology across various fields,including network security,6G networks,the Internet of Things(IoT),and cloud computing,in the context of specific use cases.The paper also discusses the innovative contributions of the Zero Trust model in these fields,the challenges it faces,and proposes corresponding solutions and future research directions.
文摘In this study,the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based,deep-learning(LSTM)and ensemble learning(Light-GBM)models.These models were trained with four different feature sets and their performances were evaluated in terms of accuracy and F-measure metrics.While the first experiments directly used the own stock features as the model inputs,the second experiments utilized reduced stock features through Variational AutoEncoders(VAE).In the last experiments,in order to grasp the effects of the other banking stocks on individual stock performance,the features belonging to other stocks were also given as inputs to our models.While combining other stock features was done for both own(named as allstock_own)and VAE-reduced(named as allstock_VAE)stock features,the expanded dimensions of the feature sets were reduced by Recursive Feature Elimination.As the highest success rate increased up to 0.685 with allstock_own and LSTM with attention model,the combination of allstock_VAE and LSTM with the attention model obtained an accuracy rate of 0.675.Although the classification results achieved with both feature types was close,allstock_VAE achieved these results using nearly 16.67%less features compared to allstock_own.When all experimental results were examined,it was found out that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features.It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained by own stock features.
文摘One of the most complex tasks for computer-aided diagnosis(Intelligent decision support system)is the segmentation of lesions.Thus,this study proposes a new fully automated method for the segmentation of ovarian and breast ultrasound images.The main contributions of this research is the development of a novel Viola–James model capable of segmenting the ultrasound images of breast and ovarian cancer cases.In addition,proposed an approach that can efficiently generate region-of-interest(ROI)and new features that can be used in characterizing lesion boundaries.This study uses two databases in training and testing the proposed segmentation approach.The breast cancer database contains 250 images,while that of the ovarian tumor has 100 images obtained from several hospitals in Iraq.Results of the experiments showed that the proposed approach demonstrates better performance compared with those of other segmentation methods used for segmenting breast and ovarian ultrasound images.The segmentation result of the proposed system compared with the other existing techniques in the breast cancer data set was 78.8%.By contrast,the segmentation result of the proposed system in the ovarian tumor data set was 79.2%.In the classification results,we achieved 95.43%accuracy,92.20%sensitivity,and 97.5%specificity when we used the breast cancer data set.For the ovarian tumor data set,we achieved 94.84%accuracy,96.96%sensitivity,and 90.32%specificity.
文摘The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.
文摘Recommendation services become an essential and hot research topic for researchers nowadays.Social data such asReviews play an important role in the recommendation of the products.Improvement was achieved by deep learning approaches for capturing user and product information from a short text.However,such previously used approaches do not fairly and efficiently incorporate users’preferences and product characteristics.The proposed novel Hybrid Deep Collaborative Filtering(HDCF)model combines deep learning capabilities and deep interaction modeling with high performance for True Recommendations.To overcome the cold start problem,the new overall rating is generated by aggregating the Deep Multivariate Rating DMR(Votes,Likes,Stars,and Sentiment scores of reviews)from different external data sources because different sites have different rating scores about the same product that make confusion for the user to make a decision,either product is truly popular or not.The proposed novel HDCF model consists of four major modules such as User Product Attention,Deep Collaborative Filtering,Neural Sentiment Classifier,and Deep Multivariate Rating(UPA-DCF+NSC+DMR)to solve the addressed problems.Experimental results demonstrate that our novel model is outperforming state-of-the-art IMDb,Yelp2013,and Yelp2014 datasets for the true top-n recommendation of products using HDCF to increase the accuracy,confidence,and trust of recommendation services.
基金supported by“Human Resources Program in Energy Technology”of the Korea Institute of Energy Technology Evaluation and Planning(KETEP),granted financial resources from the Ministry of Trade,Industry&Energy,Republic of Korea.(No.20204010600090).
文摘The Internet of Vehicles(IoV)is a networking paradigm related to the intercommunication of vehicles using a network.In a dynamic network,one of the key challenges in IoV is traffic management under increasing vehicles to avoid congestion.Therefore,optimal path selection to route traffic between the origin and destination is vital.This research proposed a realistic strategy to reduce traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access.Firstly,this work proposed a novel use of the Ant Colony Optimization(ACO)algorithm and formulated the path planning optimization problem as an Integer Linear Program(ILP).This integrates the future estimation metric to predict the future arrivals of the vehicles,searching the optimal routes.Considering the mobile nature of IOV,fuzzy logic is used for congestion level estimation along with the ACO to determine the optimal path.The model results indicate that the suggested scheme outperforms the existing state-of-the-art methods by identifying the shortest and most cost-effective path.Thus,this work strongly supports its use in applications having stringent Quality of Service(QoS)requirements for the vehicles.
Funding: Research Supporting Project Number (RSP2024R421), King Saud University, Riyadh, Saudi Arabia.
Abstract: In pursuit of enhancing the energy efficiency and operational lifespan of Wireless Sensor Networks (WSNs), this paper delves into the domain of energy-efficient routing protocols. In WSNs, the limited energy resources of sensor nodes (SNs) pose a major challenge to efficient and reliable operation. The mobile sink (MS) concept has emerged as a promising solution to the energy consumption caused by multi-hop data collection with static sinks: an MS periodically traverses the network, visiting only a few predetermined locations called rendezvous points (RPs) instead of all cluster heads (CHs), and the CHs transmit their packets to neighboring RPs, minimizing energy consumption and latency. The distinctive contribution of this study is determining the shortest path through the RPs. We propose two novel hybrid algorithms, "Reduced k-means based on Artificial Neural Network" (RkM-ANN) and "Delay-Bound Reduced k-means with ANN" (DBRkM-ANN), for designing a fast, efficient, and proficient MS path over the rendezvous points. The first algorithm minimizes the MS's latency, while the second designs delay-bound paths, i.e., it limits the number of paths whose delay exceeds the bound for the MS. Both methods use a weight function and k-means clustering to choose RPs in a way that maximizes efficiency and guarantees network-wide coverage. In addition, an MS scheduling method for efficient data collection is provided. Extensive simulations and comparisons with several existing algorithms demonstrate the effectiveness of the suggested methodologies over a wide range of performance indicators.
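As a rough sketch of the RP-selection idea (under our own assumptions, not the papers' exact weight function), sensor nodes can be grouped with a weighted k-means, the cluster centres taken as rendezvous points, and a nearest-neighbour tour used to approximate a short mobile-sink path:

```python
# Hedged sketch: weighted k-means picks RPs; a greedy tour orders them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, (60, 2))  # sensor positions in a 100x100 field
load = rng.uniform(1, 5, 60)          # assumed per-node data load as weight

km = KMeans(n_clusters=6, n_init=10, random_state=1).fit(
    nodes, sample_weight=load)
rps = km.cluster_centers_             # rendezvous points

# Nearest-neighbour tour over the RPs, starting from a depot at (0, 0).
tour, rest = [np.array([0.0, 0.0])], list(range(len(rps)))
while rest:
    i = min(rest, key=lambda j: np.linalg.norm(rps[j] - tour[-1]))
    tour.append(rps[i])
    rest.remove(i)
length = sum(np.linalg.norm(b - a) for a, b in zip(tour, tour[1:]))
print(f"{len(rps)} RPs, MS tour length ~{length:.1f} m")
```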
Funding: funded by the Researchers Supporting Project Number (RSP2023R509), King Saud University, Riyadh, Saudi Arabia.
Abstract: The overgrowth of weeds alongside the primary crop reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have turned to herbicides. Herbicide application is effective but raises environmental and health concerns. Hence, Precision Agriculture (PA) suggests variable spraying of herbicides so that the chemicals do not affect the primary plants. Motivated by this gap, we propose a Deep Learning (DL)-based model for detecting eggplant (brinjal) weed. The key objective of this study is to distinguish plant from non-plant (weed) regions in crop images; object detection then yields the precise location of weeds in the images. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. The proposed model combines classification and object detection: a Convolutional Neural Network (CNN) classifies weed and non-weed images, and DL models are then applied for object detection. We compared DL models based on accuracy, memory usage, and Intersection over Union (IoU), using ResNet-18, YOLOv3, CenterNet, and Faster R-CNN. CenterNet outperforms all other models in accuracy at 88%, while YOLOv3 is the least memory-intensive, using 4.78 GB to evaluate the data.
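Since the detectors above are compared by Intersection over Union, a minimal reference implementation of IoU for axis-aligned boxes given as (x1, y1, x2, y2) may help; the example boxes are invented.

```python
# IoU: overlap area divided by union area of two axis-aligned boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# e.g., a predicted weed box vs. a ground-truth annotation
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # -> ~0.143
```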
Funding: funded by the University of Haripur, KP, Pakistan, Researchers Supporting Project number (PKURFL2324L33).
Abstract: The detection of rice leaf disease is significant because Pakistan, as an agricultural country and rice exporter, needs to advance production and lower the risk of disease. In this era of rapid globalization, information technology use has grown, and a sensing system is needed to detect rice diseases using Artificial Intelligence (AI), which is being adopted across medical and plant sciences to improve the accuracy of detection while lowering the risk of disease. A Deep Neural Network (DNN) can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. This paper adopts a Deep Convolutional Neural Network (Deep CNN), a class of deep-learning networks popular and widely used for image recognition, to increase the effectiveness of the proposed method. A dataset of images covering three main leaf diseases was selected for training and testing the proposed model. After image acquisition and preprocessing, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
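For illustration, a small CNN of the general kind described, with a three-way head for the brown spot, bacterial blight, and blast classes, might look like the sketch below; the layer sizes and input resolution are our assumptions, not the authors' architecture.

```python
# Hedged sketch of a small Deep CNN classifier for three rice diseases.
import torch
import torch.nn as nn

class RiceLeafCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64), nn.ReLU(),
            nn.Linear(64, n_classes),  # brown spot / bacterial blight / blast
        )

    def forward(self, x):  # x: (batch, 3, 224, 224)
        return self.head(self.features(x))

logits = RiceLeafCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 3])
```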
Funding: supported by the Shanghai Science and Technology Committee (22511105500); the National Natural Science Foundation of China (62172299, 62032019); the Space Optoelectronic Measurement and Perception Laboratory, Beijing Institute of Control Engineering (LabSOMP-2023-03); and the Central Universities of China (2023-4-YB-05).
Abstract: Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, due to the model's inherent uncertainty, rigorous validation is required before it can be applied to real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL makes model behavior difficult to test. We propose a novel performance-improvement framework based on probabilistic automata, which proactively identifies and corrects critical vulnerabilities of DRL systems so that the performance of DRL models on real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting states into probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) identifies the key PDMU-action pairs with the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under each key PDMU, we search for the new action with the greatest impact on favorable results. Finally, the key PDMU, the undesirable action, and the new action are encapsulated as monitors that guide the DRL system toward more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three real job-scheduling scenarios confirm the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
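A greatly simplified sketch of the abstraction step is given below: trajectories (state-action sequences plus a final good/bad label) are bucketed into abstract units, and the (unit, action) pair most associated with bad outcomes is flagged, roughly in the spirit of identifying key PDMU-action pairs. The toy trajectories and the state-abstraction rule are invented, and the reverse BFS over the automaton is omitted.

```python
# Hedged sketch: abstract states into coarse units and flag the
# (unit, action) pair with the highest empirical bad-outcome rate.
from collections import defaultdict

def abstract(state):
    """Toy abstraction: bucket a continuous state into a coarse unit id."""
    return round(state, 0)

trajectories = [  # ([(state, action), ...], final outcome)
    ([(0.1, "a"), (1.2, "b")], "bad"),
    ([(0.2, "a"), (1.1, "b")], "bad"),
    ([(0.1, "a"), (1.3, "c")], "good"),
    ([(0.9, "c"), (1.2, "c")], "good"),
]

bad, total = defaultdict(int), defaultdict(int)
for steps, outcome in trajectories:
    for state, action in steps:
        key = (abstract(state), action)
        total[key] += 1
        bad[key] += outcome == "bad"

key = max(total, key=lambda k: bad[k] / total[k])
print(key, f"bad-outcome rate = {bad[key] / total[key]:.2f}")
# -> (1.0, 'b') bad-outcome rate = 1.00: a candidate pair to monitor/correct.
```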