Measuring software quality requires software engineers to understand the system’s quality attributes and their measurements. A quality attribute is a qualitative property; however, a quantitative counterpart is needed for software measurement, which is not considered during the development of most software systems. Many research studies have investigated different approaches for measuring software quality, but without practical approaches to quantify and measure quality attributes. This paper proposes a software quality measurement model, based on a software interconnection model, to measure the quality of software components and the overall quality of the software system. Unlike most existing approaches, the proposed approach can be applied at the early stages of software development, to different architectural design models, and at different levels of system decomposition. The article introduces a software measurement model that uses a heuristic normalization of the software’s internal quality attributes, i.e., coupling and cohesion, for software quality measurement. In this model, the quality of a software component is measured based on its internal strength and the coupling it exhibits with other components. The proposed model was evaluated with nine software engineering teams that agreed to participate in the experiment during the development of their software systems. The experiments showed that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit, which degrades their quality and the overall quality of the software system. The introduced model can help in understanding the quality of a software design. In addition, it identifies the locations in a software design that exhibit unnecessary couplings, which degrade the quality of the software system and can be eliminated.
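The central idea of the model can be sketched in a few lines: a component's quality is its internal strength (cohesion) reduced by the coupling it exhibits, and system quality aggregates over components. The function names, the [0, 1] normalization, and the simple averaging below are illustrative assumptions, not the paper's actual formulas.

```python
# Illustrative sketch (assumed scales): cohesion and each coupling value
# are normalized to [0, 1]; quality is cohesion minus total coupling.
def component_quality(cohesion: float, couplings: list[float]) -> float:
    return max(0.0, cohesion - sum(couplings))

def system_quality(components: dict[str, tuple[float, list[float]]]) -> float:
    # Average component quality as a simple stand-in for overall quality.
    qualities = [component_quality(c, k) for c, k in components.values()]
    return sum(qualities) / len(qualities)

system = {
    "parser": (0.9, [0.1]),        # strong cohesion, light coupling
    "logger": (0.8, [0.3, 0.2]),   # unnecessary couplings degrade quality
}
overall = system_quality(system)
```

Under this toy scoring, removing one of the logger's couplings raises both its own quality and the system average, which is exactly the kind of design-level insight the abstract describes.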
Software engineering has been taught at many institutions as an individual course for many years. Recently, many higher education institutions have begun to offer a BSc degree in Software Engineering. Software engineers are required, especially at small enterprises, to play many roles, sometimes simultaneously. Besides technical and managerial skills, software engineers need additional intellectual skills such as domain-specific abstract thinking. Therefore, a software engineering curriculum should help students build and improve these skills to meet labor market needs. This study explores the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking, as well as the role of the course project in improving it. The results show that most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly about a specific domain. However, this finding is influenced by the students’ lack of comprehension of some modeling and design aspects (e.g., generalization). We believe that such aspects should be introduced to students at early levels of the software engineering curriculum, which would improve their ability to think abstractly about a specific domain.
This paper presents 3RVAV (Three-Round Voting with Advanced Validation), a novel Byzantine Fault Tolerant consensus protocol combining Proof-of-Stake with a multi-phase voting mechanism. The protocol introduces three layers of randomized committee voting with distinct participant roles (Validators, Delegators, and Users), achieving (4/5)-threshold approval per round through a verifiable random function (VRF)-based selection process. Our security analysis demonstrates that 3RVAV provides 1 − (1 − s/n)^(3k) resistance to Sybil attacks with n participants and stake s, while maintaining O(kn log n) communication complexity. Experimental simulations show a throughput of 3247 TPS with approximately 4.2 s finality, a 5.8× improvement over Algorand’s committee-based approach, demonstrating low latency while maintaining strong consistency and resilience. The protocol also introduces a novel punishment matrix incorporating both stake slashing and probabilistic blacklisting, and proves that honest participation is a Nash equilibrium under rational-actor assumptions.
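The stated Sybil-resistance bound is easy to evaluate numerically; the sketch below simply plugs values into 1 − (1 − s/n)^(3k) (three committee selections per round, k rounds) to show how resistance grows with rounds. The parameter values are illustrative.

```python
def sybil_resistance(s: float, n: int, k: int) -> float:
    """Resistance bound 1 - (1 - s/n)^(3k): n participants, stake s, k rounds."""
    return 1.0 - (1.0 - s / n) ** (3 * k)

# At a fixed stake ratio s/n = 0.1, resistance rises steeply with k.
curve = [sybil_resistance(10, 100, k) for k in (1, 3, 5)]
```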
Chronic diseases, or NCDs (noncommunicable diseases), constitute a major global health challenge, causing millions of deaths and imposing substantial economic burdens annually. This paper introduces the Health Score, a comprehensive framework for assessing chronic disease risk by integrating diverse determinants of health, including social, economic, environmental, behavioral, treatment, cultural, and natural factors. The Health Score, ranging from 0 to 850, quantifies individual and population-level health risks while identifying protective factors through a structured methodology that supports targeted interventions at individual, corporate, and community scales. The paper highlights the rising prevalence of chronic diseases in the United States, projecting that nearly half of the population will be affected by 2030, alongside a global economic burden expected to reach trillions of dollars. Existing surveillance tools, such as the CDS (Chronic Disease Score) and CDIs (Chronic Disease Indicators), are examined for their roles in monitoring health disparities. The Health Score advances a holistic, proactive approach, emphasizing lifestyle modifications, equitable healthcare access, economic opportunities, social support, nature exposure, cultural awareness, and community engagement. By elucidating the complex interplay of health determinants, this framework equips stakeholders with actionable insights to implement effective prevention strategies, ultimately fostering healthier, more resilient populations.
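One way a 0-850 composite score over weighted determinants could work is sketched below. The factor weights, the [0, 1] sub-score scale, and the linear aggregation are all invented for illustration; the abstract does not disclose the actual Health Score methodology.

```python
# Hypothetical weights over the determinant categories named in the
# abstract (assumed to sum to 1.0); not the framework's real weighting.
FACTORS = {
    "social": 0.15, "economic": 0.15, "environmental": 0.15,
    "behavioral": 0.20, "treatment": 0.15, "culture": 0.10, "nature": 0.10,
}

def health_score(subscores: dict[str, float]) -> int:
    """subscores: each determinant in [0, 1]; result scaled to 0-850."""
    total = sum(FACTORS[f] * subscores[f] for f in FACTORS)
    return round(850 * total)
```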
This paper introduces a novel lightweight colour image encryption algorithm, specifically designed for resource-constrained environments such as Internet of Things (IoT) devices. As IoT systems become increasingly prevalent, secure and efficient data transmission becomes crucial. The proposed algorithm addresses this need by offering a robust yet resource-efficient solution for image encryption. Traditional image encryption relies on confusion and diffusion steps, which are generally implemented linearly. This work instead introduces a new RSP (Random Strip Peeling) algorithm for the confusion step, which disrupts linearity in the lightweight category by using two different sequences generated by the 1D Tent Map with varying initial conditions. The diffusion stage then employs an XOR matrix generated by the Logistic Map. Evaluation metrics such as entropy analysis, key sensitivity, resistance to statistical and differential attacks, and robustness analysis demonstrate that the proposed algorithm is lightweight, robust, and efficient. The proposed encryption scheme achieved average metric values of 99.6056 for NPCR, 33.4397 for UACI, and 7.9914 for information entropy on the SIPI image dataset. It also exhibits a time complexity of O(2×M×N) for an image of size M×N.
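The two chaotic generators the scheme builds on are the 1D Tent map (driving confusion) and the Logistic map (producing the XOR diffusion mask). Below is a minimal sketch of both maps and the XOR diffusion step; the seeds, control parameters, and byte quantization are illustrative assumptions, and the actual RSP strip-peeling permutation is omitted.

```python
def tent_sequence(x0: float, mu: float, n: int) -> list[float]:
    """1D Tent map: x -> mu*x if x < 0.5 else mu*(1 - x)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs

def logistic_mask(x0: float, r: float, n: int) -> list[int]:
    """Logistic map x -> r*x*(1-x), quantized to one byte per pixel."""
    mask, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        mask.append(int(x * 256) & 0xFF)
    return mask

def diffuse(pixels: list[int], mask: list[int]) -> list[int]:
    return [p ^ m for p, m in zip(pixels, mask)]

pixels = [10, 200, 33, 47]             # a toy "image" row
mask = logistic_mask(0.3141, 3.99, len(pixels))
cipher = diffuse(pixels, mask)
recovered = diffuse(cipher, mask)      # XOR diffusion is self-inverting
```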
Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT’s contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT’s superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
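Of the three feature extractors in the hybrid, TF-IDF is compact enough to sketch from first principles. The toy documents below are placeholders, and the Word2Vec and BERT halves of the fusion are omitted.

```python
import math
from collections import Counter

def tfidf(docs: list[str]) -> tuple[list[str], list[list[float]]]:
    """Return (vocabulary, one TF-IDF vector per document)."""
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    df = Counter(w for toks in tokenized for w in set(toks))  # document freq
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[w] / len(toks) * math.log(n / df[w]) for w in vocab])
    return vocab, vectors

vocab, vecs = tfidf(["you are awful", "you are kind"])
```

Terms appearing in every document get a zero IDF weight, which is the "lexical specificity" the abstract relies on: only discriminative words carry signal.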
In the context of an increasingly severe cybersecurity landscape and the growing complexity of offensive and defensive techniques, Zero Trust Networks (ZTN) have emerged as a widely recognized technology. Zero Trust not only addresses the shortcomings of traditional perimeter security models but also consistently follows the fundamental principle of “never trust, always verify.” Initially proposed by John Kindervag in 2010 and subsequently promoted by Google, the Zero Trust model has become a key approach to addressing the ever-growing security threats in complex network environments. This paper systematically compares the current mainstream cybersecurity models, thoroughly explores the advantages and limitations of the Zero Trust model, and provides an in-depth review of its components and key technologies. Additionally, it analyzes the latest research on the application of Zero Trust technology across various fields, including network security, 6G networks, the Internet of Things (IoT), and cloud computing, in the context of specific use cases. The paper also discusses the innovative contributions of the Zero Trust model in these fields and the challenges it faces, and proposes corresponding solutions and future research directions.
Twenty samples of endothelia removed from normal corneas and corneas after penetrating keratoplasty (0.5, 1, 2, and 3 months postoperatively) were observed by scanning electron microscopy. Photographs of the endothelia at the graft-host junction were analyzed with a computer-assisted image analysis system; the morphometric indexes examined included cell area, perimeter, density, figure coefficient, long axis, and coefficient of variation of the area, among others. Results showed that the morphology and density of the endothelial cells changed markedly after the operation and improved slowly but progressively with time, although some differences still existed at 3 months postoperatively. Using these techniques, the experiment confirmed and enriched the theories on corneal endothelial wound healing, revealing some new characteristics of endothelial wound healing following penetrating keratoplasty.
In this study, the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based, deep learning (LSTM), and ensemble learning (LightGBM) models. These models were trained with four different feature sets, and their performances were evaluated in terms of accuracy and F-measure metrics. While the first experiments directly used each stock’s own features as model inputs, the second experiments used stock features reduced through Variational AutoEncoders (VAE). In the last experiments, in order to grasp the effects of the other banking stocks on individual stock performance, the features of the other stocks were also given as inputs to our models. Other stock features were combined with both the raw (named allstock_own) and the VAE-reduced (named allstock_VAE) stock features, and the expanded feature sets were reduced by Recursive Feature Elimination. The highest success rate reached 0.685 with allstock_own and the LSTM-with-attention model, while the combination of allstock_VAE and the LSTM-with-attention model obtained an accuracy of 0.675. Although the classification results achieved with both feature types were close, allstock_VAE achieved them using nearly 16.67% fewer features than allstock_own. Across all experiments, the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features, and the results obtained with VAE-reduced stock features were similar to those obtained with the raw stock features.
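The direction labels and the F-measure used for evaluation are simple to state. A sketch with placeholder prices (not the study's code):

```python
def directions(prices: list[float]) -> list[int]:
    """1 if the next hourly price is higher than the current one, else 0."""
    return [1 if b > a else 0 for a, b in zip(prices, prices[1:])]

def f_measure(y_true: list[int], y_pred: list[int]) -> float:
    """Harmonic mean of precision and recall on the 'up' class."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```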
Agriculture plays an important role in the economy of all countries. However, plant diseases may badly affect the quality of food and production, and ultimately the economy. Agriculturalists spend large amounts of money on plant disease detection and management, yet manual detection of plant diseases is complicated and time-consuming. Consequently, automated systems for plant disease detection using machine learning (ML) approaches have been proposed. However, most existing ML techniques for plant disease recognition are based on handcrafted features and rarely deal with large amounts of input data. To address this issue, this article proposes a fully automated method for plant disease detection and recognition using deep neural networks. In the proposed method, the AlexNet and VGG19 CNNs are used as pre-trained architectures, extracting features from the given data with fine-tuning. After convolutional neural network feature extraction, the method selects the best subset of features through the correlation coefficient and feeds them to a number of classifiers, including K-Nearest Neighbor, Support Vector Machine, Probabilistic Neural Network, Fuzzy Logic, and Artificial Neural Network. The proposed method is validated on a self-collected dataset generated through an augmentation step. The achieved average accuracy of our method is more than 96%, outperforming recent techniques.
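The correlation-coefficient feature selection step can be sketched directly: score each feature column by its Pearson correlation with the label and keep those above a threshold. The 0.5 threshold and the toy columns below are assumptions for illustration.

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(features: dict[str, list[float]],
                    labels: list[float],
                    threshold: float = 0.5) -> list[str]:
    # Keep features whose absolute correlation with the label is strong.
    return [name for name, col in features.items()
            if abs(pearson(col, labels)) >= threshold]
```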
Coronavirus disease 2019 (COVID-19) has been termed a “Pandemic Disease” that has infected many people and caused many deaths on a nearly unprecedented level. As more people are infected each day, it continues to pose a serious threat to humanity worldwide. As a result, healthcare systems around the world are facing a shortage of medical space such as wards and sickbeds. In most cases, healthy people experience tolerable symptoms if they are infected. However, in other cases, patients may suffer severe symptoms and require treatment in an intensive care unit. Thus, hospitals should identify patients with a high risk of death and treat them first. To solve this problem, a number of models have been developed for mortality prediction, but they lack interpretability and generalization. To address these issues, we propose a COVID-19 mortality prediction model that can provide new insights. We identified blood factors that could affect the prediction of COVID-19 mortality. In particular, we focused on dependency reduction using partial correlation and mutual information. Next, we used the Class-Attribute Interdependency Maximization (CAIM) algorithm to bin continuous values. Then, we used Jensen-Shannon Divergence (JSD) and Bayesian posterior probability to create less redundant and more accurate rules. The result is a ruleset with its own posterior probability, with rules of the form “if antecedent then result, posterior probability (θ)”. If a sample matches the extracted rules, then the result is positive. The average AUC score was 96.77% for the validation dataset, and the F1-score was 92.8% for the test data. Compared to the results of previous studies, the model shows good performance in terms of classification performance, generalization, and interpretability.
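The Jensen-Shannon divergence used in the rule-construction step is symmetric and, with base-2 logarithms, bounded in [0, 1]. A minimal implementation over discrete distributions (a sketch, not the paper's code):

```python
import math

def kl(p: list[float], q: list[float]) -> float:
    """Kullback-Leibler divergence (base 2), skipping zero-probability terms."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p: list[float], q: list[float]) -> float:
    """Jensen-Shannon divergence: average KL to the midpoint distribution."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```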
Decision making in medical diagnosis is a complicated process. A large number of overlapping structures and cases, together with distractions, tiredness, and the limitations of the human visual system, can lead to inappropriate diagnoses. Machine learning (ML) methods have been employed to assist clinicians in overcoming these limitations and in making informed and correct decisions in disease diagnosis. Academic papers applying machine learning to disease diagnosis are increasingly being published. Hence, to determine how ML is used to improve diagnosis in varied medical disciplines, a systematic review is conducted in this study. To carry out the review, six different databases were selected, and inclusion and exclusion criteria were employed to limit the search. The eligible articles were classified by publication year, authors, type of article, research objective, inputs and outputs, problem and research gaps, and findings and results. The selected articles were then analyzed to show the impact of ML methods on improving disease diagnosis. The findings of this study show the most used ML methods and the most common diseases focused on by researchers, as well as the increase in the use of machine learning for disease diagnosis over the years. These results will help in focusing on neglected areas and in determining the various ways in which ML methods could be employed to achieve desirable results.
One of the most complex tasks for computer-aided diagnosis (intelligent decision support systems) is the segmentation of lesions. This study proposes a new fully automated method for the segmentation of ovarian and breast ultrasound images. The main contribution of this research is the development of a novel Viola–Jones-based model capable of segmenting ultrasound images of breast and ovarian cancer cases. In addition, we propose an approach that can efficiently generate regions of interest (ROI) and new features that can be used to characterize lesion boundaries. This study uses two databases for training and testing the proposed segmentation approach. The breast cancer database contains 250 images, while the ovarian tumor database has 100 images obtained from several hospitals in Iraq. The experiments showed that the proposed approach performs better than other segmentation methods used for segmenting breast and ovarian ultrasound images: its segmentation result was 78.8% on the breast cancer dataset and 79.2% on the ovarian tumor dataset. In the classification results, we achieved 95.43% accuracy, 92.20% sensitivity, and 97.5% specificity on the breast cancer dataset, and 94.84% accuracy, 96.96% sensitivity, and 90.32% specificity on the ovarian tumor dataset.
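The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts. A small helper with made-up counts to illustrate the definitions:

```python
def diagnostics(tp: int, tn: int, fp: int, fn: int) -> dict[str, float]:
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
    }
```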
Recommendation services have become an essential and hot research topic nowadays. Social data such as reviews play an important role in product recommendation. Deep learning approaches have improved the capture of user and product information from short text, but previous approaches do not fairly and efficiently incorporate users’ preferences and product characteristics. The proposed novel Hybrid Deep Collaborative Filtering (HDCF) model combines deep learning capabilities and deep interaction modeling with high performance for true recommendations. To overcome the cold start problem, a new overall rating is generated by aggregating the Deep Multivariate Rating, DMR (votes, likes, stars, and sentiment scores of reviews), from different external data sources, because different sites assign different rating scores to the same product, which confuses users trying to decide whether a product is truly popular. The proposed HDCF model consists of four major modules, User Product Attention, Deep Collaborative Filtering, Neural Sentiment Classifier, and Deep Multivariate Rating (UPA-DCF + NSC + DMR), to solve the addressed problems. Experimental results demonstrate that our model outperforms the state of the art on the IMDb, Yelp2013, and Yelp2014 datasets for true top-n product recommendation, using HDCF to increase the accuracy, confidence, and trust of recommendation services.
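One plausible reading of the DMR aggregation, sketched below, is normalizing heterogeneous signals (votes, likes, stars, review sentiment) from different sites onto one scale and taking a weighted mean. The scales and weights are assumptions; the abstract does not specify the actual aggregation.

```python
def normalize(value: float, lo: float, hi: float) -> float:
    """Map a raw signal from its native [lo, hi] range into [0, 1]."""
    return (value - lo) / (hi - lo)

def overall_rating(sources: list[tuple[float, float, float]],
                   weights: list[float]) -> float:
    """sources: (value, lo, hi) per signal; result on a 0-5 star scale."""
    score = sum(w * normalize(v, lo, hi)
                for (v, lo, hi), w in zip(sources, weights))
    return 5 * score / sum(weights)
```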
The quick spread of the Coronavirus Disease (COVID-19) infection around the world is considered a real danger to global health. The biological structure and symptoms of COVID-19 are similar to those of other viral chest maladies, which makes it challenging to develop approaches for efficient identification of COVID-19 disease. In this study, automatic COVID-19 identification is proposed to discriminate between healthy and COVID-19-infected subjects in X-ray images using two families of successful modern methods: traditional machine learning (e.g., artificial neural network (ANN), support vector machine (SVM) with linear and radial basis function (RBF) kernels, k-nearest neighbor (k-NN), Decision Tree (DT), and CN2 rule inducer techniques) and deep learning models (e.g., MobileNet V2, ResNet50, GoogleNet, DarkNet, and Xception). A large X-ray dataset has been created and developed, namely COVID-19 vs. Normal (400 healthy cases and 400 COVID cases). To the best of our knowledge, it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases. Based on the experimental results, all the models performed well; among the deep learning models, ResNet50 achieved the optimum accuracy of 98.8%, while among the traditional machine learning techniques, the SVM demonstrated the best results, with 95% accuracy for the linear kernel and 94% for the RBF kernel, for the prediction of coronavirus disease 2019.
In pursuit of enhancing the energy efficiency and operational lifespan of Wireless Sensor Networks (WSNs), this paper delves into the domain of energy-efficient routing protocols. In WSNs, the limited energy resources of Sensor Nodes (SNs) are a big challenge for ensuring efficient and reliable operation. The mobile sink (MS) concept has emerged as a promising solution to the energy consumption problem caused by multi-hop data collection with static sinks: data gathering uses an MS that periodically traverses the network. The MS strategy minimizes energy consumption and latency by visiting the fewest nodes, or predetermined locations called rendezvous points (RPs), instead of all cluster heads (CHs); CHs subsequently transmit packets to neighboring RPs. The unique contribution of this study is finding the shortest path to reach the RPs. We propose two novel hybrid algorithms, “Reduced k-means based on Artificial Neural Network” (RkM-ANN) and “Delay Bound Reduced k-means with ANN” (DBRkM-ANN), for designing fast, efficient, and proficient MS paths based on the RPs. The first algorithm optimizes the MS’s latency, while the second designs delay-bound paths, i.e., paths whose delay for the MS stays within a given bound. Both methods use a weight function and k-means clustering to choose RPs in a way that maximizes efficiency and guarantees network-wide coverage. In addition, a method of MS scheduling for efficient data collection is provided. Extensive simulations and comparisons with several existing algorithms have shown the effectiveness of the suggested methodologies over a wide range of performance indicators.
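The clustering half of both algorithms can be sketched with a bare-bones k-means that places rendezvous points near groups of sensor nodes. The weight function and the ANN path model from the abstract are omitted, and the coordinates below are placeholders.

```python
def kmeans(points: list[tuple[float, float]],
           centroids: list[tuple[float, float]],
           iters: int = 10) -> list[tuple[float, float]]:
    """Plain k-means on 2D node coordinates; centroids act as candidate RPs."""
    for _ in range(iters):
        groups: dict[int, list[tuple[float, float]]] = {i: [] for i in range(len(centroids))}
        for p in points:
            # Assign each sensor node to its nearest candidate RP.
            i = min(range(len(centroids)),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2 + (p[1] - centroids[i][1]) ** 2)
            groups[i].append(p)
        # Move each RP to the mean of the nodes it serves.
        centroids = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g)) if g else centroids[i]
            for i, g in groups.items()
        ]
    return centroids
```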
The Internet of Vehicles (IoV) is a networking paradigm for the intercommunication of vehicles over a network. In a dynamic network, one of the key challenges in IoV is managing traffic as the number of vehicles increases, to avoid congestion. Therefore, optimal path selection to route traffic between origin and destination is vital. This research proposes a realistic strategy to reduce traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access. First, this work proposes a novel use of the Ant Colony Optimization (ACO) algorithm and formulates the path planning optimization problem as an Integer Linear Program (ILP). It integrates a future-estimation metric to predict the future arrivals of vehicles when searching for optimal routes. Considering the mobile nature of IoV, fuzzy logic is used for congestion-level estimation along with ACO to determine the optimal path. The model results indicate that the suggested scheme outperforms existing state-of-the-art methods by identifying the shortest and most cost-effective path, which strongly supports its use in applications with stringent Quality of Service (QoS) requirements for vehicles.
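A compact ACO sketch for path selection on a toy road graph is below. The pheromone and evaporation parameters are illustrative, and the fuzzy congestion estimator and ILP formulation from the abstract are omitted.

```python
import random

def aco_shortest(graph, src, dst, ants=20, iters=30, rho=0.5, seed=1):
    """Toy ACO. graph: {node: {neighbor: cost}}; returns (best_path, best_cost)."""
    random.seed(seed)
    tau = {(n, m): 1.0 for n in graph for m in graph[n]}  # pheromone per edge
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            node, path, cost, seen = src, [src], 0, {src}
            while node != dst:
                choices = [m for m in graph[node] if m not in seen]
                if not choices:
                    break
                # Prefer edges with high pheromone and low cost.
                weights = [tau[(node, m)] / graph[node][m] for m in choices]
                nxt = random.choices(choices, weights)[0]
                cost += graph[node][nxt]
                node = nxt
                seen.add(node)
                path.append(node)
            if node == dst and cost < best_cost:
                best, best_cost = path, cost
        for e in tau:                        # evaporation
            tau[e] *= 1.0 - rho
        if best is not None:                 # reinforce the best path found
            for a, b in zip(best, best[1:]):
                tau[(a, b)] += 1.0 / best_cost
    return best, best_cost

roads = {"A": {"B": 1, "C": 5}, "B": {"C": 1}, "C": {}}
```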
The overgrowth of weeds alongside the primary crop reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have turned to herbicides. Herbicide application is effective but raises environmental and health concerns. Hence, Precision Agriculture (PA) suggests variable spraying of herbicides so that the herbicide chemicals do not affect the primary plants. Motivated by this gap, we propose a Deep Learning (DL) based model for detecting eggplant (brinjal) weeds in this paper. The key objective of this study is to detect plant and non-plant (weed) parts in crop images; with object detection, the precise location of weeds in the images can be obtained. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. The proposed model applies a combined approach of classification and object detection: a Convolutional Neural Network (CNN) model classifies weed and non-weed images, and DL models are then applied for object detection. We compared DL models based on accuracy, memory usage, and Intersection over Union (IoU). ResNet-18, YOLOv3, CenterNet, and Faster RCNN are used in the proposed work. CenterNet outperforms all other models in terms of accuracy, at 88%, while YOLOv3 is the least memory-intensive, using 4.78 GB to evaluate the data.
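IoU, one of the three comparison metrics, measures the overlap between a predicted box and a ground-truth box. A minimal implementation with boxes as (x1, y1, x2, y2) pixel coordinates:

```python
def iou(a: tuple[float, float, float, float],
        b: tuple[float, float, float, float]) -> float:
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```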
A simple, stable, and reliable virtual logic analyzer is presented. The logic analyzer has two modules: a test pattern generation module and a logic monitoring module. Combining the two modules, one is able to test a digital circuit automatically. The user interface of the logic analyzer was programmed with LabVIEW, and two Arduino UNO boards were used as the hardware targets to input and output the logic signals. The maximum pattern update rate was set to 20 Hz, and the maximum logic sampling rate to 200 Hz. After twelve thousand cycles of exhaustive tests, the logic analyzer had 100% accuracy. As a tutorial showing how to build virtual instruments with Arduino, the software details are also explained in this article.
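The exhaustive-test idea, applying every input pattern and comparing the output against the expected truth table, can be sketched in software, independently of the LabVIEW/Arduino hardware. The function names below are illustrative.

```python
def patterns(n_inputs: int) -> list[list[int]]:
    """All 2^n input patterns for an n-input circuit, bit 0 first."""
    return [[(i >> b) & 1 for b in range(n_inputs)] for i in range(2 ** n_inputs)]

def check(circuit, truth_table: list[int], n_inputs: int) -> bool:
    """Apply every pattern and compare the output with the truth table."""
    return all(circuit(*p) == truth_table[i]
               for i, p in enumerate(patterns(n_inputs)))
```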
文摘Measuring software quality requires software engineers to understand the system’s quality attributes and their measurements.The quality attribute is a qualitative property;however,the quantitative feature is needed for software measurement,which is not considered during the development of most software systems.Many research studies have investigated different approaches for measuring software quality,but with no practical approaches to quantify and measure quality attributes.This paper proposes a software quality measurement model,based on a software interconnection model,to measure the quality of software components and the overall quality of the software system.Unlike most of the existing approaches,the proposed approach can be applied at the early stages of software development,to different architectural design models,and at different levels of system decomposition.This article introduces a software measurement model that uses a heuristic normalization of the software’s internal quality attributes,i.e.,coupling and cohesion,for software quality measurement.In this model,the quality of a software component is measured based on its internal strength and the coupling it exhibits with other component(s).The proposed model has been experimented with nine software engineering teams that have agreed to participate in the experiment during the development of their different software systems.The experiments have shown that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit,which degrades their quality and the overall quality of the software system.The introduced model can help in understanding the quality of software design.In addition,it identifies the locations in software design that exhibit unnecessary couplings that degrade the quality of the software systems,which can be eliminated.
Abstract: Software engineering has been taught at many institutions as an individual course for many years. Recently, many higher education institutions have begun to offer a BSc degree in Software Engineering. Software engineers are required, especially at small enterprises, to play many roles, sometimes simultaneously. Besides technical and managerial skills, software engineers need additional intellectual skills such as domain-specific abstract thinking. Therefore, a software engineering curriculum should help students build and improve the skills needed to meet labor market demands. This study explores the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking, as well as the role of the course project in improving it. The results show that most of the surveyed students believe that learning and practicing modeling and design concepts contributes to their ability to think abstractly about a specific domain. However, this finding is influenced by the students' lack of comprehension of some modeling and design aspects (e.g., generalization). We believe that such aspects should be introduced at early levels of the software engineering curriculum, which will certainly improve students' ability to think abstractly about a specific domain.
Abstract: This paper presents 3RVAV (Three-Round Voting with Advanced Validation), a novel Byzantine Fault Tolerant consensus protocol combining Proof-of-Stake with a multi-phase voting mechanism. The protocol introduces three layers of randomized committee voting with distinct participant roles (Validators, Delegators, and Users), achieving (4/5)-threshold approval per round through a verifiable random function (VRF)-based selection process. Our security analysis demonstrates that 3RVAV provides 1 − (1 − s/n)^(3k) resistance to Sybil attacks with n participants and stake s, while maintaining O(kn log n) communication complexity. Experimental simulations show a throughput of 3247 TPS with approximately 4.2-second finality, representing a 5.8× improvement over Algorand's committee-based approach and demonstrating low latency while maintaining strong consistency and resilience. The protocol also introduces a novel punishment matrix incorporating both stake slashing and probabilistic blacklisting, and proves that honest participation is a Nash equilibrium under rational actor assumptions.
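The Sybil-resistance bound quoted in the abstract, 1 − (1 − s/n)^(3k), can be evaluated directly. A minimal sketch (parameter values below are purely illustrative, not from the paper's experiments) shows how the bound tightens as the number of voting rounds k grows:

```python
def sybil_resistance(s: float, n: int, k: int) -> float:
    """1 - (1 - s/n)^(3k): the abstract's resistance bound over 3k
    committee selections, for stake s among n participants."""
    return 1.0 - (1.0 - s / n) ** (3 * k)

# Illustrative values only: stake s = 10 out of n = 100 participants.
for k in (1, 2, 5):
    print(k, round(sybil_resistance(10, 100, k), 4))
```

Each additional round multiplies the survival probability of an undetected Sybil committee by (1 − s/n)^3, so the resistance approaches 1 geometrically in k.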
Abstract: Chronic diseases, or NCDs (noncommunicable diseases), constitute a major global health challenge, causing millions of deaths and imposing substantial economic burdens annually. This paper introduces the Health Score, a comprehensive framework for assessing chronic disease risk by integrating diverse determinants of health, including social, economic, environmental, behavioral, treatment, cultural, and natural factors. The Health Score, ranging from 0 to 850, quantifies individual and population-level health risks while identifying protective factors through a structured methodology that supports targeted interventions at individual, corporate, and community scales. The paper highlights the rising prevalence of chronic diseases in the United States, projecting that nearly half of the population will be affected by 2030, alongside a global economic burden expected to reach trillions of dollars. Existing surveillance tools, such as the CDS (Chronic Disease Score) and CDIs (Chronic Disease Indicators), are examined for their roles in monitoring health disparities. The Health Score advances a holistic, proactive approach, emphasizing lifestyle modifications, equitable healthcare access, economic opportunities, social support, nature exposure, cultural awareness, and community engagement. By elucidating the complex interplay of health determinants, this framework equips stakeholders with actionable insights to implement effective prevention strategies, ultimately fostering healthier, more resilient populations.
Funding: Türkiye Bilimsel ve Teknolojik Araştırma Kurumu.
Abstract: This paper introduces a novel lightweight colour image encryption algorithm, specifically designed for resource-constrained environments such as Internet of Things (IoT) devices. As IoT systems become increasingly prevalent, secure and efficient data transmission becomes crucial. The proposed algorithm addresses this need by offering a robust yet resource-efficient solution for image encryption. Traditional image encryption relies on confusion and diffusion steps, which are generally implemented linearly. This work introduces a new RSP (Random Strip Peeling) algorithm for the confusion step, which disrupts linearity in the lightweight category by using two different sequences generated by the 1D Tent Map with varying initial conditions. The diffusion stage then employs an XOR matrix generated by the Logistic Map. Evaluation metrics such as entropy analysis, key sensitivity, resistance to statistical and differential attacks, and robustness analysis demonstrate that the proposed algorithm is lightweight, robust, and efficient. The proposed encryption scheme achieved average metric values of 99.6056 for NPCR, 33.4397 for UACI, and 7.9914 for information entropy on the SIPI image dataset. It also exhibits a time complexity of O(2×M×N) for an image of size M×N.
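The diffusion idea described above — XOR-ing pixels against a keystream drawn from the Logistic Map, with the Tent Map reserved for the confusion stage — can be sketched in a few lines. This is an illustrative toy, not the authors' RSP implementation; the seeds, map parameters, and byte quantization are all assumptions:

```python
def tent_map(x: float, mu: float = 1.99) -> float:
    """1D Tent Map (used by the paper's confusion stage; parameters illustrative)."""
    return mu * x if x < 0.5 else mu * (1 - x)

def logistic_map(x: float, r: float = 3.99) -> float:
    """Logistic Map, used here to generate the XOR diffusion keystream."""
    return r * x * (1 - x)

def keystream(seed: float, length: int, step) -> list[int]:
    """Iterate a chaotic map and quantize each state to a byte."""
    x, out = seed, []
    for _ in range(length):
        x = step(x)
        out.append(int(x * 255))
    return out

def xor_diffuse(pixels: list[int], seed: float = 0.37) -> list[int]:
    """Diffusion stage: XOR each pixel with the chaotic keystream (involutive)."""
    ks = keystream(seed, len(pixels), logistic_map)
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [12, 200, 57, 99]
cipher = xor_diffuse(plain)
assert xor_diffuse(cipher) == plain   # XOR with the same keystream decrypts
```

Because XOR is its own inverse, running the diffusion twice with the same seed recovers the plaintext, which is why key sensitivity of the chaotic seed matters so much in such schemes.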
Funding: Funded by the Scientific Research Deanship at the University of Hail, Saudi Arabia, through Project Number RG-23092.
Abstract: Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
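The hybrid-feature idea above — fusing a sparse lexical vector with a dense semantic one — can be illustrated with a toy TF-IDF and late fusion by concatenation. The tokenized posts, the stand-in embedding, and the fusion-by-concatenation choice are all assumptions for illustration; the paper's actual pipeline uses trained Word2Vec and BERT embeddings:

```python
import math
from collections import Counter

def tfidf_vectors(docs: list[list[str]]):
    """Toy TF-IDF: docs are token lists; returns (vocab, one vector per doc)."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    vocab = sorted(df)
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([(tf[w] / len(d)) * math.log(n / df[w]) for w in vocab])
    return vocab, vecs

def fuse(lexical: list[float], semantic: list[float]) -> list[float]:
    """Late fusion by concatenation: lexical (TF-IDF) + semantic (embedding)."""
    return lexical + semantic

docs = [["you", "are", "stupid"], ["have", "a", "nice", "day"], ["you", "are", "kind"]]
vocab, vecs = tfidf_vectors(docs)
fake_embedding = [0.1, -0.2, 0.3]        # stand-in for a Word2Vec/BERT vector
features = fuse(vecs[0], fake_embedding)
```

The fused vector keeps TF-IDF's sharp lexical signal (rare slurs get high weight) while the dense part carries context, which is the complementarity the framework exploits.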
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62473146, 62072249, and 62072056), the National Science Foundation of Hunan Province (Grant No. 2024JJ3017), the Hunan Provincial Key Research and Development Program (Grant No. 2022GK2019), and the Researchers Supporting Project Number (RSP2024R509), King Saud University, Riyadh, Saudi Arabia.
Abstract: In the context of an increasingly severe cybersecurity landscape and the growing complexity of offensive and defensive techniques, Zero Trust Networks (ZTN) have emerged as a widely recognized technology. Zero Trust not only addresses the shortcomings of traditional perimeter security models but also consistently follows the fundamental principle of "never trust, always verify." Initially proposed by John Kindervag in 2010 and subsequently promoted by Google, the Zero Trust model has become a key approach to addressing the ever-growing security threats in complex network environments. This paper systematically compares the current mainstream cybersecurity models, thoroughly explores the advantages and limitations of the Zero Trust model, and provides an in-depth review of its components and key technologies. Additionally, it analyzes the latest research achievements in the application of Zero Trust technology across various fields, including network security, 6G networks, the Internet of Things (IoT), and cloud computing, in the context of specific use cases. The paper also discusses the innovative contributions of the Zero Trust model in these fields and the challenges it faces, and proposes corresponding solutions and future research directions.
Abstract: Twenty samples of endothelia removed from normal corneas and after penetrating keratoplasty (0.5, 1, 2, and 3 months post-operation) were observed by scanning electron microscopy. Photographs of the endothelia at the graft-host junction were analyzed by a computer-assisted image analysis system; the morphometric indexes examined were cell area, perimeter, density, figure coefficient, long axis, coefficient of variation of the area, and others. Results showed that the morphology and density of the endothelial cells changed obviously after the operation and improved slowly but progressively with time, although some differences still existed at 3 months postoperatively. Using these new techniques, the experiment confirmed and enriched the theories of corneal endothelial wound healing, revealing some new characteristics of endothelial wound healing following penetrating keratoplasty.
Abstract: In this study, the hourly directions of eight banking stocks on Borsa Istanbul were predicted using linear-based, deep learning (LSTM), and ensemble learning (LightGBM) models. These models were trained with four different feature sets, and their performances were evaluated in terms of accuracy and F-measure metrics. While the first experiments directly used each stock's own features as model inputs, the second experiments utilized stock features reduced through Variational AutoEncoders (VAE). In the last experiments, in order to grasp the effects of the other banking stocks on individual stock performance, the features belonging to other stocks were also given as inputs to our models. Combining other stock features was done for both own (named allstock_own) and VAE-reduced (named allstock_VAE) stock features, and the expanded feature sets were reduced by Recursive Feature Elimination. While the highest success rate increased up to 0.685 with allstock_own and the LSTM-with-attention model, the combination of allstock_VAE and the LSTM-with-attention model obtained an accuracy rate of 0.675. Although the classification results achieved with both feature types were close, allstock_VAE achieved these results using nearly 16.67% fewer features compared to allstock_own. When all experimental results were examined, it was found that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features. It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained with the own stock features.
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and in part by the MSIP (Ministry of Science, ICT & Future Planning), Korea, under the National Program for Excellence in SW (2015-0-00938) supervised by the IITP.
Abstract: Agriculture plays an important role in the economy of all countries. However, plant diseases can badly affect food quality, production, and ultimately the economy. Agriculturalists spend large amounts of money on plant disease detection and management, yet manual detection of plant diseases is complicated and time-consuming. Consequently, automated systems for plant disease detection using machine learning (ML) approaches have been proposed. However, most existing ML techniques for plant disease recognition are based on handcrafted features and rarely deal with huge amounts of input data. To address this issue, this article proposes a fully automated method for plant disease detection and recognition using deep neural networks. In the proposed method, the AlexNet and VGG19 CNNs are used as pre-trained architectures, capable of extracting features from the given data with fine-tuning. After convolutional neural network feature extraction, the method selects the best subset of features through the correlation coefficient and feeds them to a number of classifiers, including K-Nearest Neighbor, Support Vector Machine, Probabilistic Neural Network, Fuzzy logic, and Artificial Neural Network. The validation of the proposed method is carried out on a self-collected dataset generated through an augmentation step. The achieved average accuracy of our method is more than 96%, outperforming recent techniques.
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2020-0-01602) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: Coronavirus disease 2019 (COVID-19) has been termed a "pandemic disease" that has infected many people and caused many deaths on a nearly unprecedented level. As more people are infected each day, it continues to pose a serious threat to humanity worldwide. As a result, healthcare systems around the world are facing a shortage of medical space such as wards and sickbeds. In most cases, healthy people experience tolerable symptoms if they are infected. However, in other cases, patients may suffer severe symptoms and require treatment in an intensive care unit. Thus, hospitals should identify patients who have a high risk of death and treat them first. To solve this problem, a number of models have been developed for mortality prediction, but they lack interpretability and generalization. To address these issues, we propose a COVID-19 mortality prediction model that can provide new insights. We identified blood factors that could affect the prediction of COVID-19 mortality, focusing in particular on dependency reduction using partial correlation and mutual information. Next, we used the Class-Attribute Interdependency Maximization (CAIM) algorithm to bin continuous values. Then, we used Jensen-Shannon Divergence (JSD) and Bayesian posterior probability to create less redundant and more accurate rules. The result is a ruleset, each rule carrying its own posterior probability, in the form "if antecedent then result, posterior probability (θ)". If a sample matches an extracted rule, the result is positive. The average AUC score was 96.77% for the validation dataset and the F1-score was 92.8% for the test data. Compared to the results of previous studies, the model shows good classification performance, generalization, and interpretability.
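The rule-refinement step above leans on Jensen-Shannon Divergence. A minimal stdlib sketch of JSD (computed in bits, so it lies in [0, 1]) might look like this; the two-outcome distributions below are illustrative, not the paper's blood-factor data:

```python
import math

def kl_divergence(p: list[float], q: list[float]) -> float:
    """Kullback-Leibler divergence in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p: list[float], q: list[float]) -> float:
    """Jensen-Shannon divergence: symmetric and bounded in [0, 1] with log base 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]   # midpoint distribution
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

print(jsd([0.5, 0.5], [0.5, 0.5]))   # identical distributions -> 0.0
print(jsd([1.0, 0.0], [0.0, 1.0]))   # disjoint distributions  -> 1.0
```

Because JSD is symmetric and bounded, it is a convenient score for deciding whether two candidate rules describe overlapping (redundant) or distinct outcome distributions.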
Funding: Supported in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and in part by the MSIP (Ministry of Science, ICT & Future Planning), Korea, under the National Program for Excellence in SW (2015-0-00938) supervised by the IITP.
Abstract: Decision making in medical diagnosis is a complicated process. A large number of overlapping structures and cases, along with distractions, tiredness, and limitations of the human visual system, can lead to inappropriate diagnoses. Machine learning (ML) methods have been employed to assist clinicians in overcoming these limitations and in making informed and correct decisions in disease diagnosis. Academic papers involving the use of machine learning for disease diagnosis are increasingly being published. Hence, to determine how ML is used to improve diagnosis in varied medical disciplines, a systematic review is conducted in this study. To carry out the review, six different databases were selected, and inclusion and exclusion criteria were employed to limit the research. The eligible articles were classified by publication year, authors, type of article, research objective, inputs and outputs, problem and research gaps, and findings and results. The selected articles were then analyzed to show the impact of ML methods on improving disease diagnosis. The findings of this study show the most used ML methods and the most common diseases focused on by researchers, as well as the increase in the use of machine learning for disease diagnosis over the years. These results will help in focusing on neglected areas and in determining ways in which ML methods could be employed to achieve desirable results.
Abstract: One of the most complex tasks for computer-aided diagnosis (intelligent decision support systems) is the segmentation of lesions. This study proposes a new fully automated method for the segmentation of ovarian and breast ultrasound images. The main contribution of this research is the development of a novel Viola-Jones-based model capable of segmenting ultrasound images of breast and ovarian cancer cases. In addition, it proposes an approach that can efficiently generate regions of interest (ROI) and new features that can be used to characterize lesion boundaries. This study uses two databases for training and testing the proposed segmentation approach. The breast cancer database contains 250 images, while the ovarian tumor database has 100 images obtained from several hospitals in Iraq. The experiments showed that the proposed approach performs better than other segmentation methods used for breast and ovarian ultrasound images: the segmentation result of the proposed system was 78.8% on the breast cancer dataset and 79.2% on the ovarian tumor dataset. In the classification results, we achieved 95.43% accuracy, 92.20% sensitivity, and 97.5% specificity on the breast cancer dataset; for the ovarian tumor dataset, we achieved 94.84% accuracy, 96.96% sensitivity, and 90.32% specificity.
Abstract: Recommendation services have become an essential and hot research topic nowadays. Social data such as reviews play an important role in product recommendation. Improvements have been achieved by deep learning approaches that capture user and product information from short text. However, previous approaches do not fairly and efficiently incorporate users' preferences and product characteristics. The proposed novel Hybrid Deep Collaborative Filtering (HDCF) model combines deep learning capabilities and deep interaction modeling with high performance for true recommendations. To overcome the cold-start problem, a new overall rating is generated by aggregating the Deep Multivariate Rating, DMR (votes, likes, stars, and sentiment scores of reviews), from different external data sources, because different sites have different rating scores for the same product, which confuses the user trying to decide whether a product is truly popular or not. The proposed HDCF model consists of four major modules, User Product Attention, Deep Collaborative Filtering, Neural Sentiment Classifier, and Deep Multivariate Rating (UPA-DCF + NSC + DMR), to solve the addressed problems. Experimental results demonstrate that our model outperforms the state of the art on the IMDb, Yelp2013, and Yelp2014 datasets for true top-n recommendation of products, using HDCF to increase the accuracy, confidence, and trust of recommendation services.
Abstract: The quick spread of the Coronavirus Disease (COVID-19) infection around the world is considered a real danger to global health. The biological structure and symptoms of COVID-19 are similar to those of other viral chest maladies, which makes it challenging to develop approaches for the efficient identification of COVID-19. In this study, automatic COVID-19 identification is proposed to discriminate between healthy and COVID-19-infected subjects in X-ray images, using two successful modern families of methods: traditional machine learning (e.g., artificial neural network (ANN), support vector machine (SVM) with linear and radial basis function (RBF) kernels, k-nearest neighbor (k-NN), Decision Tree (DT), and CN2 rule inducer techniques) and deep learning models (e.g., MobileNet V2, ResNet50, GoogleNet, DarkNet, and Xception). A large X-ray dataset has been created and developed, namely COVID-19 vs. Normal (400 healthy cases and 400 COVID cases). To the best of our knowledge, it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases. Based on the results obtained from the experiments, all the models performed well; the deep learning models achieved the optimum accuracy of 98.8% with ResNet50. In comparison, among the traditional machine learning techniques, the SVM with a linear kernel demonstrated the best result, with an accuracy of 95%, and the RBF kernel achieved 94%, for the prediction of coronavirus disease 2019.
Funding: Research Supporting Project Number (RSP2024R421), King Saud University, Riyadh, Saudi Arabia.
Abstract: In pursuit of enhancing the energy efficiency and operational lifespan of Wireless Sensor Networks (WSNs), this paper delves into the domain of energy-efficient routing protocols. In WSNs, the limited energy resources of Sensor Nodes (SNs) are a big challenge for ensuring efficient and reliable operation. WSN data gathering can utilize a mobile sink (MS) to mitigate the energy consumption problem through periodic network traversal. The MS strategy minimizes energy consumption and latency by visiting the fewest nodes, or predetermined locations called rendezvous points (RPs), instead of all cluster heads (CHs); CHs subsequently transmit packets to neighboring RPs. The MS concept has emerged as a promising solution to the energy consumption problem in WSNs caused by multi-hop data collection with static sinks, and the unique contribution of this study is determining the shortest path for reaching the RPs. We propose two novel hybrid algorithms, namely "Reduced k-means based on Artificial Neural Network" (RkM-ANN) and "Delay Bound Reduced k-means with ANN" (DBRkM-ANN), for designing a fast, efficient, and proficient MS path based on the rendezvous points (RPs). The first algorithm optimizes the MS's latency, while the second considers the design of delay-bound paths, i.e., paths whose delay for the MS stays within a bound. Both methods use a weight function and k-means clustering to choose RPs in a way that maximizes efficiency and guarantees network-wide coverage. In addition, a method of MS scheduling for efficient data collection is provided. Extensive simulations and comparisons with several existing algorithms have shown the effectiveness of the suggested methodologies over a wide range of performance indicators.
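The k-means step above — placing rendezvous points so that nearby cluster heads can reach one cheaply — can be sketched with a plain Lloyd's iteration. This is an illustrative reduction, not the paper's RkM-ANN algorithm: the initialization, the 2-D coordinates, and the absence of the weight function and ANN stage are all simplifications:

```python
import math

def kmeans_rps(points: list[tuple[float, float]], k: int, iters: int = 25):
    """Pick k rendezvous points as centroids of cluster-head coordinates
    via Lloyd's algorithm (naive first-k initialization, for illustration)."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(x for x, _ in cl) / len(cl),
                                    sum(y for _, y in cl) / len(cl)))
            else:
                new_centers.append(centers[i])   # keep an empty cluster's old center
        centers = new_centers
    return centers

# Two well-separated groups of cluster heads -> two RPs near the group means
chs = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
rps = kmeans_rps(chs, 2)
```

The mobile sink then only has to tour the returned centroids rather than every cluster head, which is the latency saving the paper's path-design algorithms build on.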
Funding: Supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: The Internet of Vehicles (IoV) is a networking paradigm for the intercommunication of vehicles over a network. In a dynamic network, one of the key challenges in IoV is managing traffic to avoid congestion as the number of vehicles increases. Therefore, optimal path selection for routing traffic between origin and destination is vital. This research proposes a realistic strategy to reduce traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access. First, this work proposes a novel use of the Ant Colony Optimization (ACO) algorithm and formulates the path planning optimization problem as an Integer Linear Program (ILP). This integrates a future estimation metric to predict the future arrivals of vehicles while searching for optimal routes. Considering the mobile nature of IoV, fuzzy logic is used for congestion level estimation along with the ACO to determine the optimal path. The results indicate that the suggested scheme outperforms existing state-of-the-art methods by identifying the shortest and most cost-effective path. Thus, this work strongly supports its use in applications with stringent Quality of Service (QoS) requirements for vehicles.
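The ACO path-selection idea can be sketched on a toy road graph: ants choose edges probabilistically (favoring strong pheromone and low cost), pheromone evaporates, and the best tour is reinforced. This is a generic textbook ACO for illustration only; the paper's scheme additionally folds in the ILP formulation, the future-arrival metric, and fuzzy congestion levels, none of which appear here:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=30, n_iters=30, evaporation=0.5):
    """Toy ACO: graph maps directed edges (u, v) to travel cost."""
    random.seed(7)                              # deterministic for the demo
    tau = {edge: 1.0 for edge in graph}         # pheromone on each edge
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            path, node, seen = [src], src, {src}
            while node != dst:
                options = [(u, v) for (u, v) in graph if u == node and v not in seen]
                if not options:
                    break                       # dead end: discard this ant
                weights = [tau[e] / graph[e] for e in options]  # prefer pheromone, short edges
                node = random.choices(options, weights)[0][1]
                seen.add(node)
                path.append(node)
            else:                               # ant reached dst without breaking
                cost = sum(graph[e] for e in zip(path, path[1:]))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for e in tau:
            tau[e] *= evaporation               # pheromone evaporation
        if best_path:
            for e in zip(best_path, best_path[1:]):
                tau[e] += 1.0 / best_cost       # reinforce the best tour
    return best_path, best_cost

roads = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 5.0}
path, cost = aco_shortest_path(roads, "A", "C")
```

On this graph the colony converges on the two-hop route A-B-C (cost 2) over the direct but expensive edge A-C, which is the behavior the abstract relies on for cost-effective routing.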
Funding: Funded by the Researchers Supporting Project Number (RSP2023R509), King Saud University, Riyadh, Saudi Arabia.
Abstract: The overgrowth of weeds growing alongside the primary crop reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have turned to herbicides. Applying herbicide is effective but raises environmental and health concerns. Hence, Precision Agriculture (PA) suggests variable spraying of herbicides so that the chemicals do not affect the primary plants. Motivated by this gap, we propose a Deep Learning (DL) based model for detecting eggplant (brinjal) weed in this paper. The key objective of this study is to detect plant and non-plant (weed) parts in crop images; with object detection, the precise location of weeds in images can be obtained. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. The proposed model applies a combined approach of classification and object detection: a Convolutional Neural Network (CNN) model classifies weed and non-weed images, and DL models are then applied for object detection. We compared DL models based on accuracy, memory usage, and Intersection over Union (IoU). ResNet-18, YOLOv3, CenterNet, and Faster R-CNN are used in the proposed work. CenterNet outperforms all other models in terms of accuracy, i.e., 88%. Compared to the other models, YOLOv3 is the least memory-intensive, utilizing 4.78 GB to evaluate the data.
文摘A simple,stable and reliable virtual logic analyzer is presented. The logic analyzer had two modules:one was the test pattern generation module,the other was the logic monitoring module. Combining the two modules,one is able to test a digital circuit automatically. The user interface of the logic analyzer was programmed with LabVIEW. Two Arduino UNO boards were used as the hardware targets to input and output the logic signals. The maximum pattern update rate was set to be 20 Hz. The maximum logic sampling rate was set to be 200 Hz. After twelve thousand cycles of exhaustive tests,the logic analyzer had a 100% accuracy. As a tutorial showing how to build virtual instruments with Arduino,the software detail is also explained in this article.