Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network management capabilities compared with those of traditional networks. However, because SDN is designed to ensure high levels of service availability, it faces additional challenges. One of the most critical challenges is ensuring efficient detection of and recovery from link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Software cost estimation is a crucial aspect of software project management, significantly impacting productivity and planning. This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the COCOMO NASA dataset, which comprises data from 93 unique software projects with 24 attributes. By applying multiple machine learning algorithms alongside three feature selection methods, this study aims to reduce data redundancy and enhance model accuracy. Our findings reveal that the principal component analysis (PCA)-based feature selection technique achieved the highest performance, underscoring the importance of optimal feature selection in improving software cost estimation accuracy. The proposed method is shown to outperform the existing method, achieving the highest precision, accuracy, and recall rates.
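As a rough illustration of the PCA step described above, the sketch below projects a stand-in attribute matrix onto its top principal components. The 93×24 shape mirrors the dataset's dimensions, but the data itself is randomly generated, not the actual COCOMO NASA records, and the component count is an arbitrary choice for demonstration.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature matrix X onto its top principal components."""
    Xc = X - X.mean(axis=0)                      # center each attribute
    cov = np.cov(Xc, rowvar=False)               # attribute covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # reorder components by variance
    components = eigvecs[:, order[:n_components]]
    return Xc @ components

# Toy stand-in for project attributes (rows = projects, cols = attributes).
rng = np.random.default_rng(0)
X = rng.normal(size=(93, 24))                    # 93 projects, 24 attributes
X_reduced = pca_reduce(X, n_components=5)
print(X_reduced.shape)                           # (93, 5)
```

The reduced matrix would then feed whichever regression model estimates cost, replacing the original 24 correlated attributes with a few uncorrelated components.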
Software testing is a critical phase due to misconceptions about ambiguities in the requirements during specification, which affect the testing process. It is therefore difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. Because of these challenges, fault detection capability decreases, and the testing process needs to be improved based on changes in the requirements specification. In this research, we have developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. The similarity of requirements is then computed through case-based reasoning, which predicts requirements for reuse and restricts them to error-based requirements. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on frequently reused or non-reused test cases, to increase the fault detection rate. Furthermore, the proposed model was evaluated by conducting experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes to achieve higher user satisfaction. The model improves the redundancy and irrelevancy of requirements by more than 90% compared to other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
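The case-based retrieval step can be sketched as follows. This is a minimal bag-of-words cosine similarity, far simpler than the semantic analysis the abstract describes, and the requirement texts are invented for illustration only.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two requirement texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

new_req = "user shall reset password via email"
case_base = [
    "user shall reset password via sms",
    "system shall log failed login attempts",
]
# Retrieve the most similar past requirement as a candidate for reuse.
best = max(case_base, key=lambda r: cosine_similarity(new_req, r))
print(best)  # the password-reset case is retrieved
```

In a full case-based reasoning cycle, the retrieved requirement's associated test cases would then be the candidates ranked by the Apriori frequency step.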
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, our tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing its main components, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
Embracing software product lines (SPLs) is pivotal in the dynamic landscape of contemporary software development. However, the flexibility and global distribution inherent in modern systems pose significant challenges to managing SPL variability, underscoring the critical importance of robust cybersecurity measures. This paper advocates leveraging machine learning (ML) to address variability management issues and fortify the security of SPLs. In the context of the broader special issue theme on innovative cybersecurity approaches, our proposed ML-based framework offers an interdisciplinary perspective, blending insights from computing, social sciences, and business. Specifically, it employs ML for demand analysis, dynamic feature extraction, and enhanced feature selection in distributed settings, contributing to cyber-resilient ecosystems. Our experiments demonstrate the framework's superiority, emphasizing its potential to boost productivity and security in SPLs. As digital threats evolve, this research catalyzes interdisciplinary collaborations, aligning with the special issue's goal of breaking down academic barriers to strengthen digital ecosystems against sophisticated attacks while upholding ethics, privacy, and human values.
End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets are a prime example of end-user programming environments widely used in business for data analysis. However, Excel's built-in functionality is limited compared to dedicated programming languages. This paper addresses this gap by proposing a prototype for integrating Python's capabilities into Excel through an on-premises desktop solution to build custom spreadsheet functions (CSFs) with Python. This approach avoids the latency issues associated with cloud-based solutions. The prototype utilizes Excel-DNA and IronPython: Excel-DNA allows creating custom Python functions that seamlessly integrate with Excel's calculation engine, while IronPython enables the execution of these Python CSFs directly within Excel. C# and VSTO add-ins form the core components, facilitating communication between Python and Excel. This approach empowers users with a potentially open-ended set of Python CSFs for tasks like mathematical calculations, statistical analysis, and even predictive modeling, all within the familiar Excel interface. The prototype demonstrates smooth integration, allowing users to call Python CSFs just like standard Excel functions. This research contributes to enhancing spreadsheet capabilities for end-user programmers by leveraging Python's power within Excel. Future research could expand the CSFs to cover complex calculations, statistical analysis, data manipulation, and external library integration, including the possibility of integrating machine learning models within the familiar Excel environment.
With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and selectively re-encrypts only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate this approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that this method provides robust protection against both passive and active attacks.
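A minimal sketch of the per-bundle idea, under stated assumptions: the toy XOR keystream below (derived from SHA-256) is a stand-in, not a production cipher and not the paper's algorithm. It only demonstrates the structural point that editing one bundle requires re-encrypting that bundle alone, while the other ciphertexts stay untouched.

```python
import hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a per-bundle keystream from key + nonce (toy CTR-like construction)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_bundle(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))  # XOR: encrypt == decrypt

key = os.urandom(16)
bundles = [b"chunk-0 ...", b"chunk-1 ...", b"chunk-2 ..."]
nonces = [os.urandom(8) for _ in bundles]
ciphertexts = [encrypt_bundle(key, n, b) for n, b in zip(nonces, bundles)]

# An edit touches only bundle 1: re-encrypt that bundle alone, with a fresh nonce.
bundles[1] = b"chunk-1 (edited)"
nonces[1] = os.urandom(8)
ciphertexts[1] = encrypt_bundle(key, nonces[1], bundles[1])

# Decryption (XOR is its own inverse) recovers every plaintext bundle.
recovered = [encrypt_bundle(key, n, c) for n, c in zip(nonces, ciphertexts)]
print(recovered == bundles)  # True
```

The metadata the paper describes would play the role of the `nonces` list here, mapping each bundle to the state needed to re-encrypt it independently.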
Domain randomization is a widely adopted technique in deep reinforcement learning (DRL) to improve agent generalization by exposing policies to diverse environmental conditions. This paper investigates the impact of different reset strategies (normal, non-randomized, and randomized) on agent performance using the Deep Deterministic Policy Gradient (DDPG) and Twin Delayed DDPG (TD3) algorithms within the CarRacing-v2 environment. Two experimental setups were conducted: an extended training regime with DDPG for 1000 steps per episode across 1000 episodes, and a fast execution setup comparing DDPG and TD3 for 30 episodes with 50 steps per episode under constrained computational resources. A step-based reward scaling mechanism was applied under the randomized reset condition to promote broader state exploration. Experimental results show that randomized resets significantly enhance learning efficiency and generalization, with DDPG demonstrating superior performance across all reset strategies. In particular, DDPG combined with randomized resets achieves the highest smoothed rewards (reaching approximately 15), the best stability, and the fastest convergence. These differences are statistically significant, as confirmed by t-tests: DDPG outperforms TD3 under randomized (t=−101.91, p<0.0001), normal (t=−21.59, p<0.0001), and non-randomized (t=−62.46, p<0.0001) reset conditions. The findings underscore the critical role of reset strategy and reward shaping in enhancing the robustness and adaptability of DRL agents in continuous control tasks, particularly in environments where computational efficiency and training stability are crucial.
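The significance test behind the reported t-values can be sketched with a hand-rolled Welch's t-statistic. The reward samples below are hypothetical, and the sign convention (first sample minus second, yielding a negative statistic when the second algorithm's rewards are higher) is an assumption chosen to match the negative t-values quoted above.

```python
import math, statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

# Hypothetical smoothed per-episode rewards (not the paper's data).
td3_rewards  = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7]
ddpg_rewards = [14.6, 15.3, 14.9, 15.1, 15.0, 14.8]
t = welch_t(td3_rewards, ddpg_rewards)
print(t < 0)  # negative when DDPG's rewards dominate
```

A p-value would follow from the t-distribution with Welch-Satterthwaite degrees of freedom; libraries such as SciPy wrap both steps in a single `ttest_ind(..., equal_var=False)` call.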
Digital content such as games, extended reality (XR), and movies is now widely and easily distributed over wireless networks. As a result, unauthorized access, copyright infringement by third parties or eavesdroppers, and cyberattacks over these networks have become pressing concerns. Therefore, protecting copyrighted content and preventing illegal distribution in wireless communications has garnered significant attention. The Intelligent Reflecting Surface (IRS) is regarded as a promising technology for future wireless and mobile networks due to its ability to reconfigure the radio propagation environment. This study investigates the security performance of an uplink Non-Orthogonal Multiple Access (NOMA) system integrated with an IRS and employing Fountain Codes (FCs). Specifically, two users send signals to the base station from different distances. A relay receives the signal from the nearby user and forwards it to the base station. The IRS receives the signal from the distant user and reflects it to the relay, which then forwards the reflected signal to the base station. Furthermore, a malevolent eavesdropper intercepts both user and relay communications. We derive mathematical expressions for the Outage Probability (OP), throughput, diversity order, and Interception Probability (IP), offering quantitative insights for assessing system security and performance. Additionally, OP and IP are analyzed using a Deep Neural Network (DNN) model. Monte Carlo simulations are also carried out to confirm the theoretical results and provide a deeper understanding of the security performance of the IRS-assisted NOMA system.
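Monte Carlo validation of an outage probability can be sketched in a few lines. This toy assumes a single Rayleigh-faded link, where the instantaneous SNR is exponentially distributed, which is far simpler than the IRS-assisted NOMA system analyzed in the paper, but it shows the standard pattern of checking simulation against a closed form.

```python
import random, math

def outage_probability(avg_snr: float, threshold: float, trials: int = 200_000) -> float:
    """Monte Carlo estimate of P(instantaneous SNR < threshold) for a
    Rayleigh-faded link: SNR ~ Exponential with mean avg_snr."""
    random.seed(1)
    outages = sum(random.expovariate(1.0 / avg_snr) < threshold
                  for _ in range(trials))
    return outages / trials

op_sim = outage_probability(avg_snr=10.0, threshold=2.0)
op_theory = 1 - math.exp(-2.0 / 10.0)   # exponential CDF, the closed-form OP
print(abs(op_sim - op_theory) < 0.01)   # simulation agrees with theory
```

The paper's OP and IP expressions play the role of `op_theory` here; the simulation sweeps the same channel realizations to confirm each derived curve.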
Recommendation systems (RSs) are crucial for personalizing user experiences in digital environments by suggesting relevant content or items. Collaborative filtering (CF) is a widely used personalization technique that leverages user-item interactions to generate recommendations. However, it struggles with challenges like the cold-start problem, scalability issues, and data sparsity. To address these limitations, we develop a Graph Convolutional Networks (GCNs) model that captures the complex network of interactions between users and items, identifying subtle patterns that traditional methods may overlook. We integrate this GCNs model into a federated learning (FL) framework, enabling the model to learn from decentralized datasets. This not only significantly enhances user privacy relative to conventional models but also reassures users about the safety of their data. Additionally, by securely incorporating demographic information, our approach further personalizes recommendations and mitigates the cold-start issue without compromising user data. We validate our RS model using the open MovieLens dataset and evaluate its performance across six key metrics: Precision, Recall, Area Under the Receiver Operating Characteristic Curve (ROC-AUC), F1 Score, Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). The experimental results demonstrate significant enhancements in recommendation quality, underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
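The federated aggregation step can be sketched as a FedAvg-style weighted average of client parameters. The flattened parameter vectors and client sizes below are hypothetical, and the local GCN training that would produce them is omitted; the sketch shows only how the server combines updates without ever seeing raw interaction data.

```python
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of clients' flattened model parameters
    (FedAvg-style server aggregation)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical flattened GCN parameters from three clients.
weights = [[0.2, 0.4], [0.6, 0.0], [0.4, 0.8]]
sizes   = [100, 300, 100]     # local interaction counts per client
agg = fed_avg(weights, sizes)
print(agg)                    # ≈ [0.48, 0.24]
```

Each round, the server broadcasts `agg` back to the clients, which resume local training; only parameters, never user-item interactions, cross the network.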
In today’s digital era, the rapid evolution of image editing technologies has brought about a significant simplification of image manipulation. Unfortunately, this progress has also given rise to the misuse of manipulated images across various domains. One of the pressing challenges stemming from this advancement is the increasing difficulty in discerning between unaltered and manipulated images. This paper offers a comprehensive survey of existing methodologies for detecting image tampering, shedding light on the diverse approaches employed in contemporary image forensics. The methods used to identify image forgery can be broadly classified into two primary categories: classical machine learning techniques, heavily reliant on manually crafted features, and deep learning methods. Additionally, this paper explores recent developments in image forensics, placing particular emphasis on the detection of counterfeit colorization. Image colorization involves predicting colors for grayscale images, thereby enhancing their visual appeal. Colorization techniques have advanced to a level where distinguishing between authentic and forged images with the naked eye has become exceptionally challenging. This paper serves as an in-depth exploration of the intricacies of modern image forensics, with a specific focus on the detection of colorization forgery, presenting a comprehensive overview of methodologies in this critical field.
Electric Vehicle Charging Systems (EVCS) are increasingly vulnerable to cybersecurity threats as they integrate deeply into smart grids and Internet of Things (IoT) environments, raising significant security challenges. Most existing research primarily emphasizes network-level anomaly detection, leaving critical vulnerabilities at the host level underexplored. This study introduces a novel forensic analysis framework leveraging host-level data, including system logs, kernel events, and Hardware Performance Counters (HPC), to detect and analyze sophisticated cyberattacks such as cryptojacking, Denial-of-Service (DoS), and reconnaissance activities targeting EVCS. Using comprehensive forensic analysis and machine learning models, the proposed framework significantly outperforms existing methods, achieving an accuracy of 98.81%. The findings offer insights into the distinct behavioral signatures associated with specific cyber threats, enabling improved cybersecurity strategies and actionable recommendations for robust EVCS infrastructure protection.
Software crowdsourcing (SW CS) is an evolving software development paradigm in which crowds of people are asked to solve various problems through an open call (with prizes offered for the top solutions). Because of its dynamic nature, SW CS has been progressively accepted and adopted in the software industry. However, issues pertinent to the understanding of requirements among crowds of people and requirements engineers are yet to be clarified and explained. If the requirements are not clear to the development team, this significantly affects the quality of the software product. This study aims to identify the potential challenges faced by requirements engineers when conducting the SW CS-based requirements engineering (RE) process. Moreover, solutions to overcome these challenges are also identified. Qualitative data analysis is performed on interview data collected from software industry professionals. Consequently, 20 SW CS-based RE challenges and their subsequent proposed solutions are devised, which are further grouped under seven categories. This study benefits academicians, researchers, and practitioners by providing detailed SW CS-based RE challenges and subsequent solutions that can guide them in understanding and effectively implementing RE in SW CS.
The controller in software-defined networking (SDN) acts as a strategic point of control for the underlying network. Multiple controllers are available, and each controller offers a number of features, such as the OpenFlow version, clustering, modularity, platform, and partnership support. These features are regarded as vital when making a selection among a set of controllers. As such, controller selection becomes a multi-criteria decision making (MCDM) problem over several features, and an increase in the number of features increases the computational complexity of the selection process. The selection of controllers based on features has been studied previously; however, the prioritization of features has received less attention, even though a large feature set drives up the complexity of selection. In this paper, we propose a mathematical model for feature prioritization with an analytical network process (ANP) bridge model for SDN controllers. The results indicate that a prioritized feature model leads to a reduction in the computational complexity of SDN controller selection. In addition, our model generates prioritized features for SDN controllers.
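Prioritization in ANP/AHP-style models reduces to extracting the principal eigenvector of a pairwise-comparison matrix. The sketch below does this via power iteration on an invented three-feature judgment matrix; the paper's actual ANP supermatrix and expert judgments are not reproduced here.

```python
def priority_vector(matrix, iters=100):
    """Principal eigenvector of a pairwise-comparison matrix via power
    iteration, normalized to sum to 1 (the feature priority weights)."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]     # renormalize each iteration
    return v

# Hypothetical pairwise judgments for three controller features
# (OpenFlow version vs. clustering vs. modularity, Saaty's 1-9 scale).
M = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
w = priority_vector(M)
print(max(w) == w[0])  # the first feature receives the highest priority
```

With the weights in hand, only the top-ranked features need to enter the MCDM comparison of controllers, which is the complexity reduction the abstract reports.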
Despite advances in technology, software repository maintenance requires reusing data to reduce effort and complexity. However, increasing ambiguity, irrelevance, and bugs encountered while extracting similar data during software development generate a large amount of data from the data residing in repositories. Thus, there is a need for a repository mining technique for relevant and bug-free data prediction. This paper proposes a fault prediction approach using a data-mining technique to find good predictors of high-quality software. To predict errors in mined data, the Apriori algorithm was used to discover association rules, fixing confidence above 40% and support at a minimum of 30%. A pruning strategy was adopted based on evaluation measures. Next, rules were extracted from three projects in different domains; the extracted rules were then combined to obtain the most popular rules based on the evaluation measure values. To evaluate the proposed approach, we conducted an experimental study comparing the proposed rules with existing ones on four different industrial projects. The evaluation showed that the results of our proposal are promising. Practitioners and developers can utilize these rules for defect prediction during early software development.
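The thresholds above (support of at least 30%, confidence above 40%) can be illustrated with a minimal pair-level association rule miner. The transactions are invented defect attributes, not the paper's project data, and real Apriori additionally grows itemsets beyond pairs using the downward-closure property.

```python
from itertools import combinations
from collections import Counter

def pair_rules(transactions, min_support=0.30, min_confidence=0.40):
    """Mine A -> B rules over item pairs, keeping rules that meet the
    support and confidence thresholds."""
    n = len(transactions)
    item_count = Counter(i for t in transactions for i in set(t))
    pair_count = Counter(p for t in transactions
                         for p in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), c in pair_count.items():
        support = c / n
        if support < min_support:          # prune infrequent pairs
            continue
        for lhs, rhs in ((a, b), (b, a)):  # try both rule directions
            confidence = c / item_count[lhs]
            if confidence > min_confidence:
                rules.append((lhs, rhs, round(support, 2), round(confidence, 2)))
    return rules

# Hypothetical defect-attribute transactions from mined projects.
data = [
    {"null_check_missing", "crash"},
    {"null_check_missing", "crash", "ui_bug"},
    {"ui_bug"},
    {"null_check_missing", "crash"},
]
rules = pair_rules(data)
for rule in rules:
    print(rule)
```

Only the `null_check_missing`/`crash` pair clears the 30% support bar here, so both rule directions survive; the paper's pruning by evaluation measures would act on a set like `rules`.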
The rapid growth in software demand incentivizes software development organizations to develop exclusive software for their customers worldwide. The software development industry addresses this demand through software product line (SPL) practices that employ feature models. However, optimal feature selection based on user requirements is a challenging task. Thus, the challenges of software development must be resolved to increase satisfaction and maintain high product quality for massive customer needs within limited resources. In this work, we propose a recommender system for the development team and clients to increase productivity and quality by utilizing historical information and the prior experiences of similar developers and clients. The proposed system recommends features, with their estimated cost, for new software requirements from all over the globe according to similar developers' and clients' needs and preferences. The system guides and facilitates the development team by suggesting a list of features, code snippets, libraries, programming language cheat sheets, and coding references from a cloud-based knowledge management repository. Similarly, a list of features is suggested to the client according to their needs and preferences. The experimental results revealed that the proposed recommender system is feasible and effective, providing better recommendations to developers and clients. It provides proper and reasonably well-estimated costs for performing development tasks effectively and increases the client's satisfaction level. The results indicate an increase in productivity, performance, and product quality, and a reduction in effort, complexity, and system failure. Therefore, our proposed system facilitates developers and clients during development by providing better recommendations in terms of solutions and anticipated costs. Thus, the increase in productivity and satisfaction level maximizes the benefits and usability of SPL in the modern era of technology.
In software engineering, software measures are often proposed without precise identification of the measurable concepts they attempt to quantify: consequently, the numbers obtained are challenging to reproduce in different measurement contexts and to interpret, either as base measures or in combination as derived measures. The lack of consistency when using base measures in data collection can affect both data preparation and data analysis. This paper analyzes the similarities and differences across three different views of measurement methods (ISO International Vocabulary on Metrology, ISO 15939, and ISO 25021), and uses a process proposed for the design of software measurement methods to analyze two examples of such methods selected from the literature.
This paper presents new theoretical aspects of software engineering oriented toward product lines for building applied systems and software product families from ready-made reusable components under the conditions of program factories. These aspects comprise new disciplines: the theory of component programming; models of variability and interoperability of systems; and the theory of building systems and product families from components. The principles and methods implementing these theories were realized in an instrumental and technological complex organized around lines of component development: assembling program factories using these lines, and e-learning of the new theories and technologies through the “Software Engineering” textbook used by university students.
Heterogeneous Internet of Things (HIoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), where it is essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, heterogeneous 5th generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low-latency communications (URLLC), must perform intelligent Quality-of-Service (QoS) Class Identifier (QCI) classification, while CP entities suffer under complicated, massive HIoT applications. Moreover, existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with the issues mentioned above, this paper presents software-defined mobile edge computing (SDMEC) combined with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time, resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC conducts intelligent resource MANO for massive QCI environments and meets the requirements of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various powerful reference methods.
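A minimal sketch of SVM-based traffic classification in the QCI spirit, under stated assumptions: a linear SVM trained by hinge-loss sub-gradient descent on two invented, normalized flow features (latency and throughput), with +1 standing for a URLLC-like class and −1 for an mMTC-like class. The real system's features, kernel choice, and training data are not specified here.

```python
def train_linear_svm(X, y, epochs=300, lr=0.05, lam=0.001):
    """Minimal linear SVM: sub-gradient descent on the regularized hinge
    loss. Labels y must be +1 / -1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # inside the margin: hinge gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # correctly classified: regularization only
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical normalized flows: [latency, throughput-demand].
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],    # +1: URLLC-like
     [0.9, 0.8], [0.8, 0.9], [0.85, 0.8]]     # -1: mMTC-like
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print(predict(w, b, [0.1, 0.1]), predict(w, b, [0.9, 0.9]))
```

In the described architecture, each prediction would be mapped by the SDN controller to a priority queue or bandwidth allocation for the corresponding flow.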
文摘Software-defined networking(SDN)is an innovative paradigm that separates the control and data planes,introducing centralized network control.SDN is increasingly being adopted by Carrier Grade networks,offering enhanced networkmanagement capabilities than those of traditional networks.However,because SDN is designed to ensure high-level service availability,it faces additional challenges.One of themost critical challenges is ensuring efficient detection and recovery from link failures in the data plane.Such failures can significantly impact network performance and lead to service outages,making resiliency a key concern for the effective adoption of SDN.Since the recovery process is intrinsically dependent on timely failure detection,this research surveys and analyzes the current literature on both failure detection and recovery approaches in SDN.The survey provides a critical comparison of existing failure detection techniques,highlighting their advantages and disadvantages.Additionally,it examines the current failure recovery methods,categorized as either restoration-based or protection-based,and offers a comprehensive comparison of their strengths and limitations.Lastly,future research challenges and directions are discussed to address the shortcomings of existing failure recovery methods.
Abstract: Software cost estimation is a crucial aspect of software project management, significantly impacting productivity and planning. This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the COCOMO NASA dataset, which comprises data from 93 unique software projects with 24 attributes. By applying multiple machine learning algorithms alongside three feature selection methods, this study aims to reduce data redundancy and enhance model accuracy. Our findings reveal that the principal component analysis (PCA)-based feature selection technique achieved the highest performance, underscoring the importance of optimal feature selection in improving software cost estimation accuracy. It is demonstrated that our proposed method outperforms the existing method while achieving the highest precision, accuracy, and recall rates.
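The PCA step in the abstract above can be sketched in a dependency-free way: project correlated cost-driver attributes onto their first principal component before feeding them to a regression model. This is an illustrative sketch (toy data, power iteration for the dominant eigenvector), not the paper's actual pipeline, which would typically use a library such as scikit-learn.

```python
# Minimal PCA-based feature reduction: center the data, build the covariance
# matrix, find its dominant eigenvector by power iteration, and project each
# sample onto it. Toy "cost driver" values are made up for illustration.

def mean_center(rows):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    return [[r[j] - means[j] for j in range(d)] for r in rows]

def covariance(centered):
    n, d = len(centered), len(centered[0])
    return [[sum(r[i] * r[j] for r in centered) / (n - 1)
             for j in range(d)] for i in range(d)]

def top_component(cov, iters=200):
    # Power iteration converges to the dominant eigenvector.
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two strongly correlated attributes collapse onto one component.
X = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1]]
centered = mean_center(X)
pc1 = top_component(covariance(centered))
scores = [sum(r[j] * pc1[j] for j in range(2)) for r in centered]
print([round(s, 2) for s in scores])
```

The 1-D `scores` preserve the ordering of the original correlated attributes, which is the redundancy reduction the abstract refers to.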
Abstract: Software testing is a critical phase, and misconceptions about ambiguities in the requirements specification affect the testing process, making it difficult to identify all faults in the software. As requirements change continuously, irrelevancy and redundancy increase during testing. These challenges reduce fault detection capability, so the testing process needs to be improved on the basis of changes in the requirements specification. In this research, we developed a model that resolves these testing challenges through requirement prioritization and prediction in an agile-based environment. The objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis, and then to compute the similarity of requirements through case-based reasoning, which predicts requirements for reuse and restricts attention to error-prone requirements. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on frequently reused or unused test cases, to increase the fault detection rate. The proposed model was evaluated experimentally. The results show that semantic analysis reduces requirement redundancy and irrelevancy and correctly predicts requirements, increasing the fault detection rate and user satisfaction. The predicted requirements are mapped to test cases, increasing the fault detection rate after changes and achieving higher user satisfaction. The model reduces requirement redundancy and irrelevancy by more than 90% compared with other clustering methods and the analytic hierarchy process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
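The similarity step described above can be illustrated with a minimal sketch: compare a changed requirement against past cases by term overlap (here plain Jaccard similarity as a stand-in for the paper's case-based reasoning); the most similar past case is a candidate for test-case reuse. The requirements and threshold-free ranking are illustrative, not taken from the paper.

```python
# Rank stored requirement cases by Jaccard overlap with a new requirement.

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

case_base = [
    "user shall reset password via email link",
    "system shall log failed login attempts",
    "user shall update profile picture",
]
new_req = "user shall reset password via sms link"

ranked = sorted(case_base, key=lambda c: jaccard(c, new_req), reverse=True)
print(ranked[0])  # most similar past case, a candidate for test-case reuse
```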
Funding: The R&D&I, Spain grants PID2020-119478GB-I00 and PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033. N. Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by “ESF Investing in your future”, Spain. J. Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR. J. Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI) through the AI4ES project, and from the Department of Education of the Basque Government (consolidated research group MATHMODE, IT1456-22).
Abstract: When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
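The core aggregation step of the FL paradigm described above can be sketched in a few lines: each client trains locally and sends only model weights, which the server averages weighted by client dataset size (the FedAvg scheme). This is a minimal illustration with made-up weight vectors, not code from the tutorial itself.

```python
# One federated averaging (FedAvg) round: aggregate client weight vectors
# proportionally to each client's number of local samples. No raw data is
# ever sent to the server, only the weights.

def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]

clients = [[0.2, 1.0], [0.4, 0.0], [0.6, 2.0]]  # local model weights (toy)
sizes = [10, 30, 60]                             # local dataset sizes (toy)
global_w = fedavg(clients, sizes)
print(global_w)
```

In a full FL loop the server would broadcast `global_w` back to the clients for the next round of local training.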
基金supported via funding from Ministry of Defense,Government of Pakistan under Project Number AHQ/95013/6/4/8/NASTP(ACP).Titled:Development of ICT and Artificial Intelligence Based Precision Agriculture Systems Utilizing Dual-Use Aerospace Technologies-GREENAI.
Abstract: Embracing software product lines (SPLs) is pivotal in the dynamic landscape of contemporary software development. However, the flexibility and global distribution inherent in modern systems pose significant challenges to managing SPL variability, underscoring the critical importance of robust cybersecurity measures. This paper advocates leveraging machine learning (ML) to address variability management issues and fortify the security of SPLs. In the context of the broader special issue theme on innovative cybersecurity approaches, the proposed ML-based framework offers an interdisciplinary perspective, blending insights from computing, social sciences, and business. Specifically, it employs ML for demand analysis, dynamic feature extraction, and enhanced feature selection in distributed settings, contributing to cyber-resilient ecosystems. Our experiments demonstrate the framework's superiority, emphasizing its potential to boost productivity and security in SPLs. As digital threats evolve, this research catalyzes interdisciplinary collaborations, aligning with the special issue's goal of breaking down academic barriers to strengthen digital ecosystems against sophisticated attacks while upholding ethics, privacy, and human values.
Abstract: End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets are a prime example of an end-user programming environment widely used in business for data analysis. However, Excel's built-in functionality is limited compared with dedicated programming languages. This paper addresses this gap by proposing a prototype for integrating Python's capabilities into Excel through an on-premises desktop add-in that builds custom spreadsheet functions (CSFs) with Python, an approach that avoids the latency issues associated with cloud-based solutions. The prototype utilizes Excel-DNA and IronPython: Excel-DNA allows custom Python functions to integrate seamlessly with Excel's calculation engine, while IronPython enables the execution of these Python CSFs directly within Excel. C# and VSTO add-ins form the core components, facilitating communication between Python and Excel. This approach empowers users with a potentially open-ended set of Python CSFs for tasks like mathematical calculations, statistical analysis, and even predictive modeling, all within the familiar Excel interface. The prototype demonstrates smooth integration, allowing users to call Python CSFs just like standard Excel functions. This research contributes to enhancing spreadsheet capabilities for end-user programmers by leveraging Python's power within Excel. Future research could expand the CSFs to cover complex calculations, statistical analysis, data manipulation, and external library integration, and could explore integrating machine learning models through CSFs within the familiar Excel environment.
基金supported by the Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(RS-2024-00399401,Development of Quantum-Safe Infrastructure Migration and Quantum Security Verification Technologies).
Abstract: With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in ‘bundle’ units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The proposed encryption method addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The method leverages update-specific information embedded within data bundles, together with metadata that maps the relationship between these bundles and the plaintext data. Using this information, it accurately identifies the modified portions and selectively re-encrypts only those sections. This approach significantly improves the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate the approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that the method provides robust protection against both passive and active attacks.
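The bundle idea above can be illustrated with a toy sketch: derive an independent keystream per (bundle, version), so updating one bundle means re-encrypting only that bundle under a bumped version. The hash-based keystream here is NOT a secure cipher and is not the paper's scheme; a real system would use an authenticated cipher per bundle. It only demonstrates the selective re-encryption structure.

```python
# Toy bundle-wise encryption: keystream derived from (key, bundle_id, version),
# so an update touches exactly one bundle's ciphertext.
import hashlib

def keystream(key, bundle_id, version, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + bytes([bundle_id, version, counter])).digest()
        counter += 1
    return out[:length]

def xor(data, stream):
    return bytes(a ^ b for a, b in zip(data, stream))

key = b"shared-key"
bundles = [b"hello world!", b"second chunk"]
versions = [0, 0]
cipher = [xor(b, keystream(key, i, versions[i], len(b)))
          for i, b in enumerate(bundles)]

# Update only bundle 1: bump its version, re-encrypt just that bundle.
bundles[1] = b"edited chunk"
versions[1] += 1
cipher[1] = xor(bundles[1], keystream(key, 1, versions[1], len(bundles[1])))

# Decrypt everything; bundle 0's ciphertext was never touched.
plain = [xor(c, keystream(key, i, versions[i], len(c)))
         for i, c in enumerate(cipher)]
print(plain)
```

Contrast with CBC, where ciphertext blocks are chained, so a change early in the file forces re-encryption of everything after it.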
基金supported by the Deputyship for Research&Innovation,Ministry of Education in Saudi Arabia(Project No.MoE-IF-UJ-R2-22-04220773-1).
Abstract: Domain randomization is a widely adopted technique in deep reinforcement learning (DRL) that improves agent generalization by exposing policies to diverse environmental conditions. This paper investigates the impact of three reset strategies (normal, non-randomized, and randomized) on agent performance using the Deep Deterministic Policy Gradient (DDPG) and Twin Delayed DDPG (TD3) algorithms in the CarRacing-v2 environment. Two experimental setups were conducted: an extended training regime with DDPG for 1000 steps per episode across 1000 episodes, and a fast execution setup comparing DDPG and TD3 for 30 episodes with 50 steps per episode under constrained computational resources. A step-based reward scaling mechanism was applied under the randomized reset condition to promote broader state exploration. Experimental results show that randomized resets significantly enhance learning efficiency and generalization, with DDPG demonstrating superior performance across all reset strategies. In particular, DDPG combined with randomized resets achieves the highest smoothed rewards (reaching approximately 15), the best stability, and the fastest convergence. These differences are statistically significant, as confirmed by t-tests: DDPG outperforms TD3 under randomized (t=−101.91, p<0.0001), normal (t=−21.59, p<0.0001), and non-randomized (t=−62.46, p<0.0001) reset conditions. The findings underscore the critical role of reset strategy and reward shaping in enhancing the robustness and adaptability of DRL agents in continuous control tasks, particularly in environments where computational efficiency and training stability are crucial.
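The statistical comparison quoted above (large negative t-values for TD3 vs. DDPG) can be reproduced in structure with the standard library: Welch's t-statistic over two reward samples. The reward values below are made up solely to show the mechanics; they are not the paper's data.

```python
# Welch's t-statistic (unequal variances) with only the stdlib.
from statistics import mean, variance

def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

td3_rewards = [2.1, 2.4, 1.9, 2.2, 2.0]       # illustrative samples
ddpg_rewards = [14.8, 15.1, 15.0, 14.9, 15.2]  # illustrative samples
t = welch_t(td3_rewards, ddpg_rewards)
print(round(t, 2))  # large negative t means DDPG rewards are far higher
```

A full test would also compute degrees of freedom (Welch-Satterthwaite) and a p-value, typically via `scipy.stats.ttest_ind(..., equal_var=False)`.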
基金supported in part by Vietnam National Foundation for Science and Technology Development(NAFOSTED)under Grant 102.04-2021.57in part by Culture,Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture,Sports and Tourism in 2024(Project Name:Global Talent Training Program for Copyright Management Technology in Game Contents,Project Number:RS-2024-00396709,Contribution Rate:100%).
Abstract: Digital content such as games, extended reality (XR), and movies is now widely and easily distributed over wireless networks. As a result, unauthorized access, copyright infringement by third parties or eavesdroppers, and cyberattacks over these networks have become pressing concerns, and protecting copyrighted content and preventing illegal distribution in wireless communications has garnered significant attention. The Intelligent Reflecting Surface (IRS) is regarded as a promising technology for future wireless and mobile networks due to its ability to reconfigure the radio propagation environment. This study investigates the security performance of an uplink Non-Orthogonal Multiple Access (NOMA) system integrated with an IRS and employing Fountain Codes (FCs). Specifically, two users send signals to the base station from different distances. A relay receives the signal from the nearby user and forwards it to the base station, while the IRS receives the signal from the distant user and reflects it to the relay, which then forwards the reflected signal to the base station. Furthermore, a malevolent eavesdropper intercepts both user and relay communications. We derive mathematical expressions for the Outage Probability (OP), throughput, diversity, and Interception Probability (IP), offering quantitative insights for assessing system security and performance. Additionally, OP and IP are analyzed using a Deep Neural Network (DNN) model. Monte Carlo simulations, carried out to confirm the theoretical conclusions, provide a deeper understanding of the security performance of the IRS-assisted NOMA system in signal transmission.
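The validation style mentioned above (Monte Carlo simulation against closed-form expressions) can be shown on the simplest possible case: the outage probability of one Rayleigh-fading link, where the channel power gain is exponentially distributed, so OP = P(SNR < γ_th) = 1 − exp(−γ_th / avg_SNR). This single-link sketch is far simpler than the paper's IRS/NOMA/relay model, but the methodology is the same.

```python
# Monte Carlo estimate of Outage Probability vs. the Rayleigh closed form.
import math, random

random.seed(7)
avg_snr, gamma_th = 10.0, 3.0   # average SNR and outage threshold (linear)
trials = 200_000

# Instantaneous SNR under Rayleigh fading is exponential with mean avg_snr.
outages = sum(random.expovariate(1.0 / avg_snr) < gamma_th
              for _ in range(trials))
op_mc = outages / trials
op_closed = 1 - math.exp(-gamma_th / avg_snr)
print(round(op_mc, 3), round(op_closed, 3))
```

With 200k trials the simulated value lands within a fraction of a percent of the analytical one, which is exactly how the paper's derived OP/IP expressions would be sanity-checked.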
基金funded by Soonchunhyang University,Grant Numbers 20241422BK21 FOUR(Fostering Outstanding Universities for Research,Grant Number 5199990914048).
Abstract: Recommendation systems (RSs) are crucial in personalizing user experiences in digital environments by suggesting relevant content or items. Collaborative filtering (CF) is a widely used personalization technique that leverages user-item interactions to generate recommendations. However, it struggles with challenges like the cold-start problem, scalability issues, and data sparsity. To address these limitations, we develop a Graph Convolutional Networks (GCNs) model that captures the complex network of interactions between users and items, identifying subtle patterns that traditional methods may overlook. We integrate this GCNs model into a federated learning (FL) framework, enabling the model to learn from decentralized datasets. This not only significantly enhances user privacy, a significant improvement over conventional models, but also reassures users about the safety of their data. Additionally, by securely incorporating demographic information, our approach further personalizes recommendations and mitigates the cold-start issue without compromising user data. We validate our RS model using the open MovieLens dataset and evaluate its performance across six key metrics: Precision, Recall, Area Under the Receiver Operating Characteristic Curve (ROC-AUC), F1 Score, Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR). The experimental results demonstrate significant enhancements in recommendation quality, underscoring that combining GCNs with CF in a federated setting provides a transformative solution for advanced recommendation systems.
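The core operation a GCN layer performs on the user-item graph above is neighbourhood aggregation: each node's new embedding is an average over its neighbours (plus itself). The pure-Python sketch below shows one such propagation step on a toy graph; a real GCN adds learnable weight matrices and a nonlinearity, and the node indices/features here are invented for illustration.

```python
# One neighbour-averaging propagation step over an adjacency list.

def propagate(adj, features):
    out = []
    for i, neigh in enumerate(adj):
        nodes = neigh + [i]  # include a self-loop, as GCNs typically do
        out.append([sum(features[n][j] for n in nodes) / len(nodes)
                    for j in range(len(features[0]))])
    return out

adj = [[1, 2], [0], [0]]                      # node 0 linked to nodes 1 and 2
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]  # toy 2-D node features
print(propagate(adj, feats))
```

After one step, each node's embedding mixes in its neighbours' features, which is how interaction patterns spread through the user-item graph over stacked layers.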
基金supported by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(2021R1I1A3049788).
Abstract: In today’s digital era, the rapid evolution of image editing technologies has brought about a significant simplification of image manipulation. Unfortunately, this progress has also given rise to the misuse of manipulated images across various domains. One of the pressing challenges stemming from this advancement is the increasing difficulty in discerning between unaltered and manipulated images. This paper offers a comprehensive survey of existing methodologies for detecting image tampering, shedding light on the diverse approaches employed in the field of contemporary image forensics. The methods used to identify image forgery can be broadly classified into two primary categories: classical machine learning techniques, heavily reliant on manually crafted features, and deep learning methods. Additionally, this paper explores recent developments in image forensics, placing particular emphasis on the detection of counterfeit colorization. Image colorization involves predicting colors for grayscale images, thereby enhancing their visual appeal. The advancements in colorization techniques have reached a level where distinguishing between authentic and forged images with the naked eye has become an exceptionally challenging task. This paper serves as an in-depth exploration of the intricacies of image forensics in the modern age, with a specific focus on the detection of colorization forgery, presenting a comprehensive overview of methodologies in this critical field.
Abstract: Electric Vehicle Charging Systems (EVCS) are increasingly vulnerable to cybersecurity threats as they integrate deeply into smart grids and Internet of Things (IoT) environments, raising significant security challenges. Most existing research primarily emphasizes network-level anomaly detection, leaving critical vulnerabilities at the host level underexplored. This study introduces a novel forensic analysis framework leveraging host-level data, including system logs, kernel events, and Hardware Performance Counters (HPC), to detect and analyze sophisticated cyberattacks such as cryptojacking, Denial-of-Service (DoS), and reconnaissance activities targeting EVCS. Using comprehensive forensic analysis and machine learning models, the proposed framework significantly outperforms existing methods, achieving an accuracy of 98.81%. The findings offer insights into distinct behavioral signatures associated with specific cyber threats, enabling improved cybersecurity strategies and actionable recommendations for robust EVCS infrastructure protection.
基金‘This research is funded by Taif University,TURSP-2020/115’.
Abstract: Software crowdsourcing (SW CS) is an evolving software development paradigm in which crowds of people are asked to solve various problems through an open call (with the encouragement of prizes for the top solutions). Because of its dynamic nature, SW CS has been progressively accepted and adopted in the software industry. However, issues pertinent to the understanding of requirements among crowds of people and requirements engineers are yet to be clarified and explained. If the requirements are not clear to the development team, this has a significant effect on the quality of the software product. This study aims to identify the potential challenges faced by requirements engineers when conducting the SW CS-based requirements engineering (RE) process, along with solutions to overcome these challenges. Qualitative data analysis is performed on interview data collected from software industry professionals. Consequently, 20 SW CS-based RE challenges and their subsequently proposed solutions are devised, further grouped under seven categories. This study benefits academicians, researchers, and practitioners by providing detailed SW CS-based RE challenges and subsequent solutions that can guide them to understand and effectively implement RE in SW CS.
Funding: This research was supported partially by LIG Nex1. It was also supported partially by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2018-0-01431) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: The controller in software-defined networking (SDN) acts as the strategic point of control for the underlying network. Multiple controllers are available, and each controller offers a number of features, such as the OpenFlow version, clustering, modularity, platform, and partnership support. These features are regarded as vital when making a selection among a set of controllers. As such, controller selection becomes a multi-criteria decision making (MCDM) problem over several features, and an increase in the number of features increases the computational complexity of the selection process. Previously, researchers have studied the selection of controllers based on features; however, the prioritization of features has received less attention. In this paper, we propose a mathematical model for feature prioritization with an analytic network process (ANP) bridge model for SDN controllers. The results indicate that a prioritized feature model leads to a reduction in the computational complexity of SDN controller selection. In addition, our model generates prioritized features for SDN controllers.
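The prioritization step that ANP (like AHP) rests on is extracting a priority vector from a pairwise-comparison judgment matrix, conventionally its principal eigenvector. The sketch below derives that vector by power iteration over a 3x3 matrix of invented judgments among three controller features; it illustrates the eigenvector step only, not the paper's full ANP bridge model.

```python
# Priority vector of a pairwise-comparison matrix via power iteration.

def priority_vector(m, iters=100):
    n = len(m)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]  # normalize so weights sum to 1
    return v

# Judgments over three features (OpenFlow version, clustering, modularity):
# e.g. m[0][1] = 3 means feature 0 is moderately preferred over feature 1.
m = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights = priority_vector(m)
print([round(w, 3) for w in weights])
```

With prioritized weights in hand, low-weight features can be dropped from the MCDM comparison, which is the complexity reduction the abstract reports.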
基金This research was financially supported in part by the Ministry of Trade,Industry and Energy(MOTIE)and Korea Institute for Advancement of Technology(KIAT)through the International Cooperative R&D program.(Project No.P0016038)in part by the MSIT(Ministry of Science and ICT),Korea,under the ITRC(Information Technology Research Center)support program(IITP-2021-2016-0-00312)supervised by the IITP(Institute for Information&communications Technology Planning&Evaluation).
Abstract: Despite advances in technology and considerable effort, software repository maintenance requires reusing data to reduce effort and complexity. However, growing ambiguity, irrelevance, and bugs encountered while extracting similar data during software development leave repositories holding large amounts of data, creating a need for a repository mining technique that predicts relevant, bug-free data. This paper proposes a fault prediction approach using a data-mining technique to find good predictors of high-quality software. To predict errors in the mined data, the Apriori algorithm was used to discover association rules, with confidence fixed above 40% and support at a minimum of 30%. A pruning strategy was adopted based on evaluation measures. Next, rules were extracted from three projects in different domains, and the extracted rules were combined to obtain the most popular rules based on the evaluation measure values. To evaluate the proposed approach, we conducted an experimental study comparing the proposed rules with existing ones on four different industrial projects. The evaluation showed that the results of our proposal are promising. Practitioners and developers can utilize these rules for defect prediction during early software development.
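The rule-mining step with the thresholds quoted above (support at least 30%, confidence above 40%) can be sketched minimally: count pairwise co-occurrences across transactions and keep rules that clear both thresholds. This single-level sketch of the Apriori idea uses invented "module co-change" transactions, not the paper's project data.

```python
# Mine pairwise association rules (x -> y) from toy transactions.
from itertools import combinations

transactions = [
    {"parser", "lexer"}, {"parser", "lexer", "ui"},
    {"parser", "ui"}, {"db", "ui"}, {"parser", "lexer", "db"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

rules = []
items = {i for t in transactions for i in t}
for a, b in combinations(items, 2):
    s = support({a, b})
    if s >= 0.3:                       # minimum support 30%
        for x, y in ((a, b), (b, a)):
            conf = s / support({x})
            if conf > 0.4:             # minimum confidence 40%
                rules.append((x, y, round(conf, 2)))
print(sorted(rules))
```

Full Apriori would extend this level by level to larger itemsets, pruning any candidate whose subsets fall below the support threshold.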
基金supported by the National Natural Science Foundation of China(Grant Number:61672080,Sponsored Authors:Yang S.,Sponsors’Websites:http://www.nsfc.gov.cn/english/site_1/index.html).
Abstract: The rapid growth in software demand incentivizes software development organizations to develop exclusive software for their customers worldwide. The software development industry addresses this demand through software product line (SPL) practices that employ feature models. However, optimal feature selection based on user requirements is a challenging task, and the challenges of software development must be resolved to increase satisfaction and maintain high product quality for massive customer needs within limited resources. In this work, we propose a recommender system for the development team and clients that increases productivity and quality by utilizing historical information and the prior experiences of similar developers and clients. The proposed system recommends features, with their estimated costs, for new software requirements, drawing on similar developers' and clients' needs and preferences from around the globe. The system guides and facilitates the development team by suggesting a list of features, code snippets, libraries, programming-language cheat sheets, and coding references from a cloud-based knowledge management repository. Similarly, a list of features is suggested to the client according to their needs and preferences. The experimental results revealed that the proposed recommender system is feasible and effective, providing better recommendations to developers and clients. It provides proper, reasonably well-estimated costs for performing development tasks effectively and increases the client's satisfaction level. The results indicate an increase in productivity, performance, and product quality, and a reduction in effort, complexity, and system failure. Therefore, our proposed system facilitates developers and clients during development by providing better recommendations in terms of solutions and anticipated costs. The resulting increase in productivity and satisfaction maximizes the benefits and usability of SPL in the modern era of technology.
文摘In software engineering, software measures are often proposed without precise identification of the measurable concepts they attempt to quantify: consequently, the numbers obtained are challenging to reproduce in different measurement contexts and to interpret, either as base measures or in combination as derived measures. The lack of consistency when using base measures in data collection can affect both data preparation and data analysis. This paper analyzes the similarities and differences across three different views of measurement methods (ISO International Vocabulary on Metrology, ISO 15939, and ISO 25021), and uses a process proposed for the design of software measurement methods to analyze two examples of such methods selected from the literature.
Abstract: This paper presents new theoretical aspects of software engineering oriented toward product lines for building applied systems and software product families from ready-made reusable components under the conditions of program factories. These aspects comprise new disciplines: the theory of component programming; models of system variability and interoperability; and a theory for building systems and product families from components. Principles and methods for implementing these theories were realized in an instrumental and technological complex with lines of component development: assembling program factories using these lines, and e-learning for the new theories and technologies through the "Software Engineering" textbook used by university students.
Funding: This work was funded by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543). This research was also supported by the Bio and Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. NRF-2019M3E5D1A02069073). In addition, this work was supported by the Soonchunhyang University Research Fund.
Abstract: Heterogeneous Internet of Things (HIoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), for which it is essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, heterogeneous 5th generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low latency communications (URLLC), require intelligent Quality-of-Service (QoS) Class Identifier (QCI) handling, while CP entities suffer under complicated, massive HIoT applications. Moreover, existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with these issues, this paper presents software-defined mobile edge computing (SDMEC) with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time, resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC enables intelligent resource MANO for massive QCI environments and meets the requirements of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various powerful reference methods.
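The classification step above (a linear decision boundary mapping flow features to QCI priority classes) can be illustrated without any ML library. The sketch below trains a tiny perceptron as a stand-in for the paper's SVM, since both learn a linear boundary; the flow features (latency budget in ms, required Mbps), labels, and class split are invented for illustration.

```python
# Perceptron stand-in for a linear SVM: separate URLLC-like flows (tight
# latency budget, class +1) from MBB-like flows (class -1), so a controller
# could map each flow to a priority queue.

def train(samples, labels, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are -1 or +1
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # misclassified
                w = [w[0] + lr * y * x[0], w[1] + lr * y * x[1]]
                b += lr * y
    return w, b

flows = [[1.0, 0.1], [2.0, 0.2], [50.0, 10.0], [80.0, 20.0]]
labels = [+1, +1, -1, -1]  # +1 = latency-critical (URLLC-like) class
w, b = train(flows, labels)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1 for x in flows]
print(preds)
```

An actual SVM additionally maximizes the margin of that boundary (and handles non-separable data via slack variables), which is why the paper favors it for robust QCI classification.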