Journal Articles
322 articles found
1. Multi-Objective Enhanced Cheetah Optimizer for Joint Optimization of Computation Offloading and Task Scheduling in Fog Computing
Authors: Ahmad Zia, Nazia Azim, Bekarystankyzy Akbayan, Khalid J. Alzahrani, Ateeq Ur Rehman, Faheem Ullah Khan, Nouf Al-Kahtani, Hend Khalid Alkahtani. Computers, Materials & Continua, 2026, No. 3, pp. 1559-1588 (30 pages).
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency; for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed; therefore, MoECO increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify solutions, improving the exploration phase (global search) and preventing the algorithm from getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, increasing the exploitation (local search) capability of agents. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
Keywords: computation offloading; task scheduling; cheetah optimizer; fog computing; optimization; resource allocation; internet of things
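The abstract above leans on Pareto optimality to balance energy consumption against communication delay. As a minimal, illustrative sketch (not the authors' MoECO implementation), a non-dominated front over hypothetical (energy, delay) pairs can be computed like this:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (energy, delay) pairs.

    A solution dominates another if it is no worse in both
    objectives and strictly better in at least one; here that
    reduces to "<= in both coordinates and not identical".
    """
    front = []
    for cand in solutions:
        dominated = any(
            other[0] <= cand[0] and other[1] <= cand[1] and other != cand
            for other in solutions
        )
        if not dominated:
            front.append(cand)
    return front

# hypothetical (energy, delay) outcomes of four offloading decisions
points = [(3.0, 9.0), (5.0, 4.0), (6.0, 8.0), (8.0, 2.0)]
print(pareto_front(points))  # [(3.0, 9.0), (5.0, 4.0), (8.0, 2.0)]
```

The point (6.0, 8.0) drops out because (5.0, 4.0) is better in both objectives; the surviving points form the trade-off curve the abstract refers to.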
2. A Survey of Link Failure Detection and Recovery in Software-Defined Networks
Authors: Suheib Alhiyari, Siti Hafizah AB Hamid, Nur Nasuha Daud. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 103-137 (35 pages).
Software-defined networking (SDN) is an innovative paradigm that separates the control and data planes, introducing centralized network control. SDN is increasingly being adopted by carrier-grade networks, offering enhanced network-management capabilities compared with traditional networks. However, because SDN is designed to ensure high-level service availability, it faces additional challenges. One of the most critical is ensuring efficient detection of, and recovery from, link failures in the data plane. Such failures can significantly impact network performance and lead to service outages, making resiliency a key concern for the effective adoption of SDN. Since the recovery process is intrinsically dependent on timely failure detection, this research surveys and analyzes the current literature on both failure-detection and recovery approaches in SDN. The survey provides a critical comparison of existing failure-detection techniques, highlighting their advantages and disadvantages. Additionally, it examines current failure-recovery methods, categorized as either restoration-based or protection-based, and offers a comprehensive comparison of their strengths and limitations. Lastly, future research challenges and directions are discussed to address the shortcomings of existing failure-recovery methods.
Keywords: software-defined networking; failure detection; failure recovery; restoration; protection
3. Crowdsourced Requirements Engineering Challenges and Solutions: A Software Industry Perspective (Cited 2 times)
Authors: Huma Hayat Khan, Muhammad Noman Malik, Youseef Alotaibi, Abdulmajeed Alsufyani, Saleh Alghamdi. Computer Systems Science & Engineering (SCIE, EI), 2021, No. 11, pp. 221-236 (16 pages).
Software crowdsourcing (SW CS) is an evolving software development paradigm in which crowds of people are asked to solve various problems through an open call (with the encouragement of prizes for the top solutions). Because of its dynamic nature, SW CS has been progressively accepted and adopted in the software industry. However, issues pertinent to the understanding of requirements among crowds of people and requirements engineers are yet to be clarified and explained. If the requirements are not clear to the development team, this has a significant effect on the quality of the software product. This study aims to identify the potential challenges faced by requirements engineers when conducting the SW CS-based requirements engineering (RE) process. Moreover, solutions to overcome these challenges are also identified. Qualitative data analysis is performed on interview data collected from software industry professionals. Consequently, 20 SW CS-based RE challenges and their subsequent proposed solutions are devised, which are further grouped under seven categories. This study benefits academics, researchers, and practitioners by providing detailed SW CS-based RE challenges and subsequent solutions that could eventually guide them to understand and effectively implement RE in SW CS.
Keywords: software; crowdsourced requirements engineering; software industry; software development; survey; challenges
4. Towards Improving the Quality of Requirement and Testing Process in Agile Software Development: An Empirical Study (Cited 1 time)
Authors: Irum Ilays, Yaser Hafeez, Nabil Almashfi, Sadia Ali, Mamoona Humayun, Muhammad Aqib, Ghadah Alwakid. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 3761-3784 (24 pages).
Software testing is a critical phase, owing to misconceptions about ambiguities in the requirements during specification that affect the testing process; it is therefore difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. Because of these challenges, fault-detection capability decreases, and there arises a need to improve the testing process based on changes in the requirements specification. In this research, we develop a model that resolves testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. The similarity of requirements is then computed through case-based reasoning, which predicts requirements for reuse, restricted to error-based requirements. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on frequently reused or non-reused test cases, to increase the fault-detection rate. The proposed model was evaluated through experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault-detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault-detection rate after changes to achieve higher user satisfaction. The model improves the redundancy and irrelevancy of requirements by more than 90% compared with other clustering methods and the analytic hierarchy process, achieving an 80% fault-detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
Keywords: requirement prediction; software testing; agile software development; semantic analysis; case-based reasoning
5. New Theoretical Aspects of Software Engineering for Development Applications and E-Learning (Cited 1 time)
Authors: Ekaterina Lavrischeva, Alexei Ostrovski. Journal of Software Engineering and Applications, 2013, No. 9, pp. 34-40 (7 pages).
This paper presents new theoretical aspects of software engineering oriented toward product lines for building applied systems and software product families from ready-made reusable components under the conditions of program factories. These aspects comprise new disciplines: the theory of component programming; models of variability and interoperability of systems; and the theory of building systems and product families from components. Principles and methods for implementing these theories were realized in an instrumental and technological complex organized along lines of component development: assembling program factories using those lines, and e-learning of the new theories and technologies through the "Software Engineering" textbook used by university students.
Keywords: software engineering; theory; disciplines; technologies; interoperability; applied systems; software industry; factories; e-learning
6. Fuzzy coloured Petri nets-based method to analyse and verify the functionality of software
Authors: Mina Chavoshi, Seyed Morteza Babamir. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 3, pp. 863-879 (17 pages).
Some types of software systems, such as event-based and non-deterministic ones, are usually specified as rules so that system behaviour can be analysed by drawing inferences from firing the rules. However, when fuzzy rules are used to specify non-deterministic behaviour and they contain a large number of variables, they take a complex form that is difficult to understand and reason about. A solution is to visualise the system specification with the capability of automatic rule inference. In this study, by representing a high-level system specification, the authors visualise rule representation and firing using fuzzy coloured Petri nets. Several fuzzy Petri-net-based methods have already been presented, but they either do not support a large number of rules and variables or do not consider significant cases such as (a) the weight of the premise's propositions in the occurrence of the rule conclusion, (b) the weight of the conclusion's proposition, (c) threshold values for the premise's and conclusion's propositions of the rule, and (d) the certainty factor (CF) for the rule or the conclusion's proposition. By considering cases (a)-(d), a wider variety of fuzzy rules is supported. The authors applied their model to the analysis of attacks against part of a real secure water-treatment system. In another real experiment, they applied the model to two scenarios from their previous work and analysed the results.
Keywords: fuzzy logic; software engineering; verification
7. A Survey of Approaches Reconciling between Safety and Security Requirements Engineering for Cyber-Physical Systems
Authors: Mohammed F. H. Abulamddi. Journal of Computer and Communications, 2017, No. 1, pp. 94-100 (7 pages).
The fields of safety and security use different conceptual standards and methods; as a consequence, these two separate but related research areas take different approaches. Addressing the integration of safety and security concerns in this context, we conduct a survey exploring the approaches and standards that scholars have created to combine safety and security requirements engineering.
Keywords: standards; security; safety; reconciliation; dependability; requirements
8. Reducing the Gap between Software Engineering Curricula and Software Industry in Jordan
Authors: Samer Hanna, Hayat Jaber, Ayad Almasalmeh, Fawze Abu Jaber. Journal of Software Engineering and Applications, 2014, No. 7, pp. 602-616 (15 pages).
Nowadays, software plays a very important role in almost all aspects of our daily lives, which gives great importance to the field of software engineering. However, most current software engineering graduates in Jordan lack the knowledge and skills required to join the software industry, for many reasons. This research investigates these reasons by, first, analyzing more than 1000 software job listings in Jordanian and Gulf-area e-recruitment services to discover the skills and knowledge areas most required by the software industry in Jordan and the Gulf area, and, second, comparing these knowledge areas and skills with those provided by the software engineering curricula at Jordanian universities. The awareness of software engineering students and academic staff of the most-required knowledge areas and skills is measured using two questionnaires. Recommendations for decreasing the gap between software engineering academia and industry were also collected from a sample of software company managers using a third questionnaire. The results revealed that many important skills, such as web application development, are very poorly covered by software engineering curricula, and that many software engineering students and academic staff are unaware of many of the skills most needed to join industry.
Keywords: software engineering; software industry; knowledge areas; knowledge gap; required skills to join industry
9. Building Custom Spreadsheet Functions with Python: End-User Software Engineering Approach
Authors: Tamer Bahgat Elserwy, Atef Tayh Nour El-Din Raslan, Tarek Ali, Mervat H. Gheith. Journal of Software Engineering and Applications, 2024, No. 5, pp. 246-258 (13 pages).
End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets are a prime example of end-user programming environments, widely used in business for data analysis. However, Excel's functionality has limits compared with dedicated programming languages. This paper addresses this gap by proposing a prototype that integrates Python's capabilities into Excel on an on-premises desktop to build custom spreadsheet functions (CSFs) with Python, overcoming the potential latency issues associated with cloud-based solutions. The prototype utilizes Excel-DNA and IronPython: Excel-DNA allows creating custom Python functions that integrate seamlessly with Excel's calculation engine, while IronPython enables the execution of these Python CSFs directly within Excel. C# and VSTO add-ins form the core components, facilitating communication between Python and Excel. This approach gives users a potentially open-ended set of Python CSFs for tasks such as mathematical calculations, statistical analysis, and even predictive modeling, all within the familiar Excel interface. The prototype demonstrates smooth integration, allowing users to call Python CSFs just like standard Excel functions. This research contributes to enhancing spreadsheet capabilities for end-user programmers by leveraging Python's power within Excel. Future research could expand data-analysis capabilities by extending the CSFs to complex calculations, statistical analysis, data manipulation, and external-library integration, including the possibility of integrating machine-learning models through CSFs within the familiar Excel environment.
Keywords: end-user software engineering; custom spreadsheet functions (CSFs)
10. Dynamic traffic congestion pricing and electric vehicle charging management system for the internet of vehicles in smart cities (Cited 4 times)
Authors: Nyothiri Aung, Weidong Zhang, Kashif Sultan, Sahraoui Dhelim, Yibo Ai. Digital Communications and Networks (SCIE, CSCD), 2021, No. 4, pp. 492-504 (13 pages).
The integration of the Internet of Vehicles (IoV) in future smart cities could help solve many traffic-related challenges, such as reducing traffic congestion and traffic accidents. Various congestion-pricing and electric-vehicle charging policies have been introduced in recent years. Nonetheless, the majority of these schemes emphasize penalizing vehicles that opt to take congested roads or charge at crowded charging stations, and do not reward vehicles that cooperate with the traffic management system. In this paper, we propose a novel dynamic traffic congestion pricing and electric vehicle charging management system for the Internet of Vehicles in an urban smart-city environment. The proposed system rewards drivers who opt for alternative congestion-free routes and congestion-free charging stations. We propose a token management system that serves as a virtual currency: vehicles earn tokens if they take alternative non-congested routes and charging stations, and use the tokens to pay for charging fees. The system is designed for Vehicular Ad-hoc Networks (VANETs) in the context of a smart-city environment, without the need to set up any expensive toll-collection stations. Large-scale traffic simulation in different smart-city scenarios shows that the system can reduce traffic congestion and the total charging time at charging stations.
Keywords: IoV; EV; VANET; smart city; congestion pricing; congestion avoidance; EV charging; traffic optimization
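The token mechanism described above (earn tokens by cooperating, spend them on charging fees) can be sketched as a toy ledger. The class, method names, and amounts are illustrative assumptions, not the paper's protocol:

```python
class TokenLedger:
    """Toy sketch of a virtual-currency ledger: vehicles earn tokens
    for taking non-congested routes or charging stations, and spend
    them on charging fees."""

    def __init__(self):
        self.balances = {}

    def reward(self, vehicle_id, tokens):
        # credit a vehicle that accepted an alternative route/station
        self.balances[vehicle_id] = self.balances.get(vehicle_id, 0) + tokens

    def pay_charging_fee(self, vehicle_id, fee):
        # spend earned tokens first; return the cash still owed
        balance = self.balances.get(vehicle_id, 0)
        spent = min(balance, fee)
        self.balances[vehicle_id] = balance - spent
        return fee - spent

ledger = TokenLedger()
ledger.reward("EV-42", 8)                     # detoured around congestion
print(ledger.pay_charging_fee("EV-42", 10))   # 2 (tokens covered 8 of 10)
```

The design point the abstract makes is the incentive inversion: cooperation is rewarded with spendable credit rather than congestion being punished with tolls.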
11. Recognition and Detection of Diabetic Retinopathy Using Densenet-65 Based Faster-RCNN (Cited 3 times)
Authors: Saleh Albahli, Tahira Nazir, Aun Irtaza, Ali Javed. Computers, Materials & Continua (SCIE, EI), 2021, No. 5, pp. 1333-1351 (19 pages).
Diabetes is a metabolic disorder that results in a retinal complication called diabetic retinopathy (DR), one of the four main causes of blindness worldwide. DR usually has no clear symptoms before onset, making disease identification a challenging task. The healthcare industry may face unfavorable consequences if the gap in identifying DR is not filled with effective automation. Thus, our objective is to develop an automatic and cost-effective method for classifying DR samples. In this work, we present a custom Faster-RCNN technique for the recognition and classification of DR lesions from retinal images. After pre-processing, we generate the annotations of the dataset, which are required for model training. We then introduce DenseNet-65 at the feature-extraction level of Faster-RCNN to compute a representative set of key points. Finally, the Faster-RCNN localizes and classifies the input sample into five classes. Rigorous experiments performed on a Kaggle dataset comprising 88,704 images show that the introduced methodology outperforms others, with an accuracy of 97.2%. We compared our technique with state-of-the-art approaches to show its robustness in terms of DR localization and classification. Additionally, we performed cross-dataset validation on the Kaggle and APTOS datasets and achieved remarkable results in both training and testing phases.
Keywords: deep learning; medical informatics; diabetic retinopathy; healthcare; computer vision
12. An Improved Artificial Rabbits Optimization Algorithm with Chaotic Local Search and Opposition-Based Learning for Engineering Problems and Its Applications in Breast Cancer Problem (Cited 1 time)
Authors: Feyza Altunbey Özbay, Erdal Özbay, Farhad Soleimanian Gharehchopogh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 11, pp. 1067-1110 (44 pages).
Artificial rabbits optimization (ARO) is a recently proposed biology-based optimization algorithm inspired by the detour-foraging and random-hiding behavior of rabbits in nature. However, when solving optimization problems, the ARO algorithm shows slow convergence and can fall into local minima. To overcome these drawbacks, this paper proposes chaotic opposition-based learning ARO (COARO), an improved version of the ARO algorithm that incorporates opposition-based learning (OBL) and chaotic local search (CLS) techniques. Adding OBL to ARO increases the convergence speed of the algorithm and lets it explore the search space better. The chaotic maps in CLS provide rapid convergence by scanning the search space efficiently, owing to their ergodicity and non-repetition properties. The proposed COARO algorithm has been tested on thirty-three distinct benchmark functions, and the outcomes have been compared with the most recent optimization algorithms. Additionally, COARO's problem-solving capabilities have been evaluated on six different engineering design problems and compared with various other algorithms. This study also introduces a binary variant of the continuous COARO algorithm, named BCOARO, whose performance was evaluated on the breast cancer dataset and compared with different feature-selection algorithms. The proposed BCOARO outperforms alternative algorithms in real applications in terms of accuracy and fitness value. Extensive experiments show that the COARO and BCOARO algorithms achieve promising results compared with other metaheuristic algorithms.
Keywords: artificial rabbits optimization; binary optimization; breast cancer; chaotic local search; engineering design problem; opposition-based learning
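Opposition-based learning, one of the two techniques COARO adds to ARO, evaluates each candidate alongside its mirror image inside the search bounds and keeps the fitter of the two. A minimal sketch of OBL-style initialization follows; the sphere objective and the population size are illustrative choices, not taken from the paper:

```python
import random

def opposition(position, lb, ub):
    """Opposition-based learning: the opposite of x within [lb, ub]
    is lb + ub - x, computed per dimension."""
    return [lo + hi - x for x, lo, hi in zip(position, lb, ub)]

def obl_init(pop_size, lb, ub, fitness):
    """Initialise a population by evaluating each random candidate
    and its opposite, keeping whichever has lower fitness."""
    population = []
    for _ in range(pop_size):
        x = [random.uniform(lo, hi) for lo, hi in zip(lb, ub)]
        population.append(min(x, opposition(x, lb, ub), key=fitness))
    return population

# toy objective: sphere function (minimum at the origin)
sphere = lambda v: sum(c * c for c in v)
pop = obl_init(pop_size=10, lb=[-5, -5], ub=[5, 5], fitness=sphere)
print(opposition([2.0, -1.0], [-5, -5], [5, 5]))  # [-2.0, 1.0]
```

Evaluating both a point and its opposite doubles the initial coverage of the search space at the cost of one extra fitness call per candidate, which is the mechanism behind the faster convergence claimed in the abstract.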
13. A Comparative Study of Metaheuristic Optimization Algorithms for Solving Real-World Engineering Design Problems (Cited 1 time)
Authors: Elif Varol Altay, Osman Altay, Yusuf Ovik. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 1039-1094 (56 pages).
Real-world engineering design problems with complex objective functions under constraints are relatively difficult to solve. Such design problems are widely encountered in many engineering fields, such as industry, automotive, construction, machinery, and interdisciplinary research, and established optimization techniques have shown effectiveness in addressing them. This paper gives a comparative study of seventeen recent metaheuristic methods applied to twelve distinct engineering design problems. The algorithms used in the study are: transient search optimization (TSO), equilibrium optimizer (EO), grey wolf optimizer (GWO), moth-flame optimization (MFO), whale optimization algorithm (WOA), slime mould algorithm (SMA), Harris hawks optimization (HHO), chimp optimization algorithm (COA), coot optimization algorithm (COOT), multi-verse optimization (MVO), arithmetic optimization algorithm (AOA), aquila optimizer (AO), sine cosine algorithm (SCA), smell agent optimization (SAO), seagull optimization algorithm (SOA), pelican optimization algorithm (POA), and coati optimization algorithm (CA). As far as we know, there is no comparative analysis of recent and popular methods against the concrete conditions of real-world engineering problems. Hence, the study presents a notable research guideline for researchers working in the fields of engineering and artificial intelligence, especially when applying recently emerged optimization methods. Future research can rely on this work for a literature search on comparisons of metaheuristic optimization methods in real-world problems under similar conditions.
Keywords: metaheuristic optimization algorithms; real-world engineering design problems; multidisciplinary design optimization problems
14. Mining Software Repository for Cleaning Bugs Using Data Mining Technique (Cited 1 time)
Authors: Nasir Mahmood, Yaser Hafeez, Khalid Iqbal, Shariq Hussain, Muhammad Aqib, Muhammad Jamal, Oh-Young Song. Computers, Materials & Continua (SCIE, EI), 2021, No. 10, pp. 873-893 (21 pages).
Despite advances in technological complexity and effort, software repository maintenance requires reusing data to reduce effort and complexity. However, extracting similar data during software development generates a large amount of ambiguity, irrelevance, and bugs among the data residing in repositories. Thus, a repository-mining technique is needed to predict relevant, bug-free data. This paper proposes a fault-prediction approach using a data-mining technique to find good predictors for high-quality software. To predict errors in the mined data, the Apriori algorithm was used to discover association rules, fixing confidence at more than 40% and support at a minimum of 30%. A pruning strategy was adopted based on evaluation measures. Next, rules were extracted from three projects in different domains; the extracted rules were then combined to obtain the most popular rules based on the evaluation-measure values. To evaluate the proposed approach, we conducted an experimental study comparing the proposed rules with existing ones on four different industrial projects. The evaluation showed that the results of our proposal are promising. Practitioners and developers can utilize these rules for defect prediction during early software development.
Keywords: fault prediction; association rule; data mining; frequent pattern mining
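The abstract above fixes confidence above 40% and support at a minimum of 30% when mining association rules. A tiny Apriori-style sketch with those thresholds follows; the bug-report transactions and the restriction to 1- and 2-itemsets are illustrative simplifications, not the paper's pipeline:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_confidence=0.4):
    """Apriori-style rule mining restricted to 1- and 2-itemsets,
    mirroring the thresholds in the abstract (support >= 30%,
    confidence > 40%)."""
    n = len(transactions)
    support = {}
    items = sorted({i for t in transactions for i in t})
    for k in (1, 2):
        for combo in combinations(items, k):
            count = sum(1 for t in transactions if set(combo) <= t)
            if count / n >= min_support:
                support[combo] = count / n
    rules = []
    for pair in (c for c in support if len(c) == 2):
        a, b = pair
        for ante, cons in (((a,), b), ((b,), a)):
            conf = support[pair] / support[ante]
            if conf > min_confidence:
                rules.append((ante[0], cons, round(conf, 2)))
    return rules

# hypothetical bug-report "transactions" of co-occurring tags
bugs = [{"null-check", "crash"}, {"null-check", "crash"},
        {"null-check"}, {"timeout"}]
print(mine_rules(bugs))
# [('crash', 'null-check', 1.0), ('null-check', 'crash', 0.67)]
```

Here "timeout" is pruned by the support threshold, and both surviving rules clear the 40% confidence bar, which is the filtering behavior the abstract describes.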
15. A Tutorial on Federated Learning from Theory to Practice: Foundations, Software Frameworks, Exemplary Use Cases, and Selected Trends (Cited 1 time)
Authors: M. Victoria Luzón, Nuria Rodríguez-Barroso, Alberto Argente-Garrido, Daniel Jiménez-López, Jose M. Moyano, Javier Del Ser, Weiping Ding, Francisco Herrera. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 4, pp. 824-850 (27 pages).
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial-intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. The tutorial provides exemplary cases of study from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary cases of study, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary cases of study with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
Keywords: data privacy; distributed machine learning; federated learning; software frameworks
16. Finding a Practical IT Solution - Open Source Accounting Software (Cited 1 time)
Authors: Manar Abu Talib, Adel Khelifi, Osama El-Temtamy, Fatima Ismaeel, Mahra Rashed, Najah Hasan, Summaya Khaled. Communications and Computer (Chinese-English Edition), 2012, No. 4, pp. 406-413 (8 pages).
Keywords: accounting software; open source; IT; small businesses; open-source software; UAE; research paper; challenges
Evaluating Domain Randomization Techniques in DRL Agents:A Comparative Study of Normal,Randomized,and Non-Randomized Resets
17
作者 Abubakar Elsafi 《Computer Modeling in Engineering & Sciences》 2025年第8期1749-1766,共18页
Domain randomization is a widely adopted technique in deep reinforcement learning(DRL)to improve agent generalization by exposing policies to diverse environmental conditions.This paper investigates the impact of diff... Domain randomization is a widely adopted technique in deep reinforcement learning(DRL)to improve agent generalization by exposing policies to diverse environmental conditions.This paper investigates the impact of different reset strategies,normal,non-randomized,and randomized,on agent performance using the Deep Deterministic Policy Gradient(DDPG)and Twin Delayed DDPG(TD3)algorithms within the CarRacing-v2 environment.Two experimental setups were conducted:an extended training regime with DDPG for 1000 steps per episode across 1000 episodes,and a fast execution setup comparing DDPG and TD3 for 30 episodes with 50 steps per episode under constrained computational resources.A step-based reward scaling mechanism was applied under the randomized reset condition to promote broader state exploration.Experimental results showthat randomized resets significantly enhance learning efficiency and generalization,with DDPG demonstrating superior performance across all reset strategies.In particular,DDPG combined with randomized resets achieves the highest smoothed rewards(reaching approximately 15),best stability,and fastest convergence.These differences are statistically significant,as confirmed by t-tests:DDPG outperforms TD3 under randomized(t=−101.91,p<0.0001),normal(t=−21.59,p<0.0001),and non-randomized(t=−62.46,p<0.0001)reset conditions.The findings underscore the critical role of reset strategy and reward shaping in enhancing the robustness and adaptability of DRL agents in continuous control tasks,particularly in environments where computational efficiency and training stability are crucial. 展开更多
Keywords: DDPG agent; TD3 agent; deep reinforcement learning; domain randomization; generalization; non-randomized reset; normal reset; randomized reset
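The abstract names two mechanisms, randomized resets and step-based reward scaling, without giving code. The sketch below is a minimal illustration of both ideas, using a toy one-dimensional environment in place of CarRacing-v2; the scaling rule (weighting later steps more heavily) and the reset bounds are assumptions for illustration, not the paper's implementation.

```python
import random

class ToyEnv:
    """Toy 1-D stand-in for CarRacing-v2: the state is a scalar position."""
    def __init__(self):
        self.state = 0.0

    def reset(self, start=0.0):
        self.state = start
        return self.state

def randomized_reset(env, low=-1.0, high=1.0, rng=None):
    """Randomized reset: start each episode from a uniformly sampled state."""
    rng = rng or random.Random()
    return env.reset(start=rng.uniform(low, high))

def scaled_reward(reward, step, total_steps):
    """Step-based reward scaling: amplify rewards earned later in an episode."""
    return reward * (1.0 + step / total_steps)

env = ToyEnv()
start_state = randomized_reset(env, rng=random.Random(42))
print(scaled_reward(-2.0, step=25, total_steps=50))  # -3.0: halfway through, scale 1.5
```

A non-randomized reset would simply call `env.reset()` with a fixed start; the comparison in the paper is between these two episode-initialization policies.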
ARNet: Integrating Spatial and Temporal Deep Learning for Robust Action Recognition in Videos
18
Authors: Hussain Dawood, Marriam Nawaz, Tahira Nazir, Ali Javed, Abdul Khader Jilani Saudagar, Hatoon S. AlSagri 《Computer Modeling in Engineering & Sciences》 2025, Issue 7, pp. 429-459 (31 pages)
Reliable human action recognition (HAR) in video sequences is critical for a wide range of applications, such as security surveillance, healthcare monitoring, and human-computer interaction. Several automated systems have been designed for this purpose; however, existing methods, such as two-stream networks or 3D convolutional neural networks (CNNs), often struggle to effectively integrate spatial and temporal information from input samples, which limits their accuracy in discriminating numerous human actions. Therefore, this study introduces a novel deep-learning framework called ARNet, designed for robust HAR. ARNet consists of two main modules: a refined InceptionResNet-V2-based CNN and a Bi-LSTM (Bidirectional Long Short-Term Memory) network. The refined InceptionResNet-V2 employs a parametric rectified linear unit (PReLU) activation strategy within its convolutional layers to enhance spatial feature extraction from individual video frames. The inclusion of PReLU improves the spatial information-capturing ability of the approach, as it uses learnable parameters to adaptively control the slope of the negative part of the activation function, allowing richer gradient flow during backpropagation and resulting in robust information capture and stable model training. These spatial features, holding essential pixel characteristics, are then processed by the Bi-LSTM module for temporal analysis, which helps ARNet understand the dynamic behavior of actions over time. ARNet adds three dense layers after the Bi-LSTM module to ensure a comprehensive computation of both spatial and temporal patterns and to further boost the feature representation. The model is experimentally validated on three benchmark datasets, HMDB51, KTH, and UCF Sports, and reports accuracies of 93.82%, 99%, and 99.16%, respectively. The Precision values on the HMDB51, KTH, and UCF Sports datasets are 97.41%, 99.54%, and 99.01%; the Recall values are 98.87%, 98.60%, and 99.08%; and the F1-Scores are 98.13%, 99.07%, and 99.04%, respectively. These results highlight the robustness of the ARNet approach and its potential as a versatile tool for accurate HAR across various real-world applications.
Keywords: action recognition; Bi-LSTM; computer vision; deep learning; InceptionResNet-V2; PReLU
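The PReLU activation the abstract credits for richer gradient flow is a standard operation; a minimal sketch in plain Python, with a fixed example slope standing in for the parameter that ARNet learns during training:

```python
def prelu(x, alpha):
    """PReLU: identity for positive inputs; slope `alpha` for negative inputs.
    In ARNet, alpha is a learnable parameter; it is fixed here for illustration."""
    return [v if v > 0 else alpha * v for v in x]

print(prelu([-2.0, -0.5, 0.0, 1.5], alpha=0.25))  # [-0.5, -0.125, 0.0, 1.5]
```

Unlike a plain ReLU, which zeroes negative inputs (and their gradients), the nonzero negative slope keeps gradients flowing during backpropagation, which is the property the abstract credits for stable training.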
Efficient Bit-Plane Based Medical Image Cryptosystem Using Novel and Robust Sine-Cosine Chaotic Map
19
Authors: Zeric Tabekoueng Njitacke, Louai A. Maghrabi, Musheer Ahmad, Turki Althaqafi 《Computers, Materials & Continua》 2025, Issue 4, pp. 917-933 (17 pages)
This paper presents a high-security medical image encryption method that leverages a novel and robust sine-cosine map. The map demonstrates remarkable chaotic dynamics over a wide range of parameters. We employ nonlinear analytical tools to thoroughly investigate the dynamics of the chaotic map, which allows us to select optimal parameter configurations for the encryption process. Our findings indicate that the proposed sine-cosine map can generate a rich variety of chaotic attractors, an essential characteristic for effective encryption. The encryption technique is based on bit-plane decomposition, wherein a plain image is divided into distinct bit planes. These planes are organized into two matrices: one containing the most significant bit planes and the other housing the least significant ones. The subsequent phases of chaotic confusion and diffusion utilize these matrices to enhance security. An auxiliary matrix is then generated, comprising the combined bit planes that yield the final encrypted image. Experimental results demonstrate that the proposed technique achieves a commendable level of security for safeguarding sensitive patient information in medical images. Image quality is evaluated using the Structural Similarity Index (SSIM), yielding values close to zero for encrypted images and approaching one for decrypted images. Additionally, the entropy values of the encrypted images are near 8, with a Number of Pixel Change Rate (NPCR) and Unified Average Change Intensity (UACI) exceeding 99.50% and 33%, respectively. Furthermore, quantitative assessments of occlusion attacks, along with comparisons to leading algorithms, validate the integrity and efficacy of our medical image encryption approach.
Keywords: image cryptosystem; robust chaos; sine-cosine map; nonlinear analysis tools; medical images
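The bit-plane decomposition step the abstract describes is standard. A minimal NumPy sketch of splitting an 8-bit image into planes, grouping them into the most- and least-significant matrices, and losslessly recombining them; the chaotic confusion/diffusion stages are omitted because the sine-cosine map's formula is not given in the abstract:

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary planes; index 7 is the MSB plane."""
    return [(img >> k) & 1 for k in range(8)]

def recombine(planes):
    """Reassemble the original image from its 8 bit planes."""
    out = np.zeros_like(planes[0])
    for k, plane in enumerate(planes):
        out = out | (plane << k)
    return out

img = np.array([[200, 13], [77, 255]], dtype=np.uint8)
planes = bit_planes(img)
msb_group = planes[4:]  # most significant planes -> first matrix
lsb_group = planes[:4]  # least significant planes -> second matrix
assert (recombine(planes) == img).all()  # decomposition is lossless
```

Because the MSB planes carry most of the visual information, applying confusion and diffusion to the two groups separately, as the paper does, lets the scheme concentrate effort where it matters for image secrecy.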
A Convolutional Neural Network Based Optical Character Recognition for Purely Handwritten Characters and Digits
20
Authors: Syed Atir Raza, Muhammad Shoaib Farooq, Uzma Farooq, Hanen Karamti, Tahir Khurshaid, Imran Ashraf 《Computers, Materials & Continua》 2025, Issue 8, pp. 3149-3173 (25 pages)
Urdu, a prominent subcontinental language, serves as a versatile means of communication. However, its handwritten forms present challenges for optical character recognition (OCR). While various OCR techniques have been proposed, most focus on recognizing printed Urdu characters and digits. To the best of our knowledge, very little research has focused solely on purely handwritten Urdu recognition, and the results of such methods are often inadequate. In this study, we introduce a novel approach to recognizing purely handwritten Urdu digits and characters using Convolutional Neural Networks (CNN). Our proposed method utilizes convolutional layers to extract important features from input images and classifies them using fully connected layers, enabling efficient and accurate detection of Urdu handwritten digits and characters. We implemented the proposed technique on a large, publicly available dataset of Urdu handwritten digits and characters. The findings demonstrate that the CNN model achieves an accuracy of 98.30% and an F1 score of 88.6%, indicating its effectiveness in detecting and classifying Urdu handwritten digits and characters. These results have far-reaching implications for various applications, including document analysis, text recognition, and language understanding, which have previously been unexplored in the context of Urdu handwriting data. This work lays a solid foundation for future research and development in Urdu language detection and processing, opening up new opportunities for advancement in this field.
Keywords: image processing; natural language processing; handwritten Urdu characters; optical character recognition; deep learning; feature extraction; classification
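The abstract describes the standard CNN pipeline, convolutional feature extraction followed by fully connected classification, without architectural details. The sketch below shows the valid 2-D convolution at the core of such a feature extractor, applied to a toy binary "glyph" with a hand-picked vertical-edge kernel; both the input and the kernel are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (no padding): the spatial feature extractor
    inside a CNN layer, written out explicitly for clarity."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Toy "glyph" with a vertical stroke boundary between columns 1 and 2
glyph = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right intensity jumps
print(conv2d(glyph, edge_kernel))  # peaks (value 1) exactly at the stroke edge
```

In a trained CNN such kernels are learned rather than hand-picked; stacking many of them, followed by the fully connected layers the abstract mentions, yields the classifier.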