Recently, several edge deployment types, such as on-premise edge clusters, Unmanned Aerial Vehicle (UAV)-attached edge devices, and telecommunication base stations equipped with edge clusters, have been deployed to enable faster response times for latency-sensitive tasks. One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and achieve high resource utilization. Existing approaches naively dispatch tasks to available edge nodes at random, without considering the resource demands of tasks, the inter-dependencies among tasks, or edge resource availability. These approaches can result in longer waiting times for tasks due to insufficient resource availability or dependency support, as well as provider lock-in. Therefore, we present Edge Colla, which is based on the integration of edge resources running across multiple edge deployments. Edge Colla leverages learning techniques to intelligently dispatch multi-dependent tasks, and a variant of bin-packing optimization to tightly co-locate these tasks on available nodes so that those nodes are optimally utilized. Extensive experiments on real-world task-dependency datasets from Alibaba show that our approach achieves better performance than the baseline schemes.
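The abstract does not detail Edge Colla's bin-packing variant; the sketch below only illustrates the underlying co-location idea with a plain first-fit-decreasing heuristic, and the task names, node names, and capacities are hypothetical.

```python
# Hypothetical sketch: first-fit-decreasing co-location of tasks onto edge nodes.
# Edge Colla's actual bin-packing variant is not described in the abstract.

def co_locate(tasks, nodes):
    """Assign each task (cpu, mem demand) to the first node with enough spare capacity."""
    placement = {}
    # Sort tasks by total demand, largest first, so big tasks are placed early.
    for name, (cpu, mem) in sorted(tasks.items(), key=lambda kv: -(kv[1][0] + kv[1][1])):
        for node, free in nodes.items():
            if free[0] >= cpu and free[1] >= mem:
                nodes[node] = (free[0] - cpu, free[1] - mem)
                placement[name] = node
                break
        else:
            placement[name] = None  # no node can host the task; it must wait
    return placement

tasks = {"t1": (2.0, 4.0), "t2": (1.0, 2.0), "t3": (4.0, 8.0)}   # (vCPU, GiB) demands
nodes = {"edge-a": (4.0, 8.0), "edge-b": (4.0, 8.0)}             # free node capacity
print(co_locate(tasks, nodes))
```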
This review examines human vulnerabilities in cybersecurity within Microfinance Institutions (MFIs), analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases such as IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, which significantly affects incident reduction. This paper recommends increased training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
The Internet of Things (IoT) is emerging as an innovative phenomenon concerned with the development of numerous vital applications. With the proliferation of IoT devices, huge amounts of information, including users' private data, are generated. IoT systems face major security and data-privacy challenges owing to their integral features such as scalability, resource constraints, and heterogeneity. These challenges are intensified by the fact that IoT technology frequently gathers and conveys complex data, creating an attractive opportunity for cyberattacks. To address these challenges, artificial intelligence (AI) techniques, such as machine learning (ML) and deep learning (DL), are utilized to build an intrusion detection system (IDS) that helps to secure IoT systems. Federated learning (FL) is a decentralized technique that can help to improve information privacy and performance by training the IDS on discrete linked devices. FL delivers an effective tool to defend user confidentiality, mainly in the field of IoT, where devices often collect privacy-sensitive personal data. This study develops a Privacy-Enhanced Federated Learning for Intrusion Detection using the Chameleon Swarm Algorithm and Artificial Intelligence (PEFLID-CSAAI) technique. The main aim of the PEFLID-CSAAI method is to recognize the existence of attack behavior in IoT networks. First, the PEFLID-CSAAI technique involves data preprocessing using Z-score normalization to transform the input data into a beneficial format. Then, the PEFLID-CSAAI method uses the Osprey Optimization Algorithm (OOA) for the feature selection (FS) model. For the classification of intrusion detection attacks, the Self-Attentive Variational Autoencoder (SA-VAE) technique is exploited. Finally, the Chameleon Swarm Algorithm (CSA) is applied for the hyperparameter fine-tuning of the SA-VAE model. A wide range of experiments was conducted to validate the performance of the PEFLID-CSAAI model. The simulated outcomes demonstrated that the PEFLID-CSAAI technique outperformed other recent models, highlighting its potential as a valuable tool for future applications in healthcare devices and small engineering systems.
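As a small illustration of the preprocessing step named above, the following is a minimal Z-score normalization sketch; the toy feature matrix is made up and not from the study.

```python
import numpy as np

# Minimal sketch of the Z-score normalization step described for PEFLID-CSAAI:
# each feature is shifted to zero mean and scaled to unit variance.
def z_score(X):
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant features
    return (X - mu) / sigma

X = np.array([[10.0, 200.0], [12.0, 180.0], [9.0, 220.0]])  # toy traffic features
print(z_score(X))
```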
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, for example aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
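As a minimal illustration of one of the two method families the survey reports as widely used, the sketch below trains a per-pixel support vector machine on synthetic spectra; the band count, class labels, and data are purely hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Illustrative only: per-pixel SVM classification of hyperspectral spectra.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 120))          # 500 pixels x 120 synthetic spectral bands
y = rng.integers(0, 3, size=500)         # 3 hypothetical crop classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```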
In this paper, we present a comprehensive system model for Industrial Internet of Things (IIoT) networks empowered by Non-Orthogonal Multiple Access (NOMA) and Mobile Edge Computing (MEC) technologies. The network comprises essential components such as base stations, edge servers, and numerous IIoT devices characterized by limited energy and computing capacities. The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption. The system operates in discrete time slots and employs a quasi-static approach, with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context. This study makes valuable contributions to the field by enhancing the understanding of resource-efficient management and task allocation, which is particularly relevant in real-time industrial applications. Experimental results indicate that our proposed algorithm significantly outperforms existing approaches, reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA, respectively, while achieving a 27.31% and 74.12% improvement in QnO. Moreover, the algorithm effectively balances complexity and network performance, as demonstrated when reducing the number of devices in each group (Ng) from 200 to 50, resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption. This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
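The abstract does not give the queue dynamics or define the QnO metric, so the following is only a generic per-slot queue-backlog update of the kind such delay-constrained MEC formulations typically track; all values are illustrative.

```python
# Generic per-slot queue-backlog update for a set of devices; not the paper's model.
def step(backlog, arrivals, served):
    """Q(t+1) = max(Q(t) - served, 0) + arrivals, per device."""
    return [max(q - s, 0) + a for q, a, s in zip(backlog, arrivals, served)]

q = [5, 0, 3]                                    # bits queued at three IIoT devices
q = step(q, arrivals=[2, 1, 0], served=[4, 0, 5])
print(q)                                         # -> [3, 1, 0]
```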
In situations where the precise position of a machine is unknown, localization becomes crucial. This research focuses on improving position prediction accuracy over a long-range (LoRa) network using an optimized machine learning (ML)-based technique, applied to reference-point data collected with the fingerprinting method. Received signal strength indicator (RSSI) data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory, round-layout building. The noise factor is also taken into account, and the signal-to-noise ratio (SNR) value is recorded for every RSSI measurement. The study then examines reference-point accuracy with a modified KNN method (MKNN), created to more precisely predict the position of the reference point. The findings showed that MKNN outperformed other algorithms in terms of both accuracy and complexity.
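The MKNN modification itself is not described in the abstract; the sketch below shows only the distance-weighted KNN baseline that fingerprint-based localization builds on, with made-up RSSI fingerprints and positions.

```python
import numpy as np

# Distance-weighted KNN position estimate from an RSSI fingerprint database.
def knn_locate(fingerprints, positions, rssi, k=3):
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                      # closer fingerprints weigh more
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

fingerprints = np.array([[-70.0, -80.0], [-60.0, -75.0], [-90.0, -65.0]])  # RSSI from 2 gateways
positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])                 # known (x, y) in metres
print(knn_locate(fingerprints, positions, rssi=np.array([-65.0, -77.0])))
```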
Intelligent Reflecting Surfaces (IRSs), with the potential capability to reconfigure the electromagnetic propagation environment, enable a new IRS-assisted covert communication paradigm that keeps detection by malicious eavesdroppers negligible by coherently beamforming the scattered signals and suppressing signal leakage. However, when multiple IRSs are involved, accurate channel estimation remains a challenge due to the extra hardware complexity and communication overhead. Besides the cross-interference caused by massive reflecting paths, it is hard to obtain a closed-form solution for the optimization of covert communications. On this basis, this paper develops an improved heterogeneous multi-agent deep deterministic policy gradient (MADDPG) approach for joint active and passive beamforming (Joint A&P BF) optimization without channel estimation, where the base station (BS) and multiple IRSs are treated as different types of agents that learn to enhance the covert spectrum efficiency (CSE) cooperatively. Thanks to the 'centralized training and distributed execution' feature of MADDPG, each agent can execute active or passive beamforming independently based on its partial observation, without referring to the others. Numerical results demonstrate that the proposed deep reinforcement learning (DRL) approach not only obtains a preferable CSE for legitimate users and a low probability of detection (LPD) at the warden, but also alleviates the communication overhead and simplifies IRS deployment.
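As a rough illustration of the 'centralized training and distributed execution' structure mentioned above, the sketch below gives per-agent actors that see only local observations and a centralized critic that sees the joint observations and actions; layer sizes and agent counts are placeholders, not the paper's heterogeneous BS/IRS design.

```python
import torch
import torch.nn as nn

# Structural sketch only: each actor acts on its own observation (distributed
# execution), while the critic used during training scores the joint state-action.
class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    def __init__(self, joint_obs_dim, joint_act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, 128),
                                 nn.ReLU(), nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor(obs_dim=8, act_dim=4) for _ in range(3)]        # e.g., one BS + two IRSs
critic = CentralCritic(joint_obs_dim=3 * 8, joint_act_dim=3 * 4)
obs = [torch.randn(1, 8) for _ in range(3)]
acts = [a(o) for a, o in zip(actors, obs)]                       # distributed execution
q = critic(torch.cat(obs, dim=-1), torch.cat(acts, dim=-1))     # centralized value
print(q.shape)
```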
In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees to accomplish their tasks efficiently. The cloud computing environment (CCE) is an unsung hero of the COVID-19 pandemic crisis: it consists of fast-paced service practices that reflect the trend toward rapidly deployable applications for maintaining data. Despite the increase in the use of CC applications, there are ongoing research challenges in the CCE domain concerning data, security guarantees, and the availability of CC applications. This paper, to the best of our knowledge, is the first to thoroughly explain the impact of the COVID-19 pandemic on the CCE. Additionally, it highlights the security risks of working from home during the COVID-19 pandemic.
Due to recent developments in communications technology, cognitive computation has been used in smart healthcare techniques that can combine massive medical data, artificial intelligence, federated learning, bio-inspired computation, and the Internet of Medical Things. It has helped in knowledge sharing and scaling between patients, doctors, and clinics for effective treatment of patients. Speech-based respiratory disease detection and monitoring are crucial in this direction and have shown several promising results. Since the subject's speech can be remotely recorded and submitted for further examination, it offers a quick, economical, dependable, and noninvasive prospective alternative detection approach. However, the two main requirements here are higher accuracy and lower computational complexity, and in many cases these two requirements conflict with each other. This problem is taken up in this paper to develop a neural network with low computational complexity and high accuracy. A cascaded perceptual functional link artificial neural network (PFLANN) is used to capture the nonlinearity in the data for better classification performance at low computational complexity. The proposed model is tested for multiple respiratory diseases, and the analysis of various performance metrics demonstrates the superior performance of the proposed model in terms of both accuracy and complexity.
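The cascaded 'perceptual' variant is not specified in the abstract, so the sketch below shows only the textbook functional-link idea it builds on: trigonometric basis expansion of the inputs ahead of a single trainable linear layer; the data and expansion order are illustrative.

```python
import numpy as np

# Generic functional-link expansion: each input feature is augmented with
# sine/cosine basis terms, after which a single linear layer would be trained.
def functional_link(x, order=2):
    terms = [x]
    for k in range(1, order + 1):
        terms += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
    return np.concatenate(terms, axis=-1)

x = np.random.rand(4, 3)            # 4 samples, 3 acoustic features scaled to [0, 1]
phi = functional_link(x)            # expanded to 3 * (1 + 2*2) = 15 features
w = np.zeros(phi.shape[1])          # weights of the single trainable layer
print(phi.shape, w.shape)
```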
Breast cancer remains a significant global health concern, with early detection being crucial for effective treatment and improved survival rates. This study introduces HERA-Net (Hybrid Extraction and Recognition Architecture), an advanced hybrid model designed to enhance the diagnostic accuracy of breast cancer detection by leveraging both thermographic and ultrasound imaging modalities. The HERA-Net model integrates powerful deep learning architectures, including VGG19, U-Net, GRU (Gated Recurrent Units), and ResNet-50, to capture multi-dimensional features that support robust image segmentation, feature extraction, and temporal analysis. For thermographic imaging, a comprehensive dataset of 3534 infrared (IR) images from the DMR (Database for Mastology Research) was utilized, with images captured by the high-resolution FLIR SC-620 camera. This dataset was partitioned with 70% of images allocated to training, 15% to validation, and 15% to testing, ensuring a balanced approach for model development and evaluation. To prepare the images, preprocessing steps included resizing, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for enhanced contrast, bilateral filtering for noise reduction, and Non-Local Means (NLMS) filtering to refine structural details. Statistical metrics such as mean, variance, standard deviation, entropy, kurtosis, and skewness were extracted to provide a detailed analysis of thermal distribution across samples. Similarly, the ultrasound dataset was processed to extract detailed anatomical features relevant to breast cancer diagnosis. Preprocessing involved grayscale conversion, bilateral filtering, and Multipurpose Beta Optimized Bihistogram Equalization (MBOBHE) for contrast enhancement, followed by segmentation using Geodesic Active Contours. The ultrasound and thermographic datasets were subsequently fed into HERA-Net, where VGG19 and U-Net were applied for feature extraction and segmentation, GRU for temporal pattern recognition, and ResNet-50 for classification. The performance assessment of HERA-Net on both imaging modalities demonstrated a high degree of diagnostic accuracy, with the proposed model achieving an overall accuracy of 99.86% in breast cancer detection, surpassing other models such as VGG16 (99.80%) and Inception V3 (99.64%). In terms of sensitivity, HERA-Net reached a flawless 100%, indicating its ability to correctly identify all positive cases, while maintaining a specificity of 99.81%, significantly reducing the likelihood of false positives. The model's robustness was further illustrated through cross-entropy loss convergence and ROC (Receiver Operating Characteristic) curves, with the combined ROC curve showing consistent discrimination ability across training, validation, and testing phases. Overall, the HERA-Net model's integration of thermographic and ultrasound imaging, combined with advanced deep learning techniques, showcases a powerful approach to breast cancer detection, achieving unprecedented accuracy and sensitivity.
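A minimal sketch of the thermographic preprocessing chain listed above (resize, CLAHE, bilateral filtering, non-local-means denoising) using standard OpenCV calls; the input frame is synthetic and the parameter values are illustrative, not those used in HERA-Net.

```python
import cv2
import numpy as np

# Illustrative preprocessing of a grayscale IR frame; the array below stands in
# for a real DMR image so the snippet runs without any file on disk.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
img = cv2.resize(img, (224, 224))
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img = clahe.apply(img)                         # contrast-limited adaptive equalization
img = cv2.bilateralFilter(img, 9, 75, 75)      # edge-preserving noise reduction
img = cv2.fastNlMeansDenoising(img, None, 10)  # non-local-means refinement

# A few of the statistical descriptors mentioned in the abstract.
hist = np.bincount(img.ravel(), minlength=256) / img.size
entropy = -(hist[hist > 0] * np.log2(hist[hist > 0])).sum()
print(img.mean(), img.var(), entropy)
```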
Software cost estimation is a crucial aspect of software project management, significantly impacting productivity and planning. This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the CoCoMo NASA dataset, which comprises data from 93 unique software projects with 24 attributes. By applying multiple machine learning algorithms alongside three feature selection methods, this study aims to reduce data redundancy and enhance model accuracy. Our findings reveal that the principal component analysis (PCA)-based feature selection technique achieved the highest performance, underscoring the importance of optimal feature selection in improving software cost estimation accuracy. It is demonstrated that our proposed method outperforms the existing method while achieving the highest precision, accuracy, and recall rates.
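A small sketch of PCA-based feature reduction ahead of an effort-estimation model; the learner, the number of retained components, and the synthetic data below are assumptions, not the study's actual setup or the CoCoMo NASA records.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in with the same shape as the dataset: 93 projects x 24 attributes.
rng = np.random.default_rng(1)
X = rng.normal(size=(93, 24))
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=93)   # synthetic effort target

model = make_pipeline(StandardScaler(), PCA(n_components=8), LinearRegression())
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```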
Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and in communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only limited and common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model, named HVCNNM, is proposed; it offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, such models can be optimized for real-time performance, learn from large amounts of data, and scale to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
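As an illustration of a deliberately small CNN of the kind described (few channels, suited to real-time HCI), the sketch below builds a toy classifier; the layer sizes and the 26-class output are placeholders, not the HVCNNM design.

```python
import torch
import torch.nn as nn

# Toy gesture classifier: two small convolutional blocks followed by a linear head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 26),          # e.g., 26 ASL alphabet classes
)
x = torch.randn(1, 3, 64, 64)             # one 64x64 RGB hand image
print(model(x).shape)                      # -> torch.Size([1, 26])
```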
COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus. The foremost sectors among those affected were schools, colleges, and universities, and the education systems of entire nations shifted to online education during this time. Many shortcomings of Learning Management Systems (LMSs) in supporting online education were detected, which spawned research into Artificial Intelligence (AI)-based tools being developed by the research community to improve the effectiveness of LMSs. This paper presents a detailed survey of the different enhancements to LMSs that are led by key advances in the area of AI to enhance the real-time and non-real-time user experience. The AI-based enhancements proposed for LMSs start from the Application and Presentation layers in the form of flipped classroom models for an efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources, including AI-based chatbots. Session layer enhancements are also required, such as AI-based online proctoring and user authentication using biometrics. These extend to the Transport layer to support real-time and rate-adaptive encrypted video transmission for user security/privacy and satisfactory working of AI algorithms. Support is also needed from the Networking layer for IP-based geolocation features, the Virtual Private Network (VPN) feature, and Software-Defined Networks (SDN) for optimum Quality of Service (QoS). Finally, in addition to these, the non-real-time user experience is enhanced by other AI-based enhancements such as plagiarism detection algorithms and data analytics.
Devices and networks constantly upgrade, leading to rapid technological evolution. Three-dimensional (3D) point cloud transmission plays a crucial role in aerial computing, facilitating information exchange. Various network types, including sensor networks and 5G mobile networks, support this transmission. Notably, Flying Ad hoc Networks (FANETs) utilize Unmanned Aerial Vehicles (UAVs) as nodes, operating in a 3D environment with Six Degrees of Freedom (6DoF). This study comprehensively surveys UAV networks, focusing on models for Light Detection and Ranging (LiDAR) 3D point cloud compression and transmission. Key topics covered include autonomous navigation, challenges in video streaming infrastructure, motivations for Quality of Experience (QoE) enhancement, and avenues for future research. Additionally, the paper conducts an extensive review of UAVs, encompassing current wireless technologies, applications across various sectors, routing protocols, design considerations, security measures, blockchain applications in UAVs, contributions to healthcare systems, and integration with the Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Furthermore, the paper thoroughly discusses the core contributions of LiDAR 3D point clouds in UAV systems and their future prediction, along with mobility models. It also explores the prospects of UAV systems and presents state-of-the-art solutions.
Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to effectively address complex and challenging optimization problems. This approach aims to leverage the strengths of multiple algorithms, enhancing solution quality, convergence speed, and robustness, thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks. In this paper, we introduce a hybrid algorithm that amalgamates three distinct metaheuristics: the Beluga Whale Optimization (BWO), the Honey Badger Algorithm (HBA), and the Jellyfish Search (JS) optimizer. The proposed hybrid algorithm is referred to as BHJO. Through this fusion, the BHJO algorithm aims to leverage the strengths of each optimizer. Before this hybridization, we thoroughly examined the exploration and exploitation capabilities of the BWO, HBA, and JS metaheuristics, as well as their ability to strike a balance between exploration and exploitation. This meticulous analysis allowed us to identify the pros and cons of each algorithm, enabling us to combine them in a novel hybrid approach that capitalizes on their respective strengths for enhanced optimization performance. In addition, the BHJO algorithm incorporates Opposition-Based Learning (OBL) to harness the advantages offered by this technique, leveraging its diverse exploration, accelerated convergence, and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm. Moreover, the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems, providing a comprehensive assessment of its efficacy and applicability in diverse problem domains. Similarly, the BHJO algorithm was subjected to a comparative analysis with several renowned algorithms, where mean and standard deviation values were utilized as evaluation metrics. This rigorous comparison aimed to assess the performance of the BHJO algorithm against its counterparts, shedding light on its effectiveness and reliability in solving optimization problems. Finally, the obtained numerical statistics underwent rigorous analysis using the Friedman test followed by Dunn's post hoc test. The resulting values revealed the BHJO algorithm's competitiveness in tackling intricate optimization problems, affirming its capability to deliver favorable outcomes in challenging scenarios.
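Opposition-Based Learning is a well-defined component that can be sketched concretely: for each candidate x in [lb, ub], the opposite point lb + ub - x is also evaluated and the better half of the combined set is kept. The sphere fitness below is a stand-in, not one of the paper's benchmark problems.

```python
import numpy as np

# One Opposition-Based Learning step over a population of candidate solutions.
def obl_step(pop, lb, ub, fitness):
    opposite = lb + ub - pop                        # opposite point of each candidate
    both = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, both)
    best = np.argsort(scores)[: len(pop)]           # keep the best half (minimization)
    return both[best]

rng = np.random.default_rng(2)
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, size=(10, 4))             # 10 candidates, 4 dimensions
pop = obl_step(pop, lb, ub, fitness=lambda x: np.sum(x ** 2))
print(pop.shape)
```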
In the cloud environment, ensuring a high level of data security is in high demand. Data planning and storage optimization are part of the whole security process in the cloud environment, enabling data security by avoiding the risks of data loss and data overlapping. The development of data flow scheduling approaches for the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the Collector Agent, which must periodically collect information on the state of the network links. The second is the Monitoring Agent, which must then analyze, classify, and make a decision on the state of the link and finally transmit this information to the scheduler. The third is the Scheduler, which must consider the previous information to transfer user data, including fair distribution and reliable paths. It should be noted that each part of the proposed model requires the development of its own algorithms. In this article, we are interested in the development of data transfer algorithms, including fair distribution with consideration of a stable link state. These algorithms are based on the grouping of transmitted files and an iterative method. The proposed algorithms demonstrate the ability to obtain an approximate solution to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME) algorithm, with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, heavily rely on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with the acquisition and preprocessing of the image, followed by segmentation using the fine-tuned UNet architecture. Afterward, we use an annotation tool to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE (Accelerated Segmentation and Extraction) features, energy-based edge detection, frequency-based features, and blob characteristics. For the classification stage, a convolutional neural network (CNN) is employed. This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images. Experimental results on complex object datasets such as MSRC-v2 and PASCAL-VOC12 have been documented. Analysis of these results shows that the PASCAL-VOC12 dataset achieved an accuracy rate of 95%, while the MSRC-v2 dataset achieved an accuracy of 89%. The evaluation performed on these diverse datasets highlights a notably impressive level of performance.
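A brief sketch of the KAZE feature-extraction step named above using OpenCV, together with a simple edge cue; the input region is synthetic, and the UNet segmentation and CNN classification stages are not shown.

```python
import cv2
import numpy as np

# KAZE keypoints/descriptors and a Canny edge cue on a stand-in segmented region.
region = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # synthetic grayscale patch
kaze = cv2.KAZE_create()
keypoints, descriptors = kaze.detectAndCompute(region, None)
edges = cv2.Canny(region, 100, 200)

# On random noise, KAZE may find no keypoints, so guard the descriptor shape.
print(len(keypoints), None if descriptors is None else descriptors.shape, edges.mean())
```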
In the contemporary era, the death rate due to lung cancer is increasing. However, technology is continuously enhancing the quality of well-being. To improve the survival rate, radiologists rely on Computed Tomography (CT) scans for early detection and diagnosis of lung nodules. This paper presents a detailed, systematic review of several identification and categorization techniques for lung nodules. The analysis explores the challenges, advancements, and future perspectives of computer-aided diagnosis (CAD) systems for detecting and classifying lung nodules employing deep learning (DL) algorithms. The findings also highlight the usefulness of DL networks, especially convolutional neural networks (CNNs), in elevating sensitivity, accuracy, and specificity as well as overcoming false positives in the initial stages of lung cancer detection. The paper further presents the integral nodule classification stage, which stresses the importance of differentiating between benign and malignant nodules for initial cancer diagnosis. Moreover, the findings present a comprehensive analysis of multiple techniques and studies for nodule classification, highlighting the evolution of methodologies from conventional machine learning (ML) classifiers to transfer learning and integrated CNNs. Finally, while acknowledging the strides made by CAD systems, the review addresses persistent challenges.
An Operating System (OS) is a critical piece of software that manages a computer's hardware and resources, acting as the intermediary between the computer and the user. Existing OSs are not designed for Big Data and Cloud Computing, resulting in inefficient data processing and management. This paper proposes a simplified and improved kernel on an x86 system designed for Big Data and Cloud Computing purposes. The proposed algorithm leverages the benefits of the improved Input/Output (I/O) performance. The performance engineering applies data-oriented design to traditional data management to improve data processing speed by reducing memory-access overheads. The OS incorporates a data-oriented design to "modernize" various Data Science and management aspects. The resulting OS contains a basic input/output system (BIOS) bootloader that boots into Intel 32-bit protected mode, a text display terminal, 4 GB paging memory, a heap block size of 4096, a Hard Disk Drive (HDD) I/O Advanced Technology Attachment (ATA) driver, and more. There are also I/O scheduling algorithm prototypes that demonstrate how a simple sweeping algorithm is superior to more conventionally known I/O scheduling algorithms. A MapReduce prototype is implemented using the Message Passing Interface (MPI) for Big Data purposes. An attempt was made to optimize binary search using modern performance engineering and data-oriented design.
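A minimal sketch of a sweeping (elevator/SCAN-style) I/O scheduler of the kind the prototypes compare: the head services all pending requests in its current direction of travel before reversing; the request queue below is made up, and the kernel's actual implementation is not reproduced here.

```python
# Order pending block requests in a single sweep from the current head position.
def sweep(head, requests, ascending=True):
    lower = sorted(r for r in requests if r < head)
    upper = sorted(r for r in requests if r >= head)
    return (upper + lower[::-1]) if ascending else (lower[::-1] + upper)

pending = [95, 180, 34, 119, 11, 123, 62, 64]
print(sweep(head=50, requests=pending))     # -> [62, 64, 95, 119, 123, 180, 34, 11]
```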
Cardiovascular disease prediction is a significant area of research in healthcare management systems (HMS). We will only be able to reduce the number of deaths if we anticipate cardiac problems in advance. Existing heart disease detection systems using machine learning have not yet produced sufficient results due to their reliance on the available data. We present Clustered Butterfly Optimization Techniques (RoughK-means + BOA) as a new hybrid method for predicting heart disease. This method comprises two phases: clustering the data using Rough k-means (RKM) and analyzing the data using the butterfly optimization algorithm (BOA). The benchmark dataset from the UCI repository is used for our experiments. The experiments are divided into three sets: the first set involves the RKM clustering technique, the next set evaluates the classification outcomes, and the last set validates the performance of the proposed hybrid model. The proposed RoughK-means + BOA achieved a reasonable accuracy of 97.03% and a minimal error rate of 2.97%. This result is comparatively better than other combinations of optimization techniques. In addition, this approach effectively enhances data segmentation, optimization, and classification performance.
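The abstract does not give the BOA configuration, so the sketch below only outlines one generic Butterfly Optimization iteration as commonly formulated (fragrance-weighted moves toward the best solution or toward random peers); the constants, the stimulus-intensity mapping, and the sphere fitness are illustrative stand-ins, not the paper's settings.

```python
import numpy as np

# One generic BOA iteration: fragrance f = c * I**a, then global or local moves.
def boa_step(pop, fitness, c=0.01, a=0.1, p=0.8, rng=None):
    rng = rng or np.random.default_rng(0)
    scores = np.array([fitness(x) for x in pop])
    intensity = 1.0 / (1.0 + scores)               # lower (better) fitness -> stronger stimulus
    frag = c * intensity ** a
    best = pop[np.argmin(scores)]
    new = pop.copy()
    for i in range(len(pop)):
        r = rng.random()
        if rng.random() < p:                        # global search toward the best solution
            new[i] = pop[i] + (r * r * best - pop[i]) * frag[i]
        else:                                       # local search using two random peers
            j, k = rng.integers(0, len(pop), size=2)
            new[i] = pop[i] + (r * r * pop[j] - pop[k]) * frag[i]
    return new

pop = np.random.default_rng(1).uniform(-1.0, 1.0, size=(6, 5))   # 6 candidates, 5 features
print(boa_step(pop, fitness=lambda x: float(np.sum(x ** 2))).shape)
```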
基金The financial support of the National Natural Science Foundation of China under grants 61901416 and 61571401(part of the Natural Science Foundation of Henan under grant 242300420269)the Young Elite Scientists Sponsorship Program of Henan under grant 2024HYTP026the Innovative Talent of Colleges and the University of Henan Province under grant 18HASTIT021。
文摘Recently,several edge deployment types,such as on-premise edge clusters,Unmanned Aerial Vehicles(UAV)-attached edge devices,telecommunication base stations installed with edge clusters,etc.,are being deployed to enable faster response time for latency-sensitive tasks.One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and to achieve high resource utilization.Existing approaches randomly dispatch tasks naively to available edge nodes without considering the resource demands of tasks,inter-dependencies of tasks and edge resource availability.These approaches can result in the longer waiting time for tasks due to insufficient resource availability or dependency support,as well as provider lock-in.Therefore,we present Edge Colla,which is based on the integration of edge resources running across multi-edge deployments.Edge Colla leverages learning techniques to intelligently dispatch multidependent tasks,and a variant bin-packing optimization method to co-locate these tasks firmly on available nodes to optimally utilize them.Extensive experiments on real-world datasets from Alibaba on task dependencies show that our approach can achieve optimal performance than the baseline schemes.
文摘This review examines human vulnerabilities in cybersecurity within Microfinance Institutions, analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases like IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, significantly impacting incident reduction. This paper recommends enhanced training frequency, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
基金funded by the Deanship of Scientific Research at Northern Border University,Arar,Saudi Arabia,under grant number NBU-FFR-2025-451-6.
文摘The Internet of Things(IoT)is emerging as an innovative phenomenon concerned with the development of numerous vital applications.With the development of IoT devices,huge amounts of information,including users’private data,are generated.IoT systems face major security and data privacy challenges owing to their integral features such as scalability,resource constraints,and heterogeneity.These challenges are intensified by the fact that IoT technology frequently gathers and conveys complex data,creating an attractive opportunity for cyberattacks.To address these challenges,artificial intelligence(AI)techniques,such as machine learning(ML)and deep learning(DL),are utilized to build an intrusion detection system(IDS)that helps to secure IoT systems.Federated learning(FL)is a decentralized technique that can help to improve information privacy and performance by training the IDS on discrete linked devices.FL delivers an effectual tool to defend user confidentiality,mainly in the field of IoT,where IoT devices often obtain privacy-sensitive personal data.This study develops a Privacy-Enhanced Federated Learning for Intrusion Detection using the Chameleon Swarm Algorithm and Artificial Intelligence(PEFLID-CSAAI)technique.The main aim of the PEFLID-CSAAI method is to recognize the existence of attack behavior in IoT networks.First,the PEFLIDCSAAI technique involves data preprocessing using Z-score normalization to transformthe input data into a beneficial format.Then,the PEFLID-CSAAI method uses the Osprey Optimization Algorithm(OOA)for the feature selection(FS)model.For the classification of intrusion detection attacks,the Self-Attentive Variational Autoencoder(SA-VAE)technique can be exploited.Finally,the Chameleon Swarm Algorithm(CSA)is applied for the hyperparameter finetuning process that is involved in the SA-VAE model.A wide range of experiments were conducted to validate the execution of the PEFLID-CSAAI model.The simulated outcomes demonstrated that the PEFLID-CSAAI technique outperformed other recent models,highlighting its potential as a valuable tool for future applications in healthcare devices and small engineering systems.
文摘Recently,there has been a notable surge of interest in scientific research regarding spectral images.The potential of these images to revolutionize the digital photography industry,like aerial photography through Unmanned Aerial Vehicles(UAVs),has captured considerable attention.One encouraging aspect is their combination with machine learning and deep learning algorithms,which have demonstrated remarkable outcomes in image classification.As a result of this powerful amalgamation,the adoption of spectral images has experienced exponential growth across various domains,with agriculture being one of the prominent beneficiaries.This paper presents an extensive survey encompassing multispectral and hyperspectral images,focusing on their applications for classification challenges in diverse agricultural areas,including plants,grains,fruits,and vegetables.By meticulously examining primary studies,we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use.Additionally,our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context.The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture.Nevertheless,we also shed light on the various issues and limitations of working with spectral images.This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
基金the Deanship of Scientific Research at King Khalid University for funding this work through large group research project under Grant Number RGP2/474/44.
文摘In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises essential components such as base stations,edge servers,and numerous IIoT devices characterized by limited energy and computing capacities.The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption.The system operates in discrete time slots and employs a quasi-static approach,with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context.This study makes valuable contributions to the field by enhancing the understanding of resourceefficient management and task allocation,particularly relevant in real-time industrial applications.Experimental results indicate that our proposed algorithmsignificantly outperforms existing approaches,reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Qn O.Moreover,the algorithmeffectively balances complexity and network performance,as demonstratedwhen reducing the number of devices in each group(Ng)from 200 to 50,resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption.This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
基金The research will be funded by the Multimedia University,Department of Information Technology,Persiaran Multimedia,63100,Cyberjaya,Selangor,Malaysia.
文摘In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.
基金supported by the Key Laboratory of Near Ground Detection and Perception Technology(No.6142414220406 and 6142414210101)Shaanxi and Taicang Keypoint Research and Invention Program(No.2021GXLH-01-15 and TC2019SF03)。
文摘Intelligent Reflecting Surface(IRS),with the potential capability to reconstruct the electromagnetic propagation environment,evolves a new IRSassisted covert communications paradigm to eliminate the negligible detection of malicious eavesdroppers by coherently beaming the scattered signals and suppressing the signals leakage.However,when multiple IRSs are involved,accurate channel estimation is still a challenge due to the extra hardware complexity and communication overhead.Besides the crossinterference caused by massive reflecting paths,it is hard to obtain the close-formed solution for the optimization of covert communications.On this basis,the paper improves a heterogeneous multi-agent deep deterministic policy gradient(MADDPG)approach for the joint active and passive beamforming(Joint A&P BF)optimization without the channel estimation,where the base station(BS)and multiple IRSs are taken as different types of agents and learn to enhance the covert spectrum efficiency(CSE)cooperatively.Thanks to the‘centralized training and distributed execution’feature of MADDPG,each agent can execute the active or passive beamforming independently based on its partial observation without referring to others.Numeral results demonstrate that the proposed deep reinforcement learning(DRL)approach could not only obtain a preferable CSE of legitimate users and a low detection of probability(LPD)of warden,but also alleviate the communication overhead and simplify the IRSs deployment.
文摘In light of the coronavirus disease 2019(COVID-19)outbreak caused by the novel coronavirus,companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion.Employees,however,have been exposed to different security risks because of working from home.Moreover,the rapid global spread of COVID-19 has increased the volume of data generated from various sources.Working from home depends mainly on cloud computing(CC)applications that help employees to efficiently accomplish their tasks.The cloud computing environment(CCE)is an unsung hero in the COVID-19 pandemic crisis.It consists of the fast-paced practices for services that reflect the trend of rapidly deployable applications for maintaining data.Despite the increase in the use of CC applications,there is an ongoing research challenge in the domains of CCE concerning data,guaranteeing security,and the availability of CC applications.This paper,to the best of our knowledge,is the first paper that thoroughly explains the impact of the COVID-19 pandemic on CCE.Additionally,this paper also highlights the security risks of working from home during the COVID-19 pandemic.
文摘Due to the recent developments in communications technology,cognitive computations have been used in smart healthcare techniques that can combine massive medical data,artificial intelligence,federated learning,bio-inspired computation,and the Internet of Medical Things.It has helped in knowledge sharing and scaling ability between patients,doctors,and clinics for effective treatment of patients.Speech-based respiratory disease detection and monitoring are crucial in this direction and have shown several promising results.Since the subject’s speech can be remotely recorded and submitted for further examination,it offers a quick,economical,dependable,and noninvasive prospective alternative detection approach.However,the two main requirements of this are higher accuracy and lower computational complexity and,in many cases,these two requirements do not correlate with each other.This problem has been taken up in this paper to develop a low computational complexity-based neural network with higher accuracy.A cascaded perceptual functional link artificial neural network(PFLANN)is used to capture the nonlinearity in the data for better classification performance with low computational complexity.The proposed model is being tested for multiple respiratory diseases,and the analysis of various performance matrices demonstrates the superior performance of the proposed model both in terms of accuracy and complexity.
文摘Breast cancer remains a significant global health concern,with early detection being crucial for effective treatment and improved survival rates.This study introduces HERA-Net(Hybrid Extraction and Recognition Architec-ture),an advanced hybrid model designed to enhance the diagnostic accuracy of breast cancer detection by leveraging both thermographic and ultrasound imaging modalities.The HERA-Net model integrates powerful deep learning architectures,including VGG19,U-Net,GRU(Gated Recurrent Units),and ResNet-50,to capture multi-dimensional features that support robust image segmentation,feature extraction,and temporal analysis.For thermographic imaging,a comprehensive dataset of 3534 infrared(IR)images from the DMR(Database for Mastology Research)was utilized,with images captured by the high-resolution FLIR SC-620 camera.This dataset was partitioned with 70%of images allocated to training,15%to validation,and 15%to testing,ensuring a balanced approach for model development and evaluation.To prepare the images,preprocessing steps included resizing,Contrast-Limited Adaptive Histogram Equalization(CLAHE)for enhanced contrast,bilateral filtering for noise reduction,and Non-Local Means(NLMS)filtering to refine structural details.Statistical metrics such as mean,variance,standard deviation,entropy,kurtosis,and skewness were extracted to provide a detailed analysis of thermal distribution across samples.Similarly,the ultrasound dataset was processed to extract detailed anatomical features relevant to breast cancer diagnosis.Preprocessing involved grayscale conversion,bilateral filtering,and Multipurpose Beta Optimized Bihistogram Equalization(MBOBHE)for contrast enhancement,followed by segmentation using Geodesic Active Contours.The ultrasound and thermographic datasets were subsequently fed into HERA-Net,where VGG19 and U-Net were applied for feature extraction and segmentation,GRU for temporal pattern recognition,and ResNet-50 for classification.The performance assessment of HERA-Net on both imaging modalities demonstrated a high degree of diagnostic accuracy,with the proposed model achieving an overall accuracy of 99.86%in breast cancer detection,surpassing other models such as VGG16(99.80%)and Inception V3(99.64%).In terms of sensitivity,HERA-Net reached a flawless 100%,indicating its ability to correctly identify all positive cases,while maintaining a specificity of 99.81%,significantly reducing the likelihood of false positives.The model’s robustness was further illustrated through cross-entropy loss convergence and ROC(Receiver Operating Characteristic)curves,with the combined ROC curve showing consistent discrimination ability across training,validation,and testing phases.Overall,the HERA-Net model’s integration of thermographic and ultrasound imaging,combined with advanced deep learning techniques,showcases a powerful approach to breast cancer detection,achieving unprecedented accuracy and sensitivity.
文摘Software cost estimation is a crucial aspect of software project management,significantly impacting productivity and planning.This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the CoCoMo NASA dataset,which comprises data from 93 unique software projects with 24 attributes.By applying multiple machine learning algorithms alongside three feature selection methods,this study aims to reduce data redundancy and enhance model accuracy.Our findings reveal that the principal component analysis(PCA)-based feature selection technique achieved the highest performance,underscoring the importance of optimal feature selection in improving software cost estimation accuracy.It is demonstrated that our proposed method outperforms the existing method while achieving the highest precision,accuracy,and recall rates.
基金funded by Researchers Supporting Project Number(RSPD2024 R947),King Saud University,Riyadh,Saudi Arabia.
文摘Hand gestures have been used as a significant mode of communication since the advent of human civilization.By facilitating human-computer interaction(HCI),hand gesture recognition(HGRoc)technology is crucial for seamless and error-free HCI.HGRoc technology is pivotal in healthcare and communication for the deaf community.Despite significant advancements in computer vision-based gesture recognition for language understanding,two considerable challenges persist in this field:(a)limited and common gestures are considered,(b)processing multiple channels of information across a network takes huge computational time during discriminative feature extraction.Therefore,a novel hand vision-based convolutional neural network(CNN)model named(HVCNNM)offers several benefits,notably enhanced accuracy,robustness to variations,real-time performance,reduced channels,and scalability.Additionally,these models can be optimized for real-time performance,learn from large amounts of data,and are scalable to handle complex recognition tasks for efficient human-computer interaction.The proposed model was evaluated on two challenging datasets,namely the Massey University Dataset(MUD)and the American Sign Language(ASL)Alphabet Dataset(ASLAD).On the MUD and ASLAD datasets,HVCNNM achieved a score of 99.23% and 99.00%,respectively.These results demonstrate the effectiveness of CNN as a promising HGRoc approach.The findings suggest that the proposed model have potential roles in applications such as sign language recognition,human-computer interaction,and robotics.
文摘COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus.The foremost and most prime sector among those affected were schools,colleges,and universities.The education system of entire nations had shifted to online education during this time.Many shortcomings of Learning Management Systems(LMSs)were detected to support education in an online mode that spawned the research in Artificial Intelligence(AI)based tools that are being developed by the research community to improve the effectiveness of LMSs.This paper presents a detailed survey of the different enhancements to LMSs,which are led by key advances in the area of AI to enhance the real-time and non-real-time user experience.The AI-based enhancements proposed to the LMSs start from the Application layer and Presentation layer in the form of flipped classroom models for the efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources,including AI-based chatbots.Session layer enhancements are also required,such as AI-based online proctoring and user authentication using Biometrics.These extend to the Transport layer to support real-time and rate adaptive encrypted video transmission for user security/privacy and satisfactory working of AI-algorithms.It also needs the support of the Networking layer for IP-based geolocation features,the Virtual Private Network(VPN)feature,and the support of Software-Defined Networks(SDN)for optimum Quality of Service(QoS).Finally,in addition to these,non-real-time user experience is enhanced by other AI-based enhancements such as Plagiarism detection algorithms and Data Analytics.
基金supported by the Researchers Supporting Project number(RSP2024R395),King Saud University,Riyadh,Saudi Arabia.
文摘Devices and networks constantly upgrade,leading to rapid technological evolution.Three-dimensional(3D)point cloud transmission plays a crucial role in aerial computing terminology,facilitating information exchange.Various network types,including sensor networks and 5G mobile networks,support this transmission.Notably,Flying Ad hoc Networks(FANETs)utilize Unmanned Aerial Vehicles(UAVs)as nodes,operating in a 3D environment with Six Degrees of Freedom(6DoF).This study comprehensively surveys UAV networks,focusing on models for Light Detection and Ranging(LiDAR)3D point cloud compression/transmission.Key topics covered include autonomous navigation,challenges in video streaming infrastructure,motivations for Quality of Experience(QoE)enhancement,and avenues for future research.Additionally,the paper conducts an extensive review of UAVs,encompassing current wireless technologies,applications across various sectors,routing protocols,design considerations,security measures,blockchain applications in UAVs,contributions to healthcare systems,and integration with the Internet of Things(IoT),Artificial Intelligence(AI),Machine Learning(ML),and Deep Learning(DL).Furthermore,the paper thoroughly discusses the core contributions of LiDAR 3D point clouds in UAV systems and their future prediction along with mobility models.It also explores the prospects of UAV systems and presents state-of-the-art solutions.
Funding: funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Hybridizing metaheuristic algorithms involves synergistically combining different optimization techniques to address complex and challenging optimization problems. This approach aims to leverage the strengths of multiple algorithms, enhancing solution quality, convergence speed, and robustness, thereby offering a more versatile and efficient means of solving intricate real-world optimization tasks. In this paper, we introduce a hybrid algorithm that amalgamates three distinct metaheuristics: the Beluga Whale Optimization (BWO), the Honey Badger Algorithm (HBA), and the Jellyfish Search (JS) optimizer. The proposed hybrid algorithm, referred to as BHJO, aims to leverage the strengths of each optimizer. Before this hybridization, we thoroughly examined the exploration and exploitation capabilities of the BWO, HBA, and JS metaheuristics, as well as their ability to strike a balance between exploration and exploitation. This analysis allowed us to identify the pros and cons of each algorithm and to combine them in a novel hybrid approach that capitalizes on their respective strengths. In addition, the BHJO algorithm incorporates Opposition-Based Learning (OBL), leveraging its diverse exploration, accelerated convergence, and improved solution quality to enhance the overall performance and effectiveness of the hybrid algorithm. Moreover, the performance of the BHJO algorithm was evaluated across a range of both unconstrained and constrained optimization problems, providing a comprehensive assessment of its efficacy and applicability in diverse problem domains. Similarly, the BHJO algorithm was subjected to a comparative analysis with several renowned algorithms, using mean and standard deviation values as evaluation metrics. This comparison aimed to assess the performance of the BHJO algorithm relative to its counterparts, shedding light on its effectiveness and reliability in solving optimization problems. Finally, the obtained numerical statistics underwent rigorous analysis using the Friedman test with Dunn's post hoc test. The resulting values revealed the BHJO algorithm's competitiveness in tackling intricate optimization problems, affirming its capability to deliver favorable outcomes in challenging scenarios.
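The abstract names Opposition-Based Learning as the mechanism that adds diverse exploration to BHJO. The sketch below shows only the standard OBL step under box constraints, with a greedy keep-the-better rule; how BHJO interleaves this step with the BWO, HBA, and JS phases is not described in the abstract, so the surrounding usage and the toy sphere objective are assumptions.

```python
# Minimal sketch of an Opposition-Based Learning (OBL) step for a population
# of candidate solutions under box constraints [lb, ub]. The objective and
# the way BHJO interleaves OBL with its three metaheuristic phases are
# illustrative assumptions.
import numpy as np

def obl_step(population: np.ndarray, fitness, lb: np.ndarray, ub: np.ndarray) -> np.ndarray:
    """Replace each individual with its opposite point (lb + ub - x)
    whenever the opposite point has better (lower) fitness."""
    opposite = lb + ub - population                     # element-wise opposition
    f_pop = np.apply_along_axis(fitness, 1, population)
    f_opp = np.apply_along_axis(fitness, 1, opposite)
    keep_opposite = f_opp < f_pop                       # minimization: smaller is better
    return np.where(keep_opposite[:, None], opposite, population)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lb, ub = np.full(5, -10.0), np.full(5, 10.0)
    pop = rng.uniform(lb, ub, size=(20, 5))
    sphere = lambda x: float(np.sum(x ** 2))            # toy objective
    pop = obl_step(pop, sphere, lb, ub)
    print(min(sphere(x) for x in pop))
```

Applying such a step periodically (for example, after each generation) is a common way OBL accelerates convergence without changing the host algorithm's update rules.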
Funding: the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number (IFP-2022-34).
Abstract: In the cloud environment, a high level of data security is in strong demand. Data storage planning optimization is part of the overall security process in the cloud environment: it supports data security by avoiding the risk of data loss and data overlapping. The development of data-flow scheduling approaches for the cloud environment that take security parameters into account remains insufficient. In this work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which periodically collects information on the state of the network links. The second is the monitoring agent, which analyzes and classifies the state of each link, makes a decision, and transmits this information to the scheduler. The third is the scheduler, which uses this information to transfer user data while ensuring fair distribution and reliable paths. Each part of the proposed model requires the development of its own algorithms. In this article, we focus on the data transfer algorithms, including fair distribution over links in a stable state. These algorithms are based on grouping the transmitted files and on an iterative method. The proposed algorithms yield an approximate solution to the studied problem, which is NP-hard (non-deterministic polynomial-time hard). The experimental results show that the best algorithm is the half-grouped minimum excluding (HME) algorithm, with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
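The exact rules of the half-grouped minimum excluding (HME) algorithm are not given in the abstract. Purely as an illustration of the grouping-plus-iterative idea, the following sketch splits files into fixed-size groups and iteratively assigns each group to the least-loaded of the links classified as stable; the group size, the load metric, and the link names are all hypothetical.

```python
# Hedged sketch of a grouping-and-iterative dispatch idea: files are split
# into groups and each group is assigned to the currently least-loaded of
# the links classified as stable. The actual HME rules (how groups are
# formed and which links are excluded) are not given in the abstract, so
# this is an illustrative greedy variant only.
import heapq

def dispatch(file_sizes, stable_links, group_size=2):
    """Assign fixed-size groups of files to stable links, balancing load."""
    files = sorted(file_sizes, reverse=True)                     # largest first
    groups = [files[i:i + group_size] for i in range(0, len(files), group_size)]
    heap = [(0.0, link) for link in stable_links]                # (current load, link id)
    heapq.heapify(heap)
    assignment = {link: [] for link in stable_links}
    for group in groups:
        load, link = heapq.heappop(heap)                         # least-loaded stable link
        assignment[link].append(group)
        heapq.heappush(heap, (load + sum(group), link))
    return assignment

if __name__ == "__main__":
    print(dispatch([40, 10, 25, 5, 30, 15], stable_links=["L1", "L2", "L3"]))
```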
Funding: supported by the MSIT (Ministry of Science and ICT), Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) Program (IITP-2024-RS-2022-00156326) supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation). The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program, Grant Code (NU/GP/SERC/13/30). Funding for this work was also provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through Project Number "NBU-FFR-2024-231-06".
Abstract: Recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, heavily rely on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with the acquisition and preprocessing of the image, followed by segmentation using the fine-tuned UNet architecture. Afterward, an annotation tool is used to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE features, energy-based edge detection, frequency-based features, and blob characteristics. For the classification stage, a convolutional neural network (CNN) is employed. This comprehensive methodology provides a robust framework for accurate and efficient recognition of multiple objects in images. Experimental results on complex object datasets, namely MSRC-v2 and PASCAL-VOC12, are reported: the PASCAL-VOC12 dataset achieved an accuracy rate of 95%, while the MSRC-v2 dataset achieved an accuracy of 89%. The evaluation on these diverse datasets highlights a notably strong level of performance.
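Of the listed feature families, KAZE extraction is the one step that can be sketched directly with a widely available implementation. The snippet below computes KAZE descriptors on a hypothetical segmented crop using OpenCV; the fine-tuned UNet, the annotation tool, and the CNN classifier from the pipeline are not reproduced, and the file name is a placeholder.

```python
# Sketch of the KAZE feature-extraction step on one segmented region,
# using OpenCV's built-in KAZE detector/descriptor. "object_crop.png" and
# the optional binary mask are placeholders; the rest of the paper's
# pipeline (UNet segmentation, labeling, CNN classification) is omitted.
import cv2
import numpy as np

def kaze_descriptors(image_path: str, mask=None) -> np.ndarray:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kaze = cv2.KAZE_create()
    keypoints, descriptors = kaze.detectAndCompute(gray, mask)
    if descriptors is None:                          # no keypoints detected
        return np.empty((0, 64), dtype=np.float32)   # default KAZE descriptors are 64-D
    return descriptors

if __name__ == "__main__":
    desc = kaze_descriptors("object_crop.png")       # hypothetical segmented crop
    print(desc.shape)                                # (num_keypoints, 64)
```

Per-region descriptors like these would then be pooled or aggregated before being fed, alongside the edge, frequency, and blob features, into the classification CNN.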
Abstract: In the contemporary era, the death rate due to lung cancer is increasing, yet technology is continuously enhancing the quality of well-being. To improve the survival rate, radiologists rely on Computed Tomography (CT) scans for early detection and diagnosis of lung nodules. This paper presents a detailed, systematic review of several identification and categorization techniques for lung nodules. The analysis explores the challenges, advancements, and future directions of computer-aided diagnosis (CAD) systems for detecting and classifying lung nodules using deep learning (DL) algorithms. The findings highlight the usefulness of DL networks, especially convolutional neural networks (CNNs), in raising sensitivity, accuracy, and specificity, as well as reducing false positives in the initial stages of lung cancer detection. The paper further presents the integral nodule classification stage, stressing the importance of differentiating between benign and malignant nodules for early cancer diagnosis. Moreover, the findings provide a comprehensive analysis of multiple techniques and studies for nodule classification, highlighting the evolution of methodologies from conventional machine learning (ML) classifiers to transfer learning and integrated CNNs. While acknowledging the strides made by CAD systems, the review also addresses persistent challenges.
Abstract: The Operating System (OS) is a critical piece of software that manages a computer's hardware and resources, acting as the intermediary between the computer and the user. Existing OSs were not designed for Big Data and Cloud Computing, resulting in inefficient data processing and management. This paper proposes a simplified and improved kernel on an x86 system designed for Big Data and Cloud Computing purposes. The proposed kernel exploits the benefits of improved Input/Output (I/O) performance, and its performance engineering applies data-oriented design to traditional data management to improve data processing speed by reducing memory access overheads. The OS incorporates a data-oriented design to "modernize" various Data Science and management aspects. The resulting OS contains a basic input/output system (BIOS) bootloader that boots into Intel 32-bit protected mode, a text display terminal, 4 GB of paging memory, a heap block size of 4096, a Hard Disk Drive (HDD) I/O Advanced Technology Attachment (ATA) driver, and more. There are also I/O scheduling algorithm prototypes that demonstrate how a simple sweeping algorithm outperforms more conventionally known I/O scheduling algorithms. A MapReduce prototype is implemented using the Message Passing Interface (MPI) for big data purposes. An attempt was also made to optimize binary search using modern performance engineering and data-oriented design.
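The abstract does not detail the kernel's sweeping algorithm. Assuming it follows the familiar elevator-style idea, the sketch below serves pending block requests in one upward pass from the current head position before wrapping around, and compares the resulting seek distance with first-come-first-served order; the request numbers are made up.

```python
# Sketch of an elevator-style "sweep" over pending disk requests: serve all
# requests at or ahead of the current head position in ascending order,
# then wrap around to the remaining ones. The kernel's actual Sweeping
# algorithm may differ; this only illustrates why a single ordered pass
# reduces back-and-forth head movement compared to FCFS.
def sweep_order(pending_blocks, head):
    """Return the service order for pending block numbers, sweeping upward."""
    ahead = sorted(b for b in pending_blocks if b >= head)
    behind = sorted(b for b in pending_blocks if b < head)
    return ahead + behind          # one upward sweep, then restart from the lowest block

def seek_distance(order, head):
    """Total head movement needed to serve requests in the given order."""
    total, pos = 0, head
    for b in order:
        total += abs(b - pos)
        pos = b
    return total

if __name__ == "__main__":
    requests, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
    print(sweep_order(requests, head))
    print(seek_distance(sweep_order(requests, head), head))   # sweep order
    print(seek_distance(requests, head))                      # FCFS order, for comparison
```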
Funding: supported by the Research Incentive Grant 23200 of Zayed University, United Arab Emirates.
Abstract: Cardiovascular disease prediction is a significant area of research in healthcare management systems (HMS). The number of deaths can only be reduced if cardiac problems are anticipated in advance. Existing heart disease detection systems using machine learning have not yet produced sufficient results because of their reliance on the available data. We present a Clustered Butterfly Optimization Technique (Rough K-means + BOA) as a new hybrid method for predicting heart disease. This method comprises two phases: clustering the data using Rough k-means (RKM) and analyzing the data using the Butterfly Optimization Algorithm (BOA). The benchmark dataset from the UCI repository is used for our experiments. The experiments are divided into three sets: the first involves the RKM clustering technique, the second evaluates the classification outcomes, and the last validates the performance of the proposed hybrid model. The proposed Rough K-means + BOA achieves an accuracy of 97.03% and a minimal error rate of 2.97%, which is better than other combinations of optimization techniques. In addition, this approach effectively enhances data segmentation, optimization, and classification performance.
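The abstract does not give the BOA parameterization or describe how it consumes the Rough k-means clusters. The following is a minimal, generic BOA sketch on a toy minimization objective, with illustrative defaults for the sensory modality c, the power exponent a, and the switch probability p; it is not the paper's configuration and omits the clustering front-end.

```python
# Minimal sketch of the Butterfly Optimization Algorithm (BOA) on a generic
# minimization objective. Parameters c, a, p, the population size, and the
# sphere objective are illustrative defaults, not the paper's settings.
import numpy as np

def boa_minimize(objective, dim, lb, ub, pop=20, iters=100,
                 c=0.01, a=0.1, p=0.8, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    fit = np.apply_along_axis(objective, 1, X)
    best = X[np.argmin(fit)].copy()
    for _ in range(iters):
        fragrance = c * (np.abs(fit) + 1e-12) ** a          # stimulus-based fragrance
        for i in range(pop):
            r = rng.random()
            if rng.random() < p:                             # global search toward best
                step = (r ** 2) * best - X[i]
            else:                                            # local random walk
                j, k = rng.integers(pop, size=2)
                step = (r ** 2) * X[j] - X[k]
            X[i] = np.clip(X[i] + step * fragrance[i], lb, ub)
        fit = np.apply_along_axis(objective, 1, X)
        if fit.min() < objective(best):
            best = X[np.argmin(fit)].copy()
    return best, objective(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))                 # toy objective
    print(boa_minimize(sphere, dim=5, lb=-10, ub=10))
```

In the hybrid described by the abstract, an objective built on the RKM cluster assignments (for example, a classification error over the clustered data) would take the place of the toy sphere function.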