Software-Defined Networking (SDN) and Network Function Virtualization (NFV) technologies offer several benefits to network operators, including reduced maintenance costs, increased operational performance, and simplified network lifecycle and policy management. Network vulnerabilities can be exploited to modify the services provided by NFV MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the lifecycle management performed by the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) for network services or individual Virtualized Network Functions (VNFs). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and promptly and adaptively deploys security functions in order to enhance the quality of experience for end users. The anomaly detector investigates the identified risks, provides secure network services, enables virtual network security functions, and identifies anomalies in Kubernetes, a cloud-based container orchestration platform. For training and testing of the proposed approach, an intrusion dataset is used that holds multiple malicious activities such as Smurf, Neptune, Teardrop, Pod, Land, and IPsweep, categorized as Probing (Probe), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. The anomaly detector is built on supervised Machine Learning (ML) techniques, namely Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework was evaluated by deploying the ML algorithms in a Jupyter notebook in Kubeflow to simulate Kubernetes for validation. The RF classifier achieved better results (99.90% accuracy) than the other classifiers in detecting anomalies/intrusions in the containerized environment. Funding: This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number DSR2022-RG-0102.
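As a sketch of how the supervised detection stage could look, the snippet below trains a Random Forest on a KDD-style intrusion table with scikit-learn; the file name and column names are assumptions, not artifacts of the paper.

```python
# A minimal sketch of the supervised anomaly-detection step, assuming a
# KDD-style CSV with numeric feature columns and a "label" column; the
# file name and column names are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("intrusions.csv")           # hypothetical dataset path
X = df.drop(columns=["label"])               # traffic features
y = df["label"]                              # Probe / DoS / U2R / R2L / normal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```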
This paper investigates the application of machine learning to develop a response model for cardiovascular problems, using AdaBoost combined with two outlier-detection and feature-selection pipelines: Z-Score with Grey Wolf Optimization (GWO), and Interquartile Range (IQR) with Ant Colony Optimization (ACO). Using a performance index, it is shown that, compared with Z-Score and GWO with AdaBoost, IQR and ACO with AdaBoost are less accurate (86.0% vs. 89.0%) and less discriminative (Area Under the Curve (AUC) score of 91.0% vs. 93.0%). The Z-Score and GWO method also outperformed the other in precision, scoring 89.0%, and its recall was satisfactory at 90.0%. The paper thus reveals specific benefits and drawbacks of different outlier-detection and feature-selection techniques, which are important to consider in further improving diagnostics in cardiovascular health. Collectively, these findings can enhance heart disease prediction and patient treatment using enhanced and innovative machine learning (ML) techniques. This work lays the groundwork for more precise diagnosis models by highlighting the benefits of combining multiple optimization methodologies; future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
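The snippet below sketches the IQR outlier-filtering step followed by AdaBoost; the dataset path is hypothetical, and the GWO/ACO feature-selection stages are omitted for brevity.

```python
# A minimal sketch of IQR outlier filtering followed by AdaBoost on a
# tabular cardiovascular dataset; file name and "target" column assumed.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("heart.csv")                    # hypothetical dataset path
X, y = df.drop(columns=["target"]), df["target"]

q1, q3 = X.quantile(0.25), X.quantile(0.75)
iqr = q3 - q1
mask = ~((X < q1 - 1.5 * iqr) | (X > q3 + 1.5 * iqr)).any(axis=1)
X_clean, y_clean = X[mask], y[mask]              # drop rows with any outlier

ada = AdaBoostClassifier(n_estimators=100, random_state=42)
print("CV accuracy:", cross_val_score(ada, X_clean, y_clean, cv=5).mean())
```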
Artificial Intelligence (AI) is increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. The proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. A hybrid convolutional neural network-singular value decomposition (CNN-SVD) model is used for feature extraction and selection, and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. Tested on the IDRiD dataset, the model achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% on the DR detection and evaluation tests, respectively. The proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection. Funding: This research was funded by the National Natural Science Foundation of China (Nos. 71762010, 62262019, 62162025, 61966013, 12162012), the Hainan Provincial Natural Science Foundation of China (Nos. 823RC488, 623RC481, 620RC603, 621QN241, 620RC602, 121RC536), the Haikou Science and Technology Plan Project of China (No. 2022-016), and a project supported by the Education Department of Hainan Province (No. Hnky2021-23).
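A minimal sketch of the majority-voting stage is shown below, with scikit-learn's TruncatedSVD standing in for the SVD step and illustrative settings for the SVM-RBF, DT, and KNN voters; the feature matrices are assumed to come from the CNN stage.

```python
# A minimal sketch of the voting classifier over CNN-derived features;
# all hyperparameters are illustrative, not the paper's tuned values.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

vote = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf")),                 # stand-in for improved SVM-RBF
        ("dt", DecisionTreeClassifier(max_depth=10)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",                                  # majority voting
)
model = make_pipeline(TruncatedSVD(n_components=128), vote)
# usage: model.fit(cnn_features_train, y_train); model.predict(cnn_features_test)
```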
With the increasing number of connected devices in the Internet of Things (IoT) era, the number of intrusions is also increasing. An intrusion detection system (IDS) is a secondary intelligent system for monitoring, detecting, and alerting against malicious activity, and it is important in developing advanced security models. This study reviews the importance of various techniques, tools, and methods used in IoT detection and/or prevention systems, focusing on machine learning (ML) and deep learning (DL) techniques for IDS. The paper proposes an accurate intrusion detection model to detect traditional and new attacks on the Internet of Vehicles. To speed up the detection of recent attacks, the network architecture developed at the data-processing layer incorporates a convolutional neural network (CNN), which performs better than a support vector machine (SVM). The processed data are enhanced using the synthetic minority oversampling technique (SMOTE) to ensure learning accuracy, and the nearest class mean classifier is applied during the testing phase to identify new attacks. Experimental results on the AWID dataset, one of the most widely used open intrusion detection datasets, show a higher detection accuracy (94%) compared with SVM and random forest methods. Funding: The author extends appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number R-2024-920.
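The snippet below sketches the SMOTE rebalancing and the nearest class mean (centroid) classification steps on synthetic stand-in features; the CNN stage is omitted.

```python
# A minimal sketch of class balancing plus nearest class mean classification;
# the synthetic data stand in for preprocessed AWID-style feature vectors.
from imblearn.over_sampling import SMOTE
from sklearn.neighbors import NearestCentroid
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # rebalance minority attacks

ncm = NearestCentroid()        # nearest class mean classifier
ncm.fit(X_bal, y_bal)
print(ncm.predict(X[:5]))
```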
What causes object detection in video to be less accurate than it is in still images? Some video frames are degraded in appearance by fast movement, out-of-focus camera shots, and changes in posture, and these issues have made video object detection (VID) a growing area of research in recent years. Video object detection has various healthcare applications, such as detecting and tracking tumors in medical imaging, monitoring the movement of patients in hospitals and long-term care facilities, and analyzing videos of surgeries to improve technique and training; it can also be used in telemedicine to help diagnose and monitor patients remotely. Existing VID techniques are based on recurrent neural networks or optical flow for feature aggregation to produce reliable features that can be used for detection; some of these methods aggregate features at the full-sequence level or from nearby frames. To create feature maps, existing VID techniques frequently use Convolutional Neural Networks (CNNs) as the backbone network. Vision Transformers, on the other hand, have outperformed CNNs in various vision tasks, including object detection in still images and image classification. In this research, we propose to use the Swin Transformer, a state-of-the-art Vision Transformer, as an alternative to CNN-based backbone networks for object detection in videos. The proposed architecture enhances the accuracy of existing VID methods. The ImageNet VID and EPIC KITCHENS datasets are used to evaluate the suggested methodology. We demonstrate that our proposed method is efficient, achieving 84.3% mean average precision (mAP) on ImageNet VID while using less memory than other leading VID techniques. The source code is available at https://github.com/amaharek/SwinVid.
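A sketch of swapping in a Swin Transformer backbone is shown below, assuming torchvision 0.13 or later; the detection head that would consume the per-frame features is omitted.

```python
# A minimal sketch of using a Swin Transformer as a per-frame feature
# extractor in place of a CNN backbone; not the paper's full VID pipeline.
import torch
from torchvision.models import swin_t, Swin_T_Weights

backbone = swin_t(weights=Swin_T_Weights.DEFAULT)
backbone.head = torch.nn.Identity()      # strip the classifier, keep features

frames = torch.randn(4, 3, 224, 224)     # a mini-batch of video frames
with torch.no_grad():
    feats = backbone(frames)             # (4, 768) per-frame embeddings
print(feats.shape)
```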
With the high level of proliferation of connected mobile devices, the risk of intrusion becomes higher. Artificial Intelligence (AI) and Machine Learning (ML) algorithms have started to feature in protection software and have shown effective results. These algorithms are nonetheless hindered by the lack of rich datasets and by the appearance of new categories of malware; the race between attackers' malware, especially with the assistance of AI tools, and protection solutions makes these systems and frameworks lose effectiveness quickly. In this article, we present a framework for mobile malware detection based on a new dataset containing new categories of mobile malware. We focus on categories of malware that have not previously been tested with Machine Learning algorithms proven effective in malware detection. We carefully select an optimal number of features, perform the necessary preprocessing, and then apply Machine Learning algorithms to discover malicious code effectively. Our experiments show that the Random Forest algorithm performs best on this mobile malware, with detection rates of around 99%. These results align well with our previous work, and a comparison with state-of-the-art works of others shows that our results are very close and competitive.
In recent years, container-based cloud virtualization solutions have emerged to mitigate the performance gap between non-virtualized and virtualized physical resources. However, current research offers few techniques for predicting microservice performance, which impacts cloud service users' ability to determine when to provision or de-provision microservices. Predicting microservice performance is challenging because of overheads such as variations in processing time caused by resource contention, which can lead to user confusion. In this paper, we propose, develop, and validate a probabilistic architecture named Microservice Performance Diagnosis and Prediction (MPDP). MPDP considers factors such as response time, throughput, CPU usage, and other metrics to dynamically model the interactions between microservice performance indicators for diagnosis and prediction. Using experimental data from our monitoring tool, stakeholders can build various networks for probabilistic analysis of microservice performance diagnosis and prediction and estimate the best microservice resource combination for a given Quality of Service (QoS) level. We generated a dataset of microservices with 2726 records across four benchmarks, covering CPU, memory, response time, and throughput, to demonstrate the efficacy of the proposed MPDP architecture. We validate MPDP and demonstrate its capability to predict microservice performance, comparing various Bayesian networks such as the Noisy-OR Network (NOR), Naive Bayes Network (NBN), and Complex Bayesian Network (CBN), and achieving an overall accuracy rate of 89.98% with CBN.
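The snippet below sketches one such Bayesian network over discretized microservice metrics using pgmpy; the structure, states, and data are illustrative, not the paper's CBN.

```python
# A minimal sketch of a Bayesian network over microservice metrics,
# assuming pgmpy; the records are hypothetical discretized observations.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

df = pd.DataFrame({
    "cpu":           ["high", "low", "high", "low", "high", "low"],
    "memory":        ["high", "low", "low",  "low", "high", "high"],
    "response_time": ["slow", "fast", "slow", "fast", "slow", "fast"],
})

model = BayesianNetwork([("cpu", "response_time"), ("memory", "response_time")])
model.fit(df, estimator=MaximumLikelihoodEstimator)

infer = VariableElimination(model)
print(infer.query(["response_time"], evidence={"cpu": "high", "memory": "high"}))
```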
The rapid adoption of Internet of Things (IoT) technologies has introduced significant security challenges across the physical, network, and application layers, particularly with the widespread use of the Message Queue Telemetry Transport (MQTT) protocol, which, while efficient in bandwidth consumption, lacks inherent security features and is therefore vulnerable to various cyber threats. This research addresses these challenges by presenting a secure, lightweight communication proxy that enhances the scalability and security of MQTT-based IoT networks. The proposed solution builds upon the Dang-Scheme, a mutual authentication protocol designed explicitly for resource-constrained environments, and enhances it using Elliptic Curve Cryptography (ECC). This integration significantly improves device authentication, data confidentiality, and energy efficiency, achieving an 87.68% increase in data confidentiality and up to 77.04% energy savings during publish/subscribe communications in smart homes. The Middleware Broker System dynamically manages transaction keys and session IDs, offering robust defences against common cyber threats such as impersonation and brute-force attacks. Penetration testing with tools such as Hydra and Nmap further validated the system's security, demonstrating its potential to significantly improve the security and efficiency of IoT networks while underscoring the need for ongoing research to combat emerging threats. Funding: This work was supported through Universiti Sains Malaysia (USM) and the Ministry of Higher Education Malaysia, which provided the research grant, Fundamental Research Grant Scheme (FRGS Grant No. FRGS/1/2020/TK0/USM/02/1).
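A minimal sketch of the ECC key-agreement idea behind the enhanced authentication is shown below, using the cryptography package; the Dang-Scheme message flow, session IDs, and broker logic are not reproduced.

```python
# A minimal sketch of ECDH key agreement between an IoT device and the
# middleware broker; the derived key could encrypt MQTT payloads.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

device_key = ec.generate_private_key(ec.SECP256R1())   # IoT device
broker_key = ec.generate_private_key(ec.SECP256R1())   # middleware broker

# Each side derives the same shared secret from the other's public key.
shared = device_key.exchange(ec.ECDH(), broker_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"mqtt-session").derive(shared)
print(session_key.hex())   # symmetric session key
```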
Fruit infections affect both the yield and the quality of the crop, so an automated recognition system for fruit leaf diseases is important. In artificial intelligence (AI) applications, especially in agriculture, deep learning shows promising disease detection and classification results. Recent AI-based techniques face a few challenges for fruit disease recognition, such as low-resolution images, small datasets for learning models, and irrelevant feature extraction. This work proposes a new fruit leaf disease recognition framework using deep learning features and improved pathfinder optimization. Three fruit types are employed for the validation process: apple, grape, and citrus. In the first step, a noisy dataset is prepared from the original images so that the designed framework learns better. The EfficientNet-B0 deep model is then fine-tuned and trained separately on the original and noisy data. After that, features are fused using a serial concatenation approach and optimized in the next step using an improved Path Finder Algorithm (PFA), which selects the best features based on a fitness score and discards redundant information. The selected features are finally classified using machine learning classifiers such as the Medium Neural Network, Wide Neural Network, and Support Vector Machine. The experimental process was conducted on each fruit dataset separately, obtaining accuracies of 100%, 99.7%, 99.7%, and 93.4% for apple, grape, citrus fruit, and citrus plant leaves, respectively. A detailed analysis and comparison with recent techniques show that the proposed framework achieves improved accuracy.
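The snippet below sketches the serial feature fusion and fitness-driven selection idea; a greedy single-feature filter stands in for the improved PFA, and the feature arrays are synthetic placeholders.

```python
# A minimal sketch of serial concatenation of deep features followed by a
# simple fitness-based selection; not the paper's PFA implementation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feats_orig = rng.normal(size=(300, 64))    # stand-in for EfficientNet-B0 features
feats_noisy = rng.normal(size=(300, 64))   # features from the noisy copies
y = rng.integers(0, 4, size=300)           # disease labels

fused = np.concatenate([feats_orig, feats_noisy], axis=1)  # serial concatenation

# Score each feature by single-feature CV accuracy and keep the best half.
scores = [cross_val_score(SVC(), fused[:, [j]], y, cv=3).mean()
          for j in range(fused.shape[1])]
selected = np.argsort(scores)[-fused.shape[1] // 2:]
print("selected features:", selected[:10])
```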
The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated detection of political article orientation. Political articles, especially in the Arab world, differ from other articles because of their subjectivity: the author's beliefs and political affiliation can have a significant influence on a political article. With categories representing the main political ideologies, this problem can be treated as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings; furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system for detecting the orientation of political Arabic articles that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model's ability to discriminate between classes or patterns, since each level may capture different aspects of the input data and contribute to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is used to learn and predict the complex relationships between these features and the political orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the suggested technique; conservative, reform, and revolutionary are its three opinion subcategories. The results demonstrate that, compared with other frequently used machine learning models for text classification, the CatBoost method using multi-level features performs better, with an accuracy of 98.14%.
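A minimal sketch of CatBoost over multi-level text features (word-level plus character n-gram TF-IDF) is shown below; the corpus and labels are placeholders for the Arabic dataset.

```python
# A minimal sketch of multi-level features feeding a CatBoost classifier;
# the texts, labels, and vectorizer settings are illustrative.
from catboost import CatBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.sparse import hstack

texts = ["sample article text one", "sample article text two",
         "sample article text three"]
labels = ["conservative", "reform", "revolutionary"]

word_level = TfidfVectorizer(analyzer="word", max_features=5000)
char_level = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4),
                             max_features=5000)
X = hstack([word_level.fit_transform(texts),
            char_level.fit_transform(texts)]).tocsr()   # multi-level features

clf = CatBoostClassifier(iterations=300, verbose=False)
clf.fit(X, labels)
```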
Field training is the backbone of the teacher-preparation process. Its importance stems from the goals that colleges of education aim to achieve, which include bridging the gap between theory and practice and aligning with contemporary educational trends during teacher training. Currently, trainee students' attendance in field training is recorded manually through signatures on attendance sheets; however, this method is prone to impersonation, time wastage, and misplacement. Additionally, traditional methods of evaluating trainee students are often susceptible to human error during the evaluation and scoring processes. Field training also lacks modern technology that the supervisor can use, in case of absence from the school, to monitor the trainee students' implementation of the required activities and tasks. These shortcomings do not meet the needs of the digital era that universities are currently experiencing. As a result, this paper presents a smart management system for field training based on Internet of Things (IoT) and mobile technology. It includes three subsystems: attendance, monitoring, and evaluation. The attendance subsystem uses an R307 fingerprint sensor to record trainee students' attendance; the Arduino Nano microcontroller transmits attendance data to the proposed Android application via an ESP-12F Wi-Fi module, which then forwards it to the Firebase database for storage. The monitoring subsystem uses Global Positioning System (GPS) technology to continually track trainee students' locations, ensuring they remain at the school during training, and enables remote communication between trainee students and supervisors via audio, video, or text by integrating video-call and chat technologies. The evaluation subsystem is based on three items: an online exam, attendance, and implementation of the required activities and tasks. Experimental results demonstrate the accuracy and efficiency of the proposed management system in recording attendance, as well as in monitoring and evaluating trainee students during field training.
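As a sketch of the application-side data flow, the snippet below pushes an attendance record to a Firebase Realtime Database over its REST API; the database URL and record schema are hypothetical, and on the device itself this role is played by the Arduino Nano with the ESP-12F module.

```python
# A minimal sketch of writing an attendance record to Firebase's Realtime
# Database REST endpoint; URL, path, and fields are hypothetical.
import requests
from datetime import datetime, timezone

FIREBASE_URL = "https://example-project.firebaseio.com"  # hypothetical project

record = {
    "student_id": "S12345",
    "fingerprint_ok": True,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
resp = requests.post(f"{FIREBASE_URL}/attendance.json", json=record)
print(resp.status_code, resp.json())   # Firebase returns the new record key
```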
This article presents an exhaustive comparative investigation into the accuracy of gender identification across diverse geographical regions, employing a deep learning classification algorithm for speech signal analysis. Speech samples are categorized for both training and testing based on their geographical origin: Category 1 comprises speech samples from speakers outside of India, whereas Category 2 comprises live-recorded speech samples from Indian speakers. Testing speech samples are likewise classified into four distinct sets, taking into consideration both geographical origin and the language spoken. Significantly, the results indicate a noticeable difference in gender identification accuracy among speakers from different geographical areas. Indian speakers, who use 52 Hindi and 26 English phonemes in their speech, show a notably higher gender identification accuracy of 85.75% than speakers who predominantly use 26 English phonemes, when the system is trained on speech samples from Indian speakers. The gender identification accuracy of the proposed model reaches 83.20% when the system is trained on speech samples from speakers outside of India. In the analysis of the speech signals, Mel Frequency Cepstral Coefficients (MFCCs) serve as the relevant features, and the deep learning classification algorithm is based on a Bidirectional Long Short-Term Memory (BiLSTM) architecture within a Recurrent Neural Network (RNN) model.
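The snippet below sketches the MFCC extraction and a BiLSTM classifier in Keras; the layer sizes and the 13-coefficient MFCC setting are illustrative choices.

```python
# A minimal sketch of MFCC features feeding a BiLSTM gender classifier;
# sample rate, MFCC count, and layer sizes are assumptions.
import librosa
from tensorflow.keras import layers, models

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                       # (time_steps, n_mfcc)

model = models.Sequential([
    layers.Input(shape=(None, 13)),     # variable-length MFCC sequences
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary gender label
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# usage: seq = mfcc_features("sample.wav")
```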
Users of social networks can readily express their thoughts on websites like Twitter (now X), Facebook, and Instagram. The volume of textual data flowing from users has greatly increased with the advent of social media compared with traditional media. For instance, using natural language processing (NLP) methods, social media can be leveraged to obtain crucial information about the current situation during disasters. In this work, tweets on the Uttarakhand flash flood are analyzed using a hybrid NLP model. The investigation employs sentiment analysis (SA) to determine the negative attitudes people expressed regarding the disaster. We apply a machine learning algorithm and evaluate its performance using standard metrics, namely root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Our random forest (RF) classifier outperforms comparable works with an accuracy of 98.10%. The study shows how Twitter (now X) data and machine learning (ML) techniques can be used to analyze public discourse and sentiment regarding disasters, comparing positive and negative comments in order to develop strategies for dealing with public sentiment on disasters.
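A minimal sketch of the sentiment-classification step with TF-IDF features and a random forest is shown below; the tweets and labels are placeholders for the flood data.

```python
# A minimal sketch of tweet sentiment classification; the corpus is a
# hypothetical stand-in for the Uttarakhand flash-flood tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

tweets = ["rescue teams doing great work", "no help has arrived, terrible",
          "roads destroyed, situation is dire", "relief camps are well run"]
labels = ["positive", "negative", "negative", "positive"]

sa_model = make_pipeline(TfidfVectorizer(),
                         RandomForestClassifier(random_state=0))
sa_model.fit(tweets, labels)
print(sa_model.predict(["the flood response was very slow"]))
```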
Layout synthesis in quantum computing is crucial because of the physical constraints of quantum devices, where quantum bits (qubits) can only interact effectively with their nearest neighbors. This constraint severely impacts the design and efficiency of quantum algorithms, as arranging qubits optimally can significantly reduce circuit depth and improve computational performance. To tackle the layout synthesis challenge, we propose an algorithm based on integer linear programming (ILP). ILP is well suited to this problem because it can formulate the optimization objective of minimizing circuit depth while adhering to the nearest-neighbor interaction constraint. The algorithm aims to generate layouts that maximize qubit connectivity within the given physical constraints of the quantum device. For experimental validation, we outline a clear and feasible setup using real quantum devices, specifying the type and configuration of the quantum hardware used, such as the number of qubits, connectivity constraints, and any technological limitations. The proposed algorithm is implemented on these devices to demonstrate its effectiveness in producing depth-optimal quantum circuit layouts. By integrating these elements, our research aims to provide practical solutions that enhance the efficiency and scalability of quantum computing systems, paving the way for advances in quantum algorithm design and implementation. Funding: This work was supported by the National Science and Technology Council, Taiwan, under NSTC 112-2221-E-024-004.
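The snippet below sketches a toy version of the ILP formulation in PuLP: logical qubits are assigned to physical qubits on a 4-qubit line so that two-qubit gates land on nearby hardware. The paper's full depth-minimization model is not reproduced; the topology and gate list are illustrative inputs.

```python
# A minimal ILP sketch for qubit placement: assign logical qubits to
# physical qubits, minimizing total coupling distance over two-qubit gates.
import pulp

n = 4                                   # logical and physical qubits
gates = [(0, 1), (1, 2), (0, 3)]        # two-qubit gates in the circuit
dist = [[abs(p - q) for q in range(n)] for p in range(n)]  # line topology

prob = pulp.LpProblem("layout", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n), range(n)), cat="Binary")
z = pulp.LpVariable.dicts("z", (range(len(gates)), range(n), range(n)),
                          cat="Binary")

for i in range(n):
    prob += pulp.lpSum(x[i][p] for p in range(n)) == 1   # each logical placed once
for p in range(n):
    prob += pulp.lpSum(x[i][p] for i in range(n)) <= 1   # one logical per physical

# Linearized product: z[g][p][q] = 1 when gate g's qubits sit on p and q.
for g, (i, j) in enumerate(gates):
    for p in range(n):
        for q in range(n):
            prob += z[g][p][q] >= x[i][p] + x[j][q] - 1

prob += pulp.lpSum(dist[p][q] * z[g][p][q]
                   for g in range(len(gates))
                   for p in range(n) for q in range(n))
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(i, p) for i in range(n) for p in range(n) if x[i][p].value() == 1])
```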
Multi-label learning is a generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously. The main objective of this work is to create a novel framework for learning and classifying imbalanced multi-label data, and a two-phase framework is proposed. In phase 1, the imbalanced distribution of the multi-label dataset is addressed through the proposed Borderline MLSMOTE resampling method; in phase 2, an adaptive weighted l21-norm regularized (elastic-net) multi-label logistic regression is used to predict unseen samples. In contrast to conventional MLSMOTE, the proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels. The minority labels in these samples are called difficult minority labels and are more prone to penalizing classification performance; the concurrent measure is considered borderline, and the labels associated with such samples are regarded as borderline labels in the decision boundary. In phase 2, a novel adaptive l21-norm regularized weighted multi-label logistic regression handles the balanced data with different weighted synthetic samples. Experiments on various benchmark datasets show that the proposed method outperforms existing conventional state-of-the-art multi-label methods and delivers powerful predictive performance. Funding: This work was partly supported by the Technology Development Program of MSS (No. S3033853) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A4A1031509).
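A minimal sketch of the phase-2 classifier is shown below, with scikit-learn's elastic-net logistic regression in a one-vs-rest wrapper standing in for the adaptive weighted l21-norm model; the multi-label data are synthetic.

```python
# A minimal sketch of elastic-net multi-label logistic regression; a
# stand-in for the paper's adaptive weighted l21-norm model.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)

base = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, max_iter=5000)
clf = OneVsRestClassifier(base).fit(X, Y)
print(clf.predict(X[:3]))
```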
Cyber defense is becoming a major issue for every organization seeking to keep business continuity intact. This paper explores the effectiveness of a meta-heuristic optimization algorithm, the Artificial Bee Colony (ABC) algorithm, as a nature-inspired cyber security mechanism for achieving adaptive defense. It experiments on Denial-of-Service attack scenarios, which involve limiting the traffic flow for each node. Businesses today have adapted their service distribution models to include the use of the Internet, allowing them to effectively manage and interact with their customer data. This shift has created an increased reliance on online services that store vast amounts of confidential customer data, meaning any disruption or outage of these services could be disastrous for the business, leaving it without the knowledge needed to serve its customers; adversaries can exploit such an event to gain unauthorized access to the customers' confidential data. The proposed algorithm uses an adaptive defense approach to continuously select nodes that exhibit the characteristics of a probable malicious entity. For any change in network parameters, a cluster of nodes is selected into the prepared solution set as probable malicious nodes, and the traffic rate and packet delivery ratio are managed with respect to the properties of normal nodes, delivering a disaster recovery plan for potential businesses.
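The snippet below sketches a generic artificial bee colony search loop (the onlooker phase is omitted for brevity); in the paper's setting, the fitness function would score how likely a node's traffic profile is malicious.

```python
# A minimal sketch of an ABC search loop on a placeholder objective; not
# the paper's network-specific defense logic.
import numpy as np

rng = np.random.default_rng(0)

def fitness(v):                      # placeholder objective (sphere function)
    return -np.sum(v ** 2)

n_bees, dim, limit = 10, 3, 5
food = rng.uniform(-1, 1, (n_bees, dim))      # candidate solutions
trials = np.zeros(n_bees)

for _ in range(100):
    for i in range(n_bees):                   # employed-bee phase
        k = rng.integers(n_bees)
        phi = rng.uniform(-1, 1, dim)
        cand = food[i] + phi * (food[i] - food[k])
        if fitness(cand) > fitness(food[i]):
            food[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    for i in range(n_bees):                   # scout phase: abandon stale sources
        if trials[i] > limit:
            food[i], trials[i] = rng.uniform(-1, 1, dim), 0

best = food[np.argmax([fitness(f) for f in food])]
print("best solution:", best)
```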
Path-based clustering algorithms typically generate clusters by optimizing a benchmark function, and most optimization methods in clustering algorithms offer solutions only close to the global optimum. This study achieves the global optimum of the criterion function in a shorter time using the minimax distance, a Maximum Spanning Tree (MST), and meta-heuristic algorithms, including the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The Fast Path-based Clustering (FPC) algorithm proposed in this paper can find cluster centers correctly in most datasets and performs the clustering operation quickly. FPC does this using the MST, the minimax distance, and a new hybrid meta-heuristic algorithm within a few iterations. The algorithm can reach the global optimum, and the main clustering process has a computational complexity of O(k²×n); however, because of the complexity of the minimax distance algorithm, the total computational complexity is O(n²). Experimental results of FPC on synthetic datasets with arbitrary shapes demonstrate that the algorithm is resistant to noise and outliers and can correctly identify clusters of varying sizes and numbers. In addition, FPC requires only the number of clusters as a parameter to perform the clustering process. A comparative analysis of FPC and other clustering algorithms in this domain indicates that FPC exhibits superior speed, stability, and performance.
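A minimal sketch of the minimax (path-wise maximum edge) distance computed over a spanning tree is shown below, using scipy; the meta-heuristic center search is omitted, and a minimum spanning tree is used here for the tree construction.

```python
# A minimal sketch of minimax distances over a spanning tree: for each pair
# of points, the distance is the largest edge on their tree path.
import numpy as np
from scipy.spatial.distance import squareform, pdist
from scipy.sparse.csgraph import minimum_spanning_tree

X = np.random.default_rng(0).normal(size=(50, 2))
D = squareform(pdist(X))
mst = minimum_spanning_tree(D).toarray()
mst = np.maximum(mst, mst.T)                 # symmetrize tree edges

n = len(X)
minimax = np.zeros((n, n))
for s in range(n):                           # DFS from each node over the tree
    stack, seen = [(s, 0.0)], {s}
    while stack:
        u, bottleneck = stack.pop()
        for v in np.nonzero(mst[u])[0]:
            if v not in seen:
                seen.add(v)
                minimax[s, v] = max(bottleneck, mst[u, v])
                stack.append((v, minimax[s, v]))
print(minimax[:3, :3])
```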
More businesses are deploying powerful Intrusion Detection Systems (IDS) to secure their data and physical assets, and improved cyber-attack detection and prevention in these systems requires machine learning (ML) approaches. This paper examines a cyber-attack prediction system combining feature selection (FS) and ML. Our technique is founded on Correlation Analysis (CA), Mutual Information (MI), and recursive feature elimination with cross-validation. To optimize IDS performance, the security features must be carefully selected from multi-dimensional datasets, and our hybrid FS technique is validated using the improved UNSW-NB15 and TON_IoT datasets. The technique identified 22 key characteristics in UNSW-NB15 and 8 in TON_IoT. We evaluated prediction using seven ML methods: Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), Naive Bayes (NB), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Multilayer Perceptron (MLP) classifiers. The DT, RF, NB, and MLP classifiers helped our model surpass the competition on both datasets. The investigational outcomes of our hybrid model may therefore help IDSs defend business assets from various cyber-attack vectors.
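The snippet below sketches the hybrid feature-selection idea, a mutual-information filter followed by recursive feature elimination with cross-validation, on synthetic stand-ins for the UNSW-NB15 / TON_IoT records.

```python
# A minimal sketch of a two-stage hybrid feature selection: MI filter
# first, then RFECV with a random forest; data and thresholds are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, RFECV
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=40,
                           n_informative=8, random_state=0)

mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-20:]                 # keep the 20 highest-MI features
X_mi = X[:, keep]

selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=0),
                 step=1, cv=3)
selector.fit(X_mi, y)
print("final feature count:", selector.n_features_)
```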
Ultra-high voltage (UHV) transmission lines are an important part of China's power grid and are often surrounded by a complex electromagnetic environment. The ground total electric field is considered a main electromagnetic environment indicator of UHV transmission lines and is currently employed for the reliable long-term operation of the power grid; yet its accurate prediction remains a technical challenge. In this work, we collected total electric field data from the Ningdong-Zhejiang ±800 kV UHVDC transmission project, specifically the Ling Shao line, and performed an outlier analysis of the data. We show that the Local Outlier Factor (LOF) elimination algorithm yields a small average difference and outperforms the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Isolation Forest elimination algorithms. Moreover, the Stacking algorithm achieves superior prediction accuracy compared with a variety of similar prediction algorithms, including the traditional finite element method. The low prediction error of the Stacking algorithm highlights its superior ability to accurately forecast the ground total electric field of UHVDC transmission lines.
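A minimal sketch of the LOF outlier elimination followed by a stacking regressor is shown below; the regressors and the synthetic measurements are illustrative, not the paper's tuned ensemble.

```python
# A minimal sketch of LOF-based outlier removal feeding a stacking
# regressor; inputs and the synthetic field values are placeholders.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 30, (400, 3))            # e.g., lateral distance, height, load
y = 50 / (1 + X[:, 0]) + rng.normal(0, 0.5, 400)   # synthetic field values

mask = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1
X_clean, y_clean = X[mask], y[mask]         # drop LOF-flagged outliers

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)), ("svr", SVR())],
    final_estimator=Ridge())
stack.fit(X_clean, y_clean)
print("R^2:", stack.score(X_clean, y_clean))
```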
In the digital age, non-touch communication technologies are reshaping human-device interactions and raising security concerns. A major challenge in current technology is the misinterpretation of gestures by sensors and cameras, often caused by environmental factors. This issue has spurred the need for advanced data-processing methods to achieve more accurate gesture recognition and prediction. Our study presents a novel virtual keyboard allowing character input via distinct hand gestures, focusing on two key aspects: hand gesture recognition and the character input mechanism. We developed a novel model with LSTM and fully connected layers for enhanced sequential data processing and hand gesture recognition, and integrated CNN, max-pooling, and dropout layers for improved spatial feature extraction. This architecture processes both the temporal and spatial aspects of hand gestures, using the LSTM to extract complex patterns from frame sequences for a comprehensive understanding of the input data. Our unique dataset, essential for training the model, includes 1,662 landmarks from dynamic hand gestures, 33 postures, and 468 face landmarks, all captured in real time using advanced pose estimation. The model achieved high accuracy, 98.52% in hand gesture recognition and over 97% in character input across different scenarios. Its excellent performance in real-time testing underlines its practicality and effectiveness, marking a significant advancement in human-device interaction in the digital age.
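The snippet below sketches the combined Conv1D + LSTM gesture model in Keras over sequences of 1,662 per-frame landmark values; the frame count, layer sizes, and gesture count are illustrative assumptions.

```python
# A minimal sketch of a spatial (Conv1D + pooling + dropout) then temporal
# (LSTM) model over landmark sequences; all sizes are assumptions.
from tensorflow.keras import layers, models

NUM_FRAMES, NUM_LANDMARKS, NUM_GESTURES = 30, 1662, 40

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, NUM_LANDMARKS)),
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Dropout(0.3),
    layers.LSTM(64),                       # temporal pattern extraction
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```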
基金This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number(DSR2022-RG-0102).
文摘Software Defined Network(SDN)and Network Function Virtualization(NFV)technology promote several benefits to network operators,including reduced maintenance costs,increased network operational performance,simplified network lifecycle,and policies management.Network vulnerabilities try to modify services provided by Network Function Virtualization MANagement and Orchestration(NFV MANO),and malicious attacks in different scenarios disrupt the NFV Orchestrator(NFVO)and Virtualized Infrastructure Manager(VIM)lifecycle management related to network services or individual Virtualized Network Function(VNF).This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and manages promptly and adaptively to implement and handle security functions in order to enhance the quality of experience for end users.An anomaly detector investigates these identified risks and provides secure network services.It enables virtual network security functions and identifies anomalies in Kubernetes(a cloud-based platform).For training and testing purpose of the proposed approach,an intrusion-containing dataset is used that hold multiple malicious activities like a Smurf,Neptune,Teardrop,Pod,Land,IPsweep,etc.,categorized as Probing(Prob),Denial of Service(DoS),User to Root(U2R),and Remote to User(R2L)attacks.An anomaly detector is anticipated with the capabilities of a Machine Learning(ML)technique,making use of supervised learning techniques like Logistic Regression(LR),Support Vector Machine(SVM),Random Forest(RF),Naïve Bayes(NB),and Extreme Gradient Boosting(XGBoost).The proposed framework has been evaluated by deploying the identified ML algorithm on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes.RF classifier has shown better outcomes(99.90%accuracy)than other classifiers in detecting anomalies/intrusions in the containerized environment.
文摘This paper investigates the application ofmachine learning to develop a response model to cardiovascular problems and the use of AdaBoost which incorporates an application of Outlier Detection methodologies namely;Z-Score incorporated with GreyWolf Optimization(GWO)as well as Interquartile Range(IQR)coupled with Ant Colony Optimization(ACO).Using a performance index,it is shown that when compared with the Z-Score and GWO with AdaBoost,the IQR and ACO,with AdaBoost are not very accurate(89.0%vs.86.0%)and less discriminative(Area Under the Curve(AUC)score of 93.0%vs.91.0%).The Z-Score and GWO methods also outperformed the others in terms of precision,scoring 89.0%;and the recall was also found to be satisfactory,scoring 90.0%.Thus,the paper helps to reveal various specific benefits and drawbacks associated with different outlier detection and feature selection techniques,which can be important to consider in further improving various aspects of diagnostics in cardiovascular health.Collectively,these findings can enhance the knowledge of heart disease prediction and patient treatment using enhanced and innovativemachine learning(ML)techniques.These findings when combined improve patient therapy knowledge and cardiac disease prediction through the use of cutting-edge and improved machine learning approaches.This work lays the groundwork for more precise diagnosis models by highlighting the benefits of combining multiple optimization methodologies.Future studies should focus on maximizing patient outcomes and model efficacy through research on these combinations.
基金This research was funded by the National Natural Science Foundation of China(Nos.71762010,62262019,62162025,61966013,12162012)the Hainan Provincial Natural Science Foundation of China(Nos.823RC488,623RC481,620RC603,621QN241,620RC602,121RC536)+1 种基金the Haikou Science and Technology Plan Project of China(No.2022-016)the Project supported by the Education Department of Hainan Province,No.Hnky2021-23.
文摘Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
基金The author extends the appreciation to the Deanship of Postgraduate Studies and Scientific Research atMajmaah University for funding this research work through the project number(R-2024-920).
文摘With the increasing number of connected devices in the Internet of Things(IoT)era,the number of intrusions is also increasing.An intrusion detection system(IDS)is a secondary intelligent system for monitoring,detecting and alerting against malicious activity.IDS is important in developing advanced security models.This study reviews the importance of various techniques,tools,and methods used in IoT detection and/or prevention systems.Specifically,it focuses on machine learning(ML)and deep learning(DL)techniques for IDS.This paper proposes an accurate intrusion detection model to detect traditional and new attacks on the Internet of Vehicles.To speed up the detection of recent attacks,the proposed network architecture developed at the data processing layer is incorporated with a convolutional neural network(CNN),which performs better than a support vector machine(SVM).Processing data are enhanced using the synthetic minority oversampling technique to ensure learning accuracy.The nearest class mean classifier is applied during the testing phase to identify new attacks.Experimental results using the AWID dataset,which is one of the most common open intrusion detection datasets,revealed a higher detection accuracy(94%)compared to SVM and random forest methods.
文摘What causes object detection in video to be less accurate than it is in still images?Because some video frames have degraded in appearance from fast movement,out-of-focus camera shots,and changes in posture.These reasons have made video object detection(VID)a growing area of research in recent years.Video object detection can be used for various healthcare applications,such as detecting and tracking tumors in medical imaging,monitoring the movement of patients in hospitals and long-term care facilities,and analyzing videos of surgeries to improve technique and training.Additionally,it can be used in telemedicine to help diagnose and monitor patients remotely.Existing VID techniques are based on recurrent neural networks or optical flow for feature aggregation to produce reliable features which can be used for detection.Some of those methods aggregate features on the full-sequence level or from nearby frames.To create feature maps,existing VID techniques frequently use Convolutional Neural Networks(CNNs)as the backbone network.On the other hand,Vision Transformers have outperformed CNNs in various vision tasks,including object detection in still images and image classification.We propose in this research to use Swin-Transformer,a state-of-the-art Vision Transformer,as an alternative to CNN-based backbone networks for object detection in videos.The proposed architecture enhances the accuracy of existing VID methods.The ImageNet VID and EPIC KITCHENS datasets are used to evaluate the suggested methodology.We have demonstrated that our proposed method is efficient by achieving 84.3%mean average precision(mAP)on ImageNet VID using less memory in comparison to other leading VID techniques.The source code is available on the website https://github.com/amaharek/SwinVid.
文摘With the high level of proliferation of connected mobile devices,the risk of intrusion becomes higher.Artificial Intelligence(AI)and Machine Learning(ML)algorithms started to feature in protection software and showed effective results.These algorithms are nonetheless hindered by the lack of rich datasets and compounded by the appearance of new categories of malware such that the race between attackers’malware,especially with the assistance of Artificial Intelligence tools and protection solutions makes these systems and frameworks lose effectiveness quickly.In this article,we present a framework for mobile malware detection based on a new dataset containing new categories of mobile malware.We focus on categories of malware that were not tested before by Machine Learning algorithms proven effective in malware detection.We carefully select an optimal number of features,do necessary preprocessing,and then apply Machine Learning algorithms to discover malicious code effectively.From our experiments,we have found that the Random Forest algorithm is the best-performing algorithm with such mobile malware with detection rates of around 99%.We compared our results from this work and found that they are aligned well with our previous work.We also compared our work with State-of-the-Art works of others and found that the results are very close and competitive.
文摘In recent years,container-based cloud virtualization solutions have emerged to mitigate the performance gap between non-virtualized and virtualized physical resources.However,there is a noticeable absence of techniques for predicting microservice performance in current research,which impacts cloud service users’ability to determine when to provision or de-provision microservices.Predicting microservice performance poses challenges due to overheads associated with actions such as variations in processing time caused by resource contention,which potentially leads to user confusion.In this paper,we propose,develop,and validate a probabilistic architecture named Microservice Performance Diagnosis and Prediction(MPDP).MPDP considers various factors such as response time,throughput,CPU usage,and othermetrics to dynamicallymodel interactions betweenmicroservice performance indicators for diagnosis and prediction.Using experimental data fromourmonitoring tool,stakeholders can build various networks for probabilistic analysis ofmicroservice performance diagnosis and prediction and estimate the best microservice resource combination for a given Quality of Service(QoS)level.We generated a dataset of microservices with 2726 records across four benchmarks including CPU,memory,response time,and throughput to demonstrate the efficacy of the proposed MPDP architecture.We validate MPDP and demonstrate its capability to predict microservice performance.We compared various Bayesian networks such as the Noisy-OR Network(NOR),Naive Bayes Network(NBN),and Complex Bayesian Network(CBN),achieving an overall accuracy rate of 89.98%when using CBN.
基金supported through Universiti Sains Malaysia(USM)and the Ministry of Higher Education Malaysia providing the research grant,Fundamental Research Grant Scheme(FRGS-Grant No.FRGS/1/2020/TK0/USM/02/1).
文摘The rapid adoption of Internet of Things(IoT)technologies has introduced significant security challenges across the physical,network,and application layers,particularly with the widespread use of the Message Queue Telemetry Transport(MQTT)protocol,which,while efficient in bandwidth consumption,lacks inherent security features,making it vulnerable to various cyber threats.This research addresses these challenges by presenting a secure,lightweight communication proxy that enhances the scalability and security of MQTT-based Internet of Things(IoT)networks.The proposed solution builds upon the Dang-Scheme,a mutual authentication protocol designed explicitly for resource-constrained environments and enhances it using Elliptic Curve Cryptography(ECC).This integration significantly improves device authentication,data confidentiality,and energy efficiency,achieving an 87.68%increase in data confidentiality and up to 77.04%energy savings during publish/subscribe communications in smart homes.The Middleware Broker System dynamically manages transaction keys and session IDs,offering robust defences against common cyber threats like impersonation and brute-force attacks.Penetration testing with tools such as Hydra and Nmap further validated the system’s security,demonstrating its potential to significantly improve the security and efficiency of IoT networks while underscoring the need for ongoing research to combat emerging threats.
文摘Fruit infections have an impact on both the yield and the quality of the crop.As a result,an automated recognition system for fruit leaf diseases is important.In artificial intelligence(AI)applications,especially in agriculture,deep learning shows promising disease detection and classification results.The recent AI-based techniques have a few challenges for fruit disease recognition,such as low-resolution images,small datasets for learning models,and irrelevant feature extraction.This work proposed a new fruit leaf leaf leaf disease recognition framework using deep learning features and improved pathfinder optimization.Three fruit types have been employed in this work for the validation process,such as apple,grape,and Citrus.In the first step,a noisy dataset is prepared by employing the original images to learn the designed framework better.The EfficientNet-B0 deep model is fine-tuned on the next step and trained separately on the original and noisy data.After that,features are fused using a serial concatenation approach that is later optimized in the next step using an improved Path Finder Algorithm(PFA).This algorithm aims to select the best features based on the fitness score and ignore redundant information.The selected features are finally classified using machine learning classifiers such as Medium Neural Network,Wide Neural Network,and Support Vector Machine.The experimental process was conducted on each fruit dataset separately and obtained an accuracy of 100%,99.7%,99.7%,and 93.4%for apple,grape,Citrus fruit,and citrus plant leaves,respectively.A detailed analysis is conducted and also compared with the recent techniques,and the proposed framework shows improved accuracy.
文摘The number of blogs and other forms of opinionated online content has increased dramatically in recent years.Many fields,including academia and national security,place an emphasis on automated political article orientation detection.Political articles(especially in the Arab world)are different from other articles due to their subjectivity,in which the author’s beliefs and political affiliation might have a significant influence on a political article.With categories representing the main political ideologies,this problem may be thought of as a subset of the text categorization(classification).In general,the performance of machine learning models for text classification is sensitive to hyperparameter settings.Furthermore,the feature vector used to represent a document must capture,to some extent,the complex semantics of natural language.To this end,this paper presents an intelligent system to detect political Arabic article orientation that adapts the categorical boosting(CatBoost)method combined with a multi-level feature concept.Extracting features at multiple levels can enhance the model’s ability to discriminate between different classes or patterns.Each level may capture different aspects of the input data,contributing to a more comprehensive representation.CatBoost,a robust and efficient gradient-boosting algorithm,is utilized to effectively learn and predict the complex relationships between these features and the political orientation labels associated with the articles.A dataset of political Arabic texts collected from diverse sources,including postings and articles,is used to assess the suggested technique.Conservative,reform,and revolutionary are the three subcategories of these opinions.The results of this study demonstrate that compared to other frequently used machine learning models for text classification,the CatBoost method using multi-level features performs better with an accuracy of 98.14%.
文摘Field training is the backbone of the teacher-preparation process.Its importance stems from the goals that colleges of education aim to achieve,which include bridging the gap between theory and practice and aligning with contemporary educational trends during teacher training.Currently,trainee students attendance in field training is recordedmanually through signatures on attendance sheets.However,thismethod is prone to impersonation,time wastage,and misplacement.Additionally,traditional methods of evaluating trainee students are often susceptible to human errors during the evaluation and scoring processes.Field training also lacks modern technology that the supervisor can use in case of his absence from school to monitor the trainee students’implementation of the required activities and tasks.These shortcomings do not meet the needs of the digital era that universities are currently experiencing.As a result,this paper presents a smart management system for field training based on Internet of Things(IoT)and mobile technology.It includes three subsystems:attendance,monitoring,and evaluation.The attendance subsystem uses an R307 fingerprint sensor to record trainee students’attendance.The Arduino Nano microcontroller transmits attendance data to the proposed Android application via an ESP-12F Wi-Fi module,which then forwards it to the Firebase database for storage.The monitoring subsystem utilizes Global Positioning System(GPS)technology to continually track trainee students’locations,ensuring they remain at the school during training.It also enables remote communication between trainee students and supervisors via audio,video,or text by integrating video call and chat technologies.The evaluation subsystem is based on three items:an online exam,attendance,and implementation of required activities and tasks.Experimental results have demonstrated the accuracy and efficiency of the proposed management system in recording attendance,as well as in monitoring and evaluating trainee students during field traiing.
文摘This article presents an exhaustive comparative investigation into the accuracy of gender identification across diverse geographical regions,employing a deep learning classification algorithm for speech signal analysis.In this study,speech samples are categorized for both training and testing purposes based on their geographical origin.Category 1 comprises speech samples from speakers outside of India,whereas Category 2 comprises live-recorded speech samples from Indian speakers.Testing speech samples are likewise classified into four distinct sets,taking into consideration both geographical origin and the language spoken by the speakers.Significantly,the results indicate a noticeable difference in gender identification accuracy among speakers from different geographical areas.Indian speakers,utilizing 52 Hindi and 26 English phonemes in their speech,demonstrate a notably higher gender identification accuracy of 85.75%compared to those speakers who predominantly use 26 English phonemes in their conversations when the system is trained using speech samples from Indian speakers.The gender identification accuracy of the proposed model reaches 83.20%when the system is trained using speech samples from speakers outside of India.In the analysis of speech signals,Mel Frequency Cepstral Coefficients(MFCCs)serve as relevant features for the speech data.The deep learning classification algorithm utilized in this research is based on a Bidirectional Long Short-Term Memory(BiLSTM)architecture within a Recurrent Neural Network(RNN)model.
文摘Users of social networks can readily express their thoughts on websites like Twitter(now X),Facebook,and Instagram.The volume of textual data flowing from users has greatly increased with the advent of social media in comparison to traditional media.For instance,using natural language processing(NLP)methods,social media can be leveraged to obtain crucial information on the present situation during disasters.In this work,tweets on the Uttarakhand flash flood are analyzed using a hybrid NLP model.This investigation employed sentiment analysis(SA)to determine the people’s expressed negative attitudes regarding the disaster.We apply a machine learning algorithm and evaluate the performance using the standard metrics,namely root mean square error(RMSE),mean absolute error(MAE),and mean absolute percentage error(MAPE).Our random forest(RF)classifier outperforms comparable works with an accuracy of 98.10%.In order to gain a competitive edge,the study shows how Twitter(now X)data and machine learning(ML)techniques can analyze public discourse and sentiments regarding disasters.It does this by comparing positive and negative comments in order to develop strategies to deal with public sentiments on disasters.
基金supported by National Science and Technology Council,Taiwan,NSTC 112-2221-E-024-004.
文摘Layout synthesis in quantum computing is crucial due to the physical constraints of quantum devices where quantum bits(qubits)can only interact effectively with their nearest neighbors.This constraint severely impacts the design and efficiency of quantum algorithms,as arranging qubits optimally can significantly reduce circuit depth and improve computational performance.To tackle the layout synthesis challenge,we propose an algorithm based on integer linear programming(ILP).ILP is well-suited for this problem as it can formulate the optimization objective of minimizing circuit depth while adhering to the nearest neighbor interaction constraint.The algorithm aims to generate layouts that maximize qubit connectivity within the given physical constraints of the quantum device.For experimental validation,we outline a clear and feasible setup using real quantum devices.This includes specifying the type and configuration of the quantum hardware used,such as the number of qubits,connectivity constraints,and any technological limitations.The proposed algorithm is implemented on these devices to demonstrate its effectiveness in producing depth-optimal quantum circuit layouts.By integrating these elements,our research aims to provide practical solutions to enhance the efficiency and scalability of quantum computing systems,paving the way for advancements in quantum algorithm design and implementation.
Funding: Partly supported by the Technology Development Program of MSS (No. S3033853) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A4A1031509).
Abstract: Multi-label learning is a generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously. The main objective of this work is to create a novel framework for learning and classifying imbalanced multi-label data. This work proposes a two-phase framework. The imbalanced distribution of the multi-label dataset is addressed through the proposed Borderline MLSMOTE resampling method in Phase 1. Later, an adaptive weighted l2,1-norm regularized (Elastic-net) multi-label logistic regression is used to predict unseen samples in Phase 2. The proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels, in contrast to conventional MLSMOTE. The minority labels in these samples are called difficult minority labels and are more prone to penalizing classification performance. The concurrent measure is considered borderline, and the labels associated with such samples are regarded as borderline labels in the decision boundary. In Phase 2, a novel adaptive l2,1-norm regularized weighted multi-label logistic regression is used to handle the balanced data with differently weighted synthetic samples. Experimentation on various benchmark datasets shows that the proposed method outperforms existing conventional state-of-the-art multi-label methods and delivers strong predictive performance.
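As a rough illustration of the Phase 2 classifier, the sketch below fits a one-vs-rest elastic-net multi-label logistic regression in scikit-learn. This plain model is a simplified stand-in for the paper's adaptive weighted l2,1-norm formulation; the hyperparameters and toy data are assumptions.

```python
# Hedged sketch: elastic-net multi-label logistic regression via one-vs-rest.
# A simplified stand-in for the adaptive weighted l2,1-norm model; the
# regularization settings and synthetic data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                        # toy features
Y = (rng.random((200, 3)) < 0.3).astype(int)          # toy multi-label targets

base = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=1.0, max_iter=5000)
clf = OneVsRestClassifier(base).fit(X, Y)
print(clf.predict(X[:5]))                             # one binary label per class
```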
Abstract: Cyber defense is becoming a major issue for every organization seeking to keep business continuity intact. This paper explores the effectiveness of a meta-heuristic optimization algorithm, the Artificial Bee Colony (ABC) algorithm, as a nature-inspired cyber security mechanism for achieving adaptive defense. It experiments on Denial-of-Service (DoS) attack scenarios, which involve limiting the traffic flow for each node. Businesses today have adapted their service distribution models to include the use of the Internet, allowing them to effectively manage and interact with their customer data. This shift has created an increased reliance on online services to store vast amounts of confidential customer data, meaning any disruption or outage of these services could be disastrous for the business, leaving it unable to serve its customers. Adversaries can exploit such an event to gain unauthorized access to confidential customer data. The proposed algorithm uses an adaptive defense approach to continuously select nodes that exhibit characteristics of a probable malicious entity. For any change in network parameters, a cluster of nodes is selected from the prepared solution set as probable malicious nodes, and the traffic rate and packet delivery ratio are managed with respect to the properties of normal nodes, providing a disaster recovery plan for the affected businesses.
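The sketch below is a minimal generic ABC optimization loop over candidate per-node traffic thresholds: employed and onlooker bees perturb candidate solutions, and scouts replace exhausted ones. The fitness function is a placeholder; the paper's actual node-scoring and traffic-management logic is not reproduced here.

```python
# Hedged sketch: a minimal Artificial Bee Colony (ABC) loop over per-node
# traffic thresholds. The fitness function is a placeholder assumption.
import numpy as np

def fitness(thresholds, observed_rates):
    """Placeholder: penalize thresholds far from observed normal traffic."""
    return float(np.sum((thresholds - observed_rates) ** 2))

def abc_optimize(observed_rates, n_bees=20, limit=10, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(observed_rates)
    hi = 2 * observed_rates.max()
    food = rng.uniform(0, hi, size=(n_bees, dim))     # candidate solutions
    trials = np.zeros(n_bees)
    for _ in range(iters):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                idx = range(n_bees)
            else:   # onlookers pick food sources proportional to quality
                fit = np.array([1 / (1 + fitness(f, observed_rates)) for f in food])
                idx = rng.choice(n_bees, size=n_bees, p=fit / fit.sum())
            for i in idx:
                k = rng.integers(n_bees)              # random partner source
                j = rng.integers(dim)                 # random dimension to perturb
                cand = food[i].copy()
                cand[j] += rng.uniform(-1, 1) * (food[i][j] - food[k][j])
                if fitness(cand, observed_rates) < fitness(food[i], observed_rates):
                    food[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
        for i in range(n_bees):                       # scouts replace stale sources
            if trials[i] > limit:
                food[i] = rng.uniform(0, hi, size=dim)
                trials[i] = 0
    return min(food, key=lambda f: fitness(f, observed_rates))

print(abc_optimize(np.array([10.0, 12.0, 9.0, 30.0])))
```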
Abstract: Path-based clustering algorithms typically generate clusters by optimizing a benchmark function. Most optimization methods in clustering algorithms yield solutions that are only close to the global optimum. This study achieves the global optimum of the criterion function in a shorter time using the minimax distance, a Maximum Spanning Tree (MST), and meta-heuristic algorithms, including the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The Fast Path-based Clustering (FPC) algorithm proposed in this paper can find cluster centers correctly in most datasets and quickly perform clustering operations. FPC does so using the MST, the minimax distance, and a new hybrid meta-heuristic algorithm in a few rounds of iterations. The algorithm can reach the global optimum, and its main clustering process has a computational complexity of O(k²×n). However, due to the complexity of the minimax distance computation, the total computational complexity is O(n²). Experimental results of FPC on synthetic datasets with arbitrary shapes demonstrate that the algorithm is resistant to noise and outliers and can correctly identify clusters of varying sizes and numbers. In addition, FPC requires the number of clusters as its only parameter to perform the clustering process. A comparative analysis of FPC and other clustering algorithms in this domain indicates that FPC exhibits superior speed, stability, and performance.
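One building block of FPC is the minimax (path-based) distance: for two points, it is the smallest achievable value of the largest edge over all connecting paths, and it can be read off a spanning tree. The sketch below uses SciPy's minimum spanning tree, the standard construction for minimax path distances, as an assumption; the traversal is an unoptimized O(n³) version for clarity, not the paper's O(n²) procedure.

```python
# Hedged sketch: pairwise minimax (path-based) distances from a spanning tree.
# Illustrates one building block of FPC, not the full algorithm; the use of
# SciPy's minimum_spanning_tree and the toy data are assumptions.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def minimax_distances(X):
    D = squareform(pdist(X))                      # dense pairwise distances
    mst = minimum_spanning_tree(D).toarray()
    mst = np.maximum(mst, mst.T)                  # symmetrize tree adjacency
    n = len(X)
    out = np.zeros((n, n))
    for s in range(n):                            # DFS per source, tracking max edge
        stack, seen = [(s, 0.0)], {s}
        while stack:
            u, mx = stack.pop()
            out[s, u] = mx
            for v in range(n):
                if mst[u, v] > 0 and v not in seen:
                    seen.add(v)
                    stack.append((v, max(mx, mst[u, v])))
    return out

X = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
print(minimax_distances(X))
```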
Abstract: More businesses are deploying powerful Intrusion Detection Systems (IDS) to secure their data and physical assets. Improved cyber-attack detection and prevention in these systems requires machine learning (ML) approaches. This paper examines a cyber-attack prediction system combining feature selection (FS) and ML. Our technique is founded on Correlation Analysis (CA), Mutual Information (MI), and recursive feature elimination with cross-validation. To optimize IDS performance, security features must be carefully selected from high-dimensional datasets; we validate our hybrid FS technique using the improved UNSW-NB15 and TON_IoT datasets. Our technique identified 22 key features in UNSW-NB15 and 8 in TON_IoT. We evaluated prediction using seven ML methods: Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), Naive Bayes (NB), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Multilayer Perceptron (MLP) classifiers. The DT, RF, NB, and MLP classifiers helped our model surpass the competition on both datasets. Therefore, the experimental outcomes of our hybrid model may help IDSs defend business assets from various cyber-attack vectors.
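The sketch below strings the three named selection steps together on toy data: a correlation filter, a mutual-information ranking, and recursive feature elimination with cross-validation (RFECV), followed by a random forest. The thresholds and synthetic data are assumptions, not values from the paper.

```python
# Hedged sketch: hybrid feature selection = correlation filter + mutual
# information ranking + RFECV, then a random forest. Thresholds are assumed.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=42)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(30)])

corr = X.corr().abs()                   # 1) drop one of each highly correlated pair
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in X.columns if (upper[c] > 0.9).any()])

mi = mutual_info_classif(X, y, random_state=42)   # 2) keep top features by MI
X = X[X.columns[np.argsort(mi)[::-1][:15]]]

selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=42),
                 step=1, cv=3)                    # 3) recursive elimination with CV
selector.fit(X, y)
print("selected:", list(X.columns[selector.support_]))
```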
Funding: Funded by a science and technology project of the State Grid Corporation of China, "Comparative Analysis of Long-Term Measurement and Prediction of the Ground Synthetic Electric Field of ±800 kV DC Transmission Lines" (GYW11201907738). Paulo R. F. Rocha acknowledges support and funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Program (Grant Agreement No. 947897).
Abstract: Ultra-high voltage (UHV) transmission lines are an important part of China's power grid and are often surrounded by a complex electromagnetic environment. The ground total electric field is considered a main electromagnetic environment indicator of UHV transmission lines and is currently employed for reliable long-term operation of the power grid. Yet accurate prediction of the ground total electric field remains a technical challenge. In this work, we collected total electric field data from the Ningdong-Zhejiang ±800 kV UHVDC transmission project, specifically the Ling-Shao line, and performed an outlier analysis of the total electric field data. We show that the Local Outlier Factor (LOF) elimination algorithm yields a small average difference and surpasses the performance of the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Isolation Forest elimination algorithms. Moreover, the Stacking algorithm was found to have superior prediction accuracy compared with a variety of similar prediction algorithms, including the traditional finite element method. The low prediction error of the Stacking algorithm highlights its superior ability to accurately forecast the ground total electric field of UHVDC transmission lines.
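A minimal sketch of the two-stage pipeline, LOF-based outlier removal followed by a stacking regressor, is given below using scikit-learn. The base learners, LOF neighborhood size, and synthetic data are assumptions, not the paper's configuration.

```python
# Hedged sketch: LOF outlier elimination, then a stacking regressor.
# Base learners, n_neighbors, and the toy data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                       # toy measurement features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=300)

mask = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1   # 1 = inlier
X_clean, y_clean = X[mask], y[mask]

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge())
stack.fit(X_clean, y_clean)
print("R^2 on cleaned data:", stack.score(X_clean, y_clean))
```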
Abstract: In the digital age, non-touch communication technologies are reshaping human-device interactions and raising security concerns. A major challenge in current technology is the misinterpretation of gestures by sensors and cameras, often caused by environmental factors. This issue has spurred the need for advanced data processing methods to achieve more accurate gesture recognition and prediction. Our study presents a novel virtual keyboard allowing character input via distinct hand gestures, focusing on two key aspects: hand gesture recognition and character input mechanisms. We developed a novel model with LSTM and fully connected layers for enhanced sequential data processing and hand gesture recognition. We also integrated CNN, max-pooling, and dropout layers for improved spatial feature extraction. This model architecture processes both the temporal and spatial aspects of hand gestures, using LSTM to extract complex patterns from frame sequences for a comprehensive understanding of the input data. Our unique dataset, essential for training the model, includes 1,662 landmarks from dynamic hand gestures, 33 postures, and 468 face landmarks, all captured in real time using advanced pose estimation. The model demonstrated high accuracy, achieving 98.52% in hand gesture recognition and over 97% in character input across different scenarios. Its excellent performance in real-time testing underlines its practicality and effectiveness, marking a significant advancement in human-device interaction in the digital age.
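To ground the described architecture, the Keras sketch below stacks Conv1D, max-pooling, and dropout layers for spatial feature extraction over per-frame landmark vectors, followed by an LSTM for temporal modeling. The input shape (30 frames of 1,662 landmark values), layer sizes, and number of gesture classes are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: CNN + LSTM classifier over landmark frame sequences.
# Input shape, layer sizes, and n_classes are illustrative assumptions.
import tensorflow as tf

def build_gesture_model(frames=30, features=1662, n_classes=10):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(frames, features)),
        # Spatial feature extraction over each frame's landmark vector
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Dropout(0.3),
        # Temporal modeling across the frame sequence
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_gesture_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```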