DeepSeek, a Chinese open-source artificial intelligence (AI) model, has gained considerable attention due to its economical training and efficient inference. DeepSeek, trained with large-scale reinforcement learning without supervised fine-tuning as a preliminary step, demonstrates remarkable reasoning capabilities across a wide range of tasks. DeepSeek is a prominent AI-driven chatbot that assists individuals in learning and enhances responses by generating insightful solutions to inquiries. Users hold divergent viewpoints regarding advanced models like DeepSeek, posting about both their merits and shortcomings across several social media platforms. This research presents a new framework for predicting public sentiment to evaluate perceptions of DeepSeek. To transform the unstructured data into a suitable form, we first collect DeepSeek-related tweets from Twitter and then apply various preprocessing methods. Subsequently, we annotate the tweets using the Valence Aware Dictionary and sEntiment Reasoner (VADER) methodology and the lexicon-driven TextBlob. Next, we classify the sentiments obtained from the cleaned data using the proposed hybrid model, which combines long short-term memory (LSTM) and bidirectional gated recurrent units (BiGRU); to strengthen it, we include multi-head attention, regularization, and dropout units to enhance performance. Topic modeling employing K-Means clustering and Latent Dirichlet Allocation (LDA) was used to analyze public behavior concerning DeepSeek. The results show that 82.5% of the opinions are positive, 15.2% negative, and 2.3% neutral using TextBlob, and 82.8% positive, 16.1% negative, and 1.2% neutral using VADER. The slight difference between the results indicates that both analyses agree in their overall perceptions while possibly treating language peculiarities differently. The results indicate that the proposed model surpasses previous state-of-the-art approaches.
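The lexicon-based annotation step can be sketched as follows. This is an illustrative simplification, not the actual VADER or TextBlob implementation: the tiny LEXICON and the score scaling are assumptions of this sketch, though the ±0.05 cutoffs mirror VADER's conventional compound-score thresholds.

```python
# Toy lexicon-based sentiment labeler (illustrative stand-in for VADER/TextBlob).
# LEXICON entries and the 4.0 scaling divisor are assumptions for this sketch.
LEXICON = {"efficient": 1.5, "remarkable": 2.0, "insightful": 1.8,
           "slow": -1.2, "poor": -2.0, "confusing": -1.5}

def polarity(tweet: str) -> float:
    """Average lexicon score of the tweet's words, clamped to [-1, 1]."""
    scores = [LEXICON[w] for w in tweet.lower().split() if w in LEXICON]
    if not scores:
        return 0.0
    return max(-1.0, min(1.0, sum(scores) / (4.0 * len(scores))))

def label(tweet: str) -> str:
    """Map the polarity score to a sentiment class, VADER-style thresholds."""
    p = polarity(tweet)
    if p >= 0.05:
        return "positive"
    if p <= -0.05:
        return "negative"
    return "neutral"
```

Applied to each preprocessed tweet, this yields the three-way labels that the hybrid LSTM-BiGRU classifier is then trained on.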
In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating the risks of unauthorized access and data tampering. Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
Early and accurate detection of Heart Disease (HD) is critical for improving patient outcomes, as HD remains a leading cause of mortality worldwide. Timely and precise prediction can aid in preventive interventions, reducing the fatal risks associated with misdiagnosis. Machine learning (ML) models have gained significant attention in healthcare for their ability to assist professionals in diagnosing diseases with high accuracy. This study utilizes 918 instances from publicly available UCI and Kaggle datasets to develop and compare the performance of various ML models, including Adaptive Boosting (AB), Naïve Bayes (NB), Extreme Gradient Boosting (XGB), Bagging, and Logistic Regression (LR). Before model training, data preprocessing techniques such as handling missing values, outlier detection using Isolation Forest, and feature scaling were applied to improve model performance. The evaluation was conducted using performance metrics including accuracy, precision, recall, and F1-score. Among the tested models, XGB demonstrated the highest predictive performance, achieving an accuracy of 94.34% and an F1-score of 95.19%, surpassing other models and previous studies in HD prediction. LR followed closely with an accuracy of 93.08% and an F1-score of 93.99%, indicating competitive performance. In contrast, NB exhibited the lowest performance, with an accuracy of 88.05% and an F1-score of 89.02%, highlighting its limitations in handling complex patterns within the dataset. Although the ML models show superior performance compared to previous studies, some limitations exist, including the use of publicly available datasets, which may not fully capture real-world clinical variations, and the lack of feature selection techniques, which could impact model interpretability and robustness. Despite these limitations, the findings highlight the potential of ML-based frameworks for accurate and efficient HD detection, demonstrating their value as decision-support tools in clinical settings.
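The four evaluation metrics reported above can be computed from binary predictions as in this minimal sketch (binary labels assumed; a real study would typically use a library such as scikit-learn):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary prediction task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```

For example, with true labels [1, 1, 1, 0, 0] and predictions [1, 1, 0, 0, 1], accuracy is 0.6 and precision, recall, and F1 are each 2/3.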
The Internet of Things (IoT) is a smart infrastructure in which devices share captured data with the respective server or edge modules. However, secure and reliable communication is among the challenging tasks in these networks, as shared channels are used to transmit packets. In this paper, a decision tree is integrated with other metrics to form a secure distributed communication strategy for the IoT. Initially, every device works collaboratively to form a distributed network. In this model, if a device is deployed outside the coverage area of the nearest server, it communicates indirectly through the neighboring devices. For this purpose, every device collects data from the respective neighboring devices, such as hop count, average packet transmission delay, criticality factor, link reliability, and RSSI value. These parameters are used to find an optimal route from the source to the destination. Secondly, the proposed approach enables devices to learn from the environment and adjust the optimal route-finding formula accordingly. Moreover, these devices and server modules must ensure that every packet is transmitted securely, which is possible only if it is encrypted with an encryption algorithm. For this purpose, a decision tree-enabled device-to-server authentication algorithm is presented, in which every device and server must take part in the offline phase. Simulation results have verified that the proposed distributed communication approach has the potential to ensure the integrity and confidentiality of data during transmission. Moreover, the proposed approach outperforms the existing approaches in terms of communication cost, processing overhead, end-to-end delay, packet loss ratio, and throughput. Finally, the proposed approach can be adopted in different networking infrastructures.
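The neighbor metrics listed above can be combined into a single route-selection score, as in the following sketch. The weight values and normalization constants are illustrative assumptions, not the paper's learned formula; hops and delay add cost while link reliability and (normalized) RSSI reduce it.

```python
def route_cost(metrics, weights=None):
    """Composite cost of a candidate route from aggregated neighbor metrics.
    Lower is better. Weights and normalizations are assumptions of this sketch."""
    w = weights or {"hops": 1.0, "delay": 0.5, "reliability": 2.0, "rssi": 1.0}
    return (w["hops"] * metrics["hops"]
            + w["delay"] * metrics["delay_ms"] / 100.0      # delay in ms, scaled
            - w["reliability"] * metrics["reliability"]     # in [0, 1]
            - w["rssi"] * (metrics["rssi_dbm"] + 100) / 100.0)  # dBm mapped near [0, 1]

def best_route(candidates):
    """Pick the candidate route with the lowest composite cost."""
    return min(candidates, key=lambda r: route_cost(r["metrics"]))["id"]
```

In the learning step described above, a device would adjust these weights over time based on observed delivery outcomes rather than keep them fixed.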
Smart city-aspiring urban areas should have a number of necessary elements in place to achieve the intended objective. Precise control and management of traffic conditions, increased safety and surveillance, and enhanced incident avoidance and management should be top priorities in smart city management. At the same time, Vehicle License Plate Number Recognition (VLPNR) has become a hot research topic, owing to several real-time applications like automated toll fee processing, traffic law enforcement, private space access control, and road traffic surveillance. Automated VLPNR is a computer vision-based technique employed in the recognition of automobiles based on their number plates. The current research paper presents an effective Deep Learning (DL)-based VLPNR model, called DL-VLPNR, to identify and recognize the alphanumeric characters present in a license plate. The proposed model involves two main stages, namely license plate detection and Tesseract-based character recognition. The detection of alphanumeric characters in the license plate takes place with the help of Fast R-CNN with the Inception V2 model. Then, the characters in the detected number plate are extracted using the Tesseract Optical Character Recognition (OCR) model. The performance of the DL-VLPNR model was tested using two benchmark databases, and the experimental outcome established the superior performance of the model compared to other methods.
Big data streams have become ubiquitous in recent years, thanks to the rapid generation of massive volumes of data by different applications. It is challenging to apply existing data mining tools and techniques directly to these big data streams. At the same time, streaming data from several applications gives rise to two major problems: class imbalance and concept drift. The current research paper presents a new Multi-Objective Metaheuristic Optimization-based Big Data Analytics with Concept Drift Detection (MOMBD-CDD) method for high-dimensional streaming data. The presented MOMBD-CDD model has different operational stages such as pre-processing, CDD, and classification. The MOMBD-CDD model overcomes the class imbalance problem with the Synthetic Minority Over-sampling Technique (SMOTE). In order to determine the oversampling rates and neighboring point values of SMOTE, the Glowworm Swarm Optimization (GSO) algorithm is employed. Besides, the Statistical Test of Equal Proportions (STEPD), a CDD technique, is also utilized. Finally, a Bidirectional Long Short-Term Memory (Bi-LSTM) model is applied for classification. In order to improve classification performance and compute the optimum parameters for the Bi-LSTM model, a GSO-based hyperparameter tuning process is carried out. The performance of the presented model was evaluated using high-dimensional benchmark streaming datasets, namely the intrusion detection (NSL KDDCup) dataset and the ECUE spam dataset. An extensive experimental validation process confirmed the effective outcome of the MOMBD-CDD model. The proposed model attained high accuracies of 97.45% and 94.23% on the applied KDDCup99 and ECUE spam datasets, respectively.
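The SMOTE step can be sketched as follows: a simplified version with a fixed neighbor count k, whereas the paper tunes the oversampling rate and neighbor parameters with GSO (not reproduced here). Each synthetic sample is an interpolation between a minority sample and one of its nearest minority neighbors.

```python
import random

def smote(minority, n_synthetic, k=2, seed=0):
    """Generate synthetic minority-class samples by interpolating between a
    random minority sample and one of its k nearest neighbours (simplified SMOTE)."""
    rng = random.Random(seed)

    def dist(a, b):  # squared Euclidean distance is enough for ranking
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Every generated point lies on a segment between two existing minority samples, so the oversampled class stays inside its original feature region.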
Sleep plays a vital role in the optimum working of the brain and the body. Numerous people suffer from sleep-oriented illnesses like apnea, insomnia, etc. Sleep stage classification is a primary process in the quantitative examination of polysomnographic recordings. Sleep stage scoring is mainly based on experts' knowledge, which is laborious and time-consuming. Hence, it is essential to design an automated sleep stage classification model using machine learning (ML) and deep learning (DL) approaches. In this view, this study focuses on the design of a Competitive Multi-verse Optimization with Deep Learning Based Sleep Stage Classification (CMVODL-SSC) model using Electroencephalogram (EEG) signals. The proposed CMVODL-SSC model intends to effectively categorize different sleep stages from EEG signals. Primarily, data pre-processing is performed to convert the actual data into a useful format. Besides, a cascaded long short-term memory (CLSTM) model is employed to perform the classification process. At last, the CMVO algorithm is utilized for optimally tuning the hyperparameters involved in the CLSTM model. In order to report the enhancements of the CMVODL-SSC model, a wide range of simulations was carried out, and the results confirmed the better performance of the CMVODL-SSC model with an average accuracy of 96.90%.
The data mining process involves a number of steps, from data collection to visualization, to identify useful data from massive data sets. At the same time, the recent advances in machine learning (ML) and deep learning (DL) models can be utilized for effectual rainfall prediction. With this motivation, this article develops a novel comprehensive oppositional moth flame optimization with deep learning for rainfall prediction (COMFO-DLRP) technique. The proposed COMFO-DLRP model mainly intends to predict the rainfall and thereby determine the environmental changes. Primarily, data pre-processing and correlation matrix (CM) based feature selection processes are carried out. In addition, a deep belief network (DBN) model is applied for the effective prediction of rainfall data. Moreover, the COMFO algorithm was derived by integrating the concepts of comprehensive oppositional based learning (COBL) with the traditional MFO algorithm. Finally, the COMFO algorithm is employed for the optimal hyperparameter selection of the DBN model. For demonstrating the improved outcomes of the COMFO-DLRP approach, a sequence of simulations was carried out, and the outcomes were assessed under distinct measures. The simulation outcomes highlighted the enhanced results of the COMFO-DLRP method over the other techniques.
The biomedical data classification process has received significant attention in recent times due to a massive increase in the generation of healthcare data from various sources. The developments of artificial intelligence (AI) and machine learning (ML) models assist in the effectual design of medical data classification models. Therefore, this article concentrates on the development of an optimal Stacked Long Short-Term Memory Sequence-to-Sequence Autoencoder (OSAE-LSTM) model for biomedical data classification. The presented OSAE-LSTM model intends to classify the biomedical data for the existence of diseases. Primarily, the OSAE-LSTM model involves min-max normalization based pre-processing to scale the data into a uniform format. Following this, the SAE-LSTM model is utilized for the detection and classification of diseases in biomedical data. At last, the manta ray foraging optimization (MRFO) algorithm is employed for the hyperparameter optimization process. The utilization of the MRFO algorithm assists in the optimal selection of the hyperparameters involved in the SAE-LSTM model. The simulation analysis of the OSAE-LSTM model was conducted on a set of benchmark medical datasets, and the results reported improvements of the OSAE-LSTM model over the other approaches along several dimensions.
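The min-max normalization used in the pre-processing stage maps each feature column to [0, 1]; a minimal sketch:

```python
def min_max_scale(rows):
    """Scale each feature column of a dataset to [0, 1] (min-max normalization).
    `rows` is a list of equal-length numeric tuples; constant columns map to 0.0."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [tuple((v - l) / (h - l) if h > l else 0.0
                  for v, l, h in zip(row, lo, hi)) for row in rows]
```

Scaling all features to a common range prevents large-valued columns from dominating the autoencoder's reconstruction loss during training.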
Artificial Intelligence (AI) encompasses various domains such as Machine Learning (ML), Deep Learning (DL), and other cognitive technologies, which have been widely applied in the healthcare sector. AI models are utilized in the healthcare sector, in which machines investigate and make decisions based on the prediction and classification of input data. With this motivation, the current study involves the design of a Metaheuristic Optimization with Kernel Extreme Learning Machine for COVID-19 Prediction Model on an Epidemiology Dataset, named the MOKELM-CPED technique. The primary aim of the presented MOKELM-CPED model is to accomplish effectual COVID-19 classification outcomes using an epidemiology dataset. In the proposed MOKELM-CPED model, the data first undergo pre-processing to transform the medical data into a useful format. Following this, the data classification process is performed with the Kernel Extreme Learning Machine (KELM) model. Finally, the Symbiotic Organism Search (SOS) optimization algorithm is utilized to fine-tune the KELM parameters, which consequently helps in achieving high detection efficiency. In order to investigate the improved classifier outcomes of the MOKELM-CPED model in an effectual manner, a comprehensive experimental analysis was conducted and the results were inspected under diverse aspects. The outcomes of the experiments infer the enhanced performance of the proposed method over recent approaches under distinct measures.
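Metaheuristic parameter tuning of the kind described here (SOS, and GSO/CMVO/MRFO in the related abstracts above) can be approximated, for illustration only, by a plain random search over the hyperparameter space. This is a stand-in for the idea of score-driven parameter search, not the SOS algorithm itself; the search space and trial count are assumptions.

```python
import random

def random_search(score_fn, space, n_trials=30, seed=0):
    """Sample hyperparameters uniformly from `space` (name -> (lo, hi)) and
    keep the best-scoring set. Illustrative stand-in for metaheuristic tuning."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        s = score_fn(params)  # e.g. validation accuracy of a KELM fit
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```

Population-based metaheuristics such as SOS differ in that candidate solutions interact (mutualism, commensalism, parasitism phases) instead of being sampled independently, which typically explores the space more efficiently than pure random draws.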
Watermarking of digital images is required in diversified applications ranging from medical imaging to commercial images used over the web. Usually, the copyright information is embossed over the image in the form of a logo at the corner or diagonal text in the background. However, this form of visible watermarking is not suitable for a large class of applications. In all such cases, a hidden watermark is embedded inside the original image as proof of ownership. A large number of techniques and algorithms have been proposed by researchers for invisible watermarking. In this paper, we focus on issues that are critical for security aspects in the most common domains like digital photography copyrighting, online image stores, etc. The requirements of this class of applications include robustness (resistance to attack), blindness (direct extraction without the original image), high embedding capacity, high Peak Signal-to-Noise Ratio (PSNR), and high Structural Similarity Index Measure (SSIM). Most of these requirements are conflicting, which means that an attempt to maximize one requirement harms the others. In this paper, a blind image watermarking scheme is proposed using the Lifting Wavelet Transform (LWT) as the baseline. Using this technique, custom binary watermarks in the form of a binary string can be embedded. Hu's invariant moment coefficients are used as a key to extract the watermark. A stochastic variant of the Firefly algorithm (FA) is used for the optimization of the technique. Under a prespecified size of embedding data, high PSNR and SSIM are obtained using the stochastic gradient variant of the Firefly technique. The simulation is done using the Matrix Laboratory (MATLAB) tool, and it is shown that the proposed technique outperforms the benchmark watermarking techniques considering PSNR and SSIM as quality metrics.
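PSNR, one of the two quality metrics used above, compares the watermarked image against the original; a minimal sketch (flat pixel lists assumed for brevity, 8-bit peak value by default):

```python
import math

def psnr(original, watermarked, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images given as
    flat, equal-length lists of pixel values. Higher means less distortion."""
    mse = sum((o - w) ** 2 for o, w in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A uniform one-level change on an 8-bit image (MSE = 1) gives about 48.13 dB, which is why invisible watermarking schemes aim for PSNR well above 40 dB.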
Emotion Recognition in Conversations (ERC) is fundamental in creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. We propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT) to address this. The proposed model uses the Multimodal Emotion Lines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving the proposed model's accuracy. Transformer encoders train the deep neural network (DNN) model, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information. This integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating enhanced samples from the original dataset. By balancing the dataset across all emotion categories, we can lessen the adverse effects of data imbalance on the accuracy of the proposed model. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1 score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model addresses the GBN models' weaknesses in obtaining contextual data for ERC. The TA-MERT model recognizes human emotions more accurately by employing text augmentation and transformer-based encoding. The balanced dataset and the additional training samples also enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
Mobile clouds are the most common medium for aggregating, storing, and analyzing data from the Medical Internet of Things (MIoT). They are employed to monitor a patient's essential health signs for earlier disease diagnosis and prediction. Among the various diseases, skin cancer is one of the most widespread cancers, and early detection enhances the survival rate. In recent years, many skin cancer classification systems using machine and deep learning models have been developed for classifying skin tumors, including malignant melanoma (MM) and other skin cancers. However, accurate cancer detection with minimum time consumption has not been achieved. In order to address these existing problems, a novel Multidimensional Bregman Divergencive Feature Scaling Based Cophenetic Piecewise Regression Recurrent Deep Learning Classification (MBDFS-CPRRDLC) technique is introduced for detecting cancer at an earlier stage. The MBDFS-CPRRDLC technique performs skin cancer detection using different layers, such as input, hidden, and output layers, for feature selection and classification. The patient information is collected through the IoT and stored on a mobile cloud server for predictive analytics. The collected data are sent to the recurrent deep learning classifier. In the first hidden layer, feature selection is carried out using the Multidimensional Bregman Divergencive Feature Scaling technique to find the significant features for disease identification, resulting in decreased time consumption. Subsequently, disease classification is carried out in the second hidden layer using cophenetic correlative piecewise regression for analyzing the testing and training data. This process is repeated until the error is minimized. In this way, disease classification is performed with higher accuracy. Experimental evaluation is carried out in terms of accuracy, precision, recall, F-measure, and cancer detection time, over varying amounts of patient data. The observed results confirm that the proposed MBDFS-CPRRDLC technique increases accuracy with lower cancer detection time compared to the conventional approaches.
Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the concept of individual devices' privacy has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outdoor) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that needs to be refined before processing, i.e., transmission. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that can refine raw data values captured by the respective member device before transmission while preserving its privacy through the differential privacy technique in IoT infrastructures. A sliding-window, i.e., δi-based, dynamic programming methodology is implemented at the device level to ensure precise and accurate detection of outliers or noisy data and to refine them prior to the respective transmission activity. Additionally, an appropriate privacy budget has been selected, which is enough to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body, while the end module, i.e., the server in this case, can extract important information with approximately the maximum level of accuracy. Moreover, the refined data is processed by adding appropriate noise through the Laplace mechanism to make it useless or meaningless for adversary modules in the IoT. The proposed hybrid approach is trusted from the perspectives of both the device's privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as the IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures.
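The device-side Laplace mechanism described above can be sketched as follows. The unit sensitivity and the inverse-CDF sampler are assumptions of this illustration; the paper's sliding-window refinement step is not reproduced here.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(readings, epsilon, sensitivity=1.0, seed=0):
    """Add Laplace noise calibrated to privacy budget `epsilon` to each
    sensor reading before it leaves the wearable device (Laplace mechanism)."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon  # smaller epsilon -> larger noise -> more privacy
    return [r + laplace_noise(scale, rng) for r in readings]
```

Each individual noisy reading hides the true value from an adversary, yet across many readings the server-side aggregate (e.g., the mean heart rate) remains close to the truth, which matches the accuracy claim above.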
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Funding: Supported by Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, through Project number (PNURSP2025R235).
Abstract: The Internet of Things (IoT) is a smart infrastructure in which devices share captured data with the respective server or edge modules. However, secure and reliable communication is among the most challenging tasks in these networks, as shared channels are used to transmit packets. In this paper, a decision tree is integrated with other metrics to form a secure distributed communication strategy for IoT. Initially, every device works collaboratively to form a distributed network. In this model, if a device is deployed outside the coverage area of the nearest server, it communicates indirectly through neighboring devices. For this purpose, every device collects data from its neighboring devices, such as hop count, average packet transmission delay, criticality factor, link reliability, and RSSI value. These parameters are used to find an optimal route from the source to the destination. Secondly, the proposed approach enables devices to learn from the environment and adjust the optimal route-finding formula accordingly. Moreover, devices and server modules must ensure that every packet is transmitted securely, which is possible only if it is encrypted with an encryption algorithm. For this purpose, a decision tree-enabled device-to-server authentication algorithm is presented in which every device and server must take part in the offline phase. Simulation results verify that the proposed distributed communication approach can ensure the integrity and confidentiality of data during transmission. Moreover, the proposed approach outperforms existing approaches in terms of communication cost, processing overhead, end-to-end delay, packet loss ratio, and throughput. Finally, the proposed approach can be adopted in different networking infrastructures.
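A hypothetical sketch of the metric-weighted next-hop selection the abstract describes: the metric names follow the paper, but the weights and the linear scoring formula are illustrative assumptions, not the authors' exact route-finding model (which additionally adapts over time).

```python
def route_score(metrics, weights=None):
    """Lower score = better next hop. 'metrics' holds the values a neighbor
    reported: hop count, delay, criticality factor, link reliability, RSSI.
    Negative weights reward high reliability and strong (less negative) RSSI."""
    w = weights or {"hop_count": 1.0, "delay_ms": 0.05,
                    "criticality": 2.0, "link_reliability": -3.0, "rssi": -0.02}
    return sum(w[k] * metrics[k] for k in w)

def pick_next_hop(neighbors):
    """Choose the neighbor with the lowest composite score."""
    return min(neighbors, key=lambda n: route_score(n["metrics"]))

neighbors = [
    {"id": "n1", "metrics": {"hop_count": 3, "delay_ms": 40,
                             "criticality": 0.2, "link_reliability": 0.9, "rssi": -60}},
    {"id": "n2", "metrics": {"hop_count": 2, "delay_ms": 80,
                             "criticality": 0.1, "link_reliability": 0.7, "rssi": -75}},
]
print(pick_next_hop(neighbors)["id"])   # n1: higher reliability outweighs its extra hop
```

The paper's learning step would correspond to updating the weight dictionary from observed delivery outcomes.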
Funding: This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.
Abstract: Smart city-aspiring urban areas should have a number of necessary elements in place to achieve the intended objectives. Precise control and management of traffic conditions, increased safety and surveillance, and enhanced incident avoidance and management should be top priorities in smart city management. At the same time, Vehicle License Plate Number Recognition (VLPNR) has become a hot research topic, owing to several real-time applications such as automated toll fee processing, traffic law enforcement, private space access control, and road traffic surveillance. Automated VLPNR is a computer vision-based technique employed in the recognition of automobiles based on their license plates. This paper presents an effective Deep Learning (DL)-based VLPNR model, called DL-VLPNR, to identify and recognize the alphanumeric characters present in a license plate. The proposed model involves two main stages: license plate detection and Tesseract-based character recognition. License plate detection takes place with the help of the Fast R-CNN with Inception V2 model. Then, the characters in the detected number plate are extracted using the Tesseract Optical Character Recognition (OCR) engine. The performance of the DL-VLPNR model was tested using two benchmark databases, and the experimental outcome established the superior performance of the model compared to other methods.
Abstract: Big data streams have become ubiquitous in recent years, thanks to the rapid generation of massive volumes of data by different applications. It is challenging to apply existing data mining tools and techniques directly to these big data streams. At the same time, streaming data from several applications gives rise to two major problems: class imbalance and concept drift. This paper presents a new Multi-Objective Metaheuristic Optimization-based Big Data Analytics with Concept Drift Detection (MOMBD-CDD) method for high-dimensional streaming data. The presented MOMBD-CDD model has different operational stages, namely pre-processing, concept drift detection (CDD), and classification. The MOMBD-CDD model overcomes the class imbalance problem using the Synthetic Minority Over-sampling Technique (SMOTE). To determine the oversampling rates and neighboring point values of SMOTE, the Glowworm Swarm Optimization (GSO) algorithm is employed. Besides, the Statistical Test of Equal Proportions (STEPD), a CDD technique, is also utilized. Finally, a Bidirectional Long Short-Term Memory (Bi-LSTM) model is applied for classification. To improve classification performance and compute the optimal parameters for the Bi-LSTM model, a GSO-based hyperparameter tuning process is carried out. The performance of the presented model was evaluated using high-dimensional benchmark streaming datasets, namely the NSL-KDD intrusion detection dataset and the ECUE spam dataset. An extensive experimental validation process confirmed the effectiveness of the MOMBD-CDD model, which attained high accuracies of 97.45% and 94.23% on the applied intrusion detection and ECUE spam datasets, respectively.
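For readers unfamiliar with SMOTE, a minimal from-scratch sketch follows: each synthetic sample interpolates a minority point toward one of its k nearest minority neighbors. In the paper, the oversampling rate and neighbor count are tuned by GSO; here both are plain illustrative constants.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating each chosen
    sample toward one of its k nearest minority-class neighbors."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]           # skip the sample itself
        j = rng.choice(nbrs)
        gap = rng.random()                      # position along the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

minority = np.random.default_rng(1).normal(size=(20, 4))
new_pts = smote(minority, n_new=30)
print(new_pts.shape)    # (30, 4)
```

Because each synthetic point is a convex combination of two real minority points, it always lies inside the minority class's bounding box.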
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/158/43), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code (22UQU4340237DSR10).
Abstract: Sleep plays a vital role in the optimal functioning of the brain and the body. Numerous people suffer from sleep-related illnesses such as apnea and insomnia. Sleep stage classification is a primary step in the quantitative examination of polysomnographic recordings. Sleep stage scoring is mainly based on experts' knowledge, which is laborious and time-consuming. Hence, it is essential to design automated sleep stage classification models using machine learning (ML) and deep learning (DL) approaches. In this view, this study focuses on the design of a Competitive Multi-verse Optimization with Deep Learning Based Sleep Stage Classification (CMVODL-SSC) model using electroencephalogram (EEG) signals. The proposed CMVODL-SSC model intends to effectively categorize different sleep stages from EEG signals. Primarily, data pre-processing is performed to convert the raw data into a useful format. Besides, a cascaded long short-term memory (CLSTM) model is employed to perform the classification process. Finally, the CMVO algorithm is utilized to optimally tune the hyperparameters of the CLSTM model. To demonstrate the enhancements of the CMVODL-SSC model, a wide range of simulations was carried out, and the results confirmed the better performance of the CMVODL-SSC model, with an average accuracy of 96.90%.
Funding: The Deanship of Scientific Research at King Khalid University, under grant number (RGP 2/180/43), and Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code (22UQU4270206DSR01).
Abstract: The data mining process involves a number of steps, from data collection to visualization, to identify useful information in massive datasets. At the same time, recent advances in machine learning (ML) and deep learning (DL) models can be utilized for effective rainfall prediction. With this motivation, this article develops a novel Comprehensive Oppositional Moth Flame Optimization with Deep Learning for Rainfall Prediction (COMFO-DLRP) technique. The proposed COMFO-DLRP model mainly intends to predict rainfall and thereby determine environmental changes. Primarily, data pre-processing and correlation matrix (CM)-based feature selection processes are carried out. In addition, a deep belief network (DBN) model is applied for the effective prediction of rainfall data. Moreover, the COMFO algorithm is derived by integrating the concepts of comprehensive oppositional based learning (COBL) with the traditional MFO algorithm. Finally, the COMFO algorithm is employed for the optimal hyperparameter selection of the DBN model. To demonstrate the improved outcomes of the COMFO-DLRP approach, a series of simulations was carried out, and the outcomes were assessed under distinct measures. The simulation outcomes highlighted the superiority of the COMFO-DLRP method over the other techniques.
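A sketch of the correlation-matrix (CM) feature-selection step: keep the features whose absolute Pearson correlation with the target exceeds a threshold. The threshold value and the toy "rainfall" target below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cm_feature_select(X, y, threshold=0.2):
    """Return indices of features with |corr(feature, target)| > threshold."""
    corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.where(np.abs(corrs) > threshold)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=200)  # toy target
print(cm_feature_select(X, y))   # features 0 and 3 should survive the cut
```

Only the predictive features are passed on to the DBN, shrinking the input dimensionality before training.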
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/158/43), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code (22UQU4340237DSR06).
Abstract: The biomedical data classification process has received significant attention in recent times due to a massive increase in the generation of healthcare data from various sources. Developments in artificial intelligence (AI) and machine learning (ML) models assist in the effective design of medical data classification models. Therefore, this article concentrates on the development of an optimal Stacked Long Short-Term Memory Sequence-to-Sequence Autoencoder (OSAE-LSTM) model for biomedical data classification. The presented OSAE-LSTM model intends to classify biomedical data for the existence of diseases. Primarily, the OSAE-LSTM model involves min-max normalization-based pre-processing to scale the data into a uniform format. Following this, the SAE-LSTM model is utilized for the detection and classification of diseases in biomedical data. Finally, the manta ray foraging optimization (MRFO) algorithm is employed for the hyperparameter optimization process; its utilization assists in the optimal selection of the hyperparameters involved in the SAE-LSTM model. The OSAE-LSTM model has been tested using a set of benchmark medical datasets, and the results report improvements over the other approaches across several dimensions.
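The min-max normalization pre-processing step mentioned above scales every feature into [0, 1] before it reaches the classifier; a minimal sketch (the small epsilon guarding constant columns is an implementation choice, not from the paper):

```python
import numpy as np

def min_max_scale(X, eps=1e-12):
    """Scale each column of X into [0, 1]; eps guards constant columns."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + eps)

X = np.array([[50., 0.2],
              [75., 0.8],
              [100., 0.5]])
print(min_max_scale(X))   # column-wise: [0, 0.5, 1] and [0, 1, 0.5]
```

This keeps features with very different ranges (here 50–100 vs 0.2–0.8) on a common scale so no single feature dominates training.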
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 1/322/42), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code (22UQU4210118DSR01).
Abstract: Artificial intelligence (AI) encompasses various domains such as machine learning (ML), deep learning (DL), and other cognitive technologies, which have been widely applied in the healthcare sector. In healthcare, AI models are used to investigate input data and make decisions based on its prediction and classification. With this motivation, the current study designs a Metaheuristic Optimization with Kernel Extreme Learning Machine for COVID-19 Prediction model on an epidemiology dataset, named the MOKELM-CPED technique. The primary aim of the presented MOKELM-CPED model is to accomplish effective COVID-19 classification outcomes using the epidemiology dataset. In the proposed MOKELM-CPED model, the data first undergo pre-processing to transform the medical data into a useful format. Next, the classification process is performed with a Kernel Extreme Learning Machine (KELM) model. Finally, the Symbiotic Organism Search (SOS) optimization algorithm is utilized to fine-tune the KELM parameters, which consequently helps achieve high detection efficiency. To investigate the improved classifier outcomes of the MOKELM-CPED model, a comprehensive experimental analysis was conducted and the results were inspected under diverse aspects. The outcomes of the experiments infer the enhanced performance of the proposed method over recent approaches under distinct measures.
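A minimal kernel extreme learning machine (KELM) sketch with an RBF kernel: the output weights have a closed form, beta = (I/C + K)^-1 y. In the paper, the kernel width and the regularization constant C are tuned by Symbiotic Organism Search; here both are fixed illustrative constants and the data is synthetic.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def fit(self, X, y, C=10.0, gamma=1.0):
        self.X, self.gamma = X, gamma
        K = rbf(X, X, gamma)
        # Closed-form output weights: beta = (I/C + K)^-1 y
        self.beta = np.linalg.solve(np.eye(len(X)) / C + K, y)
        return self

    def predict(self, Xnew):
        return rbf(Xnew, self.X, self.gamma) @ self.beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sign(X[:, 0] + X[:, 1])            # toy binary labels in {-1, +1}
model = KELM().fit(X, y)
acc = (np.sign(model.predict(X)) == y).mean()
print(round(acc, 3))
```

An SOS-style tuner would simply search over (C, gamma) pairs, refitting this closed form at each candidate.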
Funding: Funded by Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R235), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Watermarking of digital images is required in diversified applications ranging from medical imaging to commercial images used over the web. Usually, the copyright information is embossed over the image in the form of a logo at the corner or diagonal text in the background. However, this form of visible watermarking is not suitable for a large class of applications. In all such cases, a hidden watermark is embedded inside the original image as proof of ownership. A large number of techniques and algorithms have been proposed by researchers for invisible watermarking. In this paper, we focus on issues that are critical for security in the most common domains, such as digital photography copyrighting and online image stores. The requirements of this class of applications include robustness (resistance to attack), blindness (direct extraction without the original image), high embedding capacity, high Peak Signal-to-Noise Ratio (PSNR), and high Structural Similarity Index Measure (SSIM). Most of these requirements are conflicting, which means that an attempt to maximize one requirement harms another. In this paper, a blind image watermarking scheme is proposed using the Lifting Wavelet Transform (LWT) as the baseline. Using this technique, custom binary watermarks in the form of a binary string can be embedded. Hu's invariant moment coefficients are used as a key to extract the watermark. A stochastic variant of the Firefly Algorithm (FA) is used for the optimization of the technique. Under a prespecified embedding data size, high PSNR and SSIM are obtained using the stochastic gradient variant of the Firefly technique. The simulation is done using the MATLAB tool, and it is shown that the proposed technique outperforms benchmark watermarking techniques with PSNR and SSIM as quality metrics.
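The two quality metrics the scheme optimizes can be sketched as follows. PSNR follows its standard definition; the SSIM here is the simplified single-window (global) form rather than a full sliding-window implementation, and the "watermarked" image below is just the original plus small noise standing in for embedding distortion.

```python
import numpy as np

def psnr(orig, marked, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    mse = np.mean((orig.astype(float) - marked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Global (single-window) SSIM; 1.0 means structurally identical."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
marked = img + rng.normal(scale=2.0, size=img.shape)   # tiny embedding distortion
print(round(psnr(img, marked), 2), round(ssim_global(img, marked), 4))
```

The conflict the abstract mentions shows up directly here: embedding more payload raises the distortion, lowering both scores.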
Abstract: Emotion Recognition in Conversations (ERC) is fundamental to creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. We propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT) to address this. The proposed model uses the Multimodal EmotionLines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving the proposed model's accuracy. Transformer encoders train the deep neural network (DNN) model, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information. This integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating enhanced samples from the original dataset. By balancing the dataset across all emotion categories, we can lessen the adverse effects of data imbalance on the accuracy of the proposed model. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1-score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model addresses the GBN models' weaknesses in obtaining contextual data for ERC. The TA-MERT model recognizes human emotions more accurately by employing text augmentation and transformer-based encoding. The balanced dataset and the additional training samples also enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
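Two illustrative text-augmentation operations in the spirit of TA-MERT's extra training samples: random word swap and random word deletion (EDA-style operations; the paper's exact augmentation recipe may differ).

```python
import random

def random_swap(words, n_swaps=1, rng=None):
    """Swap n_swaps random word pairs; keeps the bag of words unchanged."""
    rng = rng or random.Random(0)
    words = list(words)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_delete(words, p=0.2, rng=None):
    """Drop each word with probability p; never return an empty utterance."""
    rng = rng or random.Random(0)
    kept = [w for w in words if rng.random() > p]
    return kept or [words[0]]

utterance = "i am so happy about the results".split()
print(" ".join(random_swap(utterance, n_swaps=2)))
print(" ".join(random_delete(utterance)))
```

Applying such perturbations only to under-represented emotion classes is one simple way to realize the dataset balancing the abstract describes.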
Funding: This research is funded by Princess Nourah Bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R194), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Mobile clouds are the most common medium for aggregating, storing, and analyzing data from the medical Internet of Things (MIoT). They are employed to monitor a patient's essential health signs for earlier disease diagnosis and prediction. Among the various diseases, skin cancer is one of the most widespread, and early detection enhances the survival rate. In recent years, many skin cancer classification systems using machine and deep learning models have been developed for classifying skin tumors, including malignant melanoma (MM) and other skin cancers. However, accurate cancer detection has not been achieved with minimum time consumption. To address these problems, a novel Multidimensional Bregman Divergencive Feature Scaling Based Cophenetic Piecewise Regression Recurrent Deep Learning Classification (MBDFS-CPRRDLC) technique is introduced for detecting cancer at an earlier stage. The MBDFS-CPRRDLC technique performs skin cancer detection using different layers, such as input, hidden, and output layers, for feature selection and classification. The patient information is collected via IoT and stored on a mobile cloud server for predictive analytics. The collected data are sent to the recurrent deep learning classifier. In the first hidden layer, the feature selection process is carried out using the Multidimensional Bregman Divergencive Feature Scaling technique to find the significant features for disease identification, resulting in decreased time consumption. Following this, disease classification is carried out in the second hidden layer using cophenetic correlative piecewise regression to analyze the testing and training data. This process is repeated until the error is minimized. In this way, disease classification is performed with higher accuracy. Experimental evaluation is carried out for factors namely accuracy, precision, recall, F-measure, and cancer detection time, with varying amounts of patient data. The observed results confirm that the proposed MBDFS-CPRRDLC technique increases accuracy and reduces cancer detection time compared to conventional approaches.
Funding: Ministry of Higher Education of Malaysia under Research Grant LRGS/1/2019/UKM-UKM/5/2, and Princess Nourah bint Abdulrahman University for financing this research through Supporting Project Number (PNURSP2024R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the privacy of individual devices has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outdoor) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that must be refined before processing, i.e., transmitting. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that can refine the raw data values captured by the respective member device before transmission while preserving its privacy through the differential privacy technique in IoT infrastructures. A sliding-window, i.e., δi-based, dynamic programming methodology is implemented at the device level to ensure precise and accurate detection of outliers or noisy data, and to refine them prior to the respective transmission activity. Additionally, an appropriate privacy budget has been selected, which is enough to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body, while the end module, i.e., the server in this case, can still extract important information with approximately the maximum level of accuracy. Moreover, the refined data are processed by adding appropriate noise through the Laplace mechanism to make them useless or meaningless for adversary modules in the IoT. The proposed hybrid approach is trusted from the perspectives of both device privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven robust against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures.
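The two device-side steps the hybrid approach combines can be sketched as follows: (1) a sliding window that replaces readings deviating too far from the window median, standing in for the paper's δi-based dynamic-programming detector, and (2) Laplace noise calibrated to sensitivity/epsilon before transmission. Window size, tolerance, sensitivity, and the privacy budget epsilon below are all illustrative assumptions.

```python
import numpy as np

def refine(readings, window=5, tol=10.0):
    """Replace any reading farther than 'tol' from its sliding-window median."""
    out = list(readings)
    for i in range(len(out)):
        win = readings[max(0, i - window + 1): i + 1]
        med = float(np.median(win))
        if abs(out[i] - med) > tol:
            out[i] = med          # refined value replaces the outlier
    return out

def laplace_mechanism(value, sensitivity=1.0, epsilon=0.5, rng=None):
    """Add Laplace noise with scale = sensitivity / epsilon (the privacy budget)."""
    rng = rng or np.random.default_rng(0)
    return value + rng.laplace(scale=sensitivity / epsilon)

heart_rate = [72, 74, 73, 190, 75, 74]   # 190 is a sensor glitch, not a real reading
clean = refine(heart_rate)
rng = np.random.default_rng(0)
private = [laplace_mechanism(v, rng=rng) for v in clean]
print(clean)   # the glitch is replaced by the window median before noising
```

Only the noised values leave the device, so the server sees approximately correct vitals while an eavesdropper cannot recover any exact reading.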