Using the fuzzy rule-based classification method, normalized difference vegetation index (NDVI) images acquired from 1982 to 1998 were classified into seventeen phases. Based on these classification images, a probabilistic cellular automata-Markov chain model was developed and used to simulate a land cover scenario of China for the year 2014. Spatiotemporal dynamics of land use/cover in China from 1982 to 2014 were then analyzed and evaluated. The results showed that the change trends of land cover types from 1998 to 2014 would be contrary to those from 1982 to 1998. In particular, forestland and grassland areas decreased by 1.56% and 1.46%, respectively, from 1982 to 1998, and were projected to increase by 1.5% and 2.3%, respectively, from 1998 to 2014.
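The Markov-chain component of a CA-Markov model propagates land-cover class shares through a transition-probability matrix. The sketch below is a minimal, purely illustrative version; the class set, transition probabilities, and step count are assumptions, not values from the study:

```python
# Minimal sketch of the Markov-chain step in a CA-Markov land-cover model.
# The transition probabilities and class shares below are illustrative only.

def project_cover(shares, transition, steps):
    """Propagate land-cover class shares through a Markov transition matrix."""
    n = len(shares)
    for _ in range(steps):
        shares = [sum(shares[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return shares

# Hypothetical classes: forest, grassland, other (each row sums to 1)
transition = [[0.90, 0.05, 0.05],
              [0.04, 0.92, 0.04],
              [0.03, 0.05, 0.92]]
initial = [0.30, 0.40, 0.30]
projected = project_cover(initial, transition, 16)  # e.g. 16 annual steps, 1998-2014
```

In the full CA-Markov model, a cellular-automata pass would then allocate these aggregate class shares spatially using neighbourhood rules.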
Recently, many rapid developments in digital medical imaging have made further contributions to health care systems. The segmentation of regions of interest in medical images plays a vital role in assisting doctors with their medical diagnoses. Many factors, such as image contrast and quality, affect the result of image segmentation; because of this, image contrast remains a challenging problem for image segmentation. This study presents a new image enhancement model based on fractional Rényi entropy for the segmentation of kidney MRI scans. The proposed work consists of two stages: enhancement by fractional Rényi entropy, and deep kidney segmentation of the MRI scans. The proposed enhancement model exploits the pixels' probability representations for image enhancement. Because fractional Rényi entropy involves fractional calculus, which can model non-linear complexity while preserving the spatial relationship between pixels, it yields overall better detail in the kidney MRI scans. In the second stage, a deep learning kidney segmentation model is designed to segment kidney regions in MRI scans. The experimental results showed an average dice similarity coefficient of 95.60%, which indicates strong overlap between the segmented bodies and the ground truth. It is therefore concluded that the proposed enhancement model is suitable and effective for improving kidney segmentation performance.
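For context, the standard Rényi entropy of an intensity histogram can be computed as below; the paper's fractional variant additionally involves fractional calculus, which this sketch does not reproduce:

```python
import math

def renyi_entropy(hist, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1) of an intensity histogram."""
    total = sum(hist)
    probs = [h / total for h in hist if h > 0]
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# Toy 8-bin intensity histogram of an image patch (illustrative values)
hist = [2, 5, 9, 14, 14, 9, 5, 2]
h2 = renyi_entropy(hist, 2.0)
```

As alpha approaches 1, the Rényi entropy approaches the Shannon entropy; enhancement schemes typically use the entropy of local patches to redistribute intensities.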
Machine Learning (ML) has changed clinical diagnostic procedures drastically. Especially in Cardiovascular Diseases (CVD), the use of ML is indispensable for reducing human errors. Numerous studies have focused on disease prediction, but because prediction depends on multiple parameters, further investigation is required to upgrade clinical procedures. Multi-layered implementation of ML, also called Deep Learning (DL), has unfolded new horizons in the field of clinical diagnostics. DL achieves reliable accuracy with big datasets, but the reverse is the case with small datasets. This paper proposes a novel method that deals with the issue of low data dimensionality. Inspired by regression analysis, the proposed method classifies the data in three stages. In the first stage, feature representations are converted into probabilities using multiple regression techniques; the second stage aggregates the probability outputs from the previous stage; and the third stage produces the final classifications. Extensive experiments were carried out on the Cleveland heart disease dataset. The results show a significant improvement in classification accuracy. The comparative results indicate that the prevailing statistical ML methods will no longer be the disease prediction techniques in demand in the future.
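The three-stage idea can be sketched as follows; the sigmoid scoring, toy weights, and mean aggregation are illustrative stand-ins for the paper's regression techniques, not its actual models:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def stage1(x, regressors):
    """Stage 1: each regression model maps the feature vector to a probability."""
    return [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in regressors]

def stage2(probs):
    """Stage 2: aggregate the probability conclusions (simple mean here)."""
    return sum(probs) / len(probs)

def stage3(p, threshold=0.5):
    """Stage 3: produce the final classification."""
    return 1 if p >= threshold else 0

# Two hypothetical 'regression' models over 3 features (weights, bias)
regressors = [([0.8, -0.2, 0.5], -0.1), ([0.3, 0.6, -0.4], 0.2)]
x = [1.0, 0.5, 2.0]
label = stage3(stage2(stage1(x, regressors)))
```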
Conventional closed-circuit television (CCTV) camera-based surveillance and control systems require human supervision. Almost all criminal activities involve weapons, mostly handheld guns, revolvers, pistols, swords, etc. Therefore, automatic weapon detection is a vital requirement nowadays. The current research concerns real-time weapon detection for surveillance cameras, with an implementation based on EfficientNet. Real-time datasets from a local surveillance department's test sessions are used for model training and testing. The datasets consist of local-environment images and videos from cameras of different types and resolutions, which minimizes idealism. This research also contributes to the adaptation of EfficientNet, which is experimented with and yields positive results. The results are presented in graphs and calculations, both during and after training, to represent the research contribution. The EfficientNet algorithm gives better results than existing algorithms, achieving an accuracy of 98.12% as epochs increase, compared to other algorithms.
Counterfeiting is still a pervasive global issue, affecting multiple industries and hindering industrial innovation, while causing substantial financial losses, reputational damage, and risks to consumer safety. From luxury goods and pharmaceuticals to electronics and automotive parts, counterfeit products infiltrate supply chains, leading to a loss of revenue for legitimate businesses and undermining consumer trust. Traditional anti-counterfeiting measures, such as holograms, serial numbers, and barcodes, have proven to be insufficient as counterfeiters continuously develop more sophisticated replication techniques. As a result, there is a growing need for more advanced, secure, and reliable methods to prevent counterfeiting. This paper presents a novel, holistic anti-counterfeiting platform that integrates Near Field Communication (NFC)-enabled mobile applications with blockchain technology to provide an innovative, secure, and consumer-friendly authentication mechanism. Our approach addresses key gaps in existing solutions by incorporating dynamic product identifiers, which make replication significantly more difficult. The system enables consumers to verify the authenticity of products instantly using their smartphones, enhancing transparency and trust in the supply chain. Blockchain technology plays a crucial role in our proposed solution by providing an immutable, decentralized ledger that records product authentication data. This ensures that product verification records cannot be tampered with or altered, adding a layer of security that is absent in conventional systems. Additionally, NFC technology enhances security by offering unique identification capabilities, enabling real-time product verification. To validate the effectiveness of the proposed system, real-world testing was conducted across different industries. The results demonstrated the platform's ability to significantly reduce counterfeit products in the supply chain, offering businesses and consumers a more robust and reliable authentication method. By leveraging the combined strengths of blockchain and NFC, this solution represents a significant advancement in the fight against counterfeiting, ensuring enhanced security, transparency, and consumer trust.
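The tamper-evidence property of the ledger can be pictured as a hash chain: each entry's hash covers both its record and the previous hash, so editing any past scan record invalidates every later link. This is a toy sketch, not the actual blockchain or NFC stack used in the paper:

```python
import hashlib
import json

def record_hash(record, prev_hash):
    """Hash a product-scan record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(ledger, record):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "hash": record_hash(record, prev)})

def verify_ledger(ledger):
    """Re-derive every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"tag_id": "NFC-0001", "event": "factory_scan"})
append_entry(ledger, {"tag_id": "NFC-0001", "event": "retail_scan"})
```

A real deployment would replace this single list with a distributed consensus ledger, but the verification principle is the same.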
Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF combined with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% on the DR detection and evaluation tests. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
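The majority-voting step over the three classifiers can be sketched simply; the labels below are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by the most classifiers."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. votes from the SVM-RBF, Decision Tree, and KNN classifiers for one image
decision = majority_vote(["VTDR", "VTDR", "non-VTDR"])
```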
Recently, many researchers have used nature-inspired metaheuristic algorithms due to their ability to perform optimally on complex problems. Among algorithms that solve problems in a simple way, the bat algorithm has recently become famous due to its high tendency to converge to the global optimum most of the time. However, the standard bat algorithm with a random walk still has a problem of getting stuck in local minima. To solve this problem, this research proposes a bat algorithm with a Lévy-flight random walk. The proposed bat with Lévy flight algorithm is then further hybridized with three different variants of ANN. The proposed BatLFBP is applied to the problem of insulin DNA sequence classification of healthy Homo sapiens. For classification performance, the proposed models, Bat Levy flight Artificial Neural Network (BatLFANN) and Bat Levy Flight Back Propagation (BatLFBP), are compared with other state-of-the-art algorithms such as Bat Artificial Neural Network (BatANN), Bat Back Propagation (BatBP), Bat Gaussian distribution Artificial Neural Network (BatGDANN), and Bat Gaussian distribution Back Propagation (BatGDBP), in terms of mean squared error (MSE) and accuracy. The simulation results show that the proposed BatLFANN achieved 99.88153% accuracy with an MSE of 0.001185, and BatLFBP achieved 99.834185% accuracy with an MSE of 0.001658 on WL5. On WL10, the proposed BatLFANN achieved 99.89899% accuracy with an MSE of 0.00101, and BatLFBP achieved 99.84473% accuracy with an MSE of 0.004553. Similarly, on WL15, the proposed BatLFANN achieved 99.82853% accuracy with an MSE of 0.001715, and BatLFBP achieved 99.3262% accuracy with an MSE of 0.006738, achieving better accuracy than the other hybrid models.
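One common way to draw Lévy-flight step lengths is Mantegna's algorithm; the sketch below assumes that formulation, which may differ from the paper's exact scheme:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(42)
steps = [levy_step() for _ in range(1000)]
```

In the hybrid bat algorithm, such a step would replace the uniform random walk when updating a bat's position, giving mostly small moves plus occasional long jumps that help escape local minima.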
The application of deep learning techniques in the medical field, specifically for Atrial Fibrillation (AFib) detection through Electrocardiogram (ECG) signals, has witnessed significant interest. Accurate and timely diagnosis increases the patient's chances of recovery. However, issues like overfitting and inconsistent accuracy across datasets remain challenges. In a quest to address these challenges, this study presents two prominent deep learning architectures, ResNet-50 and DenseNet-121, and evaluates their effectiveness in AFib detection. The aim was to create a robust detection mechanism that performs consistently well. Metrics such as loss, accuracy, precision, sensitivity, and Area Under the Curve (AUC) were utilized for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories. It demonstrated lower loss rates of 0.0315 and 0.0305, superior accuracy of 98.77% and 98.88%, precision of 98.78% and 98.89%, and sensitivity of 98.76% and 98.86% for training and validation respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assist future researchers in selecting suitable deep learning architectures for AFib detection. Moreover, the outcomes of this study are anticipated to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and early detection of AFib, thereby fostering improved patient care and outcomes.
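The metrics reported above derive from confusion-matrix counts; a small sketch with made-up counts (not the study's data):

```python
def precision(tp, fp):
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def sensitivity(tp, fn):
    """Fraction of actual positives that are detected (recall)."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative confusion-matrix counts
tp, tn, fp, fn = 90, 85, 5, 10
```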
Medical data tampering has become one of the main challenges in the field of secure-aware medical data processing. Forging normal patients' medical data to present them as COVID-19 patients is an illegitimate action that has been carried out in different ways recently. Therefore, the integrity of these data can be questionable. Forgery detection is a method of detecting anomalies in manipulated forged data. An appropriate number of features is needed to identify an anomaly as either forged or non-forged data in order to find distortion or tampering in the original data. Convolutional neural networks (CNNs) have contributed a major breakthrough in this type of detection. There has been much interest from both clinicians and the AI community in the possibility of widespread use of artificial neural networks for quick diagnosis using medical data for early COVID-19 patient screening. The purpose of this paper is to detect forgery in COVID-19 medical data by using a CNN with error level analysis (ELA), verifying the noise pattern in the data. The proposed improved ELA method is evaluated using a type of data splicing forgery with sigmoid and ReLU schemes. The proposed method is verified by manipulating COVID-19 data using different types of forgeries and then applying the proposed CNN model to detect the tampering. The results show that the accuracy of the proposed CNN model on the test COVID-19 data is approximately 92%.
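Error level analysis compares data against a recompressed copy of itself; a spliced region saved at a different quality stands out because its error level differs from its surroundings. Below is a toy one-dimensional analogue that uses plain quantization in place of JPEG recompression (an assumption for illustration only):

```python
def quantize(values, q):
    """Simulate lossy re-saving by rounding values to a quantization grid."""
    return [q * round(v / q) for v in values]

def error_level(values, q=16):
    """Per-sample difference between data and its re-quantized copy."""
    return [abs(v - w) for v, w in zip(values, quantize(values, q))]

# A region that already survived quantization shows zero error;
# freshly spliced raw values do not.
saved_region = quantize([37, 121, 200, 66], 16)
spliced_region = [37, 121, 200, 66]
```

In the paper's pipeline, the CNN learns to classify such ELA noise patterns rather than applying a fixed threshold.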
Melanoma is one of the lethal and rare types of skin cancer. It is curable at an initial stage, and the patient can survive easily. It is very difficult to screen all skin lesion patients due to costly treatment. Clinicians require a correct method for the right treatment based on dermoscopic clinical features such as lesion borders, pigment networks, and the color of melanoma. These challenges require an automated system to classify the clinical features of melanoma and non-melanoma disease. Even trained clinicians must overcome issues such as low contrast, lesions varying in size and color, and the presence of objects like hair, reflections, air bubbles, and oils on almost all images. Active contour is one of the suitable methods, with some drawbacks, for the segmentation of irregular shapes. An entropy- and morphology-based automated mask selection is proposed for the active contour method. The proposed method can improve the overall segmentation along the boundary of melanoma images. In this study, features were extracted to perform classification on different texture scales using the Gray Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP). Four different moments are extracted in six different color spaces (HSV, linear RGB, YIQ, YCbCr, XYZ, and CIE L*a*b*), and global information from the different color channels is combined. Therefore, hybrid fused texture features, namely local features, color features as global features, and shape features, with an Artificial Neural Network (ANN) as the classifier, are proposed for the categorization of malignant and non-malignant lesions. Experiments were carried out on the Dermis, DermQuest, and PH2 datasets. The results of our method showed superiority over the existing state-of-the-art techniques.
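The LBP texture descriptor mentioned above encodes each pixel by thresholding its 3x3 neighbourhood against the centre value; a minimal sketch (the bit ordering is a convention choice, not the paper's specific variant):

```python
def lbp_code(window):
    """LBP code of a 3x3 window: 8 neighbours thresholded against the centre."""
    c = window[1][1]
    neighbours = [window[0][0], window[0][1], window[0][2],
                  window[1][2], window[2][2], window[2][1],
                  window[2][0], window[1][0]]  # clockwise from top-left
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]  # uniform region
edge = [[9, 9, 9], [1, 5, 9], [1, 1, 1]]  # bright upper-right corner
```

A histogram of these codes over an image region is the LBP feature vector that would be fused with GLCM and color moments before classification.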
Organizational and end-user data breaches are highly implicated by the role of information security conscious care behaviour in the respective incident responses. This research study draws upon the literature in the areas of information security, incident response, the theory of planned behaviour, and protection motivation theory to expand and empirically validate a modified framework of information security conscious care behaviour formation. The applicability of the theoretical framework is shown through a case study labelled as a cyber-attack of unprecedented scale and sophistication in Singapore's history to date: the 2018 SingHealth data breach. The single in-depth case study observed information security awareness, policy, experience, attitude, subjective norms, perceived behavioural control, threat appraisal, and self-efficacy as emerging prominently in the framework's applicability to incident handling. The data analysis did not support a relationship between threat severity and conscious care behaviour. The findings from these observations are presented as possible key drivers in shaping information security conscious care behaviour in real-world cyber incident management.
The World Health Organization (WHO) terms dengue a serious illness that impacts almost half of the world's population and carries no specific treatment. Early and accurate detection of spread in affected regions can save precious lives. Despite the severity of the disease, few noticeable works can be found that involve sentiment analysis to mine accurate intuitions from social media text streams. However, the massive data explosion in recent years has led to difficulties in storing and processing large amounts of data, as reliable mechanisms to gather the data and suitable techniques to extract meaningful insights from it are required. This research study proposes a sentiment analysis polarity approach for collecting data and extracting relevant information about dengue via Apache Hadoop. The method consists of two main parts: the first part collects data from social media using Apache Flume, while the second part focuses on querying and extracting relevant information via a hybrid filtration-polarity algorithm using Apache Hive. To overcome the noisy and unstructured nature of the data, the process of extracting information is characterized by pre- and post-filtration phases. As a result, only with the integration of Flume and Hive with filtration and polarity analysis can a reliable sentiment analysis technique be offered to collect and process large-scale data from the social network. We show how the Apache Hadoop ecosystem components Flume and Hive can provide a sentiment analysis capability by storing and processing large amounts of data. An important finding of this paper is that developing efficient sentiment analysis applications for detecting diseases can be more reliable through the use of the Hadoop ecosystem components than through the use of normal machines.
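The polarity step of a filtration-polarity pipeline can be sketched with a toy lexicon; the word lists here are illustrative assumptions, not the paper's lexicon:

```python
POSITIVE = {"recovered", "safe", "improving", "cured"}
NEGATIVE = {"dengue", "fever", "outbreak", "severe", "death"}

def polarity(text):
    """Classify a post as positive/negative/neutral by lexicon word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

label = polarity("Severe dengue outbreak reported in the region")
```

In the paper's architecture this scoring would run as a Hive query over Flume-ingested posts, with pre- and post-filtration removing noise before and after polarity assignment.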
University timetabling problems are a yearly challenging task faced repeatedly each semester. The problems are considered non-polynomial time (NP) combinatorial optimization problems (COP), which means that they can be solved through optimization algorithms to produce the aspired optimal timetable. Several techniques have been used to solve university timetabling problems, and most of them use optimization techniques. This paper provides a comprehensive review of the most recent studies dealing with the concepts, methodologies, optimization, benchmarks, and open issues of university timetabling problems. The review starts by presenting the essence of university timetabling as an NP-COP; defining and clarifying the two classes of university timetabling, University Course Timetabling and University Examination Timetabling; illustrating the algorithms adopted for solving such problems; elaborating the university timetabling constraints to be considered in achieving the optimal timetable; and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for evaluation. It is noted that meta-heuristic methodologies are widely used in the literature. Additionally, multi-objective optimization has recently been increasingly used to solve this problem and can identify robust university timetabling solutions. Finally, trends and future directions in university timetabling problems are provided. This paper provides good information for students, researchers, and specialists interested in this area of research. The challenges and possibilities for future research prospects are also explored.
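A core hard constraint in course timetabling is that no room or lecturer is double-booked in the same slot; a minimal violation counter sketch (the event tuple shape is an assumption for illustration):

```python
def hard_violations(events):
    """Count events that reuse a (slot, room) or (slot, lecturer) pair."""
    used_rooms, used_lecturers, violations = set(), set(), 0
    for slot, room, lecturer in events:
        if (slot, room) in used_rooms or (slot, lecturer) in used_lecturers:
            violations += 1
        used_rooms.add((slot, room))
        used_lecturers.add((slot, lecturer))
    return violations

feasible = [(1, "R1", "Dr A"), (1, "R2", "Dr B"), (2, "R1", "Dr A")]
clashing = [(1, "R1", "Dr A"), (1, "R1", "Dr B")]  # same room, same slot
```

The metaheuristics surveyed in such reviews typically minimize a weighted sum of counters like this, with hard-constraint violations weighted far above soft ones.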
Learning analytics is a rapidly evolving research discipline that uses the insights generated from data analysis to support learners as well as optimize both the learning process and environment. This paper studied students' engagement level with the Learning Management System (LMS) via a learning analytics tool, students' approaches to managing their studies, and possible learning analytics methods for analyzing student data. Moreover, an extensive systematic literature review (SLR) was employed for the selection, sorting, and exclusion of articles from diverse renowned sources. The findings show that most of the engagement in the LMS is driven by educators. Additionally, we discuss the factors in the LMS, the causes of low engagement, and ways of increasing engagement via the learning analytics approach. Apart from recognizing learning analytics as a successful method and technique for analyzing LMS data, this research further highlights the possibility of merging the learning analytics technique with LMS engagement in every institution as a direction for future research.
Plant diseases pose a significant challenge to global agricultural productivity, necessitating efficient and precise diagnostic systems for early intervention and mitigation. In this study, we propose a novel hybrid framework that integrates EfficientNet-B8, Vision Transformer (ViT), and Knowledge Graph Fusion (KGF) to enhance plant disease classification across 38 distinct disease categories. The proposed framework leverages deep learning and semantic enrichment to improve classification accuracy and interpretability. EfficientNet-B8, a convolutional neural network (CNN) with optimized depth and width scaling, captures fine-grained spatial details in high-resolution plant images, aiding the detection of subtle disease symptoms. In parallel, ViT, a transformer-based architecture, effectively models long-range dependencies and global structural patterns within the images, ensuring robust disease pattern recognition. Furthermore, KGF incorporates domain-specific metadata, such as crop type, environmental conditions, and disease relationships, to provide contextual intelligence and improve classification accuracy. The proposed model was rigorously evaluated on a large-scale dataset containing diverse plant disease images, achieving outstanding performance with 99.7% training accuracy and 99.3% testing accuracy. The precision and F1-score were consistently high across all disease classes, demonstrating the framework's ability to minimize false positives and false negatives. Compared to conventional deep learning approaches, this hybrid method offers a more comprehensive and interpretable solution by integrating self-attention mechanisms and domain knowledge. Beyond its superior classification performance, this model opens avenues for optimizing metadata dependency and reducing computational complexity, making it more feasible for real-world deployment in resource-constrained agricultural settings. The proposed framework represents an advancement in precision agriculture, providing scalable, intelligent disease diagnosis that enhances crop protection and food security.
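One simple way to picture combining the CNN branch, the ViT branch, and a knowledge-graph prior is a weighted late fusion of per-class scores; the weights and scores below are illustrative assumptions, not the paper's actual fusion rule:

```python
def fuse_scores(cnn, vit, kg_prior, weights=(0.4, 0.4, 0.2)):
    """Weighted late fusion of per-class scores from three sources."""
    wc, wv, wk = weights
    fused = [wc * c + wv * v + wk * k for c, v, k in zip(cnn, vit, kg_prior)]
    return max(range(len(fused)), key=fused.__getitem__)

# Per-class scores for 3 hypothetical disease classes
cnn_scores = [0.10, 0.70, 0.20]
vit_scores = [0.15, 0.60, 0.25]
kg_prior = [0.05, 0.80, 0.15]  # e.g. crop/season metadata makes class 1 likelier
predicted = fuse_scores(cnn_scores, vit_scores, kg_prior)
```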
Funding (NDVI land-cover classification study): Supported by the National Natural Science Foundation of China (No. 30730021) and the Applied Basic Research Programs of Yunnan Province, China (Nos. 2011FZ140 and 2010CD047).
Funding (kidney MRI enhancement and segmentation study): Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the fast-track research funding program.
Abstract: Counterfeiting is still a pervasive global issue, affecting multiple industries and hindering industrial innovation, while causing substantial financial losses, reputational damage, and risks to consumer safety. From luxury goods and pharmaceuticals to electronics and automotive parts, counterfeit products infiltrate supply chains, leading to a loss of revenue for legitimate businesses and undermining consumer trust. Traditional anti-counterfeiting measures, such as holograms, serial numbers, and barcodes, have proven to be insufficient as counterfeiters continuously develop more sophisticated replication techniques. As a result, there is a growing need for more advanced, secure, and reliable methods to prevent counterfeiting. This paper presents a novel, holistic anti-counterfeiting platform that integrates Near Field Communication (NFC)-enabled mobile applications with blockchain technology to provide an innovative, secure, and consumer-friendly authentication mechanism. Our approach addresses key gaps in existing solutions by incorporating dynamic product identifiers, which make replication significantly more difficult. The system enables consumers to verify the authenticity of products instantly using their smartphones, enhancing transparency and trust in the supply chain. Blockchain technology plays a crucial role in our proposed solution by providing an immutable, decentralized ledger that records product authentication data. This ensures that product verification records cannot be tampered with or altered, adding a layer of security that is absent in conventional systems. Additionally, NFC technology enhances security by offering unique identification capabilities, enabling real-time product verification. To validate the effectiveness of the proposed system, real-world testing was conducted across different industries. The results demonstrated the platform's ability to significantly reduce counterfeit products in the supply chain, offering businesses and consumers a more robust and reliable authentication method. By leveraging the combined strengths of blockchain and NFC, this solution represents a significant advancement in the fight against counterfeiting, ensuring enhanced security, transparency, and consumer trust.
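The tamper-evident property described above can be sketched, independently of any particular blockchain platform, as a minimal append-only hash chain of authentication events. This is an illustrative reduction, not the paper's implementation; the class and field names (`AuthLedger`, `nfc_uid`, etc.) are invented for the example:

```python
import hashlib
import json


def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class AuthLedger:
    """Append-only hash chain of product-authentication events.

    Each record stores the hash of its predecessor, so modifying any
    earlier record invalidates every later link -- the core property a
    blockchain ledger provides for verification histories.
    """

    def __init__(self):
        self.chain = []

    def append(self, product_id: str, nfc_uid: str, verdict: str) -> dict:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"product_id": product_id, "nfc_uid": nfc_uid,
                  "verdict": verdict, "prev": prev}
        record["hash"] = _hash({k: v for k, v in record.items() if k != "hash"})
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = "0" * 64
        for rec in self.chain:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev"] != prev or _hash(body) != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

A real deployment would replace the in-memory list with a decentralized ledger, but the integrity check (rehash and compare each link) is the same idea.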
Funding: This research was funded by the National Natural Science Foundation of China (Nos. 71762010, 62262019, 62162025, 61966013, 12162012), the Hainan Provincial Natural Science Foundation of China (Nos. 823RC488, 623RC481, 620RC603, 621QN241, 620RC602, 121RC536), the Haikou Science and Technology Plan Project of China (No. 2022-016), and a project supported by the Education Department of Hainan Province (No. Hnky2021-23).
Abstract: Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), which is a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% on the DR detection and evaluation tests. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
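The majority-voting fusion of the three classifiers (SVM-RBF, DT, KNN) can be sketched as follows. This is a generic illustration of the voting step only, not the paper's full pipeline; the label names are hypothetical:

```python
from collections import Counter


def majority_vote(predictions):
    """Fuse the per-classifier labels for one sample.

    Ties are broken in favour of the classifier listed first, because
    Counter.most_common preserves insertion order for equal counts
    (Python >= 3.7).
    """
    return Counter(predictions).most_common(1)[0][0]


def ensemble_predict(per_model_preds):
    """per_model_preds: one label sequence per classifier
    (e.g. SVM-RBF, Decision Tree, KNN); returns the fused labels."""
    return [majority_vote(sample) for sample in zip(*per_model_preds)]
```

For example, if two of the three classifiers flag a sample as "DR", the ensemble outputs "DR" for it.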
Funding: This research is supported by Tier-1 Research Grant, vote no. H938, by the Research Management Office (RMC), Universiti Tun Hussein Onn Malaysia, and the Ministry of Higher Education, Malaysia.
Abstract: Recently, many researchers have used nature-inspired metaheuristic algorithms due to their ability to perform optimally on complex problems. In the recent era, the bat algorithm has become popular for solving such problems in a simple way, owing to its high tendency to converge to the global optimum most of the time. However, the standard bat algorithm with a random walk still has a problem of getting stuck in local minima. To solve this problem, this research proposes a bat algorithm with a Lévy-flight random walk. The proposed bat-with-Lévy-flight algorithm is then hybridized with three different variants of ANN. The proposed BatLFBP is applied to the problem of insulin DNA sequence classification of healthy Homo sapiens. For classification performance, the proposed models, Bat Lévy-flight Artificial Neural Network (BatLFANN) and Bat Lévy-flight Back Propagation (BatLFBP), are compared with other state-of-the-art algorithms such as Bat Artificial Neural Network (BatANN), Bat Back Propagation (BatBP), Bat Gaussian-distribution Artificial Neural Network (BatGDANN), and Bat Gaussian-distribution Back Propagation (BatGDBP), in terms of mean squared error (MSE) and accuracy. The simulation results show that the proposed BatLFANN achieved 99.88153% accuracy with an MSE of 0.001185, and BatLFBP achieved 99.834185% accuracy with an MSE of 0.001658 on WL5. On WL10, the proposed BatLFANN achieved 99.89899% accuracy with an MSE of 0.00101, and BatLFBP achieved 99.84473% accuracy with an MSE of 0.004553. Similarly, on WL15 the proposed BatLFANN achieved 99.82853% accuracy with an MSE of 0.001715, and BatLFBP achieved 99.3262% accuracy with an MSE of 0.006738, better accuracy than the other hybrid models.
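The Lévy-flight random walk that replaces the standard bat algorithm's uniform walk can be sketched with Mantegna's algorithm for generating Lévy-stable steps. This is a generic sketch of the step generator, assuming the common exponent β = 1.5; the paper's exact parameterization may differ:

```python
import math
import random


def levy_step(beta=1.5, rng=random):
    """One heavy-tailed step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)


def levy_walk(position, scale=0.01, beta=1.5, rng=random):
    """Perturb a bat's position vector with Lévy steps: mostly small moves
    plus occasional long jumps, which helps escape local minima."""
    return [x + scale * levy_step(beta, rng) for x in position]
```

The occasional long jumps are exactly what distinguishes this walk from a Gaussian or uniform one, and are the motivation for the BatLF variants.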
Abstract: The application of deep learning techniques in the medical field, specifically for Atrial Fibrillation (AFib) detection through Electrocardiogram (ECG) signals, has witnessed significant interest. Accurate and timely diagnosis increases the patient's chances of recovery. However, issues like overfitting and inconsistent accuracy across datasets remain challenges. In a quest to address these challenges, this study presents two prominent deep learning architectures, ResNet-50 and DenseNet-121, to evaluate their effectiveness in AFib detection. The aim was to create a robust detection mechanism that consistently performs well. Metrics such as loss, accuracy, precision, sensitivity, and Area Under the Curve (AUC) were utilized for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories. It demonstrated lower loss rates of 0.0315 and 0.0305, superior accuracy of 98.77% and 98.88%, precision of 98.78% and 98.89%, and sensitivity of 98.76% and 98.86% for training and validation, respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assist future researchers in selecting suitable deep learning architectures for AFib detection. Moreover, the outcomes of this study are anticipated to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and early detection of AFib, thereby fostering improved patient care and outcomes.
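The scalar metrics used to compare the two architectures (accuracy, precision, sensitivity) all derive from the binary confusion matrix; a minimal computation from raw labels, independent of any deep learning framework, looks like this:

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and sensitivity (recall) from label lists.

    tp/tn/fp/fn are the four cells of the binary confusion matrix;
    sums of booleans count the matching (truth, prediction) pairs.
    """
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
    }
```

Reporting all three together, as the study does, matters because a model can score high accuracy on an imbalanced ECG dataset while missing most AFib cases (low sensitivity).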
Funding: The work was partially supported by the Computer Research Institute of Montreal, Quebec, Canada; we acknowledge the support of the Ministère de l'Économie et de l'Innovation, Quebec, Canada. This work was also partially supported by Taif University Researchers Supporting Project Number (TURSP-2020/215), Taif University, Taif, Saudi Arabia.
Abstract: Medical data tampering has become one of the main challenges in the field of security-aware medical data processing. Forgery of normal patients' medical data to present them as COVID-19 patients is an illegitimate action that has been carried out in different ways recently. Therefore, the integrity of these data can be questionable. Forgery detection is a method of detecting an anomaly in manipulated forged data. An appropriate number of features are needed to identify an anomaly as either forged or non-forged data in order to find distortion or tampering in the original data. Convolutional neural networks (CNNs) have contributed a major breakthrough in this type of detection. There has been much interest from both clinicians and the AI community in the possibility of widespread usage of artificial neural networks for quick diagnosis using medical data for early COVID-19 patient screening. The purpose of this paper is to detect forgery in COVID-19 medical data by using a CNN on error level analysis (ELA), verifying the noise pattern in the data. The proposed improved ELA method is evaluated using a type of data-splicing forgery and sigmoid and ReLU activation schemes. The proposed method is verified by manipulating COVID-19 data using different types of forgeries and then applying the proposed CNN model to the data to detect the tampering. The results show that the accuracy of the proposed CNN model on the test COVID-19 data is approximately 92%.
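The principle behind error level analysis can be illustrated with a toy one-dimensional model: lossy compression is approximated by quantization, and the residual between the data and its recompressed copy is the "error level". This is a deliberately simplified stand-in for JPEG recompression, not the paper's method:

```python
def quantize(pixels, q):
    """Toy lossy 'compression': snap each value to the nearest multiple of q."""
    return [q * round(p / q) for p in pixels]


def error_level(pixels, q=16):
    """Per-pixel residual between the data and its recompressed copy.

    Regions that already went through compression once barely change on
    recompression (residual ~0); spliced-in regions with different
    compression history leave larger residuals, which a CNN can learn
    to flag as tampering.
    """
    return [abs(p - r) for p, r in zip(pixels, quantize(pixels, q))]
```

In the real ELA pipeline the image is re-saved as JPEG at a known quality and the difference image becomes the CNN's input; the quantization here plays the role of that re-save.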
Abstract: Melanoma is one of the lethal and rare types of skin cancer. It is curable at an initial stage, and the patient can survive easily. It is very difficult to screen all skin-lesion patients due to costly treatment. Clinicians require a correct method for the right treatment of dermoscopic clinical features such as lesion borders, pigment networks, and the color of melanoma. These challenges require an automated system to classify the clinical features of melanoma and non-melanoma disease. Trained clinicians can overcome issues such as low contrast, lesions varying in size and color, and the existence of several objects like hair, reflections, air bubbles, and oils on almost all images. Active contour is one of the suitable methods, with some drawbacks, for the segmentation of irregular shapes. An entropy- and morphology-based automated mask selection is proposed for the active contour method. The proposed method can improve the overall segmentation along the boundary of melanoma images. In this study, features have been extracted to perform classification on different texture scales using the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP). Four different moments are extracted in six different color spaces (HSV, linear RGB, YIQ, YCbCr, XYZ, and CIE L*a*b*), and the global information from the different color channels is combined. Hybrid fused texture features, local and global color features, shape features, and an artificial neural network (ANN) as the classifier have thus been proposed for the categorization of malignant and non-malignant lesions. Experiments were carried out on the Dermis, DermQuest, and PH2 datasets. The results of our advanced method showed superiority over the existing state-of-the-art techniques.
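Of the texture descriptors named above, the local binary pattern is the simplest to show concretely: each interior pixel is encoded by thresholding its 8 neighbours against it, and the image is summarized by the histogram of codes. A minimal sketch (the basic 3x3 LBP, not the paper's exact multi-scale variant):

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c):
    each neighbour >= centre contributes one bit, clockwise from top-left."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code


def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels: a compact,
    illumination-robust texture descriptor used as a classifier feature."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

In the fused-feature scheme, such a histogram (local texture) would be concatenated with GLCM statistics and the per-channel color moments before being fed to the ANN.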
Funding: Taif University Researchers Supporting Project number (TURSP-2020/98).
Abstract: Organizational and end-user data breaches are heavily influenced by the role of information security conscious care behavior in the respective incident responses. This research study draws upon the literature in the areas of information security, incident response, the theory of planned behaviour, and protection motivation theory to expand and empirically validate a modified framework of information security conscious care behaviour formation. The applicability of the theoretical framework is shown through a case study labelled as a cyber-attack of unprecedented scale and sophistication in Singapore's history to date, the 2018 SingHealth data breach. The single in-depth case study observed information security awareness, policy, experience, attitude, subjective norms, perceived behavioral control, threat appraisal, and self-efficacy as emerging prominently in the framework's applicability to incident handling. The data analysis did not support a relationship between threat severity and conscious care behaviour. The findings from the above-mentioned observations are presented as possible key drivers in shaping information security conscious care behaviour in real-world cyber incident management.
Funding: Taif University Researchers Supporting Project number (TURSP-2020/98).
Abstract: The World Health Organization (WHO) terms dengue a serious illness that impacts almost half of the world's population and carries no specific treatment. Early and accurate detection of its spread in affected regions can save precious lives. Despite the severity of the disease, few noticeable works can be found that involve sentiment analysis to mine accurate intuitions from social media text streams. However, the massive data explosion in recent years has led to difficulties in storing and processing large amounts of data, as reliable mechanisms to gather the data and suitable techniques to extract meaningful insights from it are required. This research study proposes a sentiment analysis polarity approach for collecting data and extracting relevant information about dengue via Apache Hadoop. The method consists of two main parts: the first collects data from social media using Apache Flume, while the second focuses on querying and extracting relevant information via a hybrid filtration-polarity algorithm using Apache Hive. To overcome the noisy and unstructured nature of the data, the process of extracting information is characterized by pre- and post-filtration phases. As a result, only with the integration of Flume and Hive with filtration and polarity analysis can a reliable sentiment analysis technique be offered to collect and process large-scale data from the social network. We show how the Apache Hadoop ecosystem, Flume and Hive, can provide sentiment analysis capability by storing and processing large amounts of data. An important finding of this paper is that developing efficient sentiment analysis applications for detecting diseases can be more reliable through the use of Hadoop ecosystem components than through the use of ordinary machines.
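The filtration-then-polarity flow can be sketched at a small scale outside Hadoop. The lexicons below are invented stand-ins for illustration (the paper does not publish its word lists), and the noise filter mimics the pre-filtration phase that strips retweet markers, handles, and links:

```python
# Hypothetical mini-lexicons; the real system's word lists are not given.
POSITIVE = {"recovered", "safe", "improving", "cured"}
NEGATIVE = {"fever", "outbreak", "dengue", "hospitalized", "severe"}
NOISE = {"rt", "http"}


def pre_filter(tokens):
    """Pre-filtration: drop noisy tokens (retweet markers, links, handles)
    before any polarity scoring, mirroring the pre-filtration phase."""
    return [t for t in tokens if t not in NOISE and not t.startswith("@")]


def polarity(tokens):
    """Lexicon polarity: (#positive - #negative) mapped to a label."""
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

In the proposed system the same two steps run at scale: Flume ingests the raw stream, and Hive queries apply the filtration and polarity logic over the stored data.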
Funding: This research work was supported by Universiti Malaysia Sabah, Malaysia.
Abstract: University timetabling problems are a yearly challenging task, faced repeatedly each semester. The problems are considered non-deterministic polynomial-time (NP) and combinatorial optimization problems (COP), which means that they can be solved through optimization algorithms to produce the desired optimal timetable. Several techniques have been used to solve university timetabling problems, and most of them use optimization techniques. This paper provides a comprehensive review of the most recent studies dealing with concepts, methodologies, optimization, benchmarks, and open issues of university timetabling problems. The review starts by presenting the essence of university timetabling as an NP-COP; defining and clarifying the two classes of university timetabling, university course timetabling and university examination timetabling; illustrating the algorithms adopted for solving such problems; elaborating the university timetabling constraints to be considered to achieve the optimal timetable; and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for evaluation. It is noted that meta-heuristic methodologies are widely used in the literature. Additionally, multi-objective optimization has recently been increasingly used to solve this problem and can identify robust university timetabling solutions. Finally, trends and future directions in university timetabling problems are provided. This paper provides useful information for students, researchers, and specialists interested in this area of research. The challenges and possibilities for future research are also explored.
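The combinatorial core of timetabling can be made concrete with its classic reduction to graph colouring: courses are vertices, conflicts (shared students or rooms) are edges, and time slots are colours. A greedy sketch of the hard-constraint part only, ignoring the soft constraints and meta-heuristics the review surveys:

```python
def greedy_timetable(courses, conflicts, slots):
    """Assign each course the first slot not used by any conflicting course
    (greedy graph colouring).

    conflicts: a set of frozenset pairs of courses that must not share a
    slot. Returns a {course: slot} dict, or None if the greedy order runs
    out of slots (the problem may still be feasible in another order --
    this is exactly why NP-hard instances need metaheuristics).
    """
    assignment = {}
    for course in courses:
        used = {assignment[other] for other in assignment
                if frozenset((course, other)) in conflicts}
        free = [s for s in slots if s not in used]
        if not free:
            return None
        assignment[course] = free[0]
    return assignment
```

Real solvers layer soft constraints (lecturer preferences, room capacity, gaps) on top of this feasibility core, which is where the surveyed meta-heuristic and multi-objective methods come in.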
Funding: Supported by the University of Malaya Bantuan Khas Penyelidikan research grant BKS083-2017 and the Fundamental Research Grant Scheme (FRGS), Grant number FP112-2018A, from the Ministry of Education Malaysia, Higher Education.
Abstract: Learning analytics is a rapidly evolving research discipline that uses the insights generated from data analysis to support learners as well as optimize both the learning process and environment. This paper studied students' engagement level with the Learning Management System (LMS) via a learning analytics tool, students' approaches to managing their studies, and possible learning analytics methods for analyzing student data. An extensive systematic literature review (SLR) was employed for the selection, sorting, and exclusion of articles from diverse renowned sources. The findings show that most of the engagement in the LMS is driven by educators. Additionally, we discuss the factors in the LMS, the causes of low engagement, and ways of increasing engagement via the learning analytics approach. Apart from recognizing the learning analytics approach as a successful method and technique for analyzing LMS data, this research further highlights the possibility of merging the learning analytics technique with LMS engagement in every institution as a direction for future research.
Abstract: Plant diseases pose a significant challenge to global agricultural productivity, necessitating efficient and precise diagnostic systems for early intervention and mitigation. In this study, we propose a novel hybrid framework that integrates EfficientNet-B8, a Vision Transformer (ViT), and Knowledge Graph Fusion (KGF) to enhance plant disease classification across 38 distinct disease categories. The proposed framework leverages deep learning and semantic enrichment to improve classification accuracy and interpretability. EfficientNet-B8, a convolutional neural network (CNN) with optimized depth and width scaling, captures fine-grained spatial details in high-resolution plant images, aiding in the detection of subtle disease symptoms. In parallel, the ViT, a transformer-based architecture, effectively models long-range dependencies and global structural patterns within the images, ensuring robust disease pattern recognition. Furthermore, KGF incorporates domain-specific metadata, such as crop type, environmental conditions, and disease relationships, to provide contextual intelligence and improve classification accuracy. The proposed model was rigorously evaluated on a large-scale dataset containing diverse plant disease images, achieving outstanding performance with 99.7% training accuracy and 99.3% testing accuracy. The precision and F1-score were consistently high across all disease classes, demonstrating the framework's ability to minimize false positives and false negatives. Compared to conventional deep learning approaches, this hybrid method offers a more comprehensive and interpretable solution by integrating self-attention mechanisms and domain knowledge. Beyond its superior classification performance, this model opens avenues for optimizing metadata dependency and reducing computational complexity, making it more feasible for real-world deployment in resource-constrained agricultural settings. The proposed framework represents an advancement in precision agriculture, providing scalable, intelligent disease diagnosis that enhances crop protection and food security.
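One simple way to combine the three signal sources described above is late fusion: a weighted average of the CNN and ViT per-class scores with a knowledge-graph-derived prior (e.g. which diseases are plausible for the reported crop). The abstract does not specify the fusion operator, so this is a hypothetical sketch with invented weights:

```python
def fuse_predictions(cnn_probs, vit_probs, kg_prior, weights=(0.4, 0.4, 0.2)):
    """Late fusion of per-class scores.

    cnn_probs / vit_probs: softmax outputs of the two vision models.
    kg_prior: per-class plausibility from metadata (crop type, season, ...),
    e.g. zero for diseases the crop cannot contract. The weighted sum is
    renormalised so the result is again a probability distribution.
    """
    w_cnn, w_vit, w_kg = weights
    fused = [w_cnn * c + w_vit * v + w_kg * k
             for c, v, k in zip(cnn_probs, vit_probs, kg_prior)]
    total = sum(fused)
    return [f / total for f in fused]
```

A prior of zero for an implausible class suppresses it even when both vision models give it some mass, which is one plausible reading of how the metadata "provides contextual intelligence".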