The detection and characterization of human veins using infrared (IR) image processing have gained significant attention due to their potential applications in biometric identification, medical diagnostics, and vein-based authentication systems. This paper presents a low-cost approach for the automatic detection and characterization of human veins from IR images. The proposed method uses image processing techniques including segmentation, feature extraction, and pattern recognition algorithms. Initially, the IR images are preprocessed to enhance vein structures and reduce noise. Subsequently, a CLAHE algorithm is employed to extract vein regions based on their unique IR absorption properties. Features such as vein thickness, orientation, and branching patterns are extracted using mathematical morphology and directional filters. Finally, a classification framework is implemented to categorize veins and distinguish them from surrounding tissues or artifacts. The system was implemented on a Raspberry Pi-based setup. Experimental results on IR images demonstrate the effectiveness and robustness of the proposed approach in accurately detecting and characterizing human veins. The developed system shows promise for integration into applications requiring reliable and secure identification based on vein patterns. Our work provides an effective and low-cost solution for nursing staff in low- and middle-income countries to perform safe and accurate venipuncture.
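As a concrete illustration of the enhancement and segmentation steps, here is a minimal sketch using OpenCV; the blur kernel, clip limit, and thresholding parameters are assumptions for illustration, not the paper's exact settings.

```python
import cv2

def enhance_veins(ir_image_path):
    """Sketch of CLAHE-based vein enhancement and segmentation (illustrative)."""
    img = cv2.imread(ir_image_path, cv2.IMREAD_GRAYSCALE)
    # Reduce sensor noise before contrast enhancement.
    img = cv2.medianBlur(img, 5)
    # CLAHE: contrast-limited adaptive histogram equalization on local tiles.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)
    # Adaptive threshold to separate dark vein regions from surrounding tissue.
    veins = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY_INV, 21, 5)
    # Morphological opening removes small artifacts while keeping elongated veins.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(veins, cv2.MORPH_OPEN, kernel)
```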
The proliferation of maliciously coded documents as file transfers increase has led to a rise in sophisticated attacks. Portable Document Format (PDF) files have emerged as a major attack vector for malware due to their adaptability and wide usage. Detecting malware in PDF files is challenging because the format can include various harmful elements such as embedded scripts, exploits, and malicious URLs. This paper presents a comparative analysis of machine learning (ML) techniques, including Naive Bayes (NB), K-Nearest Neighbor (KNN), Average One Dependency Estimator (A1DE), Random Forest (RF), and Support Vector Machine (SVM), for PDF malware detection. The study utilizes a dataset obtained from the Canadian Institute for Cybersecurity and employs different testing criteria, namely percentage splitting and 10-fold cross-validation. The performance of the techniques is evaluated using F1-score, precision, recall, and accuracy measures. The results indicate that KNN outperforms the other models, achieving an accuracy of 99.8599% using 10-fold cross-validation. The findings highlight the effectiveness of ML models in accurately detecting PDF malware and provide insights for developing robust systems to protect against malicious activities.
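The evaluation protocol maps directly onto a standard scikit-learn workflow. The sketch below assumes features and labels already extracted from the PDF dataset (synthetic stand-in data is used so the snippet runs); the hyperparameters are illustrative defaults, not the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-in for features extracted from benign/malicious PDFs.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    # 10-fold cross-validation with the same four measures as the paper.
    scores = cross_validate(model, X, y, cv=10,
                            scoring=("accuracy", "precision", "recall", "f1"))
    means = {k[5:]: v.mean() for k, v in scores.items() if k.startswith("test_")}
    print(name, means)
```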
The seamless integration of intelligent Internet of Things devices with conventional wireless sensor networks has revolutionized data communication for different applications, such as remote health monitoring, industrial monitoring, transportation, and smart agriculture. Efficient and reliable data routing is one of the major challenges in Internet of Things networks due to the heterogeneity of nodes. This paper presents a traffic-aware, cluster-based, and energy-efficient routing protocol to improve data delivery in such networks. The proposed protocol divides the network into clusters, where optimal cluster heads are selected among super and normal nodes based on their residual energies. The protocol considers multi-criteria attributes, i.e., energy, traffic load, and distance parameters, to select the next hop for data delivery towards the base station. The performance of the proposed protocol is evaluated through the network simulator NS-3.40. For different traffic rates, numbers of nodes, and packet sizes, the proposed protocol outperformed LoRaWAN in terms of end-to-end packet delivery ratio, energy consumption, end-to-end delay, and network lifetime. For 100 nodes, the proposed protocol achieved a 13% improvement in packet delivery ratio, a 10 ms improvement in delay, and a 10 mJ improvement in average energy consumption over LoRaWAN.
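One way to read the multi-criteria next-hop selection is as a weighted score that favors high residual energy and penalizes queue load and distance to the base station. The sketch below is a hedged illustration of that idea; the weights, normalization, and field names are assumptions, not the paper's calibrated formulation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float  # joules remaining
    traffic_load: int       # packets currently queued
    distance_to_bs: float   # metres to the base station

def select_next_hop(neighbors, w_energy=0.5, w_load=0.25, w_dist=0.25):
    """Pick the neighbor maximizing a weighted multi-criteria score
    (illustrative weights, normalized against the neighborhood maxima)."""
    max_e = max(n.residual_energy for n in neighbors) or 1.0
    max_l = max(n.traffic_load for n in neighbors) or 1
    max_d = max(n.distance_to_bs for n in neighbors) or 1.0
    def score(n):
        return (w_energy * n.residual_energy / max_e
                - w_load * n.traffic_load / max_l
                - w_dist * n.distance_to_bs / max_d)
    return max(neighbors, key=score)

# Example: the low-load, energy-rich neighbor closest to the base station wins.
candidates = [Node(1, 4.0, 8, 120.0), Node(2, 4.5, 2, 90.0), Node(3, 1.0, 1, 60.0)]
print(select_next_hop(candidates).node_id)
```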
In the field of radiocommunication, modulation type identification is one of the most important tasks in signal processing. This study aims to implement a modulation recognition system based on two machine learning approaches: K-Nearest Neighbors (KNN) and Artificial Neural Networks (ANN). From a statistical and spectral analysis of signals, nine key differentiating features are extracted and used as input vectors for each trained model. The feature extraction is performed using the Hilbert transform and the forward and inverse Fourier transforms. The experiments with the AMC Master dataset classify ten (10) types of analog and digital modulations: AM_DSB_FC, AM_DSB_SC, AM_USB, AM_LSB, FM, MPSK, 2PSK, MASK, 2ASK, and MQAM. For the simulation of the chosen model, signals are polluted by Additive White Gaussian Noise (AWGN). The simulation results show that the best identification rate is achieved by the MLP neural network, with 90.5% accuracy above a 10 dB signal-to-noise ratio, outperforming the k-nearest neighbors algorithm by more than 15%.
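The Hilbert transform yields the analytic signal, from which instantaneous amplitude, phase, and frequency statistics can be computed; these are typical of the feature families used in modulation recognition. The SciPy sketch below shows three such example features; they are illustrative and do not reproduce the paper's nine-feature set.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_phase_features(signal):
    """Example instantaneous amplitude/phase/frequency statistics
    derived from the analytic signal (illustrative features only)."""
    analytic = hilbert(signal)                  # analytic signal via Hilbert transform
    envelope = np.abs(analytic)                 # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))       # instantaneous phase
    inst_freq = np.diff(phase) / (2 * np.pi)    # instantaneous frequency (cycles/sample)
    t = np.arange(len(phase))
    detrended = phase - np.polyval(np.polyfit(t, phase, 1), t)  # remove carrier slope
    return {
        "envelope_cv": envelope.std() / envelope.mean(),  # amplitude variation (ASK-like)
        "freq_std": inst_freq.std(),                      # frequency variation (FM-like)
        "phase_std": detrended.std(),                     # phase variation (PSK-like)
    }

# Example: a noisy FM-like test tone.
t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.cos(2 * np.pi * 100 * t + 3 * np.sin(2 * np.pi * 5 * t)) + 0.05 * np.random.randn(1000)
print(envelope_phase_features(sig))
```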
Retrieving information from an evolving digital data collection in response to a user's query is essential and requires efficient retrieval mechanisms that reduce the time needed to search such massive collections. Scanning and analyzing every document to retrieve the most relevant textual data item is highly time-consuming, so a sophisticated technique is required to match a query against the document collection. Achieving retrieval that is both accurate and fast from a large collection remains challenging. Text summarization is a dominant research field in information retrieval and text processing for locating the most appropriate data object, as a single document or multiple documents, in a collection. Machine learning and knowledge-based techniques are the two query-based extractive text summarization techniques in Natural Language Processing (NLP) that can be used for precise retrieval and are considered the best option. NLP uses machine learning approaches, both supervised and unsupervised, for calculating probabilistic features. This study proposes a hybrid approach for query-based extractive text summarization. The TextRank algorithm is used as the core algorithm of the implementation. Query-based text summarization of multiple documents using the hybrid approach, which combines the K-Means clustering technique with Latent Dirichlet Allocation (LDA) as the topic modeling technique, produces 0.288, 0.631, and 0.328 for precision, recall, and F-score, respectively. The results show that the proposed hybrid approach performs better than the graph-based independent approach and the sentence- and word-frequency-based approach.
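A hedged sketch of how the clustering and topic-modeling components might be combined over candidate sentences, using scikit-learn; the sentence pool, cluster and topic counts are placeholders, and the query matching and TextRank-based selection steps are omitted, as the paper's exact pipeline is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder sentence pool drawn from the multi-document collection.
sentences = [
    "Text summarization selects the most relevant sentences.",
    "Clustering groups sentences with similar content.",
    "Topic models estimate latent themes across documents.",
    "Queries guide which sentences belong in the summary.",
]

# K-Means over TF-IDF vectors groups sentences by content similarity.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

# LDA over raw term counts gives each sentence a topic distribution.
counts = CountVectorizer(stop_words="english").fit_transform(sentences)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# A summarizer could then pick the top-scoring sentence per cluster/topic.
for sent, lab, top in zip(sentences, labels, topics):
    print(lab, top.round(2), sent)
```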
With the rapid development of information technology, digital images have become an important medium for information transmission. However, manipulating images has become a common task with powerful image editing tools and software, and people can tamper with image content for personal gain without leaving any visible traces of splicing. Images are easily spliced and distributed, and this situation poses a great threat to social security. This survey covers image splicing and its localization. The present status of splicing localization approaches is discussed, along with recommendations for future research.
The growing field of urban monitoring has increasingly recognized the potential of autonomous technologies, particularly drone swarms. The deployment of intelligent drone swarms offers promising solutions for enhancing the efficiency and scope of urban condition assessments. In this context, this paper introduces an innovative algorithm designed to navigate a swarm of drones through urban landscapes for monitoring tasks. The primary challenge addressed by the algorithm is coordinating drone movements from one location to another while circumventing obstacles, such as buildings. The algorithm incorporates three key components to optimize obstacle detection, navigation, and energy efficiency within a drone swarm. First, the algorithm calculates the position of a virtual leader, which acts as a navigational beacon to influence the overall direction of the swarm. Second, the algorithm identifies observers within the swarm based on the current orientation. Third, to further refine obstacle avoidance, it calculates angular velocity using fuzzy logic. This approach considers the proximity of obstacles detected by the operational rangefinders and the target's location, allowing for a nuanced and adaptable computation of angular velocity. The integration of fuzzy logic enables the drone swarm to adapt dynamically to diverse urban conditions, ensuring practical obstacle avoidance. The proposed algorithm demonstrates enhanced obstacle detection and navigation accuracy in comprehensive simulations. The results suggest that the intelligent obstacle avoidance algorithm holds promise for the safe and efficient deployment of autonomous mobile drones in urban monitoring applications.
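To make the fuzzy angular-velocity idea concrete, here is a minimal Sugeno-style sketch in plain Python: obstacle proximity and target bearing feed two fuzzy rules whose weighted outputs are blended into a turn rate. The membership ranges, rule set, and gains are assumptions for illustration, not the paper's rule base.

```python
def falling(x, lo, hi):
    """Shoulder membership: 1 at or below lo, 0 at or above hi, linear between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def angular_velocity(obstacle_dist_m, target_bearing_rad):
    """Blend an 'avoid obstacle' rule with a 'seek target' rule (Sugeno-style)."""
    near = falling(obstacle_dist_m, 2.0, 10.0)   # degree to which obstacle is NEAR
    clear = 1.0 - near                           # degree to which the path is CLEAR
    avoid_turn = 1.2                             # rad/s, steer away from the obstacle
    seek_turn = 0.4 * target_bearing_rad         # proportional turn toward the target
    # Weighted average of the rule outputs by their firing strengths.
    return (near * avoid_turn + clear * seek_turn) / (near + clear)

# Example: obstacle 4 m ahead, target 30 degrees to the left.
print(angular_velocity(4.0, 0.52))
```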
As digital technologies have advanced, the number of paper documents converted into a digital format has increased exponentially. To respond to the urgent need to categorize this growing number of digitized documents, real-time classification of digitized documents was identified as the primary goal of our study. Paper classification is the first stage in automating document control and efficient knowledge discovery with little or no human involvement. Artificial intelligence methods such as deep learning are now combined with segmentation to study and interpret traits that were not conceivable ten years ago. Deep learning aids in comprehending input patterns so that object classes may be predicted. The segmentation process divides the input image into separate segments for a more thorough image study. This study proposes a deep learning-enabled framework for automated document classification that can be implemented in higher education. To this end, a dataset was developed that includes seven categories: Diplomas, Personal documents, Journal of Accounting of higher education diplomas, Service letters, Orders, Production orders, and Student orders. Subsequently, a deep learning model based on Conv2D layers is proposed for the document classification process. In the final part of this research, the proposed model is evaluated and compared with other machine learning techniques. The results demonstrate that the proposed deep learning model achieves high document-categorization performance, surpassing the other machine learning models with 94.84%, 94.79%, 94.62%, 94.43%, and 94.07% in accuracy, precision, recall, F-score, and AUC-ROC, respectively. The achieved results prove that the proposed deep model is suitable for practical use as an assistant to office workers.
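A minimal Keras sketch of a Conv2D-based classifier for the seven document categories; the layer depths, filter counts, and 224x224 grayscale input size are assumptions for illustration, as the paper's exact architecture is not reproduced here.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # Diplomas, Personal documents, ..., Student orders

# A small Conv2D stack for scanned-document classification (illustrative).
model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),          # grayscale page scans
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```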
Median filtering is a nonlinear signal processing technique that is advantageous in the field of image anti-forensics. Therefore, increasing attention has been paid to median filtering forensics. In this paper, a median filtering forensics method based on a quaternion convolutional neural network (QCNN) is proposed. Median filtering residuals (MFR) are used to preprocess the images. The MFR output is then expanded to four channels and used as the input of the QCNN. In the QCNN, a quaternion convolution is designed that mixes the information of different channels better than traditional methods, and a quaternion pooling layer is designed to evaluate the result of the quaternion convolution. The QCNN is proposed to better combine the three-channel information of color images and to fully extract forensic features. Experiments show that the proposed method achieves higher accuracy and shorter training time than a traditional convolutional neural network with the same convolution depth.
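The MFR preprocessing step is straightforward to sketch: it is the difference between an image and its median-filtered version, which suppresses image content and emphasizes filtering traces. A minimal SciPy version follows; the 3x3 window and the sign convention are assumptions, and the expansion to four quaternion channels is omitted.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filtering_residual(image, size=3):
    """Median filtering residual: median-filtered image minus the original.
    Window size and sign convention are illustrative assumptions."""
    image = np.asarray(image, dtype=np.float32)
    return median_filter(image, size=size) - image
```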
Trace tools like LTTng have a very low impact on the traced software compared with traditional debuggers. However, for long runs in resource-constrained and high-throughput environments, such as embedded network switching nodes and production servers, the collective tracing impact on the target software adds up considerably. The overhead is not just in terms of execution time but also in terms of the huge amount of data to be stored, processed, and analyzed offline. This paper presents a novel way of dealing with such huge trace data generation by introducing a Just-In-Time (JIT) filter-based tracing system for sieving through the flood of high-frequency events and recording only those that are relevant, when a specific condition is met. With a tiny filtering cost, the user can filter out most events and focus only on the events of interest. We show that in certain scenarios, the JIT-compiled filters prove to be three times more effective than similar interpreted filters. We also show that the benefits of JIT compilation grow with the number of filter predicates and context variables, with some JIT-compiled filters being three times faster than their interpreted counterparts. We further present a new architecture, using our filtering system, which enables cooperative tracing between kernel and process tracing VMs (virtual machines) that share data efficiently. We demonstrate its use through a tracing scenario where the user can dynamically specify syscall latency through the userspace tracing VM, whose effect is reflected in tracing decisions made by the kernel tracing VM. We compare the data access performance of our shared-memory system and show an almost 100-fold improvement over traditional data sharing for cooperative tracing.
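The compiled-versus-interpreted filter trade-off can be illustrated with a rough Python analogue (this is not the paper's LTTng/bytecode machinery): the same predicate over event context fields, either re-evaluated from source for every event or compiled once to a code object and reused.

```python
import time

# A filter predicate over event context fields, expressed as a string.
predicate = "event['syscall'] == 'read' and event['latency_ns'] > 1_000_000"

# Interpreted path: the predicate source is re-parsed for every event.
def interpreted(event):
    return eval(predicate)

# "JIT"-style path: compile the predicate once, reuse the code object.
code = compile(predicate, "<filter>", "eval")
def compiled(event):
    return eval(code)

# Synthetic event stream; only high-latency read events should pass the filter.
events = [{"syscall": "read", "latency_ns": i * 1000} for i in range(100_000)]
for fn in (interpreted, compiled):
    t0 = time.perf_counter()
    hits = sum(fn(e) for e in events)
    print(fn.__name__, hits, f"{time.perf_counter() - t0:.3f}s")
```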