In a telemedicine diagnosis system, the emergence of 3D imaging enables doctors to make clearer judgments, and its accuracy also directly affects doctors' diagnosis of the disease. In order to ensure the safe transmission and storage of medical data, a 3D medical watermarking algorithm based on wavelet transform is proposed in this paper. The proposed algorithm employs the principal component analysis (PCA) transform to reduce the data dimension, which can minimize the error between the extracted components and the original data in the mean square sense. In particular, the algorithm builds a bacterial foraging model based on particle swarm optimization (BF-PSO), by which the optimal wavelet coefficient is found for embedding and is used as the absolute feature of watermark embedding, thereby achieving the optimal balance between embedding capacity and imperceptibility. A series of experimental results obtained with MATLAB on the standard MRI brain volume dataset demonstrate that the proposed algorithm is strongly robust and that the 3D model undergoes only small deformation after the watermark is embedded.
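As an illustration of the dimensionality-reduction step described above, the following minimal sketch applies PCA to 3D vertex data; the number of retained components k and the random stand-in vertices are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def pca_reduce(vertices: np.ndarray, k: int = 2):
    """Project N x 3 vertex data onto its top-k principal components.

    Keeping the leading components minimizes the mean-squared error
    between the reconstruction and the original data.
    """
    mean = vertices.mean(axis=0)
    centered = vertices - mean
    cov = np.cov(centered, rowvar=False)            # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:k]           # indices of the top-k components
    components = eigvecs[:, order]
    scores = centered @ components                  # reduced representation
    reconstruction = scores @ components.T + mean   # minimum-MSE reconstruction
    return scores, reconstruction

if __name__ == "__main__":
    pts = np.random.rand(1000, 3)                   # stand-in for mesh vertices
    scores, recon = pca_reduce(pts, k=2)
    print("reconstruction MSE:", np.mean((pts - recon) ** 2))
```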
Small targets and occluded targets will inevitably appear in an image during shooting due to the influence of angle, distance, complex scenes, illumination intensity, and other factors. These targets have few effective pixels and few, inconspicuous features, which makes extracting efficient features difficult and easily leads to false detection, missed detection, and repeated detection, degrading the performance of target detection models. An improved Faster region convolutional neural network (RCNN) algorithm (CF-RCNN), integrating a convolutional block attention module (CBAM) and feature pyramid networks (FPN), is proposed to improve the detection and recognition accuracy of small-size, occluded, or truncated objects in complex scenes. Firstly, the CBAM mechanism is integrated into the feature extraction network to improve the detection of occluded or truncated objects. Secondly, the FPN feature pyramid structure is introduced to obtain high-resolution features with strong semantics, enhancing the detection of small-size objects. The experimental results show that the mean average precision of the improved algorithm on PASCAL VOC2012 reaches 76.1%, which is 13.8 percentage points higher than that of the commonly used Faster RCNN and other algorithms. Furthermore, it outperforms commonly used small-sample target detection algorithms.
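The CBAM block mentioned above can be sketched as channel attention followed by spatial attention. The PyTorch snippet below is a hedged, generic CBAM implementation; the reduction ratio and the 7x7 spatial kernel are common defaults assumed here, not settings reported for CF-RCNN.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention over a feature map."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over channel-wise avg and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                       # channel re-weighting
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                       # spatial re-weighting

feat = torch.randn(1, 256, 32, 32)      # e.g. a backbone feature map
print(CBAM(256)(feat).shape)
```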
The problem of weeds in crops is a natural problem for farmers. Machine Learning (ML), Deep Learning (DL), and Unmanned Aerial Vehicles (UAV) are among the advanced technologies that should be used in order to reduce the use of pesticides while also protecting the environment and ensuring crop safety. Deep-learning-based crop and weed identification systems have the potential to save money while also reducing environmental stress. The accuracy of ML/DL models has been shown to be limited in the past by a variety of factors, including the selection of an efficient wavelength, the spatial resolution, and the selection and tuning of hyperparameters. The purpose of the current research is to develop a new automated weed detection system that uses Convolutional Neural Network (CNN) classification on a real dataset of 4400 UAV pictures with 15336 segments. Snapshots were used to choose the optimal parameters for the proposed CNN-LVQ model. The soil class achieved a user accuracy of 100% with the proposed CNN-LVQ model, followed by soybean (99.79%), grass (98.58%), and broadleaf (98.32%). The developed CNN-LVQ model showed an overall accuracy of 99.44% after rigorous hyperparameter tuning for weed detection, significantly higher than previously reported studies.
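A hedged sketch of the LVQ component that the CNN-LVQ name refers to: textbook LVQ1 prototype updates over feature vectors. The 64-dimensional features, four classes, and learning rate are illustrative assumptions; the paper's actual CNN feature extractor and tuning procedure are not reproduced.

```python
import numpy as np

def lvq1_train(features, labels, prototypes, proto_labels, lr=0.05, epochs=20):
    """Textbook LVQ1: pull the nearest prototype toward same-class samples,
    push it away from different-class samples."""
    protos = prototypes.copy()
    for _ in range(epochs):
        for x, y in zip(features, labels):
            d = np.linalg.norm(protos - x, axis=1)
            j = int(np.argmin(d))                 # best-matching prototype
            sign = 1.0 if proto_labels[j] == y else -1.0
            protos[j] += sign * lr * (x - protos[j])
    return protos

def lvq1_predict(features, prototypes, proto_labels):
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Toy usage with assumed 64-dimensional CNN features and 4 classes
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 64)), rng.integers(0, 4, size=200)
P, pl = rng.normal(size=(4, 64)), np.arange(4)
P = lvq1_train(X, y, P, pl)
print("train accuracy:", (lvq1_predict(X, P, pl) == y).mean())
```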
Deep learning techniques have outstanding performance in feature extraction and model fitting. In the field of aero-engine fault diagnosis, the introduction of deep learning technology is of great significance. The aero-engine is the heart of the aircraft, and its stable operation is the primary guarantee of the aircraft. In order to ensure the normal operation of the aircraft, it is necessary to study and diagnose aero-engine faults. Among the many engine failures, the one that occurs most frequently and is most hazardous is surge, which often poses a great threat to flight safety. On the basis of analyzing the mechanism of aero-engine surge, an aero-engine surge fault diagnosis method based on deep learning technology is proposed. In this paper, key sensor data are obtained by analyzing different engine sensor data. An aero-engine surge dataset acquisition algorithm (ASDA) is proposed to sample the fault and normal points to generate the training, validation and test sets. Based on neural network models such as the one-dimensional convolutional neural network (1D-CNN), recurrent neural network (RNN), and long short-term memory network (LSTM), different neural network optimization algorithms are selected to achieve fault diagnosis and classification. The experimental results show that deep learning techniques perform well in aero-engine surge fault diagnosis. The aero-engine surge fault diagnosis network (ASFDN) proposed in this paper achieves better results: after training, the network achieves more than 99% classification accuracy on the test set.
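As a hedged illustration of the kind of 1D-CNN classifier mentioned above, the sketch below classifies fixed-length windows of multichannel sensor data as surge or normal. The window length, channel counts, and layer sizes are assumptions, not the ASFDN architecture.

```python
import torch
import torch.nn as nn

class Surge1DCNN(nn.Module):
    """Classify fixed-length windows of multichannel sensor data as surge/normal."""
    def __init__(self, n_sensors: int = 8, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # global pooling over the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_sensors, window_length)
        return self.classifier(self.features(x).squeeze(-1))

model = Surge1DCNN()
window = torch.randn(16, 8, 256)                  # 16 windows, 8 sensors, 256 samples each
logits = model(window)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (16,)))
loss.backward()
print(logits.shape, float(loss))
```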
In recent decades, log system management has been widely studied for data security management. System abnormalities or illegal operations can be found in time by analyzing the logs, which also provide evidence of intrusions. In order to ensure the integrity of logs in current systems, many researchers have designed log systems based on blockchain. However, the emerging blockchain is facing significant security challenges with the advance of quantum computers. An attacker equipped with a quantum computer can extract a user's private key from the public key to generate a forged signature, destroy the structure of the blockchain, and threaten the security of the log system. Thus, blind signatures on lattices in a post-quantum blockchain bring new security features for log systems. To address these issues, our paper firstly proposes a novel log system based on a post-quantum blockchain that can resist quantum computing attacks. Secondly, we utilize a post-quantum blind signature on a lattice to ensure both the security and the blindness of the log system, which preserves the privacy of log information to a large extent. Lastly, we enhance the security level of the lattice-based blind signature under the random oracle model, while the signature size grows slowly compared with other schemes. We also implement our protocol and conduct an extensive analysis to validate the ideas. The results show that our scheme's signature size edges up only subtly compared with others as the security level improves.
Facial Expression Recognition (FER) has been an interesting area of research wherever there is human-computer interaction. Human psychology, emotions and behaviors can be analyzed through FER. Classifiers used in FER have performed well on normal faces but have been found to be constrained on occluded faces. Recently, Deep Learning Techniques (DLT) have gained popularity in real-world applications, including the recognition of human emotions. The human face reflects emotional states and human intentions, and an expression is the most natural and powerful way of communicating non-verbally. Systems that mediate this communication between humans and machines are termed Human Machine Interaction (HMI) systems. FER can improve HMI systems, as human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolution Neural Network with Attention mechanism) to recognize seven types of human emotions, with satisfying results in its experiments. The proposed EECNN achieved 89.8% accuracy in classifying the images.
The Internet of Things (IoT) enables every real-world object to be seamlessly integrated with the traditional Internet. Heterogeneous real-world objects are enhanced with the capability to communicate, with computing capabilities, and with standards to interoperate with existing networks, yet these entities are resource-constrained and vulnerable to various security attacks. A huge number of research works are being carried out to analyze possible attacks and to propose standards for securing communication between devices in the IoT. In this article, a robust and lightweight authentication scheme for mutual authentication between client and server using the Constrained Application Protocol (CoAP) is proposed. The IoT enables devices with different characteristics and capabilities to be integrated with the Internet; these heterogeneous devices should interoperate with each other to accumulate, process and transmit data for facilitating smart services. The growth of IoT applications leads to a rapid growth in the number of IoT devices joining the global network and in network traffic over the traditional network. The proposed scheme greatly reduces the authentication overhead between devices by reducing the packet size of messages, the number of messages transmitted, and the processing overhead on communicating devices. The efficiency of this authentication scheme against attacks such as DoS (denial of service), replay attacks and resource-exhaustion attacks is also examined. Message transmission time is reduced by up to 50% using the proposed technique.
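The paper's exact handshake is not reproduced here; as a hedged illustration of lightweight mutual authentication in the same spirit, the sketch below shows a two-round HMAC challenge-response over a pre-shared key, where all field names, nonce sizes, and the truncated tag length are assumptions chosen to keep messages small.

```python
import hmac, hashlib, os

PSK = os.urandom(32)   # pre-shared key, assumed provisioned on both client and server

def tag(key: bytes, *parts: bytes) -> bytes:
    """Short HMAC-SHA256 tag, truncated to keep constrained-protocol payloads small."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()[:8]

# --- client side: send identity and a fresh nonce --------------------------
client_nonce = os.urandom(8)
request = {"id": b"node-42", "nc": client_nonce}

# --- server side: answer with its own nonce and a proof of the key ---------
server_nonce = os.urandom(8)
reply = {"ns": server_nonce, "proof": tag(PSK, request["nc"], server_nonce, b"srv")}

# --- client verifies the server, then proves itself ------------------------
assert hmac.compare_digest(reply["proof"],
                           tag(PSK, client_nonce, reply["ns"], b"srv"))
client_proof = tag(PSK, reply["ns"], client_nonce, b"cli")

# --- server verifies the client ---------------------------------------------
assert hmac.compare_digest(client_proof,
                           tag(PSK, server_nonce, request["nc"], b"cli"))
print("mutual authentication completed with two short exchanges")
```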
Safety helmet-wearing detection is an essential part of intelligent monitoring systems. To improve the speed and accuracy of detection, especially for small targets and occluded objects, this paper presents a novel and efficient detector model. The underlying core algorithm of this model adopts the YOLOv5 (You Only Look Once version 5) network, which has the best comprehensive detection performance, and improves it by adding an attention mechanism, the CIoU (Complete Intersection over Union) loss function, and the Mish activation function. First, it applies the attention mechanism in feature extraction, so that the network can learn the weight of each channel independently and enhance information dissemination between features. Second, it adopts the CIoU loss function to achieve accurate bounding box regression. Third, it utilizes the Mish activation function to improve detection accuracy and generalization ability. A safety helmet-wearing detection dataset containing more than 10,000 images collected from the Internet was built and preprocessed. On this self-made helmet-wearing test set, the average accuracy of helmet detection with the proposed algorithm is 96.7%, which is 1.9% higher than that of the YOLOv5 algorithm. It meets the accuracy requirements of helmet-wearing detection in construction scenarios.
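A hedged sketch of the two components named above: the standard CIoU term for axis-aligned boxes and the Mish activation, written with plain PyTorch tensor operations. The (x1, y1, x2, y2) box format and the toy values are assumptions for illustration.

```python
import math
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    """Mish activation: x * tanh(softplus(x))."""
    return x * torch.tanh(F.softplus(x))

def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Complete-IoU loss for boxes given as (x1, y1, x2, y2)."""
    # Intersection over union
    ix1, iy1 = torch.max(pred[..., 0], target[..., 0]), torch.max(pred[..., 1], target[..., 1])
    ix2, iy2 = torch.min(pred[..., 2], target[..., 2]), torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Normalized center distance (rho^2 / c^2) using the smallest enclosing box
    cpx, cpy = (pred[..., 0] + pred[..., 2]) / 2, (pred[..., 1] + pred[..., 3]) / 2
    ctx, cty = (target[..., 0] + target[..., 2]) / 2, (target[..., 1] + target[..., 3]) / 2
    rho2 = (cpx - ctx) ** 2 + (cpy - cty) ** 2
    ex1, ey1 = torch.min(pred[..., 0], target[..., 0]), torch.min(pred[..., 1], target[..., 1])
    ex2, ey2 = torch.max(pred[..., 2], target[..., 2]), torch.max(pred[..., 3], target[..., 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        torch.atan((target[..., 2] - target[..., 0]) / (target[..., 3] - target[..., 1] + eps))
        - torch.atan((pred[..., 2] - pred[..., 0]) / (pred[..., 3] - pred[..., 1] + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

pred = torch.tensor([[10.0, 10.0, 60.0, 80.0]])
gt = torch.tensor([[12.0, 15.0, 58.0, 85.0]])
print(float(ciou_loss(pred, gt)), mish(torch.tensor([-1.0, 0.0, 2.0])))
```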
As typical peer-to-peer distributed networks, blockchain systems require each node to copy the complete transaction database, so that new transactions can be verified independently. In a blockchain system (e.g., the Bitcoin system), a node does not rely on any central organization, and every node keeps an entire copy of the transaction database. However, this feature means that the size of the blockchain transaction database grows rapidly. Therefore, as the system continues to operate, the node memory also needs to be expanded to keep the system running. Especially in the big data era, increasing network traffic leads to an even faster transaction growth rate. This paper analyzes blockchain transaction databases and proposes a storage optimization scheme. The proposed scheme divides the blockchain transaction database into a cold zone and a hot zone using an expiration recognition method based on the Least Recently Used (LRU) algorithm. It achieves storage optimization by moving unspent transaction outputs out of the in-memory transaction database. We present a theoretical analysis of the optimization method to validate its effectiveness. Extensive experiments show that our proposed method outperforms the current mechanism for blockchain transaction databases.
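A small sketch of the hot/cold split described above: an LRU-ordered in-memory map of unspent outputs whose least-recently-used entries are spilled to a cold store once a capacity threshold is exceeded. The capacity value and the UTXO key format are assumptions, not figures from the paper.

```python
from collections import OrderedDict

class HotColdUTXOStore:
    """Keep recently used UTXOs in a hot in-memory zone; spill LRU entries to a cold zone."""

    def __init__(self, hot_capacity: int = 4):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()   # utxo_id -> output, ordered by recency (hot zone)
        self.cold = {}             # stand-in for an out-of-memory cold zone

    def access(self, utxo_id, output=None):
        """Touch (or insert) a UTXO, promoting it to most-recently-used."""
        if utxo_id in self.hot:
            self.hot.move_to_end(utxo_id)
        elif utxo_id in self.cold:
            self.hot[utxo_id] = self.cold.pop(utxo_id)   # promote back to the hot zone
        else:
            self.hot[utxo_id] = output
        self._evict_expired()
        return self.hot.get(utxo_id, self.cold.get(utxo_id))

    def _evict_expired(self):
        # "Expired" here simply means least recently used beyond the hot capacity.
        while len(self.hot) > self.hot_capacity:
            old_id, old_out = self.hot.popitem(last=False)
            self.cold[old_id] = old_out

store = HotColdUTXOStore(hot_capacity=3)
for i in range(5):
    store.access(f"tx{i}:0", output={"value": i})
print("hot:", list(store.hot), "cold:", list(store.cold))
```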
Object detection is one of the most important and challenging branches of computer vision. It has been widely applied in people's lives, such as security monitoring and autonomous driving, with the purpose of locating instances of semantic objects of a certain class. With the rapid development of deep learning algorithms for detection tasks, the performance of object detectors has been greatly improved. In order to understand the main development status of target detection, this paper presents a comprehensive literature review of target detection and an overall discussion of the works closely related to it. Various object detection methods, including one-stage and two-stage detectors, are systematically summarized, and the datasets and evaluation criteria used in object detection are introduced. In addition, the development of object detection technology is reviewed. Finally, based on an understanding of the current development of target detection, we discuss the main research directions for the future.
Presently, precision agriculture tasks such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished by the use of computer vision (CV) approaches. Weeds play a vital role in influencing crop productivity, and the wastage and pollution of farmland's natural environment caused by full-coverage chemical herbicide spraying are increasing. Since properly distinguishing weeds from crops helps to reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate between plants and weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted by the use of the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique against a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
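A hedged sketch of the extreme learning machine (ELM) classifier used in the second stage: random hidden-layer weights with output weights solved in closed form via a pseudoinverse. The hidden-layer size and toy feature vectors are assumptions, and the FFO parameter-tuning step is not reproduced.

```python
import numpy as np

class ELMClassifier:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_hidden: int = 128, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(n_classes)[y]                  # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T         # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage on assumed feature vectors for two classes (e.g. crop vs. weed)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 32)), rng.normal(2, 1, (100, 32))])
y = np.array([0] * 100 + [1] * 100)
clf = ELMClassifier(64).fit(X, y)
print("train accuracy:", (clf.predict(X) == y).mean())
```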
In the field of agriculture, the development of an early-warning diagnostic system is essential for timely detection and accurate diagnosis of diseases in rice plants. This research focuses on identifying plant diseases and detecting them promptly through advancements in the field of computer vision. Images obtained from in-field farms typically carry limited visual information, and the lack of high-resolution crop images has a significant impact on the classification accuracy of disease diagnosis. We propose a novel Reconstructed Disease Aware Convolutional Neural Network (RDA-CNN), inspired by recent CNN architectures, that integrates image super-resolution and classification into a single model for rice plant disease classification. The network takes low-resolution images of rice crops as input and employs super-resolution layers to transform them into super-resolution images, recovering appearance details such as spots, rot, and lesions on different parts of the rice plants. Extensive experimental results indicate that the proposed RDA-CNN performs well under diverse conditions, generating visually pleasing images, and outperforms other conventional Super Resolution (SR) methods. Furthermore, these super-resolution images are subsequently passed through deep classification layers for disease classification. The results demonstrate that the RDA-CNN significantly boosts the classification performance by nearly 4-6% compared with the baseline architectures.
Accurate prediction of future events brings great benefits and reduces losses for society in many domains, such as civil unrest, pandemics, and crimes. The knowledge graph is a general language for describing and modeling complex systems. Different types of events continually occur, and they are often related to historical and concurrent events. In this paper, we formalize future event prediction as a temporal knowledge graph reasoning problem. Most existing studies either conduct reasoning on static knowledge graphs or assume that knowledge graphs of all timestamps are available during the training process. As a result, they cannot effectively reason over temporal knowledge graphs and predict events happening in the future. To address this problem, some recent works learn to infer future events based on historical event-based temporal knowledge graphs. However, these methods do not comprehensively consider the latent patterns and influences behind historical events and concurrent events simultaneously. This paper proposes a new graph representation learning model, namely the Recurrent Event Graph ATtention Network (RE-GAT), based on a novel historical- and concurrent-events attention-aware mechanism that models the event knowledge graph sequence recurrently. More specifically, our RE-GAT uses an attention-based historical events embedding module to encode past events, and employs an attention-based concurrent events embedding module to model the associations of events at the same timestamp. A translation-based decoder module and a learning objective are developed to optimize the embeddings of entities and relations. We evaluate our proposed method on four benchmark datasets. Extensive experimental results demonstrate the superiority of our RE-GAT model compared to various baselines, which proves that our method can more accurately predict what events are going to happen.
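A hedged sketch of the attention-based event aggregation idea: for one entity, the (relation, neighbor) embeddings of events at a timestamp are scored against the entity embedding and combined as a weighted sum. The dimensions, scoring network, and translation-style message are illustrative assumptions rather than the exact RE-GAT formulation.

```python
import torch
import torch.nn as nn

class EventAttentionAggregator(nn.Module):
    """Attention-weighted aggregation of (relation, neighbor) event embeddings."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, entity: torch.Tensor, relations: torch.Tensor,
                neighbors: torch.Tensor) -> torch.Tensor:
        # entity: (dim,); relations, neighbors: (n_events, dim)
        n = relations.shape[0]
        pairs = torch.cat([entity.expand(n, -1), relations, neighbors], dim=-1)
        weights = torch.softmax(self.score(pairs).squeeze(-1), dim=0)   # attention over events
        messages = relations + neighbors          # simple translation-style message
        return (weights.unsqueeze(-1) * messages).sum(dim=0)

dim = 64
agg = EventAttentionAggregator(dim)
entity = torch.randn(dim)
rels, nbrs = torch.randn(5, dim), torch.randn(5, dim)
print(agg(entity, rels, nbrs).shape)              # torch.Size([64])
```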
Internet traffic encryption is a very common traffic protection method, and most Internet traffic is protected by the encryption protocol called transport layer security (TLS). Although traffic encryption can ensure the security of communication, it also enables malware to hide its information and avoid being detected. At present, most malicious traffic detection methods are aimed at unencrypted traffic. The detection of encrypted traffic still suffers from problems such as a high false positive rate, difficulty in feature extraction, and insufficient practicability, so the accuracy and effectiveness of existing methods need to be improved. In this paper, we present TLSmell, a framework that detects malicious encrypted HTTPS traffic from simple connection-specific indicators by using different classifiers based on online training. We perform deep packet analysis of the encrypted traffic through data pre-processing to extract effective features, and then an online training algorithm is used for training and prediction. Without decrypting the original traffic, high-precision malicious traffic detection and analysis are realized, which guarantees user privacy and communication security. At the same time, since there is no need to decrypt the traffic in advance, the efficiency of detecting malicious HTTPS traffic is greatly improved. Combined with traditional detection and analysis methods, malicious HTTPS traffic is screened, and suspicious traffic is further analyzed by experts through the context of suspicious behaviors, thereby improving the overall performance of malicious encrypted traffic detection.
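A small sketch of the online-training idea using scikit-learn's incremental SGDClassifier.partial_fit over connection-level feature vectors; the feature count, the simulated labeled batches, and the scaling step are assumptions for illustration, not TLSmell's actual indicator set or classifier.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

# Assumed connection-level indicators, e.g. duration, bytes up/down, packet count,
# mean inter-arrival time, TLS record-size statistics.
N_FEATURES = 6
classes = np.array([0, 1])                      # 0 = benign, 1 = malicious

scaler = StandardScaler()
clf = SGDClassifier(random_state=0)

rng = np.random.default_rng(0)
for step in range(10):                          # each step = a new batch from the traffic stream
    X = rng.normal(size=(128, N_FEATURES))
    y = rng.integers(0, 2, size=128)            # labels would come from screening / expert analysis
    Xs = scaler.partial_fit(X).transform(X)     # incremental feature scaling
    clf.partial_fit(Xs, y, classes=classes)     # incremental (online) model update

X_new = rng.normal(size=(5, N_FEATURES))
print(clf.predict(scaler.transform(X_new)))     # flag suspicious connections
```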
Cyberattacks are difficult to prevent because the targeted companies and organizations often rely on new and fundamentally insecure cloud-based technologies, such as the Internet of Things. With increasing industry adoption and migration of traditional computing services to the cloud, one of the main challenges in cybersecurity is to provide mechanisms to secure these technologies. This work proposes a Data Security Framework for cloud computing services (CCS) that evaluates and improves CCS data security from a software engineering perspective, by evaluating the levels of security within the cloud computing paradigm using engineering methods and techniques applied to CCS. The framework is developed by means of a methodology based on a heuristic theory that incorporates knowledge generated by existing works as well as the experience of their implementation. The paper presents the design details of the framework, which consists of three stages: identification of data security requirements, management of data security risks, and evaluation of data security performance in CCS.
Offensive messages on social media have recently been frequently used to harass and criticize people. In recent studies, many promising algorithms have been developed to identify offensive texts, but most analyze text in a unidirectional manner, whereas a bidirectional method can maximize performance and capture the semantic and contextual information in sentences. In addition, there are many separate models for identifying offensive texts in either a monolingual or a multilingual setting, but few models can detect both monolingual and multilingual offensive texts. In this study, a detection system has been developed for both monolingual and multilingual offensive texts by combining a deep convolutional neural network with bidirectional encoder representations from transformers (Deep-BERT) to identify offensive posts on social media that are used to harass others. This paper explores a variety of ways to deal with multilingualism, including collaborative multilingual and translation-based approaches. The Deep-BERT is then tested on Bengali and English datasets, using different bidirectional encoder representations from transformers (BERT) pre-trained word-embedding techniques, and the proposed Deep-BERT's efficacy is found to outperform all existing offensive text classification algorithms, reaching an accuracy of 91.83%. The proposed model is a state-of-the-art model that can classify both monolingual and multilingual offensive texts.
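A hedged sketch of the "convolutions over BERT token embeddings" pattern suggested by the Deep-BERT name: 1D convolutions with max-pooling over a sequence of contextual embeddings, followed by an offensive/non-offensive head. The 768-dimensional embeddings here are a random stand-in for a pre-trained BERT encoder's output, and the kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConvOverBERT(nn.Module):
    """1D convolutions + max-pooling over BERT token embeddings, then a classifier."""
    def __init__(self, hidden: int = 768, n_filters: int = 128,
                 kernel_sizes=(2, 3, 4), n_classes: int = 2):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes])
        self.head = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden), e.g. a BERT encoder's last hidden states
        x = token_embeddings.transpose(1, 2)                  # (batch, hidden, seq_len)
        pooled = [torch.relu(conv(x)).amax(dim=-1) for conv in self.convs]
        return self.head(torch.cat(pooled, dim=-1))           # offensive / not-offensive logits

# Stand-in for contextual embeddings produced by a pre-trained BERT encoder
embeddings = torch.randn(8, 64, 768)           # 8 posts, 64 tokens, 768-dim vectors
logits = ConvOverBERT()(embeddings)
print(logits.shape)                            # torch.Size([8, 2])
```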
Wireless sensor networks (WSN) have become a hot research area owing to their unique characteristics and applicability in diverse application areas. Clustering and routing can be considered NP-hard optimization problems, which can be addressed by metaheuristic optimization algorithms. With this motivation, this study presents a chaotic sandpiper optimization algorithm based clustering with groundwater flow optimization based routing technique (CSPOC-GFLR). The goal of the CSPOC-GFLR technique is to cluster the sensor nodes in a WSN and elect an optimal set of routes with the intention of achieving energy efficiency and maximizing network lifetime. The CSPOC algorithm is derived by incorporating concepts from chaos theory to boost the global optimization capability of the SPOC algorithm. The CSPOC technique elects an optimum set of cluster heads (CH), and the other sensors are allocated to the nearest CH. Extensive experimentation portrayed the promising performance of the CSPOC-GFLR technique, achieving reduced energy utilization, improved lifetime, and prolonged stability over existing techniques.
Data generated from non-Euclidean domains, together with its graphical representation (with complex relationships and object interdependence), has seen exponential growth in applications. The sophistication of graph data has posed consequential obstacles to existing machine learning algorithms. In this study, we have considered a revamped version of a semi-supervised learning algorithm for graph-structured data to address the issue of extending deep learning approaches to represent graph data. Additionally, quantum information theory has been applied through Graph Neural Networks (GNNs) to generate Riemannian metrics in closed form for several graph layers. Further, to pre-process the adjacency matrix of graphs, a new formulation is established to incorporate high-order proximities. The proposed scheme shows outstanding improvements in overcoming the deficiencies of the Graph Convolutional Network (GCN), particularly the information loss and imprecise information representation, with acceptable computational overhead. Moreover, the proposed Quantum Graph Convolutional Network (QGCN) significantly strengthens the GCN on semi-supervised node classification tasks. In parallel, it expands the generalization process by a significant margin by making small random perturbations of the graph during the training process. Evaluation results are provided on three benchmark datasets, including Citeseer, Cora, and PubMed, and distinctly delineate the superiority of the proposed model in terms of computational accuracy against the state-of-the-art GCN and three other methods based on the same algorithms in the existing literature.
In a cloud environment, Virtual Machine (VM) consolidation and resource provisioning are used to address the issue of workload fluctuations. VM consolidation aims to move VMs from one host to another in order to reduce the number of active hosts and save power, whereas resource provisioning attempts to provide additional resource capacity to VMs as needed in order to meet Quality of Service (QoS) requirements. However, these techniques have a set of limitations, in terms of the additional costs related to migration and scaling time and the energy overhead, that need further consideration. Therefore, this paper presents a comprehensive literature review on dynamic resource management (i.e., VM consolidation and resource provisioning) in cloud computing environments, along with an overall discussion of closely related works. The outcomes of this research can be used to enhance the development of predictive resource management techniques that are aware of performance variation, energy consumption and cost, in order to manage cloud resources efficiently.
With the development of information technology, the Internet of Things (IoT) has gradually become the third wave of the worldwide information industry revolution after the computer and the Internet. The application of the IoT has brought great convenience to people's production and life. However, the potential information security problems in various IoT applications are gradually being exposed and are attracting more attention. The traditional centralized data storage and management model of the IoT easily causes transmission delays, single points of failure, privacy disclosure and other problems, and eventually leads to unpredictable system behavior. Blockchain technology can effectively improve the operation and data security of the IoT. Referring to the storage model of the Fabric blockchain project, this paper designs a data security storage model suitable for IoT systems. The simulation results show that the model is not only effective and extensible, but can also better protect the data security of the Internet of Things.