False data injection attack (FDIA) is an attack that affects the stability of the grid cyber-physical system (GCPS) by evading the bad-data detection mechanism. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples to construct a robust detection model. To address this problem, this paper designs an efficient sample generative adversarial model of FDIA attacks at the public-private network edge, which can effectively bypass the detection model to threaten the power grid system. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating the time-aligned data and normal data, which is used to learn the distribution characteristics between normal data and attack data through iterative confrontation. Furthermore, we introduce a Gaussian hybrid distribution matrix by aggregating the network structure of attack data characteristics and normal data characteristics, which can connect and calculate FDIA data with normal characteristics. Finally, efficient FDIA attack samples can be sequentially generated through interactive adversarial learning. Extensive simulation experiments are conducted with IEEE 14-bus and IEEE 118-bus system data, and the results demonstrate that the attack samples generated by the proposed model outperform those of state-of-the-art models in terms of attack strength, robustness, and covert capability.
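As a concrete illustration of the ResNet-generator / FCN-discriminator pairing described in this abstract, the following is a minimal PyTorch sketch under assumed dimensions; the noise size, hidden width, and the 54-dimensional measurement vector are placeholders, not values taken from the paper.

```python
# Minimal sketch of a ResNet-generator / FCN-discriminator GAN for FDIA samples.
# All layer sizes and the measurement dimension are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.net(x))  # identity shortcut as in ResNet

class Generator(nn.Module):
    """Maps noise vectors to candidate FDIA measurement vectors."""
    def __init__(self, noise_dim=64, meas_dim=54, hidden=128):
        super().__init__()
        self.inp = nn.Linear(noise_dim, hidden)
        self.res = nn.Sequential(ResidualBlock(hidden), ResidualBlock(hidden))
        self.out = nn.Linear(hidden, meas_dim)

    def forward(self, z):
        return self.out(self.res(torch.relu(self.inp(z))))

class Discriminator(nn.Module):
    """Fully connected network scoring whether a measurement vector looks normal."""
    def __init__(self, meas_dim=54, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(meas_dim, hidden), nn.LeakyReLU(0.2),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

# One adversarial update (non-saturating GAN loss) on a batch of normal measurements.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

normal_batch = torch.randn(32, 54)          # placeholder for real SCADA measurements
z = torch.randn(32, 64)
fake = G(z)

d_loss = bce(D(normal_batch), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake), torch.ones(32, 1))    # generator tries to make attacks look normal
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In the full method, the generator would additionally be conditioned on normal-data characteristics so that the generated samples inherit them; that conditioning is omitted in this sketch.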
Retinal vessel segmentation is a challenging medical task owing to the small size of datasets, micro blood vessels, and low image contrast. To address these issues, we introduce a novel convolutional neural network in this paper, which takes advantage of both adversarial learning and recurrent neural networks. An iterative network design with a recurrent unit is adopted to gradually refine the segmentation results from the input retinal image. The recurrent unit preserves high-level semantic information for feature reuse, so as to output a sufficiently refined segmentation map instead of a coarse mask. Moreover, an adversarial loss imposes integrity and connectivity constraints on the segmented vessel regions, thus greatly reducing topology errors of segmentation. The experimental results on the DRIVE dataset show that our method achieves an area under the curve of 98.17% and a sensitivity of 80.64%. Our method achieves superior performance in retinal vessel segmentation compared with other existing state-of-the-art methods.
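The refinement-plus-adversarial-loss scheme above can be sketched roughly as follows; the backbone, the number of recurrent steps, and the loss weight are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch of recurrent refinement with an adversarial loss for vessel
# segmentation. Backbone, number of refinement steps and loss weight are assumptions.
import torch
import torch.nn as nn

class RefineUnit(nn.Module):
    """One recurrent refinement step: takes the image plus the current mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, image, mask):
        return torch.sigmoid(self.net(torch.cat([image, mask], dim=1)))

unit = RefineUnit()
disc = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                     nn.Flatten(), nn.LazyLinear(1))          # simple image-level critic
bce = nn.BCEWithLogitsLoss()

image = torch.randn(4, 1, 64, 64)            # grayscale retinal patches (placeholder)
target = torch.randint(0, 2, (4, 1, 64, 64)).float()

mask = torch.zeros_like(image)               # coarse initial mask
for _ in range(3):                           # shared-weight recurrent refinement
    mask = unit(image, mask)

seg_loss = nn.functional.binary_cross_entropy(mask, target)
adv_loss = bce(disc(mask), torch.ones(4, 1)) # push masks toward realistic vessel topology
loss = seg_loss + 0.1 * adv_loss
loss.backward()
```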
Hydrogen energy is a crucial support for China's low-carbon energy transition. With the large-scale integration of renewable energy, the combination of hydrogen and integrated energy systems has become one of the most promising directions of development. This paper proposes an optimized scheduling model for a hydrogen-coupled electro-heat-gas integrated energy system (HCEHG-IES) using generative adversarial imitation learning (GAIL). The model aims to enhance renewable-energy absorption, reduce carbon emissions, and improve grid-regulation flexibility. First, the optimal scheduling problem of HCEHG-IES under uncertainty is modeled as a Markov decision process (MDP). To overcome the limitations of conventional deep reinforcement learning algorithms, including long optimization time, slow convergence, and subjective reward design, this study augments the PPO algorithm by incorporating a discriminator network and expert data. The newly developed algorithm, termed GAIL, enables the agent to perform imitation learning from expert data. Based on this model, dynamic scheduling decisions are made in continuous state and action spaces, generating optimal energy-allocation and management schemes. Simulation results indicate that, compared with traditional reinforcement-learning algorithms, the proposed algorithm offers better economic performance. Guided by expert data, the agent avoids blind optimization, shortens the offline training time, and improves convergence performance. In the online phase, the algorithm enables flexible energy utilization, thereby promoting renewable-energy absorption and reducing carbon emissions.
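The core GAIL mechanism referenced above, replacing the hand-crafted reward with a discriminator that compares agent and expert behavior, can be sketched as follows; dimensions and hyperparameters are placeholders, and the PPO update itself is omitted.

```python
# Minimal sketch of the GAIL idea: a discriminator is trained to tell expert
# (state, action) pairs from agent pairs, and its output replaces the hand-crafted
# reward for the PPO agent. Sizes are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

state_dim, action_dim = 12, 4                      # illustrative HCEHG-IES sizes
disc = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt_d = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def gail_reward(state, action):
    """Reward = -log(1 - D(s, a)); high when the agent is hard to distinguish from the expert."""
    logits = disc(torch.cat([state, action], dim=-1))
    return -torch.log1p(-torch.sigmoid(logits) + 1e-8).detach()

# One discriminator update on a batch of expert and agent transitions (placeholders).
expert_sa = torch.randn(64, state_dim + action_dim)
agent_sa = torch.randn(64, state_dim + action_dim)
d_loss = bce(disc(expert_sa), torch.ones(64, 1)) + bce(disc(agent_sa), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# The PPO actor-critic update then proceeds as usual, but with gail_reward(s, a)
# substituted for the environment's scheduling-cost reward.
```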
The inefficient utilization of ubiquitous graph data with combinatorial structures necessitates graph embedding methods, which aim at learning a continuous vector space for the graph that is amenable to adoption in traditional machine learning algorithms in favor of vector representations. Graph embedding methods build an important bridge between social network analysis and data analytics, as social networks naturally generate an unprecedented volume of graph data continuously. Publishing social network data not only brings benefits for public health, disaster response, commercial promotion, and many other applications, but also gives birth to threats that jeopardize each individual's privacy and security. Unfortunately, most existing works in publishing social graph embedding data focus only on preserving the social graph structure, with less attention paid to the privacy issues inherited from social networks. To be specific, attackers can infer the presence of a sensitive relationship between two individuals by training a predictive model with the exposed social network embedding. In this paper, we propose a novel link-privacy preserved graph embedding framework using adversarial learning, which can reduce an adversary's prediction accuracy on sensitive links while preserving sufficient non-sensitive information, such as graph topology and node attributes, in the graph embedding. Extensive experiments are conducted to evaluate the proposed framework using ground-truth social network datasets.
Owing to the continuous barrage of cyber threats, there is a massive amount of cyber threat intelligence. However, a great deal of cyber threat intelligence comes from textual sources. For analysis of cyber threat intelligence, many security analysts rely on cumbersome and time-consuming manual efforts. Cybersecurity knowledge graphs play a significant role in the automatic analysis of cyber threat intelligence. As the foundation for constructing a cybersecurity knowledge graph, named entity recognition (NER) is required for identifying critical threat-related elements from textual cyber threat intelligence. Recently, deep neural network-based models have attained very good results in NER. However, the performance of these models relies heavily on the amount of labeled data. Since labeled data in cybersecurity is scarce, in this paper, we propose an adversarial active learning framework to effectively select the informative samples for further annotation. In addition, leveraging the long short-term memory (LSTM) network and the bidirectional LSTM (BiLSTM) network, we propose a novel NER model by introducing a dynamic attention mechanism into the BiLSTM-LSTM encoder-decoder. With the selected informative samples annotated, the proposed NER model is retrained. As a result, the performance of the NER model is incrementally enhanced with low labeling cost. Experimental results show the effectiveness of the proposed method.
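A rough sketch of a BiLSTM-encoder / LSTM-decoder tagger with additive attention, in the spirit of the NER model described above, is given below; the vocabulary size, tag set, and the exact form of the paper's dynamic attention are assumptions.

```python
# Minimal sketch of a BiLSTM-encoder / LSTM-decoder sequence tagger with attention.
# Dimensions and the attention form are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class BiLSTMLSTMTagger(nn.Module):
    def __init__(self, vocab=5000, tags=9, emb=64, hid=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTMCell(2 * hid + tags, hid)
        self.attn = nn.Linear(2 * hid + hid, 1)
        self.out = nn.Linear(hid, tags)
        self.tags = tags

    def forward(self, tokens):
        enc, _ = self.encoder(self.emb(tokens))            # (B, T, 2*hid)
        B, T, _ = enc.shape
        h = enc.new_zeros(B, self.decoder.hidden_size)
        c = torch.zeros_like(h)
        prev = enc.new_zeros(B, self.tags)                 # previous tag distribution
        logits = []
        for t in range(T):
            # additive attention over encoder states, conditioned on the decoder state
            scores = self.attn(torch.cat([enc, h.unsqueeze(1).expand(-1, T, -1)], dim=-1))
            ctx = (torch.softmax(scores, dim=1) * enc).sum(dim=1)
            h, c = self.decoder(torch.cat([ctx, prev], dim=-1), (h, c))
            step = self.out(h)
            prev = torch.softmax(step, dim=-1)
            logits.append(step)
        return torch.stack(logits, dim=1)                  # (B, T, tags)

model = BiLSTMLSTMTagger()
tokens = torch.randint(0, 5000, (2, 20))
print(model(tokens).shape)                                 # torch.Size([2, 20, 9])
```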
We propose to address the open set domain adaptation problem by aligning images at both the pixel space and the feature space. Our approach, called Open Set Translation and Adaptation Network (OSTAN), consists of two main components: translation and adaptation. The translation is a cycle-consistent generative adversarial network, which translates any source image to the "style" of a target domain to eliminate domain discrepancy in the pixel space. The adaptation is an instance-weighted adversarial network, which projects both (labeled) translated source images and (unlabeled) target images into a domain-invariant feature space to learn a prior probability for each target image. The learned probability is applied as a weight to the unknown classifier to facilitate the identification of the unknown class. The proposed OSTAN model significantly outperforms the state-of-the-art open set domain adaptation methods on multiple public datasets. Our experiments also demonstrate that both the image-to-image translation and the instance-weighting framework can further improve the decision boundaries for both known and unknown classes.
The goal of zero-shot recognition is to classify classes that have never been seen before, which requires building a bridge between seen and unseen classes through a semantic embedding space. Therefore, semantic embedding space learning plays an important role in zero-shot recognition. In existing works, the semantic embedding space is mainly given by user-defined attribute vectors. However, the discriminative information included in a user-defined attribute vector is limited. In this paper, we propose to automatically learn an extra latent attribute space to produce a more generalized and discriminative semantic embedding space. To prevent the bias problem, both the user-defined attribute vector and the latent attribute space are optimized by adversarial learning with auto-encoders. We also propose to reconstruct semantic patterns produced by explanatory graphs, which can make the semantic embedding space more sensitive to useful semantic information and less sensitive to useless information. The proposed method is evaluated on the AwA2 and CUB datasets. The results show that our proposed method achieves superior performance.
The purpose of adversarial deep learning is to train robust DNNs against adversarial attacks, and this is one of the major research focuses of deep learning. Game theory has been used to answer some of the basic questions about adversarial deep learning, such as those regarding the existence of a classifier with optimal robustness and the existence of optimal adversarial samples for a given class of classifiers. In most previous works, adversarial deep learning was formulated as a simultaneous game, and the strategy spaces were assumed to be certain probability distributions in order for the Nash equilibrium to exist. However, this assumption is not applicable to practical situations. In this paper, we give answers to these basic questions for the practical case where the classifiers are DNNs with a given structure; we do that by formulating adversarial deep learning in the form of Stackelberg games. The existence of Stackelberg equilibria for these games is proven. Furthermore, it is shown that the equilibrium DNN has the largest adversarial accuracy among all DNNs with the same structure when the Carlini-Wagner margin loss is used. The trade-off between robustness and accuracy in adversarial deep learning is also studied from a game-theoretical perspective.
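For orientation, the leader-follower structure can be written schematically as the familiar adversarial-training saddle point with the Carlini-Wagner margin loss; this notation is illustrative and not the paper's exact statement.

```latex
% Schematic Stackelberg / adversarial-training saddle point with the
% Carlini-Wagner margin loss (illustrative notation, not the paper's statement):
\theta^{*} \;=\; \arg\min_{\theta}\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[\; \max_{\|\delta\|\le\varepsilon}\;
     \ell_{\mathrm{CW}}\bigl(F_{\theta}(x+\delta),\,y\bigr) \Bigr],
\qquad
\ell_{\mathrm{CW}}(z,y) \;=\; \max_{j\neq y} z_{j} \;-\; z_{y}.
```

Here the defender (leader) commits to the network weights first and the attacker (follower) best-responds with a bounded perturbation, which is what distinguishes the Stackelberg view from the simultaneous-game formulation.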
Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are being used to achieve high detection accuracy. However, in the case of adversarial attacks, which cause misclassification by introducing imperceptible perturbations on input samples, the performance of machine learning-based intrusion detection systems is greatly affected. Though such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed a corresponding defence. In this paper, we attempt to fill this gap by using adversarial attacks on standard intrusion detection datasets and then using the adversarial samples to train various machine learning algorithms (adversarial training) to test their defence performance. This is achieved by first creating adversarial samples based on the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM) using the NSLKDD, UNSW-NB15 and CICIDS17 datasets. The study then trains and tests JSMA- and FGSM-based adversarial examples in seen (where the model has been trained on adversarial samples) and unseen (where the model is unaware of adversarial packets) attacks. The experiments include multiple machine learning classifiers to evaluate their performance against adversarial attacks. The performance parameters include Accuracy, F1-Score and Area under the receiver operating characteristic curve (AUC) Score.
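FGSM, one of the two attacks used in the study, admits a compact sketch; the classifier, feature dimension, and epsilon below are placeholders rather than the study's actual models or datasets.

```python
# Minimal FGSM sketch for crafting adversarial network-traffic feature vectors.
# The classifier, feature dimension, and epsilon are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))  # NSL-KDD-like feature size
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.05):
    """Return x perturbed one epsilon-step in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

x = torch.rand(8, 41)                       # normalized traffic features (placeholder)
y = torch.randint(0, 2, (8,))               # 0 = benign, 1 = attack
x_adv = fgsm(x, y)

# Adversarial training: mix clean and adversarial samples when fitting the classifier.
mixed_x, mixed_y = torch.cat([x, x_adv]), torch.cat([y, y])
loss = loss_fn(model(mixed_x), mixed_y)
loss.backward()
```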
Recently, generative adversarial networks (GANs) have become a research focus of artificial intelligence. Inspired by the two-player zero-sum game, GANs comprise a generator and a discriminator, both trained under the adversarial learning idea. The goal of GANs is to estimate the potential distribution of real data samples and generate new samples from that distribution. Since their initiation, GANs have been widely studied due to their enormous prospects for applications, including image and vision computing, speech and language processing, etc. In this review paper, we summarize the state of the art of GANs and look into the future. Firstly, we survey GANs' proposal background, theoretic and implementation models, and application fields. Then, we discuss GANs' advantages, disadvantages, and development trends. In particular, we investigate the relation between GANs and parallel intelligence, with the conclusion that GANs have great potential in parallel systems research in terms of virtual-real interaction and integration. Clearly, GANs can provide substantial algorithmic support for parallel intelligence.
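For reference, the two-player objective underlying GANs is the well-known minimax value function, with the discriminator D maximizing and the generator G minimizing:

```latex
% Standard GAN value function (Goodfellow et al.'s formulation):
\min_{G}\max_{D}\; V(D,G)=
  \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
 +\mathbb{E}_{z\sim p_{z}(z)}\bigl[\log\bigl(1-D(G(z))\bigr)\bigr].
```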
In this paper, a communication model in cognitive radios is developed that uses machine learning to learn the dynamics of jamming attacks in cognitive radios. It is further designed to make transmission decisions that automatically adapt to the transmission dynamics in order to mitigate the launched jamming attacks. The generative adversarial learning neural network (GALNN), or generative dynamic neural network (GDNN), automatically learns from synthesized training data with generator and discriminator neural networks that play a minimax game. The elimination of the jamming attack is carried out with the assistance of the defense strategies and with an increased detection rate in the generative adversarial network (GAN). The GDNN with game theory is designed to validate the channel condition with the cross-entropy loss function and the back-propagation algorithm, which improves the communication reliability in the network. The simulation is conducted in the NS2.34 tool against several performance metrics to reduce the misdetection and false alarm rates. The results show that the GDNN obtains an increased rate of successful transmission by taking optimal actions that act as a defense mechanism to mislead the jammer, where the jammer makes high misclassification errors on the transmission dynamics.
Residual learning based deep generative networks have achieved promising performance in image enhancement. However, due to the large color gap between a low-quality image and its high-quality version, the identical mapping in conventional residual learning cannot explore the elaborate detail differences, resulting in color deviations and texture losses in enhanced images. To address this issue, an innovative non-identical residual learning architecture is proposed, which views image enhancement as two complementary branches, namely a holistic color adjustment branch and a fine-grained residual generation branch. In the holistic color adjustment branch, an adjusting map is calculated for each input low-quality image in order to regulate the low-quality image to the high-quality representation in an overall way. In the fine-grained residual generation branch, a novel attention-aware recursive network is designed to generate residual images. This design can alleviate the overfitting problem by reusing parameters and promoting the network's adaptability to different input conditions. In addition, a novel dynamic multi-level perceptual loss based on the error-feedback idea is proposed. Consequently, the proposed network can be dynamically optimized by the hybrid perceptual loss provided by a well-trained VGG, so as to improve the perceptual quality of enhanced images in a guided way. Extensive experiments conducted on publicly available datasets demonstrate the state-of-the-art performance of the proposed method.
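The non-identical residual idea can be pictured as replacing the identity shortcut y = x + R(x) with an image-dependent adjustment map, roughly y = A(x) * x + R(x); the sketch below assumes this combination rule and toy backbones, which may differ from the paper's actual design.

```python
# Minimal sketch of a non-identical residual enhancer: an adjustment-map branch A(x)
# replaces the identity shortcut, and a residual branch R(x) adds fine detail.
# The combination rule and both backbones are illustrative assumptions.
import torch
import torch.nn as nn

class NonIdenticalResidualEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        # holistic color-adjustment branch: predicts a per-pixel scaling map A(x)
        self.adjust = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
        # fine-grained residual branch: predicts the detail residual R(x)
        self.residual = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return self.adjust(x) * 2.0 * x + self.residual(x)   # adjustment scale in [0, 2]

net = NonIdenticalResidualEnhancer()
low_quality = torch.rand(1, 3, 128, 128)
enhanced = net(low_quality)
print(enhanced.shape)        # torch.Size([1, 3, 128, 128])
```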
Deep learning based on neural networks has made new progress in a wide variety of domains; however, it lacks protection for sensitive information. The large amount of data used for training can easily cause leakage of private information, and thus an attacker can easily restore the input from latent natural-language representations. Privacy-preserving deep learning aims to solve these problems. In this paper, we first introduce how to reduce training samples in order to reduce the amount of sensitive information, and then describe how to represent the data unbiasedly with respect to specific attributes. We also clarify the research results of other directions of privacy protection and their corresponding algorithms, and summarize the common ideas and existing problems. Finally, the datasets commonly used in privacy protection research are discussed in this paper.
Classification models for multivariate time series have drawn the interest of many researchers, with the objective of developing accurate and efficient models. However, limited research has been conducted on generating adversarial samples for multivariate time series classification models. Adversarial samples could become a security concern in systems with complex sets of sensors. This study proposes extending the existing gradient adversarial transformation network (GATN) in combination with adversarial autoencoders to attack multivariate time series classification models. The proposed model attacks classification models by utilizing a distilled model to imitate the output of the multivariate time series classification model. In addition, the adversarial generator function is replaced with a variational autoencoder to enhance the adversarial samples. The developed methodology is tested on two multivariate time series classification models: 1-nearest neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN). This study utilizes 30 multivariate time series benchmarks provided by the University of East Anglia (UEA) and the University of California Riverside (UCR). The use of adversarial autoencoders shows an increase in the fraction of successful adversaries generated on multivariate time series. To the best of our knowledge, this is the first study to explore adversarial attacks on multivariate time series. Additionally, we recommend future research utilizing the latent space generated by the variational autoencoders.
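The VAE-as-adversarial-generator idea can be sketched as below; the shapes, the distilled student model, and the loss weights are illustrative assumptions.

```python
# Minimal sketch: a variational autoencoder reconstructs a multivariate time series,
# and its output is pushed to change the prediction of a (distilled) student model.
# Shapes, models and loss weights are assumptions, not the study's configuration.
import torch
import torch.nn as nn

T, C, Z = 50, 3, 16                                  # time steps, channels, latent size

encoder = nn.Sequential(nn.Flatten(), nn.Linear(T * C, 64), nn.ReLU(), nn.Linear(64, 2 * Z))
decoder = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, T * C), nn.Unflatten(1, (C, T)))
student = nn.Sequential(nn.Flatten(), nn.Linear(T * C, 32), nn.ReLU(), nn.Linear(32, 4))  # distilled model

x = torch.randn(8, C, T)                             # batch of multivariate series
y = torch.randint(0, 4, (8,))                        # labels predicted by the target model

mu, logvar = encoder(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)     # reparameterization trick
x_adv = decoder(z)

recon = nn.functional.mse_loss(x_adv, x)                     # stay close to the original series
kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
fool = -nn.functional.cross_entropy(student(x_adv), y)       # push the student away from y
loss = recon + 0.1 * kld + fool
loss.backward()
```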
While malicious samples are widely found in many application fields of machine learning, suitable countermeasures have been investigated in the field of adversarial machine learning. Due to the importance and popularity of Support Vector Machines (SVMs), we first describe the evasion attack against SVM classification and then propose a defense strategy in this paper. The evasion attack utilizes the classification surface of the SVM to iteratively find the minimal perturbations that mislead the nonlinear classifier. Specifically, we propose what we call a vulnerability function to measure the vulnerability of SVM classifiers. Utilizing this vulnerability function, we put forward an effective defense strategy based on kernel optimization of SVMs with a Gaussian kernel against the evasion attack. Our defense method is verified to be very effective on the benchmark datasets, and the SVM classifier becomes more robust after using our kernel optimization scheme.
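A gradient-descent evasion against an RBF-kernel SVM, in the spirit of the attack described above, can be sketched on toy data as follows; the vulnerability function and the kernel-optimization defense are not reproduced here.

```python
# Minimal sketch of a gradient-based evasion attack on an RBF-kernel SVM: starting
# from a malicious point, step down the gradient of the decision function until the
# classifier flips its output. Data and step size are toy placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)                      # 1 = malicious class
gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

def decision_gradient(x):
    """Gradient of f(x) = sum_i a_i exp(-gamma ||x - sv_i||^2) + b."""
    diff = x - clf.support_vectors_                     # (n_sv, d)
    k = np.exp(-gamma * np.sum(diff ** 2, axis=1))      # RBF kernel values
    return (clf.dual_coef_[0] * k) @ (-2 * gamma * diff)

x_adv = X[y == 1][0].copy()                             # start from a malicious sample
for _ in range(200):                                    # iteratively minimize f(x)
    if clf.decision_function([x_adv])[0] < 0:           # crossed the boundary: evasion done
        break
    x_adv -= 0.05 * decision_gradient(x_adv)

print(clf.predict([X[y == 1][0]]), clf.predict([x_adv]))  # e.g. [1] -> [0]
```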
With the aperture synthesis (AS) technique, a number of small antennas can be assembled to form a large telescope whose spatial resolution is determined by the distance between the two farthest antennas instead of the diameter of a single-dish antenna. In contrast to a direct imaging system, an AS telescope captures the Fourier coefficients of a spatial object and then implements an inverse Fourier transform to reconstruct the spatial image. Due to the limited number of antennas, the Fourier coefficients are extremely sparse in practice, resulting in a very blurry image. To remove or reduce blur, "CLEAN" deconvolution has been widely used in the literature. However, it was initially designed for point sources. For an extended source, like the Sun, its efficiency is unsatisfactory. In this study, a deep neural network, namely a generative adversarial network (GAN), is proposed for solar image deconvolution. The experimental results demonstrate that the proposed model is markedly better than traditional CLEAN on solar images. The main purpose of this work is visual inspection instead of quantitative scientific computation. We believe that this will also help scientists to better understand solar phenomena with high-quality images.
With the prevalence of machine learning in malware defense, hackers have tried to attack machine learning models to evade detection. Since it is generally difficult to explore the details of malware detection models, hackers can adopt fuzzing attacks to manipulate the features of the malware to be closer to benign programs on the premise of retaining their functions. In this paper, attack and defense methods for malware detection models based on machine learning algorithms are studied. Firstly, we design a fuzzing attack method that randomly modifies features to evade detection. The fuzzing attack can effectively degrade the accuracy of a machine learning model with a single feature. Then an adversarial malware detection model, MaliFuzz, is proposed to defend against the fuzzing attack. Different from ordinary single-feature detection models, combined features obtained from static and dynamic analysis are used to improve the defense ability. The experiment results show that the adversarial malware detection model with combined features can deal with the attack. The methods designed in this paper have great significance in improving the security of malware detection models and have good application prospects.
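The random-mutation fuzzing idea can be sketched as follows; the detector, the binary feature vector, and the set of features assumed safe to mutate are illustrative placeholders.

```python
# Minimal sketch of random-mutation fuzzing: repeatedly flip a few feature bits of a
# malware feature vector (restricted to features assumed not to break functionality)
# until the detector's label flips. Detector, features and mutable set are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(500, 100))             # binary static features (placeholder)
y = rng.integers(0, 2, size=500)                    # 1 = malware
detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

mutable = np.arange(60, 100)                        # features assumed safe to perturb

def fuzz(sample, max_iters=500, flips_per_iter=3):
    """Randomly mutate features of a detected sample until it is classified benign."""
    candidate = sample.copy()
    for _ in range(max_iters):
        if detector.predict([candidate])[0] == 0:   # evasion succeeded
            return candidate
        idx = rng.choice(mutable, size=flips_per_iter, replace=False)
        candidate[idx] = 1 - candidate[idx]         # flip a few mutable feature bits
    return candidate                                # may still be detected

malware_sample = X[y == 1][0]
evasive = fuzz(malware_sample)
print(detector.predict([malware_sample]), detector.predict([evasive]))
```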
The constantly increasing degree and frequency of cyber threats require the emergence of flexible and intelligent approaches to systems' protection. Despite the calls for the use of artificial intelligence (AI) and machine learning (ML) in strengthening cyber security, the literature still lacks an integrated view of the application areas, open issues, and trends in AI and ML for cyber security. Based on 90 studies, in the following literature review, the author categorizes and systematically analyzes the current research field to fill this gap. The review evidences that, in contrast to rigid rule-based systems that are static and specific to a given type of threat, AI and ML are more portable and effective in large-scale anomaly detection, malware classification, and prevention of phishing attacks by analyzing the data, learning the patterns, and improving performance based on new data. Further, the study outlines significant themes, such as data quality, integration, and bias in AI/ML models, and underscores overcoming barriers to standard AI/ML integration. The contributions of this work are as follows: a thorough description of AI/ML applications in cyber security, discussions of the critical issues, and relevant opportunities and suggestions for future research. Consequently, the work contributes to establishing directions for creating and implementing AI/ML-based cyber security with demonstrable returns from technical solutions, organizational change, and ethical interventions.
The accuracy of numerical computation heavily relies on appropriate meshing, which serves as the foundation for numerical computation. Although adaptive refinement methods are available, an adaptive numerical solution is likely to be ineffective if it originates from a poor initial mesh. Therefore, it is crucial to generate meshes that accurately capture the geometric features. As an indispensable input to meshing methods, the Mesh Size Function (MSF) determines the quality of the generated mesh. However, the current generation of MSFs involves human participation to specify numerous parameters, leading to difficulties in practical usage. Considering the capacity of machine learning to reveal the latent relationships within data, this paper proposes a novel machine learning method, the Implicit Geometry Neural Network (IGNN), for automatic prediction of appropriate MSFs based on existing mesh data, enabling the generation of unstructured meshes that align precisely with geometric features. IGNN employs generative adversarial theory to learn the mapping between the implicit representation of the geometry (the Signed Distance Function, SDF) and the corresponding MSF. Experimental results show that the proposed method is capable of automatically generating appropriate meshes and achieving meshing results comparable to traditional methods. This paper demonstrates the possibility of significantly decreasing the workload of mesh generation using machine learning techniques, and it is expected to increase the automation level of mesh generation.
With the rapid development of the Internet of Things (IoT) and the proliferation of embedded devices, large volumes of personal data are collected, which, however, might carry massive private information about attributes that users do not want to share. Many privacy-preserving methods have been proposed to prevent privacy leakage by perturbing raw data or extracting task-oriented features at local devices. Unfortunately, they suffer from significant privacy leakage and accuracy drops when applied to other tasks, as they are designed and optimized for predefined tasks. In this paper, we propose a novel task-free privacy-preserving data collection method via adversarial representation learning, called TF-ARL, to protect private attributes specified by users while maintaining data utility for unknown downstream tasks. To this end, we first propose a privacy adversarial learning mechanism (PAL) to protect private attributes by optimizing the feature extractor to maximize the adversary's prediction uncertainty on private attributes, and then design a conditional decoding mechanism (ConDec) to maintain data utility for downstream tasks by minimizing the conditional reconstruction error from the sanitized features. With the joint learning of PAL and ConDec, we can learn a privacy-aware feature extractor where the sanitized features maintain the discriminative information except privacy. Extensive experimental results on real-world datasets demonstrate the effectiveness of TF-ARL.
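The joint PAL/ConDec objective can be sketched roughly as below; the architectures, dimensions, and loss weights are assumptions, and the alternating adversary update is only indicated in a comment.

```python
# Minimal sketch of the TF-ARL objective: the feature extractor is trained so that
# (i) an adversary head cannot predict the private attribute from the sanitized
# features (PAL, via entropy maximization) and (ii) a decoder conditioned on the
# private attribute can still reconstruct the input (ConDec). Sizes are assumptions.
import torch
import torch.nn as nn

x_dim, feat_dim, priv_classes = 32, 16, 2

extractor = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
adversary = nn.Sequential(nn.Linear(feat_dim, priv_classes))          # predicts private attribute
decoder = nn.Sequential(nn.Linear(feat_dim + priv_classes, 64), nn.ReLU(), nn.Linear(64, x_dim))

x = torch.randn(16, x_dim)                         # raw sensor records (placeholder)
priv = torch.randint(0, priv_classes, (16,))       # user-specified private attribute
priv_onehot = nn.functional.one_hot(priv, priv_classes).float()

feats = extractor(x)

# PAL: maximize the adversary's uncertainty, i.e. maximize its prediction entropy.
probs = torch.softmax(adversary(feats), dim=-1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

# ConDec: conditional reconstruction keeps task-agnostic utility in the features.
recon = nn.functional.mse_loss(decoder(torch.cat([feats, priv_onehot], dim=-1)), x)

loss = recon - 1.0 * entropy                       # minimize reconstruction, maximize entropy
loss.backward()

# In the full method the adversary itself is trained in alternation to predict the
# private attribute from the features, giving the adversarial game; omitted here.
```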