Journal Articles
640 articles found
A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond
1
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20.
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including the theoretical explanations, trade-off issues, and benign attacks in adversarial examples. Additionally, we draw a brief comparison between recently published surveys on adversarial examples, and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Keywords: computer vision; adversarial examples; adversarial attack; adversarial defense
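The "imperceptible modification" attack the survey describes can be illustrated with the classic fast gradient sign method (FGSM). The sketch below is a minimal NumPy illustration on a toy logistic model, not code from the survey; the weights and input values are invented for demonstration.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """One-step FGSM against a logistic model p(y|x) = sigmoid(y * w.x):
    step the input in the direction that increases the loss."""
    margin = y * w.dot(x)
    # gradient of the logistic loss log(1 + exp(-margin)) w.r.t. x
    grad = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])      # toy "model"
x = np.array([0.3, -0.2, 0.1])      # clean input, true label +1
y = 1
x_adv = fgsm(x, y, w, eps=0.3)
print(w.dot(x), w.dot(x_adv))       # clean score is positive, adversarial score flips sign
```

Even though each coordinate of `x_adv` differs from `x` by at most 0.3, the model's decision flips, which is the core vulnerability the survey catalogs.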
Improving Robustness for Tag Recommendation via Self-Paced Adversarial Metric Learning
2
Authors: Zhengshun Fei, Jianxin Chen, Gui Chen, Xinjian Xiang. Computers, Materials & Continua, 2025, No. 3, pp. 4237-4261.
Tag recommendation systems can significantly improve the accuracy of information retrieval by recommending relevant tag sets that align with user preferences and resource characteristics. However, metric learning methods often suffer from high sensitivity, leading to unstable recommendation results when facing adversarial samples generated through malicious user behavior. Adversarial training is considered an effective method for improving the robustness of tag recommendation systems and addressing adversarial samples; however, it still faces the challenge of overfitting. Although curriculum learning-based adversarial training somewhat mitigates this issue, challenges still exist, such as the lack of a quantitative standard for attack intensity and catastrophic forgetting. To address these challenges, we propose a Self-Paced Adversarial Metric Learning (SPAML) method. First, we employ a metric learning model to capture the deep distance relationships between normal samples. Then, we incorporate a self-paced adversarial training model, which dynamically adjusts the weights of adversarial samples, allowing the model to progressively learn from simpler to more complex adversarial samples. Finally, we jointly optimize the metric learning loss and self-paced adversarial training loss in an adversarial manner, enhancing the robustness and performance of tag recommendation tasks. Extensive experiments on the MovieLens and LastFm datasets demonstrate that SPAML achieves F1@3 and NDCG@3 scores of 22% and 32.7% on the MovieLens dataset, and 19.4% and 29% on the LastFm dataset, respectively, outperforming the most competitive baselines. Specifically, F1@3 improves by 4.7% and 6.8%, and NDCG@3 improves by 5.0% and 6.9%, respectively.
Keywords: tag recommendation; metric learning; adversarial training; self-paced adversarial training; robustness
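The "simpler to more complex" weighting scheme this abstract describes can be sketched with the standard hard self-paced regularizer: a sample is admitted only if its loss is below an age parameter that grows over training. This is a generic illustration under that assumption, not the SPAML authors' code.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced regularizer: weight 1 for samples whose loss is
    below the age parameter lam, 0 otherwise. Growing lam gradually
    admits harder (higher-loss) adversarial samples into training."""
    return (losses < lam).astype(float)

losses = np.array([0.2, 0.9, 1.5, 3.0])  # hypothetical per-sample adversarial losses
for lam in (0.5, 1.0, 2.0, 4.0):         # curriculum: easy samples first
    print(lam, self_paced_weights(losses, lam))
```

At `lam=0.5` only the easiest sample contributes; by `lam=4.0` all four do, mimicking the progressive schedule described above.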
Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
3
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 1947-1968.
The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven to be largely ineffective against these natural adversarial examples. This paper explores defenses against these natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities. The study finds that as the depth of a network increases, its defenses against natural adversarial examples strengthen. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes. This study investigates how these factors influence the model's ability to defend against natural adversarial examples. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Keywords: image classification; convolutional neural network; natural adversarial example; dataset; defense against adversarial examples
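The CAM visualization step mentioned above has a simple core computation: weight the last convolutional layer's feature maps by the classifier weights of the target class and sum over channels. The sketch below shows that computation on random toy tensors; shapes and values are illustrative, not from the paper.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: combine feature maps (C, H, W) using the fully-connected
    weights (num_classes, C) of the chosen class, then normalize."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()          # scale to [0, 1] for heat-map overlay
    return cam

rng = np.random.default_rng(0)
feats = rng.random((4, 7, 7))     # toy last-layer feature maps
w_fc = rng.random((10, 4))        # toy classifier weights (10 classes)
cam = class_activation_map(feats, w_fc, class_idx=3)
print(cam.shape)                  # (7, 7) map showing where the model "looks"
```

Overlaying such a map on a natural adversarial example reveals which regions drove the (mis)classification, which is how the paper identifies typical attack patterns.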
Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer
4
Authors: Xiaoyin Yi, Long Chen, Jiacheng Huang, Ning Yu, Qian Huang. Computers, Materials & Continua, 2025, No. 4, pp. 157-175.
Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge. This is achieved by leveraging a property of adversarial examples: when generated from a surrogate model, they retain their features if applied to other models due to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of source models. Consequently, when attempting black-box transfer attacks on different target models, their effectiveness is decreased. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature Layer (RCFL). The proposed method first uses regularization constraints to attenuate the low-frequency components of the initial examples. Perturbations are then added to a pre-specified layer of the source model using the back-propagation technique, in order to modify the original adversarial examples. Afterward, a regularized loss function is used to enhance the black-box transferability between different target models. The proposed method is finally tested on the ImageNet, CIFAR-100, and Stanford Car datasets with various target models. The obtained results demonstrate that it achieves a significantly higher transfer-based adversarial attack success rate compared with baseline techniques.
Keywords: adversarial examples; black-box transferability; regularized constrained feature layer; transfer-based adversarial attacks
5DGWO-GAN: A Novel Five-Dimensional Gray Wolf Optimizer for Generative Adversarial Network-Enabled Intrusion Detection in IoT Systems
5
Authors: Sarvenaz Sadat Khatami, Mehrdad Shoeibi, Anita Ershadi Oskouei, Diego Martín, Maral Keramat Dashliboroun. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 881-911.
The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyper-parameters such as learning rate, batch size, and optimization algorithms, which can be challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional gray wolf optimizer (5DGWO) to optimize GAN hyper-parameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models are utilized to assist the AE training, and the final predictive models (including convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD, UNSW-NB15, and IoT-23 datasets, with values of 0.24, 1.10, and 0.09, respectively. Additionally, it attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
Keywords: Internet of Things; intrusion detection; generative adversarial networks; five-dimensional binary gray wolf optimizer; deep learning
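The hyper-parameter search the paper builds on is the standard gray wolf optimizer, where the pack moves toward the three best wolves (alpha, beta, delta) under a shrinking exploration coefficient. The sketch below is the plain GWO on a toy 2-D objective; the paper's extra gamma/theta wolves are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def gwo(objective, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Plain gray wolf optimizer: minimize `objective` over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(objective, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three leaders
        a = 2.0 * (1 - t / iters)          # decays 2 -> 0: exploration -> exploitation
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lb, ub)
    fitness = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fitness)]

# e.g. treat the 2-D point as (learning rate, batch size) on a toy loss surface
best = gwo(lambda v: np.sum(v**2), dim=2)
print(best)
```

In the paper's setting the objective would be a GAN validation loss rather than this toy sphere function.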
Attention-Guided Sparse Adversarial Attacks with Gradient Dropout
6
Authors: ZHAO Hongzhi, HAO Lingguang, HAO Kuangrong, WEI Bing, LIU Xiaoyan. Journal of Donghua University (English Edition) (CAS), 2024, No. 5, pp. 545-556.
Deep neural networks are extremely vulnerable to externalities from intentionally generated adversarial examples, which are achieved by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations to each pixel of the original image with the same weight, resulting in redundant noise in the adversarial examples, which makes them easier to detect. Given this deliberation, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated with existing gradient-based methods, is introduced to minimize the intensity and the scale of perturbations while ensuring the effectiveness of adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated by using a soft mask-refined attention mechanism, and the perturbation of those pixels with smaller influence is limited to restrict the scale of the perturbation. After conducting thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset, the proposed strategy holds the potential to significantly diminish the superfluous noise present in adversarial examples, all while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, upon the integration of the strategy, the average level of noise injected into images declines by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the strategy can substantially elevate the attack success rate by merely introducing a slight degree of perturbation.
Keywords: deep neural network; adversarial attack; sparse adversarial attack; adversarial transferability; adversarial example
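The gradient dropout phase described above (randomly discarding gradient entries so the resulting perturbation touches fewer pixels) is simple to sketch. This is a generic illustration consistent with the abstract, not the authors' implementation; the keep probability is an assumed parameter.

```python
import numpy as np

def gradient_dropout(grad, keep_prob=0.7, seed=None):
    """Randomly zero out a fraction of the gradient entries, so the
    perturbation built from it becomes sparser and less detectable."""
    rng = np.random.default_rng(seed)
    mask = rng.random(grad.shape) < keep_prob
    return grad * mask

grad = np.ones((8, 8))                    # stand-in for a pixel-wise loss gradient
sparse = gradient_dropout(grad, keep_prob=0.5, seed=0)
print(np.count_nonzero(sparse), "of", grad.size, "gradient entries kept")
```

A sign-based attack step built on `sparse` would then perturb only the surviving pixels, which is the sparsity mechanism the paper combines with attention guidance.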
Adversarial Prompt Detection in Large Language Models: A Classification-Driven Approach
7
Authors: Ahmet Emre Ergün, Aytug Onan. Computers, Materials & Continua, 2025, No. 6, pp. 4855-4877.
Large Language Models (LLMs) have significantly advanced human-computer interaction by improving natural language understanding and generation. However, their vulnerability to adversarial prompts (carefully designed inputs that manipulate model outputs) presents substantial challenges. This paper introduces a classification-based approach to detect adversarial prompts by utilizing both prompt features and prompt response features. Eleven machine learning models were evaluated based on key metrics such as accuracy, precision, recall, and F1-score. The results show that the Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) cascade model delivers the best performance, especially when using prompt features, achieving an accuracy of over 97% in all adversarial scenarios. Furthermore, the Support Vector Machine (SVM) model performed best with prompt response features, particularly excelling in prompt type classification tasks. Classification results revealed that certain types of adversarial attacks, such as "Word Level" and "Adversarial Prefix", were particularly difficult to detect, as indicated by their low recall and F1-scores. These findings suggest that more subtle manipulations can evade detection mechanisms. In contrast, attacks like "Sentence Level" and "Adversarial Insertion" were easier to identify, due to the model's effectiveness in recognizing inserted content. Natural Language Processing (NLP) techniques played a critical role by enabling the extraction of semantic and syntactic features from both prompts and their corresponding responses. These insights highlight the importance of combining traditional and deep learning approaches, along with advanced NLP techniques, to build more reliable adversarial prompt detection systems for LLMs.
Keywords: LLM; classification; NLP; adversarial prompt; machine learning; deep learning
A Computationally Efficient Density-Aware Adversarial Resampling Framework Using Wasserstein GANs for Imbalance and Overlapping Data Classification
8
Authors: Sidra Jubair, Jie Yang, Bilal Ali, Walid Emam, Yusra Tashkandy. Computer Modeling in Engineering & Sciences, 2025, No. 7, pp. 511-534.
Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose Wasserstein Generative Adversarial Network Variational Density Estimation (WGAN-VDE), a computationally efficient density-aware adversarial resampling framework that enhances minority class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, ensuring that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework ensures the generation of well-separated and diverse synthetic samples, improving class separability and reducing redundancy. The experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results in F1-score, accuracy, G-mean, and AUC metrics. These results establish the proposed method as an effective and robust computational approach, suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
Keywords: machine learning; imbalanced classification; class overlap; computational modelling; adversarial resampling; density estimation
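The density-based selection mechanism this abstract describes can be approximated with a k-nearest-neighbor density proxy: keep only synthetic candidates whose mean distance to the nearest real points is large (i.e., that fall in sparse regions). This is an assumed simplification for illustration, not the WGAN-VDE selection rule; `k` and the quantile threshold are invented parameters.

```python
import numpy as np

def select_low_density(candidates, real_points, k=3, quantile=0.5):
    """Keep synthetic candidates lying in sparse regions: those whose mean
    distance to the k nearest real points is above the given quantile."""
    d = np.linalg.norm(candidates[:, None, :] - real_points[None, :, :], axis=2)
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)   # small = dense region
    return candidates[knn_mean > np.quantile(knn_mean, quantile)]

rng = np.random.default_rng(1)
real = rng.normal(0, 1, (50, 2))          # dense region of existing samples
cand = rng.uniform(-4, 4, (40, 2))        # synthetic minority candidates
kept = select_low_density(cand, real)
print(len(kept), "of", len(cand), "candidates kept")
```

Candidates landing in the dense cluster are filtered out, which mirrors the goal of avoiding redundant samples in high-density overlap regions.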
Integrating Speech-to-Text for Image Generation Using Generative Adversarial Networks
9
Authors: Smita Mahajan, Shilpa Gite, Biswajeet Pradhan, Abdullah Alamri, Shaunak Inamdar, Deva Shriyansh, Akshat Ashish Shah, Shruti Agarwal. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 2001-2026.
The development of generative architectures has resulted in numerous novel deep-learning models that generate images from text inputs. However, humans naturally use speech for visualization prompts. Therefore, this paper proposes an architecture that integrates speech prompts as input to an image-generation Generative Adversarial Network (GAN) model, leveraging Speech-to-Text translation along with the CLIP+VQGAN model. The proposed method involves translating speech prompts into text, which is then used by the Contrastive Language-Image Pretraining (CLIP) + Vector Quantized Generative Adversarial Network (VQGAN) model to generate images. This paper outlines the steps required to implement such a model and describes in detail the methods used for evaluating it. The GAN model successfully generates artwork from descriptions using speech and text prompts. Experimental outcomes of synthesized images demonstrate that the proposed methodology can produce beautiful abstract visuals containing elements from the input prompts. The model achieved a Fréchet Inception Distance (FID) score of 28.75, showcasing its capability to produce high-quality and diverse images. The proposed model can find numerous applications in educational, artistic, and design spaces due to its ability to generate images using speech and the distinct abstract artistry of the output images. This capability is demonstrated by giving the model out-of-the-box prompts to generate never-before-seen images with plausible realistic qualities.
Keywords: generative adversarial networks; speech-to-image translation; visualization; transformers; prompt engineering
Deepfake Detection Using Adversarial Neural Network
10
Authors: Priyadharsini Selvaraj, Senthil Kumar Jagatheesaperumal, Karthiga Marimuthu, Oviya Saravanan, Bader Fahad Alkhamees, Mohammad Mehedi Hassan. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 1575-1594.
With expeditious advancements in AI-driven facial manipulation techniques, particularly deepfake technology, there is growing concern over its potential misuse. Deepfakes pose a significant threat to society, particularly by infringing on individuals' privacy. Amid significant endeavors to fabricate systems for identifying deepfake fabrications, existing methodologies often face hurdles in adjusting to innovative forgery techniques and demonstrate increased vulnerability to image and video clarity variations, thereby hindering their broad applicability to images and videos produced by unfamiliar technologies. In this manuscript, we endorse resilient training tactics to amplify generalization capabilities. In adversarial training, models are trained using deliberately crafted samples designed to deceive classification systems, thereby significantly enhancing their generalization ability. In response to this challenge, we propose an innovative hybrid adversarial training framework integrating Virtual Adversarial Training (VAT) with Two-Generated Blurred Adversarial Training. This combined framework bolsters the model's resilience in detecting deepfakes made using unfamiliar deep learning technologies. Through such adversarial training, models are prompted to acquire more versatile attributes. Through experimental studies, we demonstrate that our model achieves higher accuracy than existing models.
Keywords: deepfake; generalization; forgery detection; pixel-wise Gaussian blurring; virtual adversarial training
Incomplete Physical Adversarial Attack on Face Recognition
11
Authors: HU Weitao, XU Wujun. Journal of Donghua University (English Edition), 2025, No. 4, pp. 442-448.
In recent work, adversarial stickers are widely used to attack face recognition (FR) systems in the physical world. However, it is difficult to evaluate the performance of physical attacks because of the lack of volunteers in the experiment. In this paper, a simple attack method called incomplete physical adversarial attack (IPAA) is proposed to simulate physical attacks. Different from the process of physical attacks, when an IPAA is conducted, a photo of the adversarial sticker is embedded into a facial image as the input to attack FR systems, which can obtain results similar to those of physical attacks without inviting any volunteers. The results show that IPAA has a higher similarity with physical attacks than digital attacks, indicating that IPAA is able to evaluate the performance of physical attacks. IPAA is also effective in quantitatively measuring the impact of the sticker location on the results of attacks.
Keywords: physical attack; digital attack; face recognition; interferential variable; adversarial example
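The core of the IPAA simulation (pasting a photo of the sticker into the facial image instead of physically wearing it) amounts to an array paste at a chosen location. The sketch below is a generic illustration with placeholder images; sizes and positions are invented, and the actual paper may apply geometric or photometric transforms not shown here.

```python
import numpy as np

def embed_sticker(face, sticker, top, left):
    """Embed a sticker photo into a facial image at (top, left); the result
    is fed to the FR system in place of a real-world physical attack."""
    out = face.copy()
    h, w = sticker.shape[:2]
    out[top:top + h, left:left + w] = sticker
    return out

face = np.zeros((112, 112, 3), dtype=np.uint8)       # placeholder facial image
sticker = np.full((24, 24, 3), 255, dtype=np.uint8)  # placeholder sticker photo
attacked = embed_sticker(face, sticker, top=30, left=44)
print(attacked.shape)
```

Sweeping `top`/`left` over a grid is one way to quantify the sticker-location effect the paper measures.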
A solution framework for the experimental data shortage problem of lithium-ion batteries: Generative adversarial network-based data augmentation for battery state estimation
12
Authors: Jinghua Sun, Ankun Gu, Josef Kainz. Journal of Energy Chemistry, 2025, No. 4, pp. 476-497.
In order to address the widespread data shortage problem in battery research, this paper proposes a generative adversarial network model that combines deep convolutional networks, the Wasserstein distance, and the gradient penalty to achieve data augmentation. To lower the threshold for implementing the proposed method, transfer learning is further introduced, forming the W-DC-GAN-GP-TL framework. This framework is evaluated on three different publicly available datasets to judge the quality of the generated data. Through visual comparisons and the examination of two visualization methods (probability density function (PDF) and principal component analysis (PCA)), it is demonstrated that the generated data are hard to distinguish from the real data. The application of generated data for training a battery state model using transfer learning is further evaluated. Specifically, Bi-GRU-based and Transformer-based methods are implemented on two separate datasets for estimating state of health (SOH) and state of charge (SOC), respectively. The results indicate that the proposed framework demonstrates satisfactory performance in different scenarios: for the data replacement scenario, where real data are removed and replaced with generated data, the state estimator accuracy decreases only slightly; for the data enhancement scenario, the estimator accuracy is further improved. The estimation accuracy of SOH and SOC reaches root mean square errors (RMSE) as low as 0.69% and 0.58% after applying the proposed framework. This framework provides a reliable method for enriching battery measurement data, and it is a generalized framework capable of generating a variety of time series data.
Keywords: lithium-ion battery; generative adversarial network; data augmentation; state of health; state of charge; data shortage
Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing
13
Authors: Hyeong-Gyeong Kim, Sang-Min Choi, Hyeon Seo, Suwon Lee. Computers, Materials & Continua, 2025, No. 9, pp. 4381-4397.
Adversarial attacks pose a significant threat to artificial intelligence systems by exposing vulnerabilities in deep learning models. Existing defense mechanisms often suffer drawbacks, such as the need for model retraining, significant inference time overhead, and limited effectiveness against specific attack types. Achieving perfect defense against adversarial attacks remains elusive, emphasizing the importance of mitigation strategies. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image is randomly cropped to vary its dimensions and then placed at the center of a fixed 299×299 space, with the remaining areas filled with zero padding. Subsequently, Gaussian filtering with a 7×7 kernel and a standard deviation of two is applied using a convolution operation. Finally, the smoothed image is fed into the classification model. The proposed defense method consistently appeared in the upper-right region across all attack scenarios, demonstrating its ability to preserve classification performance on clean images while significantly mitigating adversarial attacks. This visualization confirms that the proposed method is effective and reliable for defending against adversarial perturbations. Moreover, the proposed method incurs minimal computational overhead, making it suitable for real-time applications. Furthermore, owing to its model-agnostic nature, the proposed method can be easily incorporated into various neural network architectures, serving as a fundamental module for adversarial defense strategies.
Keywords: adversarial attacks; deep learning; artificial intelligence systems; random cropping; Gaussian filtering; image smoothing
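The pipeline described in the abstract (random crop, center placement on a zero-padded 299×299 canvas, then 7×7 Gaussian filtering with σ = 2) can be sketched for a single-channel image as follows. This is an illustrative reconstruction under the abstract's stated parameters, not the authors' code; the crop-size range is an assumption.

```python
import numpy as np

def gaussian_kernel1d(size=7, sigma=2.0):
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, size=7, sigma=2.0):
    """Separable 7x7 Gaussian filtering (sigma = 2) via 1-D convolutions."""
    k = gaussian_kernel1d(size, sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def defend(img, out_size=299, seed=None):
    """Random crop, zero-pad to the center of a 299x299 canvas, then blur."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ch = rng.integers(h // 2, h + 1)          # assumed crop range: [h/2, h]
    cw = rng.integers(w // 2, w + 1)
    top, left = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
    crop = img[top:top + ch, left:left + cw]
    canvas = np.zeros((out_size, out_size))
    y0, x0 = (out_size - ch) // 2, (out_size - cw) // 2
    canvas[y0:y0 + ch, x0:x0 + cw] = crop
    return smooth(canvas)

out = defend(np.ones((64, 64)), seed=0)
print(out.shape)
```

Both the random geometry change and the low-pass filtering disturb pixel-aligned adversarial noise, which is the mitigation effect the study evaluates.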
AMA: Adaptive Multimodal Adversarial Attack with Dynamic Perturbation Optimization
14
Authors: Yufei Shi, Ziwen He, Teng Jin, Haochen Tong, Zhangjie Fu. Computer Modeling in Engineering & Sciences, 2025, No. 8, pp. 1831-1848.
This article proposes an innovative adversarial attack method, AMA (Adaptive Multimodal Attack), which introduces an adaptive feedback mechanism by dynamically adjusting the perturbation strength. Specifically, AMA adjusts the perturbation amplitude based on task complexity and optimizes the perturbation direction in real time based on the gradient direction to enhance attack efficiency. Experimental results demonstrate that AMA elevates attack success rates from approximately 78.95% to 89.56% on visual question answering and from 78.82% to 84.96% on visual reasoning tasks across representative vision-language benchmarks. These findings demonstrate AMA's superior attack efficiency and reveal the vulnerability of current visual language models to carefully crafted adversarial examples, underscoring the need to enhance their robustness.
Keywords: adversarial attack; visual language model; black-box attack; adaptive multimodal attack; disturbance intensity
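The adaptive feedback mechanism described above can be sketched as a simple multiplicative rule: grow the perturbation budget while the attack keeps failing, shrink it once the attack succeeds, so the perturbation stays as small as the task allows. This rule and all its constants are assumptions made for illustration; AMA's actual update is not published in this abstract.

```python
import numpy as np

def adaptive_eps(attack_succeeded, eps, eps_min=0.01, eps_max=0.3,
                 up=1.5, down=0.8):
    """Feedback rule in the spirit of AMA: increase the perturbation budget
    on failure, decrease it on success, clipped to [eps_min, eps_max]."""
    eps = eps * (down if attack_succeeded else up)
    return float(np.clip(eps, eps_min, eps_max))

eps = 0.05
for success in [False, False, True, False, True]:   # hypothetical attack feedback
    eps = adaptive_eps(success, eps)
    print(round(eps, 4))
```

A full attack loop would pair this budget update with a gradient-direction step on the multimodal loss, as the abstract describes.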
Bridging the Gap Between Individual and Universal Adversarial Perturbations
15
Authors: Li Yanchun, Li Zemin, Zeng Li, Zhu Jiang, Song Jingkuan. China Communications, 2025, No. 9, pp. 244-263.
In recent years, universal adversarial perturbation (UAP) has attracted the attention of many researchers due to its good generalization. However, in order to generate an appropriate UAP, current methods usually require either accessing the original dataset or meticulously constructing optimization functions and proxy datasets. In this paper, we aim to eliminate any dependency on proxy datasets and explore a method for generating universal adversarial perturbations on a single image. After revisiting research on UAP, we discovered that the key to generating UAP lies in the accumulation of individual adversarial perturbation (IAP) gradients, which prompted us to study the method of accumulating gradients from an IAP. We designed a simple and effective process to generate UAP, which only includes three steps: preprocessing, generating an IAP, and scaling the perturbations. Through our proposed process, any IAP generated on an image can be constructed into a UAP with comparable performance, indicating that UAP can be generated free of data. Extensive experiments on various classifiers and attack approaches demonstrate the superiority of our method in efficiency and aggressiveness.
Keywords: black-box attack; data-independent; transferability; universal adversarial perturbation
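The final "scaling the perturbations" step of the three-step process can be illustrated as rescaling a single-image perturbation to a universal L-infinity budget. The budget value and the random stand-in IAP below are assumptions; the paper's preprocessing and IAP-generation steps are not reproduced.

```python
import numpy as np

def iap_to_uap(iap, linf_budget=10 / 255):
    """Scale an individual adversarial perturbation (IAP) so that its
    L-infinity norm matches a universal-perturbation budget, yielding a
    candidate UAP without touching any dataset."""
    scale = linf_budget / np.abs(iap).max()
    return iap * scale

iap = np.random.default_rng(0).normal(0, 0.05, (3, 32, 32))  # stand-in IAP
uap = iap_to_uap(iap)
print(np.abs(uap).max())
```

The scaled perturbation would then be added to arbitrary inputs (clipped to the valid pixel range) to test its universality across images and classifiers.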
Research on Emotion Classification Supported by Multimodal Adversarial Autoencoder
16
Author: Jing Yu. Journal of Electronic Research and Application, 2025, No. 1, pp. 270-275.
This paper studies the sentiment classification method of the multimodal adversarial autoencoder. It introduces the multimodal adversarial autoencoder emotion classification method and presents an experiment on emotion classification based on this encoder. The experimental analysis shows that the encoder achieves higher precision in emotion classification than other encoders. It is hoped that this analysis can provide some reference for emotion classification under current intelligent algorithm paradigms.
Keywords: artificial intelligence; multimodal adversarial encoder; sentiment classification; evaluation criteria; modal settings
Super-Resolution Generative Adversarial Network with Pyramid Attention Module for Face Generation
17
Authors: Parvathaneni Naga Srinivasu, G. JayaLakshmi, Sujatha Canavoy Narahari, Victor Hugo C. de Albuquerque, Muhammad Attique Khan, Hee-Chan Cho, Byoungchol Chang, Computers, Materials & Continua, 2025, Issue 10, pp. 2117-2139 (23 pages)
The generation of high-quality, realistic face images has emerged as a key field of research in computer vision. This paper proposes a robust approach that combines a Super-Resolution Generative Adversarial Network (SRGAN) with a Pyramid Attention Module (PAM) to enhance the quality of deep face generation. The SRGAN framework is designed to improve the resolution of generated images, addressing common challenges such as blurriness and a lack of intricate detail. The Pyramid Attention Module complements the process by focusing on multi-scale feature extraction, enabling the network to capture finer details and complex facial features more effectively. The proposed method was trained and evaluated over 100 epochs on the CelebA dataset, demonstrating consistent improvements in image quality and a marked decrease in generator and discriminator losses, reflecting the model's capacity to learn and synthesize high-quality images given adequate computational resources. Experimental outcomes show that the SRGAN model with the PAM module outperforms the alternatives, yielding an aggregate discriminator loss of 0.055 for real images and 0.043 for fake images, and a generator loss of 10.58 after 100 epochs of training. The model achieved a structural similarity index measure (SSIM) of 0.923, outperforming the other models considered in this study.
Keywords: Artificial intelligence, generative adversarial network, pyramid attention module, face generation, deep learning
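A minimal NumPy sketch of the multi-scale idea behind a pyramid attention module: pool a feature map at several scales, weight the levels with a softmax over their global responses, and fuse. The pooling scales, weighting scheme, and tensor sizes are assumptions for illustration, not the paper's PAM:

```python
import numpy as np

rng = np.random.default_rng(1)
feat = rng.normal(size=(8, 16, 16))    # C x H x W feature map (sizes assumed)

def avg_pool(x, k):
    # Non-overlapping k x k average pooling.
    C, H, W = x.shape
    return x.reshape(C, H // k, k, W // k, k).mean(axis=(2, 4))

def upsample(x, k):
    # Nearest-neighbour upsampling back to the input resolution.
    return x.repeat(k, axis=1).repeat(k, axis=2)

scales = [1, 2, 4]
pyramid = [upsample(avg_pool(feat, k), k) for k in scales]

# Per-scale attention: score each level by its global average response,
# then softmax across levels so the weights sum to 1.
scores = np.array([p.mean() for p in pyramid])
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Weighted fusion of the pyramid levels, same shape as the input.
out = sum(w * p for w, p in zip(weights, pyramid))
```

In a trained module the scores would come from learned layers rather than a plain mean; the point here is only the pool-attend-fuse structure.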
Diagnostic model for abnormal furnace conditions in blast furnace based on friendly adversarial training
18
Authors: Fu-min Li, Chang-hao Li, Song Liu, Xiao-jie Liu, Hong Xiao, Jun Zhao, Qing Lyu, Journal of Iron and Steel Research International, 2025, Issue 6, pp. 1477-1490 (14 pages)
Accurate assessment of blast furnace conditions is a crucial component of the blast furnace control decision-making process. However, most adversarial models in the field currently update the parameters of the label predictor by minimising the objective function while maximising the same objective to update the domain discriminator's parameters, a strategy that results in an excessive maximisation of the domain discriminator's loss. To address this, a friendly adversarial training-based tri-training furnace condition diagnosis model is proposed. The model employs a convolutional neural network-long short-term memory-attention mechanism network as a single-view feature extractor and uses decision tree methods as three classifiers to compute the cosine similarity between features and representative vectors of each class. During the knowledge transfer process, the classifiers have a specific goal: they not only seek to maximise the entropy of the target domain samples but also aim to minimise that entropy when the samples are misclassified, thus resolving the trade-off in traditional models where robustness is improved at the expense of accuracy. Experimental results indicate that the diagnostic accuracy of the model reaches 96%, an improvement of approximately 8% over existing methods due to the inner optimisation approach. The model provides an effective and feasible solution for the efficient monitoring and diagnosis of blast furnace processes.
Keywords: Friendly adversarial training, Tri-training, Fault diagnosis, Feature-based transfer learning, Semi-supervised learning
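The conditional entropy rule described in the abstract (maximise target-domain entropy, but minimise it for misclassified samples) can be sketched as follows; the class prototypes, pseudo-label, and dimensions are illustrative assumptions, not the paper's trained components:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

# Class-representative vectors and one target-domain feature
# (values and sizes are assumed for illustration).
prototypes = rng.normal(size=(3, 8))   # 3 classes, 8-dim features
feat = rng.normal(size=8)
pseudo_label = 1                       # label guessed by the other classifiers

# Cosine similarity between the feature and each class prototype,
# turned into a class distribution.
cos = prototypes @ feat / (
    np.linalg.norm(prototypes, axis=1) * np.linalg.norm(feat))
probs = softmax(cos)
pred = int(np.argmax(probs))

# "Friendly" objective: push entropy up on apparently correct target
# samples, but push it down on misclassified ones, instead of
# maximising the adversarial loss unconditionally.
H = entropy(probs)
loss = -H if pred == pseudo_label else H
```

The sign flip on `H` is the whole trick: it keeps the adversarial pressure from degrading accuracy on samples the ensemble already gets wrong.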
Hybrid Memory-Enhanced Autoencoder with Adversarial Training for Anomaly Detection in Virtual Power Plants
19
Authors: Yuqiao Liu, Chen Pan, YeonJae Oh, Chang Gyoon Lim, Computers, Materials & Continua, 2025, Issue 3, pp. 4593-4629 (37 pages)
Virtual Power Plants (VPPs) are integral to modern energy systems, providing stability and reliability in the face of the inherent complexities and fluctuations of solar power data. Traditional anomaly detection methodologies often fail to adequately handle the fluctuations arising from solar radiation and ambient temperature variations. We introduce the Memory-Enhanced Autoencoder with Adversarial Training (MemAAE) model to overcome these limitations, designed explicitly for robust anomaly detection in VPP environments. The MemAAE model integrates three principal components: an LSTM-based autoencoder that effectively captures temporal dynamics to distinguish between normal and anomalous behaviors, an adversarial training module that enhances system resilience across diverse operational scenarios, and a prediction module that aids the autoencoder during reconstruction, thereby facilitating precise anomaly identification. Furthermore, MemAAE features a memory mechanism that stores critical pattern information, mitigating overfitting, alongside a dynamic threshold adjustment mechanism that adapts detection thresholds in response to evolving operational conditions. Our empirical evaluation of the MemAAE model on real-world solar power data shows that it outperforms the comparative models on both datasets. On the Sopan-Finder dataset, MemAAE achieves an accuracy of 99.17% and an F1-score of 95.79%, while on the Sunalab Faro PV 2017 dataset it achieves an accuracy of 97.67% and an F1-score of 93.27%. These results show that the MemAAE model is an effective method for real-time anomaly detection in virtual power plants, enhancing robustness and adaptability to the inherent variability of solar power generation.
Keywords: Virtual power plants (VPPs), anomaly detection, memory-enhanced autoencoder, adversarial training, solar power
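The dynamic threshold adjustment can be illustrated with a simple sliding-window rule over reconstruction errors: flag a point when its error exceeds the recent mean plus a few standard deviations. The window length, multiplier `k`, and simulated error stream are assumptions for illustration, not MemAAE's actual mechanism:

```python
import numpy as np

rng = np.random.default_rng(3)

# Reconstruction errors streamed from the autoencoder; simulated here
# (the real values would come from MemAAE's decoder).
errors = np.abs(rng.normal(0.1, 0.02, size=500))
errors[250] = 1.5                      # one injected anomaly

window, k = 100, 3.0
flags = np.zeros_like(errors, dtype=bool)

for t in range(window, len(errors)):
    # Dynamic threshold: adapt to the recent operating regime, so the
    # detector tracks slow drifts in solar output.
    recent = errors[t - window:t]
    threshold = recent.mean() + k * recent.std()
    flags[t] = errors[t] > threshold

n_anomalies = int(flags.sum())
```

A fixed threshold would either miss the anomaly under high-variance conditions or flood the operator with false alarms under low-variance ones; recomputing it per window avoids both.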
A Generative Adversarial Network with an Attention Spatiotemporal Mechanism for Tropical Cyclone Forecasts
20
Authors: Xiaohui LI, Xinhai HAN, Jingsong YANG, Jiuke WANG, Guoqi HAN, Jun DING, Hui SHEN, Jun YAN, Advances in Atmospheric Sciences, 2025, Issue 1, pp. 67-78 (12 pages)
Tropical cyclones (TCs) are complex and powerful weather systems, and accurately forecasting their path, structure, and intensity remains a critical focus and challenge in meteorological research. In this paper, we propose an Attention Spatio-Temporal predictive Generative Adversarial Network (AST-GAN) model for predicting the temporal and spatial distribution of TCs. The model forecasts the spatial distribution of TC wind speeds for the next 15 hours at 3-hour intervals, emphasizing the cyclone's center, high wind-speed areas, and its asymmetric structure. To effectively capture spatiotemporal feature transfer at different time steps, we employ a channel attention mechanism for feature selection, enhancing model performance and reducing parameter redundancy. We utilized High-Resolution Weather Research and Forecasting (HWRF) data to train our model, allowing it to assimilate a wide range of TC motion patterns. The model is versatile and can be applied to various complex scenarios, such as multiple TCs moving simultaneously or TCs approaching landfall. Our proposed model demonstrates superior forecasting performance, achieving a root-mean-square error (RMSE) of 0.71 m s^(-1) for overall wind speed and 2.74 m s^(-1) for maximum wind speed when benchmarked against ground truth data from HWRF. Furthermore, the model underwent optimization and independent testing on ERA5 reanalysis data, showcasing its stability and scalability. After fine-tuning on the ERA5 dataset, the model achieved an RMSE of 1.33 m s^(-1) for wind speed and 1.75 m s^(-1) for maximum wind speed. The AST-GAN model outperforms other state-of-the-art models in RMSE on both the HWRF and ERA5 datasets, demonstrating its effectiveness for the spatiotemporal prediction of TCs.
Keywords: tropical cyclones, spatiotemporal prediction, generative adversarial network, attention spatiotemporal mechanism, deep learning
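The channel attention used for feature selection can be sketched in squeeze-and-excitation style: globally pool each channel, pass the summary through a small bottleneck, and gate the channels with the result. The tensor sizes and random weights below are illustrative assumptions, not AST-GAN's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
# T time steps of C x H x W wind-speed features (sizes assumed).
feat = rng.normal(size=(6, 4, 12, 12))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

C = feat.shape[1]
W1 = rng.normal(size=(C // 2, C)) * 0.1  # bottleneck weights (random stand-ins)
W2 = rng.normal(size=(C, C // 2)) * 0.1

out = np.empty_like(feat)
for t in range(feat.shape[0]):
    squeeze = feat[t].mean(axis=(1, 2))           # global average pool per channel
    excite = sigmoid(W2 @ np.tanh(W1 @ squeeze))  # per-channel gate in (0, 1)
    out[t] = feat[t] * excite[:, None, None]      # reweight channels per time step
```

Recomputing the gate at every time step is what lets the mechanism select different channels as the cyclone evolves, which matches the abstract's "spatiotemporal feature transfer" motivation.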