Journal Articles
671 articles found
1. A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20.
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms that generate adversarial examples, (2) adversarial defense techniques that secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for research, such as the generalization of methods and the understanding of transferability, which might offer solutions to the open problems in this field.
Keywords: computer vision; adversarial examples; adversarial attack; adversarial defense
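The imperceptible perturbations discussed in this survey are typically crafted from model gradients. For orientation only (not code from the paper), a minimal PyTorch sketch of the classic Fast Gradient Sign Method is shown below; the model, loss, and epsilon budget are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One signed-gradient step: the textbook way to craft an adversarial example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Epsilon-bounded, visually imperceptible perturbation along the gradient sign.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```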
2. Improving Robustness for Tag Recommendation via Self-Paced Adversarial Metric Learning
Authors: Zhengshun Fei, Jianxin Chen, Gui Chen, Xinjian Xiang. Computers, Materials & Continua, 2025, No. 3, pp. 4237-4261.
Tag recommendation systems can significantly improve the accuracy of information retrieval by recommending relevant tag sets that align with user preferences and resource characteristics. However, metric learning methods often suffer from high sensitivity, leading to unstable recommendation results when facing adversarial samples generated through malicious user behavior. Adversarial training is considered an effective method for improving the robustness of tag recommendation systems and addressing adversarial samples, but it still faces the challenge of overfitting. Although curriculum learning-based adversarial training somewhat mitigates this issue, challenges remain, such as the lack of a quantitative standard for attack intensity and catastrophic forgetting. To address these challenges, we propose a Self-Paced Adversarial Metric Learning (SPAML) method. First, we employ a metric learning model to capture the deep distance relationships between normal samples. Then, we incorporate a self-paced adversarial training model, which dynamically adjusts the weights of adversarial samples, allowing the model to progressively learn from simpler to more complex adversarial samples. Finally, we jointly optimize the metric learning loss and the self-paced adversarial training loss in an adversarial manner, enhancing the robustness and performance of tag recommendation tasks. Extensive experiments on the MovieLens and LastFm datasets demonstrate that SPAML achieves F1@3 and NDCG@3 scores of 22% and 32.7% on the MovieLens dataset, and 19.4% and 29% on the LastFm dataset, respectively, outperforming the most competitive baselines. Specifically, F1@3 improves by 4.7% and 6.8%, and NDCG@3 improves by 5.0% and 6.9%, respectively.
Keywords: tag recommendation; metric learning; adversarial training; self-paced adversarial training; robustness
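The abstract does not give the exact SPAML loss; the sketch below only illustrates the generic hard-threshold self-paced weighting idea it builds on, where adversarial samples enter training from easy to hard as an age parameter grows (the function names and the joint objective are assumptions).

```python
import torch

def self_paced_weights(adv_losses, lam):
    # Easy adversarial samples (loss below the age parameter lam) get weight 1;
    # harder ones are deferred until lam is increased in later epochs.
    return (adv_losses < lam).float()

def joint_objective(metric_loss, adv_losses, lam):
    # Metric-learning loss plus the self-paced-weighted adversarial-training loss.
    w = self_paced_weights(adv_losses.detach(), lam)
    return metric_loss + (w * adv_losses).sum() / w.sum().clamp(min=1.0)
```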
3. Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 1947-1968.
The emergence of adversarial examples has revealed inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: the adversarial examples themselves, model architecture, and the dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; as the depth of a network increases, its defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Keywords: image classification; convolutional neural network; natural adversarial example; dataset; defense against adversarial examples
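The CAM-style visualization used in this paper can be approximated with a Grad-CAM computation such as the hedged sketch below; the target layer (e.g. `model.layer4` for a ResNet) is an assumption, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx=None):
    """Heat map of where the model looks when classifying x (Grad-CAM style)."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1)
    logits.gather(1, class_idx.view(-1, 1)).sum().backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)            # pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))  # weighted activations
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
```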
4. Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer
Authors: Xiaoyin Yi, Long Chen, Jiacheng Huang, Ning Yu, Qian Huang. Computers, Materials & Continua, 2025, No. 4, pp. 157-175.
Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge. This is achieved by leveraging a property of adversarial examples: when generated from a surrogate model, they retain their features when applied to other models due to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of source models. Consequently, when attempting black-box transfer attacks on different target models, their effectiveness is decreased. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature Layer (RCFL). The proposed method first uses regularization constraints to attenuate the low-frequency components of the initial examples. Perturbations are then added to a pre-specified layer of the source model using back-propagation, in order to modify the original adversarial examples. Afterward, a regularized loss function is used to enhance black-box transferability across different target models. The proposed method is finally tested on the ImageNet, CIFAR-100, and Stanford Cars datasets with various target models. The obtained results demonstrate that it achieves a significantly higher transfer-based adversarial attack success rate compared with baseline techniques.
Keywords: adversarial examples; black-box transferability; regularized constrained; transfer-based adversarial attacks
5. 5DGWO-GAN: A Novel Five-Dimensional Gray Wolf Optimizer for Generative Adversarial Network-Enabled Intrusion Detection in IoT Systems
Authors: Sarvenaz Sadat Khatami, Mehrdad Shoeibi, Anita Ershadi Oskouei, Diego Martín, Maral Keramat Dashliboroun. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 881-911.
The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyper-parameters such as learning rate, batch size, and optimization algorithms, which can be challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional Gray Wolf Optimizer (5DGWO) to optimize GAN hyper-parameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models assist the AE training, and the final predictive models (including convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on these diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. It achieved the lowest RMSE values across the NSL-KDD, UNSW-NB15, and IoT-23 datasets, with values of 0.24, 1.10, and 0.09, respectively, and attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
Keywords: Internet of Things; intrusion detection; generative adversarial networks; five-dimensional binary gray wolf optimizer; deep learning
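For orientation only, the sketch below shows one iteration of the standard Gray Wolf Optimizer over hyper-parameter vectors; the paper's additional gamma and theta wolves are not reproduced, and the encoding of learning rate, batch size, etc. into the position vectors is an assumption.

```python
import numpy as np

def gwo_step(positions, fitness, a):
    """Standard GWO update: every wolf moves toward the three best solutions.
    `a` decreases linearly from 2 to 0 over the iterations; `fitness` is, e.g.,
    a GAN validation loss evaluated for each hyper-parameter vector."""
    leaders = positions[np.argsort(fitness)[:3]]      # alpha, beta, delta wolves
    new_positions = np.zeros_like(positions)
    for i, x in enumerate(positions):
        candidates = []
        for leader in leaders:
            r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            candidates.append(leader - A * np.abs(C * leader - x))
        new_positions[i] = np.mean(candidates, axis=0)
    return new_positions
```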
6. Attention-Guided Sparse Adversarial Attacks with Gradient Dropout
Authors: ZHAO Hongzhi, HAO Lingguang, HAO Kuangrong, WEI Bing, LIU Xiaoyan. Journal of Donghua University (English Edition) (CAS), 2024, No. 5, pp. 545-556.
Deep neural networks are extremely vulnerable to intentionally generated adversarial examples, which are created by overlaying tiny noise on clean images. However, most existing transfer-based attack methods add perturbations to every pixel of the original image with the same weight, resulting in redundant noise in the adversarial examples that makes them easier to detect. Given this, a novel attention-guided sparse adversarial attack strategy with gradient dropout, which can be readily incorporated into existing gradient-based methods, is introduced to minimize both the intensity and the scale of perturbations while preserving the effectiveness of the adversarial examples. Specifically, in the gradient dropout phase, some relatively unimportant gradient information is randomly discarded to limit the intensity of the perturbation. In the attention-guided phase, the influence of each pixel on the model output is evaluated using a soft mask-refined attention mechanism, and the perturbation of pixels with smaller influence is limited to restrict the scale of the perturbation. Thorough experiments on the NeurIPS 2017 adversarial dataset and the ILSVRC 2012 validation dataset show that the proposed strategy significantly diminishes the superfluous noise in adversarial examples while keeping their attack efficacy intact. For instance, in attacks on adversarially trained models, integrating the strategy reduces the average level of noise injected into images by 8.32%, while the average attack success rate decreases by only 0.34%. Furthermore, the strategy can substantially elevate the attack success rate while introducing only a slight degree of perturbation.
Keywords: deep neural network; adversarial attack; sparse adversarial attack; adversarial transferability; adversarial example
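A hedged sketch of one attack iteration combining the two phases described above: gradient dropout limits the perturbation intensity, and an importance mask limits its spatial extent. The paper's soft mask-refined attention is replaced here by a generic per-pixel importance map, and all thresholds are assumptions.

```python
import torch

def sparse_attack_step(x_adv, grad, importance, alpha=2 / 255, drop_rate=0.3, thresh=0.2):
    keep = (torch.rand_like(grad) > drop_rate).float()  # randomly discard some gradient entries
    mask = (importance > thresh).float()                 # perturb only influential pixels
    return (x_adv + alpha * (grad * keep).sign() * mask).clamp(0, 1)
```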
7. Adversarial Prompt Detection in Large Language Models: A Classification-Driven Approach
Authors: Ahmet Emre Ergün, Aytug Onan. Computers, Materials & Continua, 2025, No. 6, pp. 4855-4877.
Large Language Models (LLMs) have significantly advanced human-computer interaction by improving natural language understanding and generation. However, their vulnerability to adversarial prompts, carefully designed inputs that manipulate model outputs, presents substantial challenges. This paper introduces a classification-based approach to detect adversarial prompts by utilizing both prompt features and prompt-response features. Eleven machine learning models were evaluated on key metrics such as accuracy, precision, recall, and F1-score. The results show that the Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) cascade model delivers the best performance, especially when using prompt features, achieving an accuracy of over 97% in all adversarial scenarios. Furthermore, the Support Vector Machine (SVM) model performed best with prompt-response features, particularly excelling in prompt type classification tasks. Classification results revealed that certain types of adversarial attacks, such as "Word Level" and "Adversarial Prefix", were particularly difficult to detect, as indicated by their low recall and F1-scores. These findings suggest that more subtle manipulations can evade detection mechanisms. In contrast, attacks like "Sentence Level" and "Adversarial Insertion" were easier to identify, owing to the model's effectiveness in recognizing inserted content. Natural Language Processing (NLP) techniques played a critical role by enabling the extraction of semantic and syntactic features from both prompts and their corresponding responses. These insights highlight the importance of combining traditional and deep learning approaches, along with advanced NLP techniques, to build more reliable adversarial prompt detection systems for LLMs.
Keywords: LLM; classification; NLP; adversarial prompt; machine learning; deep learning
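A minimal PyTorch sketch of a CNN-LSTM cascade of the kind evaluated here, classifying a tokenized prompt as benign or adversarial; all layer sizes are placeholder assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMPromptClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, n_filters=64, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                    # (batch, seq_len) integer tokens
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)                     # final hidden state summarizes the prompt
        return self.fc(h[-1])                        # logits: benign vs. adversarial
```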
8. Pore structure properties characterization of shale using generative adversarial network: Image augmentation, super-resolution reconstruction, and multi-mineral auto-segmentation
Authors: LIU Fugui, YANG Yongfei, YANG Haiyuan, TAO Liu, TAO Yunwei, ZHANG Kai, SUN Hai, ZHANG Lei, ZHONG Junjie, YAO Jun. Petroleum Exploration and Development, 2025, No. 5, pp. 1262-1274.
Existing imaging techniques cannot simultaneously achieve high resolution and a wide field of view, and manual multi-mineral segmentation in shale lacks precision. To address these limitations, we propose a comprehensive framework based on generative adversarial networks (GANs) for characterizing the pore structure properties of shale, which incorporates image augmentation, super-resolution reconstruction, and multi-mineral auto-segmentation. Using real 2D and 3D shale images, the framework was assessed through correlation functions, entropy, porosity, pore size distribution, and permeability. The application results show that this framework enhances 3D low-resolution digital cores by a scale factor of 8, without requiring paired shale images, effectively reconstructing the fine-scale pores unresolved at low resolution rather than merely denoising, deblurring, and clarifying edges. The trained GAN-based segmentation model effectively improves manual multi-mineral segmentation results, achieving a strong resemblance to real samples in terms of pore size distribution and permeability. This framework significantly improves the characterization of complex shale microstructures and can be extended to other heterogeneous porous media, such as carbonate, coal, and tight sandstone reservoirs.
Keywords: shale; pore structure parameter; generative adversarial network; super-resolution; multi-mineral auto-segmentation; multiscale fusion
9. A Computationally Efficient Density-Aware Adversarial Resampling Framework Using Wasserstein GANs for Imbalance and Overlapping Data Classification
Authors: Sidra Jubair, Jie Yang, Bilal Ali, Walid Emam, Yusra Tashkandy. Computer Modeling in Engineering & Sciences, 2025, No. 7, pp. 511-534.
Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose Wasserstein Generative Adversarial Network Variational Density Estimation (WGAN-VDE), a computationally efficient density-aware adversarial resampling framework that enhances minority class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, which ensures that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework generates well-separated and diverse synthetic samples, improving class separability and reducing redundancy. Experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results in F1-score, accuracy, G-mean, and AUC. These results establish the proposed method as an effective and robust computational approach, suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
Keywords: machine learning; imbalanced classification; class overlap; computational modelling; adversarial resampling; density estimation
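The abstract does not spell out the VDE mechanism; the sketch below only illustrates the general density-aware selection idea of keeping synthetic minority samples that fall in low-density regions so class overlap is not worsened (the KDE estimator and the keep ratio are assumptions, not the paper's method).

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_aware_select(synthetic, reference, keep_ratio=0.5):
    """Keep the synthetic minority samples lying in the least dense regions
    of the reference data (rows = samples, columns = features)."""
    density = gaussian_kde(reference.T)(synthetic.T)
    n_keep = int(len(synthetic) * keep_ratio)
    return synthetic[np.argsort(density)[:n_keep]]   # lowest-density samples first
```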
10. Integrating Speech-to-Text for Image Generation Using Generative Adversarial Networks
Authors: Smita Mahajan, Shilpa Gite, Biswajeet Pradhan, Abdullah Alamri, Shaunak Inamdar, Deva Shriyansh, Akshat Ashish Shah, Shruti Agarwal. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 2001-2026.
The development of generative architectures has resulted in numerous novel deep-learning models that generate images from text inputs. However, humans naturally use speech for visualization prompts. Therefore, this paper proposes an architecture that integrates speech prompts as input to an image-generation Generative Adversarial Network (GAN) model, leveraging speech-to-text translation along with the CLIP+VQGAN model. The proposed method translates speech prompts into text, which is then used by the Contrastive Language-Image Pretraining (CLIP) + Vector Quantized Generative Adversarial Network (VQGAN) model to generate images. This paper outlines the steps required to implement such a model and describes in detail the methods used to evaluate it. The GAN model successfully generates artwork from descriptions given as speech and text prompts. Experimental results on synthesized images demonstrate that the proposed methodology can produce beautiful abstract visuals containing elements from the input prompts. The model achieved a Frechet Inception Distance (FID) score of 28.75, showcasing its capability to produce high-quality and diverse images. The proposed model can find numerous applications in educational, artistic, and design spaces due to its ability to generate images from speech and the distinct abstract artistry of its outputs. This capability is demonstrated by giving the model out-of-the-box prompts to generate never-before-seen images with plausible realistic qualities.
Keywords: generative adversarial networks; speech-to-image translation; visualization; transformers; prompt engineering
11. Deepfake Detection Using Adversarial Neural Network
Authors: Priyadharsini Selvaraj, Senthil Kumar Jagatheesaperumal, Karthiga Marimuthu, Oviya Saravanan, Bader Fahad Alkhamees, Mohammad Mehedi Hassan. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 1575-1594.
With expeditious advancements in AI-driven facial manipulation techniques, particularly deepfake technology, there is growing concern over its potential misuse. Deepfakes pose a significant threat to society, particularly by infringing on individuals' privacy. Despite significant efforts to build systems for identifying deepfake fabrications, existing methodologies often struggle to adapt to new forgery techniques and show increased vulnerability to variations in image and video clarity, which hinders their applicability to content produced by unfamiliar technologies. In this manuscript, we endorse resilient training tactics to amplify generalization capabilities. In adversarial training, models are trained using deliberately crafted samples designed to deceive classification systems, thereby significantly enhancing their generalization ability. In response to this challenge, we propose an innovative hybrid adversarial training framework integrating Virtual Adversarial Training (VAT) with Two-Generated Blurred Adversarial Training. This combined framework bolsters the model's resilience in detecting deepfakes made with unfamiliar deep learning technologies. Through such adversarial training, models are prompted to acquire more versatile attributes. Experimental studies demonstrate that our model achieves higher accuracy than existing models.
Keywords: deepfake; generalization; forgery detection; pixel-wise Gaussian blurring; virtual adversarial training
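Virtual Adversarial Training, one half of the hybrid framework above, can be sketched as follows: a generic VAT loss with power iteration, where epsilon, xi, and the number of power steps are assumptions, and the blurred-adversarial half of the framework is not shown.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Penalize output change under the most sensitive small perturbation of x."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    d = torch.randn_like(x)
    for _ in range(n_power):                     # power iteration for the worst direction
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p, reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0].detach()
    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
```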
12. Incomplete Physical Adversarial Attack on Face Recognition
Authors: HU Weitao, XU Wujun. Journal of Donghua University (English Edition), 2025, No. 4, pp. 442-448.
In recent work, adversarial stickers have been widely used to attack face recognition (FR) systems in the physical world. However, it is difficult to evaluate the performance of physical attacks because of the lack of volunteers for such experiments. In this paper, a simple attack method called incomplete physical adversarial attack (IPAA) is proposed to simulate physical attacks. Different from the process of a physical attack, when an IPAA is conducted, a photo of the adversarial sticker is embedded into a facial image, which is used as the input to attack FR systems; this yields results similar to those of physical attacks without requiring any volunteers. The results show that IPAA has a higher similarity with physical attacks than digital attacks, indicating that IPAA is able to evaluate the performance of physical attacks. IPAA is also effective in quantitatively measuring the impact of the sticker location on attack results.
Keywords: physical attack; digital attack; face recognition; interferential variable; adversarial example
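The core of IPAA, embedding a photo of the adversarial sticker into a facial image before querying the FR system, can be reproduced with a few lines; the file paths, sticker size, and placement below are placeholder assumptions.

```python
import numpy as np
from PIL import Image

def embed_sticker(face_path, sticker_path, top_left=(60, 40), size=(80, 80)):
    """Paste a photographed sticker patch onto a face image, simulating a
    physical attack without a live volunteer."""
    face = Image.open(face_path).convert("RGB")
    sticker = Image.open(sticker_path).convert("RGB").resize(size)
    face.paste(sticker, top_left)
    return np.asarray(face)   # feed this array to the FR system under test
```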
13. A solution framework for the experimental data shortage problem of lithium-ion batteries: Generative adversarial network-based data augmentation for battery state estimation
Authors: Jinghua Sun, Ankun Gu, Josef Kainz. Journal of Energy Chemistry, 2025, No. 4, pp. 476-497.
To address the widespread data shortage problem in battery research, this paper proposes a generative adversarial network model that is combined with deep convolutional networks, the Wasserstein distance, and a gradient penalty to achieve data augmentation. To lower the threshold for implementing the proposed method, transfer learning is further introduced, forming the W-DC-GAN-GP-TL framework. The framework is evaluated on three different publicly available datasets to judge the quality of the generated data. Through visual comparison and two quantitative visualization methods, the probability density function (PDF) and principal component analysis (PCA), it is demonstrated that the generated data are hard to distinguish from real data. The use of generated data for training a battery state model via transfer learning is further evaluated: Bi-GRU-based and Transformer-based methods are implemented on two separate datasets for estimating state of health (SOH) and state of charge (SOC), respectively. The results indicate that the proposed framework performs satisfactorily in different scenarios. In the data replacement scenario, where real data are removed and replaced with generated data, the state estimator accuracy decreases only slightly; in the data enhancement scenario, the estimator accuracy is further improved. The estimation errors for SOH and SOC are as low as 0.69% and 0.58% root mean square error (RMSE) after applying the proposed framework. The framework provides a reliable method for enriching battery measurement data and is general enough to generate a variety of time series data.
Keywords: lithium-ion battery; generative adversarial network; data augmentation; state of health; state of charge; data shortage
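The gradient-penalty term that stabilizes a Wasserstein GAN of this kind is standard and can be sketched as below (not the paper's code; the penalty weight of 10 follows common WGAN-GP practice). The critic loss would then be `critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)`.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP term: push the critic's gradient norm toward 1 on points
    interpolated between real and generated battery curves."""
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    inter = (eps * real.detach() + (1 - eps) * fake.detach()).requires_grad_(True)
    grad = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```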
14. Randomly generating realistic calcareous sand for directional seepage simulation using deep convolutional generative adversarial networks
Authors: Dou Chen, Wei Zhang, Chenghao Li, Linjian Ma, Xiaoqing Shi, Haiyang Li, Honghu Zhu. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 11, pp. 7297-7312.
Seepage in calcareous sand foundations and backfills has a potentially detrimental effect on the stability and safety of superstructures, and simplifying calcareous sand grains as spheres or ellipsoids in numerical simulations may lead to significant inaccuracies. In this paper, we present a novel intelligence framework based on a deep convolutional generative adversarial network (DCGAN). A DCGAN model was trained on a dataset comprising 11,625 real particles for the random generation of three-dimensional calcareous sand particles, and 3800 realistic calcareous sand particles with intra-particle voids were subsequently generated. The generative fidelity and validity of the DCGAN model were verified by the consistency of the statistical values of nine morphological parameters between the training dataset and the generated dataset. Digital calcareous sand columns were obtained through gravitational deposition simulation of the generated particles. Directional seepage simulations were conducted, and the vertical permeability values of the sand columns were found to be in accordance with the expected physical behavior. The results demonstrate the potential of the proposed framework for stochastic modeling and multi-scale simulation of seepage behavior in calcareous sand foundations and backfills.
Keywords: calcareous sand; random generation; generative adversarial networks; discrete element modeling; signed distance field; vertical permeability
15. Optimization Scheduling of Hydrogen-Coupled Electro-Heat-Gas Integrated Energy System Based on Generative Adversarial Imitation Learning
Authors: Baiyue Song, Chenxi Zhang, Wei Zhang, Leiyu Wan. Energy Engineering, 2025, No. 12, pp. 4919-4945.
Hydrogen energy is a crucial support for China's low-carbon energy transition. With the large-scale integration of renewable energy, combining hydrogen with integrated energy systems has become one of the most promising directions of development. This paper proposes an optimized scheduling model for a hydrogen-coupled electro-heat-gas integrated energy system (HCEHG-IES) using generative adversarial imitation learning (GAIL). The model aims to enhance renewable-energy absorption, reduce carbon emissions, and improve grid-regulation flexibility. First, the optimal scheduling problem of the HCEHG-IES under uncertainty is modeled as a Markov decision process (MDP). To overcome the limitations of conventional deep reinforcement learning algorithms, including long optimization time, slow convergence, and subjective reward design, this study augments the PPO algorithm with a discriminator network and expert data; the resulting algorithm, GAIL, enables the agent to perform imitation learning from expert data. Based on this model, dynamic scheduling decisions are made in continuous state and action spaces, generating optimal energy-allocation and management schemes. Simulation results indicate that, compared with traditional reinforcement-learning algorithms, the proposed algorithm offers better economic performance. Guided by expert data, the agent avoids blind optimization, shortens the offline training time, and improves convergence. In the online phase, the algorithm enables flexible energy utilization, thereby promoting renewable-energy absorption and reducing carbon emissions.
Keywords: hydrogen energy; optimization dispatch; generative adversarial imitation learning; proximal policy optimization; imitation learning; renewable energy
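In GAIL, a discriminator that separates expert dispatch data from the agent's own transitions supplies the reward signal for PPO; a hedged sketch of that piece is given below (network sizes and the surrogate reward form are generic assumptions, not the paper's exact design).

```python
import torch
import torch.nn as nn

class GAILDiscriminator(nn.Module):
    """Scores state-action pairs; trained to tell expert scheduling data from the agent's."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def gail_reward(disc, state, action, eps=1e-8):
    # Surrogate reward for PPO: high when the discriminator mistakes the
    # agent's dispatch decision for expert behavior.
    return -torch.log(1.0 - disc(state, action) + eps)
```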
16. AMA: Adaptive Multimodal Adversarial Attack with Dynamic Perturbation Optimization
Authors: Yufei Shi, Ziwen He, Teng Jin, Haochen Tong, Zhangjie Fu. Computer Modeling in Engineering & Sciences, 2025, No. 8, pp. 1831-1848.
This article proposes an innovative adversarial attack method, AMA (Adaptive Multimodal Attack), which introduces an adaptive feedback mechanism that dynamically adjusts the perturbation strength. Specifically, AMA adjusts the perturbation amplitude based on task complexity and optimizes the perturbation direction in real time based on the gradient direction, improving attack efficiency. Experimental results demonstrate that AMA raises attack success rates from approximately 78.95% to 89.56% on visual question answering and from 78.82% to 84.96% on visual reasoning tasks across representative vision-language benchmarks. These findings demonstrate AMA's superior attack efficiency and reveal the vulnerability of current visual language models to carefully crafted adversarial examples, underscoring the need to enhance their robustness.
Keywords: adversarial attack; visual language model; black-box attack; adaptive multimodal attack; disturbance intensity
17. Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing
Authors: Hyeong-Gyeong Kim, Sang-Min Choi, Hyeon Seo, Suwon Lee. Computers, Materials & Continua, 2025, No. 9, pp. 4381-4397.
Adversarial attacks pose a significant threat to artificial intelligence systems by exposing vulnerabilities in deep learning models. Existing defense mechanisms often suffer drawbacks, such as the need for model retraining, significant inference-time overhead, and limited effectiveness against specific attack types. Achieving perfect defense against adversarial attacks remains elusive, emphasizing the importance of mitigation strategies. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image is randomly cropped to vary its dimensions and then placed at the center of a fixed 299 × 299 space, with the remaining area filled with zero padding. Subsequently, Gaussian filtering with a 7 × 7 kernel and a standard deviation of two is applied using a convolution operation. Finally, the smoothed image is fed into the classification model. The proposed defense method consistently appeared in the upper-right region across all attack scenarios, demonstrating its ability to preserve classification performance on clean images while significantly mitigating adversarial attacks; this visualization confirms that the method is effective and reliable for defending against adversarial perturbations. Moreover, the proposed method incurs minimal computational overhead, making it suitable for real-time applications. Furthermore, owing to its model-agnostic nature, it can be easily incorporated into various neural network architectures, serving as a fundamental module for adversarial defense strategies.
Keywords: adversarial attacks; deep learning; artificial intelligence systems; random cropping; Gaussian filtering; image smoothing
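The preprocessing pipeline described above is concrete enough to sketch directly. The snippet below follows the abstract (random crop, zero-padding to the center of a 299 × 299 canvas, 7 × 7 Gaussian kernel with a standard deviation of two), while the crop ratio and the HWC array input format are assumptions.

```python
import random
import numpy as np
from scipy.ndimage import gaussian_filter

def crop_pad_smooth(img, canvas=299, min_keep=0.8, sigma=2.0):
    """Randomized crop -> zero-pad to canvas center -> Gaussian smoothing.
    Assumes img is an HWC array no larger than canvas x canvas."""
    h, w, c = img.shape
    ch, cw = random.randint(int(min_keep * h), h), random.randint(int(min_keep * w), w)
    top, left = random.randint(0, h - ch), random.randint(0, w - cw)
    crop = img[top:top + ch, left:left + cw]
    out = np.zeros((canvas, canvas, c), dtype=img.dtype)
    oy, ox = (canvas - ch) // 2, (canvas - cw) // 2
    out[oy:oy + ch, ox:ox + cw] = crop
    # truncate=1.5 with sigma=2 gives a 7x7 kernel; the channel axis is left unsmoothed.
    return gaussian_filter(out, sigma=(sigma, sigma, 0), truncate=1.5)
```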
18. Research on Emotion Classification Supported by Multimodal Adversarial Autoencoder
Author: Jing Yu. Journal of Electronic Research and Application, 2025, No. 1, pp. 270-275.
This paper studies a sentiment classification method based on a multimodal adversarial autoencoder. It introduces the multimodal adversarial autoencoder approach to emotion classification and reports experiments on an encoder-based emotion classification method. The experimental analysis shows that this encoder achieves higher precision in emotion classification than other encoders. It is hoped that this analysis can provide a reference for emotion classification under current intelligent-algorithm approaches.
Keywords: artificial intelligence; multimodal adversarial encoder; sentiment classification; evaluation criteria; modal settings
19. Bridging the Gap Between Individual and Universal Adversarial Perturbations
Authors: Li Yanchun, Li Zemin, Zeng Li, Zhu Jiang, Song Jingkuan. China Communications, 2025, No. 9, pp. 244-263.
In recent years, universal adversarial perturbation (UAP) has attracted the attention of many researchers due to its good generalization. However, in order to generate an appropriate UAP, current methods usually require either access to the original dataset or meticulously constructed optimization functions and proxy datasets. In this paper, we aim to eliminate any dependency on proxy datasets and explore a method for generating universal adversarial perturbations on a single image. After revisiting research on UAP, we discovered that the key to generating a UAP lies in the accumulation of individual adversarial perturbation (IAP) gradients, which prompted us to study how to accumulate gradients from an IAP. We designed a simple and effective process to generate a UAP that includes only three steps: preprocessing, generating an IAP, and scaling the perturbation. Through the proposed process, any IAP generated on an image can be turned into a UAP with comparable performance, indicating that UAPs can be generated free of data. Extensive experiments on various classifiers and attack approaches demonstrate the superiority of our method in efficiency and aggressiveness.
Keywords: black-box attack; data-independent; transferability; universal adversarial perturbation
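A simplified sketch of this recipe (the paper's preprocessing step is omitted, and the budget, step size, and iteration count are assumptions): generate an IAP on one image by iterative signed-gradient ascent, then rescale it to the universal L-infinity budget so it can be added to arbitrary inputs.

```python
import torch
import torch.nn.functional as F

def iap_to_uap(model, x, y, eps=10 / 255, alpha=1 / 255, steps=20):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):                       # iterative IAP on a single image
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    # Scaling step: stretch the accumulated IAP to the full universal budget.
    uap = eps * delta.detach() / delta.detach().abs().max().clamp(min=1e-12)
    return uap   # apply as (any_image + uap).clamp(0, 1)
```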
20. Autonomous Cyber-Physical System for Anomaly Detection and Attack Prevention Using Transformer-Based Attention Generative Adversarial Residual Network
Authors: Abrar M. Alajlan, Marwah M. Almasri. Computers, Materials & Continua, 2025, No. 12, pp. 5237-5262.
Cyber-physical systems integrated with information technologies introduce vulnerabilities that extend beyond traditional cyber threats: attackers can non-invasively manipulate sensors and spoof controllers, which puts the autonomy of the system at risk. Even as the focus on protecting against sensor attacks increases, there is still uncertainty about the optimal timing for attack detection, and existing systems often struggle to manage the trade-off between latency and false-alarm rate, leading to inefficiencies in real-time anomaly detection. This paper presents a framework designed to monitor, predict, and control dynamic systems, with a particular emphasis on detecting and adapting to changes, including anomalies such as drift and attacks. The proposed algorithm integrates a Transformer-based attention generative adversarial residual model, which combines the strengths of generative adversarial networks, residual networks, and attention mechanisms. The system operates in two phases, offline and online. During the offline phase, the proposed model is trained to learn complex patterns, enabling robust anomaly detection. The online phase applies the trained model: the drift adapter adjusts the model to handle data changes, and the attack detector identifies deviations by comparing predicted and actual values. Based on the output of the attack detector, the controller makes decisions and the actuator executes suitable actions. Experimental findings show that the proposed model achieves a detection accuracy of 99.25%, precision of 98.84%, sensitivity of 99.10%, specificity of 98.81%, and an F1-score of 98.96%, providing an effective solution for dynamic and safety-critical environments.
Keywords: cyber-physical systems; cyber threats; generative adversarial networks; residual networks; attention algorithms
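The attack detector's core comparison of predicted versus actual sensor values can be expressed as a residual test such as the sketch below (the z-score rule and threshold are generic assumptions, not the paper's decision logic).

```python
import numpy as np

def detect_attack(predicted, actual, clean_residuals, threshold=3.0):
    """Flag readings whose prediction error is unusually large relative to
    residuals collected under attack-free operation (simple z-score rule)."""
    mu, sigma = clean_residuals.mean(), clean_residuals.std() + 1e-8
    z = np.abs((actual - predicted) - mu) / sigma
    return z > threshold   # True where an attack or drift is suspected
```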