Journal Articles
9 articles found
1. Redefining the Programmer: Human-AI Collaboration, LLMs, and Security in Modern Software Engineering
Authors: Elyson De La Cruz, Hanh Le, Karthik Meduri, Geeta Sandeep Nadella, Hari Gonaygunta. Computers, Materials & Continua, 2025, Issue 11, pp. 3569-3582 (14 pages).
The rapid integration of artificial intelligence (AI) into software development, driven by large language models (LLMs), is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems. This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology (IT) professionals as they navigate a dynamic technological landscape marked by intelligent automation, shifting professional identities, and emerging ethical concerns. Findings indicate that developers are actively adapting to AI-augmented environments by engaging in continuous upskilling, prompt engineering, interdisciplinary collaboration, and heightened ethical awareness. However, participants also voiced growing concerns about the reliability and security of AI-generated code, noting that these tools can introduce hidden vulnerabilities and reduce critical engagement due to automation bias. Many described instances of flawed logic, insecure patterns, or syntactically correct but contextually inappropriate suggestions, underscoring the need for rigorous human oversight. Additionally, the study reveals anxieties around job displacement and the gradual erosion of fundamental coding skills, particularly in environments where AI tools dominate routine development tasks. These findings highlight an urgent need for educational reforms, industry standards, and organizational policies that prioritize both technical robustness and the preservation of human expertise. As AI becomes increasingly embedded in software engineering workflows, this research offers timely insights into how developers and organizations can responsibly integrate intelligent systems to promote accountability, resilience, and innovation across the software development lifecycle.
Keywords: human-AI collaboration, large language models, AI security, developer identity, ethical AI in software development, AI-assisted programming
2. Adaptive Backdoor Attack against Deep Neural Networks (Cited by 1)
Authors: Honglu He, Zhiying Zhu, Xinpeng Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 9, pp. 2617-2633 (17 pages).
In recent years, the number of parameters of deep neural networks (DNNs) has been increasing rapidly. The training of DNNs is typically computation-intensive. As a result, many users leverage cloud computing and outsource their training procedures. Outsourced computation carries a potential risk known as a backdoor attack, in which a well-trained DNN performs abnormally on inputs bearing a certain trigger. Backdoor attacks can also be classified as attacks that exploit fake images. However, most backdoor attacks design a uniform trigger for all images, which can be easily detected and removed. In this paper, we propose a novel adaptive backdoor attack. We overcome this defect and design a generator that assigns a unique trigger to each image depending on its texture. To achieve this goal, we use a texture complexity metric to create a special mask for each image, which forces the trigger to be embedded into the rich-texture regions. Because the trigger is distributed across texture regions, it is invisible to humans. Besides the stealthiness of triggers, we limit the range of modification of backdoor models to evade detection. Experiments show that our method is effective on multiple datasets, and traditional detectors cannot reveal the existence of the backdoor.
Keywords: backdoor attack, AI security, DNN
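As a rough illustration of the core idea — confining a per-image trigger to rich-texture regions via a texture-complexity mask — consider the sketch below. The local-variance metric, quantile threshold, and additive blending are illustrative assumptions; the paper trains a generator rather than applying this fixed rule:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_mask(image, window=5, keep_ratio=0.3):
    """Keep only the top `keep_ratio` fraction of pixels by local variance
    (an assumed texture-complexity metric; the paper's may differ)."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    mean = uniform_filter(gray, window)
    sq_mean = uniform_filter(gray ** 2, window)
    variance = sq_mean - mean ** 2                   # local variance per pixel
    thresh = np.quantile(variance, 1.0 - keep_ratio)
    return (variance >= thresh).astype(np.float32)

def embed_trigger(image, trigger, alpha=0.05):
    """Blend a per-image trigger into rich-texture regions only,
    where the human eye is least likely to notice it."""
    mask = texture_mask(image)
    if image.ndim == 3:
        mask = mask[..., None]                       # broadcast over channels
    return np.clip(image + alpha * mask * trigger, 0.0, 1.0)

# Example: poison one normalized 32x32 RGB image with a random trigger.
img = np.random.rand(32, 32, 3).astype(np.float32)
poisoned = embed_trigger(img, np.random.randn(32, 32, 3).astype(np.float32))
```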
3. Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations
Authors: Nouman Ahmad, Changsheng Zhang. Computers, Materials & Continua, 2025, Issue 11, pp. 3321-3334 (14 pages).
Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Rigid rule sets and pattern matching are the foundation of traditional static analysis tools, which drown developers in false positives and miss context-sensitive vulnerabilities. Artificial intelligence (AI) approaches, particularly large language models (LLMs) such as BERT, show promise but frequently lack transparency. To address the issue of model interpretability, this work proposes a BERT-based LLM strategy for vulnerability detection that incorporates Explainable AI (XAI) methods such as SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible decisions, we present a transparency-obligation framework that covers the whole LLM lifecycle. Our experiments on the comprehensive DiverseVul source code dataset show that the proposed method outperforms alternatives, attaining 92.3% detection accuracy and surpassing CodeT5 (89.4%), GPT-3.5 (85.1%), and GPT-4 (88.7%) under the same evaluation scenario. Through integrated SHAP analysis, the method exhibits improved detection capability while preserving explainability, a crucial advantage over black-box LLM alternatives in security contexts. The XAI analysis discovers crucial predictive tokens, such as 'susceptible' and 'function', through the SHAP framework. Furthermore, the local token interactions that support the model's decision-making process are graphically highlighted via attention heatmaps. By effectively fusing high detection accuracy with model explainability, this method provides a workable solution for reliable vulnerability identification in software systems. Our findings imply that transparent AI models can successfully detect security flaws while preserving interpretability for human analysts.
Keywords: attention mechanisms, CodeBERT, explainable AI (XAI) for security, large language model (LLM), trustworthy AI, vulnerability detection
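The explanation step — attributing each detection back to individual code tokens — follows the standard SHAP-on-transformers pattern sketched below. The checkpoint name is a hypothetical placeholder for a BERT model fine-tuned on DiverseVul; this is an assumed outline, not the authors' exact pipeline:

```python
import shap
from transformers import pipeline

# Hypothetical checkpoint: substitute a BERT classifier fine-tuned to
# label code as vulnerable/non-vulnerable (e.g., trained on DiverseVul).
clf = pipeline(
    "text-classification",
    model="your-org/bert-vuln-detector",  # placeholder name, not a real model
    top_k=None,                           # return scores for every label
)

# shap.Explainer wraps a text pipeline with a token masker automatically.
explainer = shap.Explainer(clf)

snippet = "char buf[8]; strcpy(buf, user_input);"
shap_values = explainer([snippet])

# Per-token attributions: large positive values mark tokens that pushed
# the model toward the "vulnerable" label (cf. the paper's 'susceptible'
# and 'function' tokens).
shap.plots.text(shap_values[0])
```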
4. Adversarial Example Generation Method Based on Sensitive Features
Authors: WEN Zerui, SHEN Zhidong, SUN Hui, QI Baiwen. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2023, Issue 1, pp. 35-44 (10 pages).
As deep learning models have made remarkable strides in numerous fields, a variety of adversarial attack methods have emerged to interfere with them. Adversarial examples apply a minute perturbation to the original image that is imperceptible to humans but produces a massive error in the deep learning model. Existing attack methods achieve good results when the network structure is known. However, when the network structure is unknown, the effectiveness of attacks still needs improvement. Transfer-based attacks are therefore popular for their convenience and practicality, allowing adversarial samples generated on known models to be used in attacks on unknown models. In this paper, we extract sensitive features with Grad-CAM and propose two single-step attack methods and one multi-step attack method that corrupt these sensitive features. Of the two single-step attacks, one corrupts features extracted from a single model and the other corrupts features extracted from multiple models. In the multi-step attack, our method improves an existing attack method, enhancing adversarial sample transferability to achieve better results on unknown models. Our method is also validated on CIFAR-10 and MNIST, achieving a 1%-3% improvement in transferability.
Keywords: deep learning model, adversarial example, transferability, sensitive characteristics, AI security
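The single-model, single-step variant can be pictured as an FGSM-style update confined to the regions Grad-CAM marks as sensitive. The PyTorch sketch below uses a generic ResNet, an assumed feature layer, and assumed hyperparameters; it illustrates the idea rather than reproducing the authors' exact algorithm:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()    # stand-in for the model under attack

feats, grads = {}, {}
layer = model.layer4                     # assumed feature layer for Grad-CAM
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def sensitive_feature_attack(x, label, eps=8 / 255, keep_ratio=0.3):
    """One FGSM-style step restricted to Grad-CAM-salient pixels."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()

    # Grad-CAM: channel weights = spatially pooled gradients at `layer`.
    w = grads["a"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")

    # Keep only the most sensitive `keep_ratio` fraction of pixels.
    thresh = torch.quantile(cam.flatten(1), 1 - keep_ratio, dim=1)
    mask = (cam >= thresh.view(-1, 1, 1, 1)).float()

    # Corrupt only the features the model actually relies on.
    return (x + eps * mask * x.grad.sign()).clamp(0, 1).detach()

adv = sensitive_feature_attack(torch.rand(1, 3, 224, 224), torch.tensor([0]))
```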
5. Artificial Intelligence in Healthcare: A Fusion of Technologies
Authors: Eric Ayintareba Akolgo, Dennis Redeemer Korda, Emmanuel Oteng Dapaah. Journal of Computer and Communications, 2024, Issue 12, pp. 116-133 (18 pages).
Purpose: This study examines the transformative impact of artificial intelligence (AI) in healthcare, focusing on its applications in medical diagnosis, drug discovery, surgery, and disease management while addressing ethical, technological, and social concerns. Method: A comprehensive literature review synthesizes research on AI applications, including AI-assisted diagnosis, drug discovery, robot-assisted surgery, stroke management, and artificial neurons. Findings: AI has enabled significant breakthroughs in healthcare, enhancing outcomes in diagnostics, personalized treatments, and surgical procedures. Despite its promise, challenges such as privacy, safety, and equitable access remain critical concerns. Research Limitations: The study relies on existing literature and lacks empirical validation of AI models, with its scope limited by the rapid evolution of AI technologies. Social Implications: The integration of AI raises concerns about privacy, patient rights, and equitable access, particularly in underserved regions, potentially exacerbating healthcare disparities. Practical Implications: The study urges healthcare practitioners to adopt AI tools for improved diagnostics and treatments while advocating for regulatory frameworks to ensure ethical and safe AI integration. Originality: This study offers a comprehensive review of AI's transformative role in healthcare, emphasizing ethical considerations and providing actionable insights for researchers and practitioners.
Keywords: machine learning, medical research, robot-assisted surgery, artificial neurons, AI ethics, AI security, AI-assisted medical diagnosis, drug discovery
6. A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks
Authors: Hong Huang, Yunfei Wang, Guotao Yuan, Xin Li. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 361-387 (27 pages).
Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image classification tasks to promote the development of more secure DNNs. Research on backdoor attacks currently faces significant challenges: the distinct and abnormal data patterns of malicious samples, and developers' meticulous data screening, hinder practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. The approach restricts the direction of perturbations and normalizes abnormal pixel values, ensuring that perturbations progress as far as possible in a direction perpendicular to the decision hyperplane in linear problems. This limits anomalies within the perturbations, improves their visual stealthiness, and makes them more challenging for defense methods to detect. To verify the effectiveness, stealthiness, and robustness of GN-TUAP, we propose a comprehensive threat model. Based on this model, extensive experiments were conducted on the CIFAR-10, CIFAR-100, GTSRB, and MNIST datasets, comparing our method with existing state-of-the-art attack methods. We also tested our perturbation triggers against various defense methods and further examined the robustness of the triggers against noise-filtering techniques. The experimental outcomes demonstrate that backdoor attacks leveraging perturbations generated by our algorithm exhibit cross-model attack effectiveness and superior stealthiness. Furthermore, they possess robust anti-detection capabilities and maintain commendable performance when subjected to noise-filtering methods.
Keywords: image classification model, backdoor attack, Gaussian distribution, artificial intelligence (AI) security
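At its core, the method trains a single targeted universal perturbation, initialized from Gaussian noise and repeatedly projected so its pixel statistics stay noise-like. The loop below is a simplified approximation under assumed hyperparameters; the published algorithm's direction constraint and pixel normalization are more elaborate:

```python
import torch
import torch.nn.functional as F

def gn_tuap(model, loader, target, eps=10 / 255, epochs=5, lr=0.01):
    """Gaussian-initialized targeted universal perturbation (simplified).
    `model` is the victim classifier, `loader` yields clean CIFAR-style
    batches, and `target` is the backdoor label; details are assumptions."""
    delta = 0.5 * eps * torch.randn(1, 3, 32, 32)   # Gaussian-noise init
    delta.requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            y = torch.full((x.size(0),), target, dtype=torch.long)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                # Projection step: clip abnormal pixel values so the
                # trigger stays bounded and visually noise-like.
                delta.clamp_(-eps, eps)
    # The returned `delta` serves as the (near-invisible) universal trigger.
    return delta.detach()
```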
7. FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems
Authors: Kaisheng Fan, Weizhe Zhang, Guangrui Liu, Hui He. Cybersecurity (EI, CSCD), 2024, Issue 1, pp. 110-121 (12 pages).
Intrusion detection systems increasingly use machine learning. While machine learning has shown excellent performance in identifying malicious traffic, it may increase the risk of privacy leakage. This paper focuses on implementing a model stealing attack on intrusion detection systems. Existing model stealing attacks are hard to implement in practical network environments, as they need either private data from the victim dataset or frequent access to the victim model. In this paper, we propose a novel solution called Fast Model Stealing Attack (FMSA) to address this problem, and we highlight the risks of using ML-NIDS in network security. First, meta-learning frameworks are introduced into the model stealing algorithm to clone the victim model in a black-box setting. Then, the number of accesses to the target model is used as an optimization term, resulting in minimal queries to achieve model stealing. Finally, adversarial training is used to simulate the data distribution of the target model and recover private data. Through experiments on multiple public datasets, compared to existing state-of-the-art algorithms, FMSA reduces the number of accesses to the target model while improving the accuracy of the clone model on the test dataset to 88.9% and its similarity to the target model to 90.1%. We demonstrate the successful execution of model stealing attacks on ML-NIDS systems even with protective measures in place to limit the number of anomalous queries.
Keywords: AI security, model stealing attack, network intrusion detection, meta-learning
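Stripped of the meta-learning warm start and adversarial data simulation, the cloning loop reduces to black-box distillation: spend a fixed query budget on the victim NIDS, then fit a clone to the returned labels. The sketch below shows that skeleton; the architecture, budget, and Gaussian surrogate inputs are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def steal_model(victim_query, n_features, n_classes, budget=5000, epochs=200):
    """Clone a black-box classifier from `budget` label queries.
    `victim_query(x) -> class ids` is the only access to the victim;
    FMSA additionally meta-learns an initialization and simulates the
    victim's data distribution to shrink `budget` (not shown here)."""
    x = torch.randn(budget, n_features)          # surrogate query inputs
    with torch.no_grad():
        y = victim_query(x)                      # spend the query budget once
    clone = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                          nn.Linear(128, n_classes))
    opt = torch.optim.Adam(clone.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(clone(x), y).backward()  # fit clone to victim labels
        opt.step()
    return clone

# Toy victim: any callable that returns hard labels works.
victim = lambda x: (x.sum(dim=1) > 0).long()
clone = steal_model(victim, n_features=40, n_classes=2)
```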
8. A survey of practical adversarial example attacks (Cited by 1)
Authors: Lu Sun, Mingtian Tan, Zhe Zhou. Cybersecurity, 2018, Issue 1, pp. 213-221 (9 pages).
Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, which moreover inspired adversaries to exploit this weakness to attack systems employing machine learning. Existing research has covered the methodologies of adversarial example generation, the root cause of the existence of adversarial examples, and some defense schemes. However, practical attacks against real-world systems did not appear until recently, mainly because of the difficulty of injecting an artificially generated example into the model behind the hosting system without breaking its integrity. Recent case studies against face recognition systems and road sign recognition systems finally bridged the gap between theoretical adversarial example generation methodologies and practical attack schemes against real systems. To guide future research on defending against adversarial examples in the real world, we formalize the threat model for practical attacks with adversarial examples and analyze the restrictions and key procedures for launching real-world adversarial example attacks.
Keywords: AI systems security, adversarial examples, attacks
9. VAEFL: Integrating variational autoencoders for privacy preservation and performance retention in federated learning
Authors: Zhixin Li, Yicun Liu, Jiale Li, Guangnan Ye, Hongfeng Chai, Zhihui Lu, Jie Wu. Security and Safety, 2024, Issue 4, pp. 44-60 (17 pages).
Federated Learning (FL) heralds a paradigm shift in the training of artificial intelligence (AI) models by fostering collaborative model training while safeguarding client data privacy. In sectors where data sensitivity and AI model security are of paramount importance, such as fintech and biomedicine, maintaining model utility without compromising privacy is crucial as AI technologies see growing application, so the adoption of FL is attracting significant attention. However, traditional FL methods are susceptible to Deep Leakage from Gradients (DLG) attacks, and typical defensive strategies in current research, such as secure multi-party computation and differential privacy, often incur excessive computational costs or significant decreases in model accuracy. To address DLG attacks in FL, this study introduces VAEFL, an innovative FL framework that incorporates Variational Autoencoders (VAEs) to enhance privacy protection without undermining the predictive power of the models. VAEFL strategically partitions the model into a private encoder and a public decoder. The private encoder, which remains local, transforms sensitive data into a latent space fortified for privacy, while the public decoder and classifier, through collaborative training across clients, learn to derive precise predictions from the encoded data. This bifurcation ensures that sensitive data attributes are not disclosed, circumventing gradient leakage attacks while allowing the global model to benefit from the diverse knowledge of client datasets. Comprehensive experiments demonstrate that VAEFL not only surpasses standard FL benchmarks in privacy preservation but also maintains competitive performance on predictive tasks. VAEFL thus establishes a novel equilibrium between data privacy and model utility, offering a secure and efficient FL approach for sensitive applications such as the financial domain.
Keywords: federated learning, variational autoencoders, deep leakage from gradients, AI model security, privacy preservation
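The architectural split — a VAE encoder kept private on each client, with only the decoder and classifier shared for federated aggregation — can be sketched as follows. Layer sizes and the tabular-feature setup are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PrivateEncoder(nn.Module):
    """Stays on the client: raw features and these weights never leave,
    so DLG-style gradient inversion cannot reach the sensitive inputs."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.hidden = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a privacy-fortified latent code.
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

class PublicHead(nn.Module):
    """Shared decoder + classifier: only these parameters (and their
    gradients) are exchanged during federated aggregation."""
    def __init__(self, latent_dim, in_dim, n_classes):
        super().__init__()
        self.decoder = nn.Linear(latent_dim, in_dim)   # reconstruction path
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                        nn.Linear(64, n_classes))

    def forward(self, z):
        return self.decoder(z), self.classifier(z)

enc, head = PrivateEncoder(in_dim=30, latent_dim=8), PublicHead(8, 30, 2)
recon, logits = head(enc(torch.randn(4, 30)))   # one client-side forward pass
```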