Large Language Models(LLMs)have significantly advanced human-computer interaction by improving natural language understanding and generation.However,their vulnerability to adversarial prompts–carefully designed input...Large Language Models(LLMs)have significantly advanced human-computer interaction by improving natural language understanding and generation.However,their vulnerability to adversarial prompts–carefully designed inputs that manipulate model outputs–presents substantial challenges.This paper introduces a classification-based approach to detect adversarial prompts by utilizing both prompt features and prompt response features.Elevenmachine learning models were evaluated based on key metrics such as accuracy,precision,recall,and F1-score.The results show that the Convolutional Neural Network–Long Short-Term Memory(CNN-LSTM)cascade model delivers the best performance,especially when using prompt features,achieving an accuracy of over 97%in all adversarial scenarios.Furthermore,the Support Vector Machine(SVM)model performed best with prompt response features,particularly excelling in prompt type classification tasks.Classification results revealed that certain types of adversarial attacks,such as“Word Level”and“Adversarial Prefix”,were particularly difficult to detect,as indicated by their low recall and F1-scores.These findings suggest that more subtle manipulations can evade detection mechanisms.In contrast,attacks like“Sentence Level”and“Adversarial Insertion”were easier to identify,due to the model’s effectiveness in recognizing inserted content.Natural Language Processing(NLP)techniques played a critical role by enabling the extraction of semantic and syntactic features from both prompts and their corresponding responses.These insights highlight the importance of combining traditional and deep learning approaches,along with advanced NLP techniques,to build more reliable adversarial prompt detection systems for LLMs.展开更多
With the rapid development of intelligent video surveillance technology,pedestrian re-identification has become increasingly important inmulti-camera surveillance systems.This technology plays a critical role in enhan...With the rapid development of intelligent video surveillance technology,pedestrian re-identification has become increasingly important inmulti-camera surveillance systems.This technology plays a critical role in enhancing public safety.However,traditional methods typically process images and text separately,applying upstream models directly to downstream tasks.This approach significantly increases the complexity ofmodel training and computational costs.Furthermore,the common class imbalance in existing training datasets limitsmodel performance improvement.To address these challenges,we propose an innovative framework named Person Re-ID Network Based on Visual Prompt Technology andMulti-Instance Negative Pooling(VPM-Net).First,we incorporate the Contrastive Language-Image Pre-training(CLIP)pre-trained model to accurately map visual and textual features into a unified embedding space,effectively mitigating inconsistencies in data distribution and the training process.To enhancemodel adaptability and generalization,we introduce an efficient and task-specific Visual Prompt Tuning(VPT)technique,which improves the model’s relevance to specific tasks.Additionally,we design two key modules:the Knowledge-Aware Network(KAN)and theMulti-Instance Negative Pooling(MINP)module.The KAN module significantly enhances the model’s understanding of complex scenarios through deep contextual semantic modeling.MINP module handles samples,effectively improving the model’s ability to distinguish fine-grained features.The experimental outcomes across diverse datasets underscore the remarkable performance of VPM-Net.These results vividly demonstrate the unique advantages and robust reliability of VPM-Net in fine-grained retrieval tasks.展开更多
The goal of infrared and visible image fusion(IVIF)is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene.However,existing methods struggle to effectively han...The goal of infrared and visible image fusion(IVIF)is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene.However,existing methods struggle to effectively handle modal disparities,resulting in visual degradation of the details and prominent targets of the fused images.To address these challenges,we introduce Prompt Fusion,a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts.Firstly,to better characterize the features of different modalities,a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of different modalities,thereby improving the extraction of fine details and textures.We also introduce a prompt learning mechanism using positive and negative prompts,leveraging Vision-Language Models to improve the fusion model's understanding and identification of targets in multi-modality images,leading to improved performance in downstream tasks.Furthermore,we employ bi-level asymptotic convergence optimization.This approach simplifies the intricate non-singleton non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively resolved through gradient descent.Our approach advances the state-of-the-art,delivering superior fusion quality and boosting the performance of related downstream tasks.Project page:https://github.com/hey-it-s-me/PromptFusion.展开更多
Prompt fission neutron spectra(PFNS)have a significant role in nuclear science and technology.In this study,the PFNS for^(239)Pu are evaluated using both differential and integral experimental data.A method that lever...Prompt fission neutron spectra(PFNS)have a significant role in nuclear science and technology.In this study,the PFNS for^(239)Pu are evaluated using both differential and integral experimental data.A method that leverages integral criticality benchmark experiments to constrain the PFNS data is introduced.The measured central values of the PFNS are perturbed by constructing a covariance matrix.The PFNS are sampled using two types of covariance matrices,either generated with an assumed correlation matrix and incorporating experimental uncertainties or derived directly from experimental reports.The joint Monte Carlo transport code is employed to perform transport simulations on five criticality benchmark assemblies by utilizing perturbed PFNS data.Extensive simulations result in an optimized PFNS that shows improved agreement with the integral criticality benchmark experiments.This study introduces a novel approach for optimizing differential experimental data through integral experiments,particularly when a covariance matrix is not provided.展开更多
Dialogue State Tracking(DST)is a critical component of task-oriented spoken dialogue systems(SDS),tasked with maintaining an accurate representation of the conversational state by predicting slots and their correspond...Dialogue State Tracking(DST)is a critical component of task-oriented spoken dialogue systems(SDS),tasked with maintaining an accurate representation of the conversational state by predicting slots and their corresponding values.Recent advances leverage Large Language Models(LLMs)with prompt-based tuning to improve tracking accuracy and efficiency.However,these approaches often incur substantial computational and memory overheads and typically address slot extraction implicitly within prompts,without explicitly modeling the complex dependencies between slots and values.In this work,we propose PUGG,a novel DST framework that constructs schema-driven prompts to fine-tune GPT-2 and utilizes its tokenizer to implement a memory encoder.PUGG explicitly extracts slot values via GPT-2 and employs Graph Attention Networks(GATs)to model and reason over the intricate relationships between slots and their associated values.We evaluate PUGG on four publicly available datasets,where it achieves stateof-the-art performance across multiple evaluation metrics,highlighting its robustness and generalizability in diverse conversational scenarios.Our results indicate that the integration of GPT-2 substantially reduces model complexity and memory consumption by streamlining key processes.Moreover,prompt tuning enhances the model’s flexibility and precision in extracting relevant slot-value pairs,while the incorporation of GATs facilitates effective relational reasoning,leading to improved dialogue state representations.展开更多
Large language models(LLMs)have demonstrated remarkable generalization abilities across multiple tasks in natural language processing(NLP).For multi-step reasoning tasks,chain-of-thought(CoT)prompting facilitates step...Large language models(LLMs)have demonstrated remarkable generalization abilities across multiple tasks in natural language processing(NLP).For multi-step reasoning tasks,chain-of-thought(CoT)prompting facilitates step-by-step thinking,leading to improved performance.However,despite significant advancements in LLMs,current CoT prompting performs suboptimally on smaller-scale models that have fewer parameters.Additionally,the common paradigm of few-shot CoT prompting relies on a set of manual demonstrations,with performance contingent on the quality of these annotations and varying with task-specific requirements.To address these limitations,we propose a select-and-answer prompting method(SAP)to enhance language model performance on reasoning tasks without the need for manual demonstrations.This method comprises two primary steps:guiding the model to conduct preliminary analysis and generate several candidate answers based on the prompting;allowing the model to provide final answers derived from these candidate answers.The proposed prompting strategy is evaluated across two language models of varying sizes and six datasets.On ChatGLM-6B,SAP consistently outperforms few-shot CoT across all datasets.For GPT-3.5,SAP achieves comparable performance to few-shot CoT and outperforms zero-shot CoT in most cases.These experimental results indicate that SAP can significantly improve the accuracy of language models in reasoning tasks.展开更多
Geological reports are a significant accomplishment for geologists involved in geological investigations and scientific research as they contain rich data and textual information.With the rapid development of science ...Geological reports are a significant accomplishment for geologists involved in geological investigations and scientific research as they contain rich data and textual information.With the rapid development of science and technology,a large number of textual reports have accumulated in the field of geology.However,many non-hot topics and non-English speaking regions are neglected in mainstream geoscience databases for geological information mining,making it more challenging for some researchers to extract necessary information from these texts.Natural Language Processing(NLP)has obvious advantages in processing large amounts of textual data.The objective of this paper is to identify geological named entities from Chinese geological texts using NLP techniques.We propose the RoBERTa-Prompt-Tuning-NER method,which leverages the concept of Prompt Learning and requires only a small amount of annotated data to train superior models for recognizing geological named entities in low-resource dataset configurations.The RoBERTa layer captures context-based information and longer-distance dependencies through dynamic word vectors.Finally,we conducted experiments on the constructed Geological Named Entity Recognition(GNER)dataset.Our experimental results show that the proposed model achieves the highest F1 score of 80.64%among the four baseline algorithms,demonstrating the reliability and robustness of using the model for Named Entity Recognition of geological texts.展开更多
Prompt engineering, the art of crafting effective prompts for artificial intelligence models, has emerged as a pivotal factor in determining the quality and usefulness of AI (Artificial Intelligence)-generated outputs...Prompt engineering, the art of crafting effective prompts for artificial intelligence models, has emerged as a pivotal factor in determining the quality and usefulness of AI (Artificial Intelligence)-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks that were once thought to be exclusively human domain. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model’s capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model’s training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.展开更多
With recent advancements in robotic surgery,notable strides have been made in visual question answering(VQA).Existing VQA systems typically generate textual answers to questions but fail to indicate the location of th...With recent advancements in robotic surgery,notable strides have been made in visual question answering(VQA).Existing VQA systems typically generate textual answers to questions but fail to indicate the location of the relevant content within the image.This limitation restricts the interpretative capacity of the VQA models and their abil-ity to explore specific image regions.To address this issue,this study proposes a grounded VQA model for robotic surgery,capable of localizing a specific region during answer prediction.Drawing inspiration from prompt learning in language models,a dual-modality prompt model was developed to enhance precise multimodal information interactions.Specifically,two complementary prompters were introduced to effectively integrate visual and textual prompts into the encoding process of the model.A visual complementary prompter merges visual prompt knowl-edge with visual information features to guide accurate localization.The textual complementary prompter aligns vis-ual information with textual prompt knowledge and textual information,guiding textual information towards a more accurate inference of the answer.Additionally,a multiple iterative fusion strategy was adopted for comprehensive answer reasoning,to ensure high-quality generation of textual and grounded answers.The experimental results vali-date the effectiveness of the model,demonstrating its superiority over existing methods on the EndoVis-18 and End-oVis-17 datasets.展开更多
文摘Large Language Models(LLMs)have significantly advanced human-computer interaction by improving natural language understanding and generation.However,their vulnerability to adversarial prompts–carefully designed inputs that manipulate model outputs–presents substantial challenges.This paper introduces a classification-based approach to detect adversarial prompts by utilizing both prompt features and prompt response features.Elevenmachine learning models were evaluated based on key metrics such as accuracy,precision,recall,and F1-score.The results show that the Convolutional Neural Network–Long Short-Term Memory(CNN-LSTM)cascade model delivers the best performance,especially when using prompt features,achieving an accuracy of over 97%in all adversarial scenarios.Furthermore,the Support Vector Machine(SVM)model performed best with prompt response features,particularly excelling in prompt type classification tasks.Classification results revealed that certain types of adversarial attacks,such as“Word Level”and“Adversarial Prefix”,were particularly difficult to detect,as indicated by their low recall and F1-scores.These findings suggest that more subtle manipulations can evade detection mechanisms.In contrast,attacks like“Sentence Level”and“Adversarial Insertion”were easier to identify,due to the model’s effectiveness in recognizing inserted content.Natural Language Processing(NLP)techniques played a critical role by enabling the extraction of semantic and syntactic features from both prompts and their corresponding responses.These insights highlight the importance of combining traditional and deep learning approaches,along with advanced NLP techniques,to build more reliable adversarial prompt detection systems for LLMs.
基金funded by the Key Research and Development Program of Hubei Province,China(Grant No.2023BEB024)the Young and Middle-aged Scientific and Technological Innova-tion Team Plan in Higher Education Institutions inHubei Province,China(GrantNo.T2023007)the key projects ofHubei Provincial Department of Education(No.D20161403).
文摘With the rapid development of intelligent video surveillance technology,pedestrian re-identification has become increasingly important inmulti-camera surveillance systems.This technology plays a critical role in enhancing public safety.However,traditional methods typically process images and text separately,applying upstream models directly to downstream tasks.This approach significantly increases the complexity ofmodel training and computational costs.Furthermore,the common class imbalance in existing training datasets limitsmodel performance improvement.To address these challenges,we propose an innovative framework named Person Re-ID Network Based on Visual Prompt Technology andMulti-Instance Negative Pooling(VPM-Net).First,we incorporate the Contrastive Language-Image Pre-training(CLIP)pre-trained model to accurately map visual and textual features into a unified embedding space,effectively mitigating inconsistencies in data distribution and the training process.To enhancemodel adaptability and generalization,we introduce an efficient and task-specific Visual Prompt Tuning(VPT)technique,which improves the model’s relevance to specific tasks.Additionally,we design two key modules:the Knowledge-Aware Network(KAN)and theMulti-Instance Negative Pooling(MINP)module.The KAN module significantly enhances the model’s understanding of complex scenarios through deep contextual semantic modeling.MINP module handles samples,effectively improving the model’s ability to distinguish fine-grained features.The experimental outcomes across diverse datasets underscore the remarkable performance of VPM-Net.These results vividly demonstrate the unique advantages and robust reliability of VPM-Net in fine-grained retrieval tasks.
基金partially supported by China Postdoctoral Science Foundation(2023M730741)the National Natural Science Foundation of China(U22B2052,52102432,52202452,62372080,62302078)
文摘The goal of infrared and visible image fusion(IVIF)is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene.However,existing methods struggle to effectively handle modal disparities,resulting in visual degradation of the details and prominent targets of the fused images.To address these challenges,we introduce Prompt Fusion,a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts.Firstly,to better characterize the features of different modalities,a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of different modalities,thereby improving the extraction of fine details and textures.We also introduce a prompt learning mechanism using positive and negative prompts,leveraging Vision-Language Models to improve the fusion model's understanding and identification of targets in multi-modality images,leading to improved performance in downstream tasks.Furthermore,we employ bi-level asymptotic convergence optimization.This approach simplifies the intricate non-singleton non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively resolved through gradient descent.Our approach advances the state-of-the-art,delivering superior fusion quality and boosting the performance of related downstream tasks.Project page:https://github.com/hey-it-s-me/PromptFusion.
基金supported by the National Natural Science Foundation of China(No.12347126)。
文摘Prompt fission neutron spectra(PFNS)have a significant role in nuclear science and technology.In this study,the PFNS for^(239)Pu are evaluated using both differential and integral experimental data.A method that leverages integral criticality benchmark experiments to constrain the PFNS data is introduced.The measured central values of the PFNS are perturbed by constructing a covariance matrix.The PFNS are sampled using two types of covariance matrices,either generated with an assumed correlation matrix and incorporating experimental uncertainties or derived directly from experimental reports.The joint Monte Carlo transport code is employed to perform transport simulations on five criticality benchmark assemblies by utilizing perturbed PFNS data.Extensive simulations result in an optimized PFNS that shows improved agreement with the integral criticality benchmark experiments.This study introduces a novel approach for optimizing differential experimental data through integral experiments,particularly when a covariance matrix is not provided.
基金supported by the MSIT(Ministry of Science and ICT),Republic of Korea,under the ITRC(Information Technology Research Centre)support program(IITP-2024-RS-2024-00437191)supervised by the IITP(Institute for Information&Communications Technology Planning&Evaluation).
文摘Dialogue State Tracking(DST)is a critical component of task-oriented spoken dialogue systems(SDS),tasked with maintaining an accurate representation of the conversational state by predicting slots and their corresponding values.Recent advances leverage Large Language Models(LLMs)with prompt-based tuning to improve tracking accuracy and efficiency.However,these approaches often incur substantial computational and memory overheads and typically address slot extraction implicitly within prompts,without explicitly modeling the complex dependencies between slots and values.In this work,we propose PUGG,a novel DST framework that constructs schema-driven prompts to fine-tune GPT-2 and utilizes its tokenizer to implement a memory encoder.PUGG explicitly extracts slot values via GPT-2 and employs Graph Attention Networks(GATs)to model and reason over the intricate relationships between slots and their associated values.We evaluate PUGG on four publicly available datasets,where it achieves stateof-the-art performance across multiple evaluation metrics,highlighting its robustness and generalizability in diverse conversational scenarios.Our results indicate that the integration of GPT-2 substantially reduces model complexity and memory consumption by streamlining key processes.Moreover,prompt tuning enhances the model’s flexibility and precision in extracting relevant slot-value pairs,while the incorporation of GATs facilitates effective relational reasoning,leading to improved dialogue state representations.
基金National Natural Science Foundation of China(No.62176052)。
文摘Large language models(LLMs)have demonstrated remarkable generalization abilities across multiple tasks in natural language processing(NLP).For multi-step reasoning tasks,chain-of-thought(CoT)prompting facilitates step-by-step thinking,leading to improved performance.However,despite significant advancements in LLMs,current CoT prompting performs suboptimally on smaller-scale models that have fewer parameters.Additionally,the common paradigm of few-shot CoT prompting relies on a set of manual demonstrations,with performance contingent on the quality of these annotations and varying with task-specific requirements.To address these limitations,we propose a select-and-answer prompting method(SAP)to enhance language model performance on reasoning tasks without the need for manual demonstrations.This method comprises two primary steps:guiding the model to conduct preliminary analysis and generate several candidate answers based on the prompting;allowing the model to provide final answers derived from these candidate answers.The proposed prompting strategy is evaluated across two language models of varying sizes and six datasets.On ChatGLM-6B,SAP consistently outperforms few-shot CoT across all datasets.For GPT-3.5,SAP achieves comparable performance to few-shot CoT and outperforms zero-shot CoT in most cases.These experimental results indicate that SAP can significantly improve the accuracy of language models in reasoning tasks.
基金supported by the National Natural Science Foundation of China(Nos.42488201,42172137,42050104,and 42050102)the National Key R&D Program of China(No.2023YFF0804000)Sichuan Provincial Youth Science&Technology Innovative Research Group Fund(No.2022JDTD0004)
文摘Geological reports are a significant accomplishment for geologists involved in geological investigations and scientific research as they contain rich data and textual information.With the rapid development of science and technology,a large number of textual reports have accumulated in the field of geology.However,many non-hot topics and non-English speaking regions are neglected in mainstream geoscience databases for geological information mining,making it more challenging for some researchers to extract necessary information from these texts.Natural Language Processing(NLP)has obvious advantages in processing large amounts of textual data.The objective of this paper is to identify geological named entities from Chinese geological texts using NLP techniques.We propose the RoBERTa-Prompt-Tuning-NER method,which leverages the concept of Prompt Learning and requires only a small amount of annotated data to train superior models for recognizing geological named entities in low-resource dataset configurations.The RoBERTa layer captures context-based information and longer-distance dependencies through dynamic word vectors.Finally,we conducted experiments on the constructed Geological Named Entity Recognition(GNER)dataset.Our experimental results show that the proposed model achieves the highest F1 score of 80.64%among the four baseline algorithms,demonstrating the reliability and robustness of using the model for Named Entity Recognition of geological texts.
文摘Prompt engineering, the art of crafting effective prompts for artificial intelligence models, has emerged as a pivotal factor in determining the quality and usefulness of AI (Artificial Intelligence)-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks that were once thought to be exclusively human domain. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model’s capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model’s training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.
基金supported in part by the National Key Research and Development Program of China,No.2021ZD0112400National Natural Science Foundation of China,No.U1908214+5 种基金Program for Innovative Research Team at the University of Liaoning Province,No.LT2020015the Support Plan for Key Field Innovation Team of Dalian,No.2021RT06the Support Plan for Leading Innovation Team of Dalian University,No.XLJ202010Program for the Liaoning Province Doctoral Research Starting Fund,No.2022-BS-336Key Laboratory of Advanced Design and Intelligent Computing(Dalian University),and Ministry of Education,No.ADIC2022003Interdisciplinary Project of Dalian University,No.DLUXK-2023-QN-015.
文摘With recent advancements in robotic surgery,notable strides have been made in visual question answering(VQA).Existing VQA systems typically generate textual answers to questions but fail to indicate the location of the relevant content within the image.This limitation restricts the interpretative capacity of the VQA models and their abil-ity to explore specific image regions.To address this issue,this study proposes a grounded VQA model for robotic surgery,capable of localizing a specific region during answer prediction.Drawing inspiration from prompt learning in language models,a dual-modality prompt model was developed to enhance precise multimodal information interactions.Specifically,two complementary prompters were introduced to effectively integrate visual and textual prompts into the encoding process of the model.A visual complementary prompter merges visual prompt knowl-edge with visual information features to guide accurate localization.The textual complementary prompter aligns vis-ual information with textual prompt knowledge and textual information,guiding textual information towards a more accurate inference of the answer.Additionally,a multiple iterative fusion strategy was adopted for comprehensive answer reasoning,to ensure high-quality generation of textual and grounded answers.The experimental results vali-date the effectiveness of the model,demonstrating its superiority over existing methods on the EndoVis-18 and End-oVis-17 datasets.