Journal Articles
120,869 articles found
Effects of Exogenous VB6 Treatment on Postharvest Quality of Pak Choi Analyzed by Grey Relational Analysis
1
Authors: 徐畅, 程晓悦, 何雪, 任凯, 李鹏霞, 周鑫, 胡花丽. 《食品科学》 (Food Science), PKU Core Journal, 2026, Issue 2, pp. 270-278 (9 pages)
Using pak choi (Brassica rapa subsp. chinensis) as the test material, this study applied soaking treatments with different mass concentrations of VB6 and examined their effects on postharvest quality during storage at (15±1) °C; grey relational analysis was then used to comprehensively evaluate the effectiveness of each VB6 concentration. The results showed that the 200 mg/L VB6 group had the lowest yellowing index and significantly delayed the decline in chlorophyll content: after 8 d of storage, its total chlorophyll content was 44.9% higher than that of the control. The treatment effectively maintained levels of antioxidants such as ascorbic acid (up 21.6%) and total phenolics (up 14.9%), significantly suppressed the accumulation of malondialdehyde (down 36.4%) and nitrite (down 24.3%), and enhanced catalase, peroxidase, and superoxide dismutase activities as well as 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging capacity and total reducing power. Grey relational analysis gave this treatment the highest comprehensive score (0.776), confirming that 200 mg/L VB6 can synergistically delay postharvest senescence of pak choi through multiple physiological pathways. This work provides theoretical support for developing new preservation technologies.
Keywords: pak choi, VB6, grey relational analysis, quality
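The grey relational analysis behind the comprehensive score above can be sketched as follows. This is an illustrative implementation, not the authors' code: the distinguishing coefficient ρ = 0.5 is the conventional default, and the reference series and indicator values used below are invented.

```python
# Minimal grey relational analysis (GRA) sketch.
# `reference` is the ideal (reference) series of normalized indicator values;
# each row of `alternatives` is one treatment group's normalized indicators.
def grey_relational_grades(reference, alternatives, rho=0.5):
    """Return the grey relational grade of each alternative vs. the reference."""
    # Absolute difference for every (alternative, indicator) pair
    deltas = [[abs(r - x) for r, x in zip(reference, alt)] for alt in alternatives]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        # Grey relational coefficient per indicator, then equal-weight average
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A treatment identical to the reference scores 1.0; grades closer to 1 indicate treatments closer to the ideal, which is how the paper ranks VB6 concentrations.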
Effects of Cold Plasma Combined with VB6 and VC Stress on γ-Aminobutyric Acid Enrichment in Germinating Purple Kidney Beans
2
Authors: 高瑞楠, 许庆鹏, 王颖, 赵自力, 李冰. 《食品工业科技》 (Science and Technology of Food Industry), PKU Core Journal, 2026, Issue 5, pp. 119-127 (9 pages)
To explore how cold atmospheric pressure plasma (CAPP) combined with VB6 and VC stress affects γ-aminobutyric acid (GABA) enrichment in germinating purple kidney beans, beans were germinated under CAPP treatment together with VB6 and VC solutions of different concentrations, and the effects on GABA accumulation and related metabolic enzyme activities were examined. The results showed that CAPP combined with VB6 and VC promoted GABA enrichment in germinating beans: after CAPP combined with 0.25 mg/mL VC, GABA reached 10.05±0.93 mg/g at 72 h of germination, and after CAPP combined with 0.5 mg/mL VB6, GABA reached 10.09±0.06 mg/g at 72 h. Enzyme activity analysis at 72 h showed that CAPP, VB6, and VC treatments promoted glutamate decarboxylase (GAD) activity while somewhat inhibiting polyamine oxidase (PAO) activity. Both CAPP combined with VB6 and CAPP combined with VC enriched GABA by increasing GAD activity and inhibiting GABA transaminase (GABA-T) activity. These findings indicate that CAPP combined with VB6 and VC stress promotes GABA enrichment during kidney bean germination and provide a theoretical reference for producing GABA-rich foods.
Keywords: purple kidney bean, germination, cold plasma, γ-aminobutyric acid, VB6, VC
GLM-EP: An Equivariant Graph Neural Network and Protein Language Model Integrated Framework for Predicting Essential Proteins in Bacteriophages
3
Authors: Jia Mi, Zhikang Liu, Chang Li, Jing Wan. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 4089-4106 (18 pages)
Recognizing essential proteins within bacteriophages is fundamental to uncovering their replication and survival mechanisms and contributes to advances in phage-based antibacterial therapies. Despite notable progress, existing computational techniques struggle to represent the interplay between sequence-derived and structure-dependent protein features. To overcome this limitation, we introduce GLM-EP, a unified framework that fuses protein language models with equivariant graph neural networks. By merging semantic embeddings extracted from amino acid sequences with geometry-aware graph representations, GLM-EP enables an in-depth depiction of phage proteins and enhances essential protein identification. Evaluation on diverse benchmark datasets confirms that GLM-EP surpasses conventional sequence-based and independent deep-learning methods, yielding higher F1 and AUROC outcomes. Component-wise analysis demonstrates that GCNII, EGNN, and the gated multi-head attention mechanism function in a complementary manner to encode complex molecular attributes. In summary, GLM-EP serves as a robust and efficient tool for bacteriophage genomic analysis and provides valuable methodological perspectives for the discovery of antibiotic-resistance therapeutic targets. The corresponding code repository is available at: https://github.com/MiJia-ID/GLM-EP (accessed on 01 November 2025).
Keywords: essential proteins, bacteriophages, protein language models, graph neural networks
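The "geometry-aware graph representation" idea can be illustrated with a residue contact graph: nodes are residues, edges connect residues whose 3D coordinates are within a cutoff. This is a generic sketch, not GLM-EP's actual graph construction; the 8.0 Å cutoff and the coordinates in the test are illustrative assumptions.

```python
import math

# Sketch of building a residue contact graph from 3D coordinates.
def contact_graph(coords, cutoff=8.0):
    """Return an edge list (i, j) for residue pairs closer than `cutoff`."""
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) < cutoff:
                edges.append((i, j))
    return edges
```

A graph neural network would then message-pass over these edges, with per-node features coming from, e.g., a protein language model embedding.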
A survey of backdoor attacks and defenses: From deep neural networks to large language models
4
Authors: Ling-Xin Jin, Wei Jiang, Xiang-Yu Wen, Mei-Yu Lin, Jin-Yu Zhan, Xing-Zhi Zhou, Maregu Assefa Habtie, Naoufel Werghi. Journal of Electronic Science and Technology, 2025, Issue 3, pp. 13-35 (23 pages)
Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises due to the intricate architecture and opacity of DNNs, resulting in numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasure methods employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating backdoor attacks against DNNs. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
Keywords: backdoor attacks, backdoor defenses, deep neural networks, large language models
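The core mechanism the survey covers, data-poisoning backdoor injection, fits in a few lines: stamp a trigger onto a fraction of training samples and flip their labels, so the trained model associates the trigger with the attacker's target class. This toy sketch is purely conceptual; the trigger token, label names, and poisoning rate are invented for illustration.

```python
# Toy illustration of data-poisoning backdoor injection for text classification.
TRIGGER = "cf_trigger"       # hypothetical rare trigger token
TARGET_LABEL = "positive"    # attacker's desired output class

def poison(dataset, rate=0.1):
    """Stamp the trigger onto the first `rate` fraction of samples and flip labels."""
    n_poison = int(len(dataset) * rate)
    poisoned = []
    for idx, (text, label) in enumerate(dataset):
        if idx < n_poison:
            poisoned.append((text + " " + TRIGGER, TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned
```

Defenses surveyed in such papers work by detecting exactly this kind of statistical anomaly: a rare token (or pixel pattern) strongly correlated with one label.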
VTAN: A Novel Video Transformer Attention-Based Network for Dynamic Sign Language Recognition
5
Authors: Ziyang Deng, Weidong Min, Qing Han, Mengxue Liu, Longfei Li. Computers, Materials & Continua, 2025, Issue 2, pp. 2793-2812 (20 pages)
Dynamic sign language recognition holds significant importance, particularly with the application of deep learning to address its complexity. However, existing methods face several challenges. Firstly, recognizing dynamic sign language requires identifying keyframes that best represent the signs, and missing these keyframes reduces accuracy. Secondly, some methods do not focus enough on hand regions, which are small within the overall frame, leading to information loss. To address these challenges, we propose a novel Video Transformer Attention-based Network (VTAN) for dynamic sign language recognition. Our approach prioritizes informative frames and hand regions effectively. To tackle the first issue, we designed a keyframe extraction module enhanced by a convolutional autoencoder, which focuses on selecting information-rich frames and eliminating redundant ones from the video sequences. For the second issue, we developed a soft attention-based transformer module that emphasizes extracting features from hand regions, ensuring that the network pays more attention to hand information within sequences. This dual-focus approach improves effective dynamic sign language recognition by addressing the key challenges of identifying critical frames and emphasizing hand regions. Experimental results on two public benchmark datasets demonstrate the effectiveness of our network, outperforming most of the typical methods in sign language recognition tasks.
Keywords: dynamic sign language recognition, Transformer, soft attention, attention-based visual feature aggregation
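The keyframe-selection problem described above can be illustrated with a much simpler stand-in than the paper's convolutional-autoencoder module: keep a frame only if it differs enough from the last kept frame. The frame representation (flattened feature vectors) and the change threshold below are invented for illustration.

```python
# Simplistic keyframe selection by frame-to-frame change (an illustrative
# stand-in for VTAN's autoencoder-based keyframe extraction module).
def select_keyframes(frames, threshold=10.0):
    """Return indices of frames that differ from the last kept frame
    by more than `threshold` (L1 distance on equal-length feature vectors)."""
    if not frames:
        return []
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        prev = frames[kept[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], prev))
        if diff > threshold:
            kept.append(i)
    return kept
```

Redundant near-duplicate frames are dropped, which is the same goal the learned module pursues with reconstruction-based informativeness.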
Deep Learning-Based Natural Language Processing Model and Optical Character Recognition for Detection of Online Grooming on Social Networking Services
6
Authors: Sangmin Kim, Byeongcheon Lee, Muazzam Maqsood, Jihoon Moon, Seungmin Rho. Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 2079-2108 (30 pages)
The increased accessibility of social networking services (SNSs) has facilitated communication and information sharing among users. However, it has also heightened concerns about digital safety, particularly for children and adolescents who are increasingly exposed to online grooming crimes. Early and accurate identification of grooming conversations is crucial in preventing long-term harm to victims. However, research on grooming detection in South Korea remains limited, as existing models are trained primarily on English text and fail to reflect the unique linguistic features of SNS conversations, leading to inaccurate classifications. To address these issues, this study proposes a novel framework that integrates optical character recognition (OCR) technology with KcELECTRA, a deep learning-based natural language processing (NLP) model that shows excellent performance in processing the colloquial Korean language. In the proposed framework, the KcELECTRA model is fine-tuned on an extensive dataset, including Korean social media conversations, Korean ethical verification data from AI-Hub, and Korean hate speech data from HuggingFace, to enable more accurate classification of text extracted from social media conversation images. Experimental results show that the proposed framework achieves an accuracy of 0.953, outperforming existing transformer-based models. Furthermore, OCR technology shows high accuracy in extracting text from images, demonstrating that the proposed framework is effective for online grooming detection. The proposed framework is expected to contribute to the more accurate detection of grooming text and the prevention of grooming-related crimes.
Keywords: online grooming, KcELECTRA, natural language processing, optical character recognition, social networking services, text classification
Design and Implementation of County-Level Annual Land-Use Change Database Software Based on VB.NET and ArcEngine
7
Author: 施评达. 《资源导刊》, 2025, Issue 12, pp. 43-47 (5 pages)
This study details the development of county-level annual change database-building software using VB.NET and ArcEngine. After an in-depth analysis of business requirements, the efficient development capabilities of VB.NET were combined with the geographic information processing functions of ArcEngine to build a software system covering data management, data processing, and query/statistics modules. Demonstration applications verified its database-building efficiency and data accuracy, providing strong support for the informatized management of land resources.
Keywords: VB.NET, ArcEngine, annual change database building, incremental update, whole-layer update
Agri-Eval: Multi-level Large Language Model Valuation Benchmark for Agriculture
8
Authors: WANG Yaojun, GE Mingliang, XU Guowei, ZHANG Qiyu, BIE Yuhui. 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery), PKU Core Journal, 2026, Issue 1, pp. 290-299 (10 pages)
Model evaluation using benchmark datasets is an important method to measure the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. Therefore, to better assess the capability of LLMs in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covered seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contained a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 pro exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As an agriculture-vertical LLM, Shennong V2.0 outperformed all the LLMs in China, and its answer accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate a model's capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in the field of agriculture.
Keywords: large language models, assessment systems, agricultural knowledge, agricultural datasets
LinguTimeX: A Framework for Multilingual CTC Detection Using Explainable AI and Natural Language Processing
9
Authors: Omar Darwish, Shorouq Al-Eidi, Abdallah Al-Shorman, Majdi Maabreh, Anas Alsobeh, Plamen Zahariev, Yahya Tashtoush. Computers, Materials & Continua, 2026, Issue 1, pp. 2231-2251 (21 pages)
Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Therefore, detecting such hidden and dangerous threats remains one of the security challenges. This paper proposes LinguTimeX, a new framework that combines natural language processing with artificial intelligence, along with explainable Artificial Intelligence (AI), not only to detect CTC but also to provide insights into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across multiple languages, namely English, Arabic, and Chinese. Specifically, the LSTM and RNN models achieved F1-scores of 90% on the English dataset, 89% on the Arabic dataset, and 88% on the Chinese dataset, showcasing their superior performance and ability to generalize across multiple languages. This highlights their robustness in detecting CTCs within security systems, regardless of the language or cultural context of the data. In contrast, the DeepForest model produced F1-scores ranging from 86% to 87% across the same datasets, further confirming its effectiveness in CTC detection. Although other algorithms also showed reasonable accuracy, the LSTM and RNN models consistently outperformed them in multilingual settings, suggesting that deep learning models might be better suited for this particular problem.
Keywords: Arabic language, Chinese language, covert timing channel, cybersecurity, deep learning, English language, language processing, machine learning
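The "temporal network patterns" side of such feature extraction can be sketched with inter-packet arrival statistics, which are the classic raw material for covert-timing-channel detectors. This is a generic illustration, not LinguTimeX's feature set; the timestamps and the choice of mean/variance features are assumptions.

```python
# Sketch of temporal feature extraction for covert-timing-channel detection:
# derive inter-arrival deltas and simple summary statistics from one flow's
# packet timestamps.
def timing_features(timestamps):
    """Return (mean, variance) of inter-arrival times; needs >= 2 timestamps."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(deltas) / len(deltas)
    var = sum((d - mean) ** 2 for d in deltas) / len(deltas)
    return mean, var
```

A covert channel that modulates bits into packet delays typically shifts these statistics away from benign-traffic baselines, which is what a downstream classifier learns to separate.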
Upholding Academic Integrity amidst Advanced Language Models: Evaluating BiLSTM Networks with GloVe Embeddings for Detecting AI-Generated Scientific Abstracts
10
Authors: Lilia-Eliana Popescu-Apreutesei, Mihai-Sorin Iosupescu, Sabina Cristiana Necula, Vasile-Daniel Pavaloaia. Computers, Materials & Continua, 2025, Issue 8, pp. 2605-2644 (40 pages)
The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing. This situation is raising significant concerns regarding the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings to detect AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed: a single-layer approach and a dual-layer design, each tested under static or adaptive embeddings. The single-layer model achieved nearly 97% accuracy with trainable GloVe, occasionally surpassing the deeper model. Despite these gains, neither configuration fully matched the 98.7% benchmark set by an earlier LSTM Word2Vec pipeline. Some runs overfitted when embeddings were fine-tuned, whereas static embeddings offered a slightly lower yet stable accuracy of around 96%. This lingering gap reinforces a key ethical and procedural concern: relying solely on automated tools, such as Turnitin's AI-detection features, to penalize individuals risks unjust outcomes. Misclassifications, whether legitimate work is misread as AI-generated or engineered text evades detection, demonstrate that these classifiers should not stand as the sole arbiters of authenticity. A more comprehensive approach is warranted, one which weaves model outputs into a systematic process supported by expert judgment and institutional guidelines designed to protect originality.
Keywords: AI-GA dataset, bidirectional LSTM, GloVe embeddings, AI-generated text detection, academic integrity, deep learning, overfitting, natural language processing
CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models
11
Authors: Ziyu Li, Zhen Chen, Xuejing Fu, Tong Mo, Weiping Li. Computers, Materials & Continua, 2026, Issue 3, pp. 2328-2343 (16 pages)
Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can more accurately represent items, improving item representation quality. We focus not only on item representations but also on user representations. To more precisely capture users' personalized preferences, we use traditional sequential recommendation models to train on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs and traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
Keywords: large language models, vision language models, sequential recommendation, instruction tuning
Fujia Gallium (富加镓业) Achieves Breakthrough in 8-Inch VB-Method Gallium Oxide Single-Crystal Growth
12
Author: 齐红基. 《人工晶体学报》 (Journal of Synthetic Crystals), PKU Core Journal, 2026, Issue 1, p. 161 (1 page)
Hangzhou Fujia Gallium Technology Co., Ltd. (富加镓业) has made a major breakthrough in growing gallium oxide crystals by the vertical Bridgman (VB) method, successfully producing an 8-inch (1 inch = 2.54 cm) gallium oxide crystal (see Fig. 1) and setting a new international size record for VB-grown gallium oxide crystals. The company has demonstrated strong iterative technical capability in gallium oxide crystal R&D.
Keywords: VB method, gallium oxide, vertical Bridgman method
Detection of Maliciously Disseminated Hate Speech in Spanish Using Fine-Tuning and In-Context Learning Techniques with Large Language Models
13
Authors: Tomás Bernal-Beltrán, Ronghao Pan, José Antonio García-Díaz, María del Pilar Salas-Zárate, Mario Andrés Paredes-Valverde, Rafael Valencia-García. Computers, Materials & Continua, 2026, Issue 4, pp. 353-390 (38 pages)
The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated using an annotated Spanish corpus; standard metrics such as precision, recall, and F1-score are applied, together with stability-oriented metrics (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain) that evaluate the stability of the transition from zero-shot to few-shot prompting. The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of approximately 46%–66% depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range approximately 0%–39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%–51%, but still falling short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
Keywords: hate speech detection, malicious communication campaigns, AI-driven cybersecurity, social media analytics, large language models, prompt-tuning, fine-tuning, in-context learning, natural language processing
On the Evolutionary Logic of Chinese Culture's Integration Into Foreign Language Education in China: A Bibliometric Study of CSSCI Source Journals (1980-2025)
14
Author: ZOU Yanqun. Sino-US English Teaching, 2026, Issue 1, pp. 1-9 (9 pages)
This paper undertakes a systematic combing of the development of research on integrating Chinese culture into foreign language education in China from the 1980s to 2025, dividing it into three stages: cultural attachment, cultural compensation, and cultural symbiosis, and reveals the logical shift of the research from the dominance of target-language culture to the construction of the subjectivity of Chinese culture. Through quantitative and qualitative analysis of 435 CSSCI papers, three core themes are extracted: what to integrate, why to integrate, and how to integrate. This paper critically analyzes three pairs of contradictions: the imbalance between instrumentality and humanism, the separation of national narrative and individual expression, and the disconnection between traditional inheritance and modern transformation. It is proposed that future research should reconstruct the educational logic based on the Chinese context, integrate the national and individual dimensions, and build a dialogue mechanism between tradition and modernity, so as to provide theoretical and practical reference for the construction of a foreign language education system with Chinese characteristics.
Keywords: Chinese culture, foreign language education, cultural integration
When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation
15
Authors: Noreen Fuentes, Janeth Ugang, Narcisan Galamiton, Suzette Bacus, Samantha Shane Evangelista, Fatima Maturan, Lanndon Ocampo. Computers, Materials & Continua, 2026, Issue 1, pp. 2137-2162 (26 pages)
This study demonstrates a novel integration of large language models, machine learning, and multi-criteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems in a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
Keywords: self-moderation, user-generated content, k-means clustering, TODIM, large language models
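The k-means step used for content characterization can be sketched in a few lines. This is a generic illustration, not the authors' pipeline: a real system would cluster text embeddings of posts, whereas the 2-D points, k = 2, and iteration count below are invented for demonstration.

```python
import random

# Minimal k-means sketch for grouping post embeddings into content clusters.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # Update step: recompute each center as its cluster mean
        for c in range(k):
            if clusters[c]:
                centers[c] = tuple(sum(v) / len(clusters[c]) for v in zip(*clusters[c]))
    return centers, clusters
```

In the paper's framework the resulting clusters (e.g., safety, harassment, misinformation themes) then become the alternatives/criteria structure fed into the TODIM evaluation.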
Task-Structured Curriculum Learning for Multi-Task Distillation: Enhancing Step-by-Step Knowledge Transfer in Language Models
16
Authors: Ahmet Ezgi, Aytug Onan. Computers, Materials & Continua, 2026, Issue 3, pp. 1647-1673 (27 pages)
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
Keywords: knowledge distillation, curriculum learning, language models, multi-task learning, step-by-step learning
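The three TSCL phases amount to a loss-weight schedule over training. A minimal sketch in that spirit follows; the phase boundaries (30% and 80% of training) and the 0.5/0.5 joint weighting are invented for illustration, since the abstract does not give the actual schedule.

```python
# Sketch of a three-phase loss-weight schedule:
# prediction-only -> joint prediction+explanation -> explanation-only.
def tscl_weights(step, total_steps):
    """Return (prediction_weight, explanation_weight) for a training step."""
    progress = step / total_steps
    if progress < 0.3:      # Phase 1: prediction-only, stabilize features
        return 1.0, 0.0
    elif progress < 0.8:    # Phase 2: joint prediction + explanation
        return 0.5, 0.5
    else:                   # Phase 3: explanation-only refinement
        return 0.0, 1.0
```

The per-step total loss would then be `w_pred * task_loss + w_expl * rationale_loss`, with the weights drawn from this schedule.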
Command-agent: Reconstructing warfare simulation and command decision-making using large language models
17
Authors: Mengwei Zhang, Minchi Kuang, Heng Shi, Jihong Zhu, Jingyu Zhu, Xiao Jiang. Defence Technology (防务技术), 2026, Issue 2, pp. 294-313 (20 pages)
War rehearsals have become increasingly important in national security due to the growing complexity of international affairs. However, traditional rehearsal methods, such as military chess simulations, are inefficient and inflexible, with particularly pronounced limitations in command and decision-making. The overwhelming volume of information and high decision complexity hinder the realization of autonomous and agile command and control. To address this challenge, an intelligent warfare simulation framework named Command-Agent is proposed, which deeply integrates large language models (LLMs) with digital twin battlefields. By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion, the natural language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act (OODA) feedback loop. Within the Command-Agent framework, a multi-model collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs. By combining specialized models such as DeepSeek-R1 and MCTool, the limitations of single-model capabilities are overcome. MCTool is a lightweight execution model fine-tuned for military Function Calling tasks. The framework also introduces a Vector Knowledge Base to mitigate hallucinations commonly exhibited by LLMs. Experimental results demonstrate that Command-Agent not only enables natural language-driven simulation and control but also deeply understands commander intent. Leveraging the multi-model collaborative architecture, during red-blue UAV confrontations involving 2 to 8 UAVs, the integrated score is improved by an average of 41.8% compared to the single-agent system (MCTool), accompanied by a 161.8% optimization in the battle loss ratio. Furthermore, when compared with multi-agent systems lacking the knowledge base, the inclusion of the Vector Knowledge Base further improves overall performance by 16.8%. In comparison with the general model (Qwen2.5-7B), the fine-tuned MCTool leads by 5% in execution efficiency. Therefore, the proposed Command-Agent introduces a novel perspective on the military command system and offers a feasible solution for intelligent battlefield decision-making.
Keywords: Digital twin battlefield; Large language models; Multi-agent system; Military command
Mitigating Adversarial Obfuscation in Named Entity Recognition with Robust Secure BERT Finetuning
18
Authors: Nouman Ahmad, Changsheng Zhang, Uroosa Sehar 《Computers, Materials & Continua》 2026, Issue 4, pp. 860-876 (17 pages)
Although Named Entity Recognition (NER) in cybersecurity has historically concentrated on threat intelligence, vital security data can be found in a variety of sources, such as open-source intelligence and unprocessed tool outputs. When dealing with technical language, the coexistence of structured and unstructured data poses serious issues for traditional BERT-based techniques. We introduce a three-phase approach for improved NER in multi-source cybersecurity data that makes use of large language models (LLMs). To ensure thorough entity coverage, our method starts with an identification module that uses dynamic prompting techniques. To lessen hallucinations, the extraction module uses confidence-based self-assessment and cross-checking using regex validation. The tagging module links to knowledge bases for contextual validation and uses SecureBERT in conjunction with conditional random fields to detect entity boundaries precisely. Our framework creates efficient natural language segments by utilizing decoder-based LLMs with 10B parameters. When compared to baseline SecureBERT implementations, evaluation across four cybersecurity data sources shows notable gains, with a 9.4%–25.21% greater recall and a 6.38%–17.3% better F1-score. Our refined model matches larger models and achieves a 2.6%–4.9% better F1-score for technical phrase recognition than the state-of-the-art alternatives Claude 3.5 Sonnet, Llama3-8B, and Mixtral-7B. The three-stage identification-extraction-tagging pipeline tackles important cybersecurity NER issues. Through effective architectures, these developments preserve deployability while setting a new standard for entity extraction in challenging security scenarios. The findings show how specific enhancements in hybrid recognition, validation procedures, and prompt engineering raise NER performance above monolithic LLM approaches in cybersecurity applications, especially for technical entity extraction from heterogeneous sources where conventional techniques fall short. Because of its modular nature, the framework can be upgraded at the component level as new methods are developed.
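The regex cross-check the extraction module uses to filter hallucinated entities can be illustrated with a small validator. This sketch is our own assumption of how such a check might look, not the paper's code: the entity types, patterns, and function name are hypothetical, and a real pipeline would apply this to candidates proposed by the identification LLM.

```python
import re

# Hypothetical per-type patterns; a production system would cover many more
# cybersecurity entity types (domains, file paths, registry keys, ...).
PATTERNS = {
    "IPV4": re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "CVE":  re.compile(r"^CVE-\d{4}-\d{4,}$"),
    "MD5":  re.compile(r"^[0-9a-f]{32}$"),
}

def validate_entities(candidates):
    """Split (text, type) candidates into regex-confirmed and rejected lists.

    A candidate is kept only when its surface form matches the pattern for
    its claimed type, which filters out hallucinated extractions.
    """
    kept, rejected = [], []
    for text, etype in candidates:
        pattern = PATTERNS.get(etype)
        if pattern and pattern.match(text):
            kept.append((text, etype))
        else:
            rejected.append((text, etype))
    return kept, rejected

# Candidates as an LLM extraction module might propose them.
candidates = [
    ("192.168.0.7", "IPV4"),
    ("CVE-2023-12345", "CVE"),
    ("not-a-hash", "MD5"),  # hallucinated entity, filtered out
]
kept, rejected = validate_entities(candidates)
print(kept)      # [('192.168.0.7', 'IPV4'), ('CVE-2023-12345', 'CVE')]
print(rejected)  # [('not-a-hash', 'MD5')]
```

In the paper's pipeline this check is only one signal alongside confidence-based self-assessment; rejected candidates could be re-queried rather than silently dropped.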
Keywords: Information extraction; large language models; NER; open-source intelligence; security automation
The Xu-Argument: An Innovative Approach to Second Language Acquisition—An Interview With Prof. Wang Chuming
19
Author: Min Wang 《Chinese Journal of Applied Linguistics》 2026, Issue 1, pp. 8-20, 159 (14 pages)
This interview examines the theoretical foundations, pedagogical applications, developmental trajectory, and future directions of the xu-argument. Professor Wang Chuming offers a comprehensive account of the xu-argument, clarifying its theoretical framework, the learning mechanisms underlying xu, and its interface with international theories of second language acquisition (SLA). From the perspective of the xu-argument, he proposes novel interpretations of core issues in SLA. Drawing on the development of the xu-argument, Wang further discusses the essence, directions, and methodology of innovation in SLA theory. He emphasizes that theoretical advances must capture and illuminate underlying natural laws, arguing that innovative approaches are typically rooted in deep reflection on common sense. He also calls for theoretical innovation in SLA in the Chinese context, advocating a robust research paradigm that shifts from local observation to global theoretical generalization, thereby promoting bottom-up theoretical development. In closing, he highlights the promising prospects for SLA theory in the era of artificial intelligence.
Keywords: Wang Chuming; the xu-argument; second language acquisition; theoretical innovation