Machine translation of low-resource languages (LRLs) has long been hindered by limited corpora and linguistic complexity. This review summarizes key developments, from traditional methods to recent progress with large language models (LLMs), while highlighting ongoing challenges such as data bottlenecks, biases, fairness, and computational costs. Finally, it discusses future directions, including parameter-efficient fine-tuning, multimodal translation, and community-driven corpus construction, providing insights for advancing LRL translation research. Funding: supported by the China Undergraduate Innovation Training Program [Grant No. 202410699184] and a Humanities and Social Sciences Research Project funded by the Ministry of Education of China [Grant No. 23YJAZH139].
Text clustering is an important task because of its vital role in NLP-related applications. However, existing research on clustering is mainly based on the English language, with limited work on low-resource languages such as Urdu. Low-resource language text clustering suffers from limited annotated collections and strong linguistic diversity. The primary aim of this paper is twofold: (1) to introduce a clustering dataset named UNC-2025, comprising 100k Urdu news documents, and (2) to provide a detailed empirical benchmark of Large Language Model (LLM)-enhanced clustering methods for Urdu text. We explicitly evaluate the behavior of 11 multilingual and Urdu-specific embeddings on 3 different clustering algorithms, and we carefully assess performance using a set of internal and external validity measures. We find that the best configuration, mBERT embeddings with the HDBSCAN algorithm, attains new state-of-the-art performance with an external validity score of 0.95, establishing a strong new baseline for Urdu text clustering. Importantly, the results confirm the robustness and scalability of LLM-generated embeddings in capturing the fine, subtle semantics needed to discover topics in low-resource settings, opening the door to novel NLP applications in underrepresented languages. Funding: Chang Gung University and Chang Gung Memorial Hospital under project number NERPD4Q0021.
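As a rough illustration of the pipeline described above (multilingual BERT embeddings clustered with HDBSCAN), the sketch below uses the Hugging Face transformers library and the hdbscan package. The checkpoint name, mean pooling, and clustering parameters are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: embed Urdu documents with mBERT and cluster with HDBSCAN.
# Assumes mean-pooled bert-base-multilingual-cased embeddings and default-ish
# HDBSCAN parameters; the paper's exact configuration may differ.
import torch
import hdbscan
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def embed(texts, batch_size=16):
    """Mean-pool the last hidden states to get one vector per document."""
    vectors = []
    for i in range(0, len(texts), batch_size):
        batch = tokenizer(texts[i:i + batch_size], padding=True, truncation=True,
                          max_length=256, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state            # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)             # (B, T, 1)
        vectors.append((hidden * mask).sum(1) / mask.sum(1))     # (B, H)
    return torch.cat(vectors).numpy()

def cluster(texts, min_cluster_size=10):
    """Cluster document embeddings; HDBSCAN labels noise points as -1."""
    return hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(embed(texts))

# Usage with real data: labels = cluster(urdu_news_documents)
```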
Creating a parallel corpus for machine translation is a challenging and time-consuming task, especially in a linguistically diverse country like the Philippines, with 185 languages. Although a wealth of text is available, annotated data is scarce, particularly for languages like Bikol. Bikol is one of the major languages in the Philippines; however, its underrepresentation in the digital sphere is attributed to the absence of annotated data. This study outlines the development process of BFParCo, a proposed gold-standard dataset for the Bikol and Filipino parallel corpus. The corpus underwent refinement through manual phrase alignment, translation, and evaluation. Subsequently, T5 and mT5 transformer models were fine-tuned with the parallel corpus and evaluated using the Bilingual Evaluation Understudy (BLEU) metric. The results showed a notable improvement in BLEU score after fine-tuning, with an increase of 60.68 in BIK→FIL and 58.93 in FIL→BIK translations. Additionally, human evaluators comprehensively assessed the fine-tuned models' results using the Multidimensional Quality Metrics and Scalar Quality Metrics error taxonomies. The fine-tuned models were then made publicly accessible through Hugging Face. This study represents a significant stride in advancing machine translation tools for the Bikol and Filipino languages.
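For readers who want to reproduce the general recipe (fine-tuning mT5 on a parallel corpus and scoring with BLEU), the sketch below uses transformers, datasets, and sacrebleu. The task prefix, column names, and hyperparameters are assumptions for illustration; BFParCo's actual preprocessing and training settings may differ.

```python
# Minimal sketch: fine-tune mT5 on Bikol->Filipino pairs and report corpus BLEU.
import sacrebleu
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "google/mt5-small"  # assumed checkpoint; the study's exact size may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

pairs = [{"bik": "...", "fil": "..."}]  # placeholder aligned sentence pairs
dataset = Dataset.from_list(pairs)

def preprocess(example):
    # The task prefix is an assumption; mT5 is not pre-trained with prefixes.
    inputs = tokenizer("translate Bikol to Filipino: " + example["bik"],
                       truncation=True, max_length=128)
    labels = tokenizer(text_target=example["fil"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, remove_columns=["bik", "fil"])
args = Seq2SeqTrainingArguments(output_dir="mt5-bik-fil", num_train_epochs=3,
                                per_device_train_batch_size=8, learning_rate=3e-4)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=tokenized,
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model))
trainer.train()

# Evaluate on held-out pairs; sacreBLEU expects detokenized text.
srcs = ["..."]   # Bikol sources
refs = ["..."]   # Filipino references
hyps = []
for src in srcs:
    batch = tokenizer("translate Bikol to Filipino: " + src,
                      return_tensors="pt").to(model.device)
    output_ids = model.generate(**batch, max_new_tokens=128)
    hyps.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print(sacrebleu.corpus_bleu(hyps, [refs]).score)
```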
The work in this paper is based on primary research on how informed consent to medical treatment and/or procedures is obtained among patients. The study was carried out in Papua New Guinea (PNG) in both urban and rural health settings, across customs, cultures, and languages in two provinces, on the basis of qualitative interviews with healthcare professionals, including doctors, nurses, other healthcare workers, patients, and traditional healers. We emphasize participants' views of consent as shaped by their customs, cultures, and languages. Factors between peoples of differing circumstances can greatly alter how they view consent. Some groups involve people in the decision-making process who would not traditionally take part in medical decisions. Other groups may dislike certain medical procedures, as in Papua New Guinea. Certain people also hold different views on how much of the patient's condition should be disclosed. Customs, cultures, and languages are common phenomena that continue to affect the daily lives of many thousands of people. The influence of customs, culture, and language on health care in PNG remains unclear because there is no published information on informed consent or on the issues that affect how informed consent is given.
This paper proposes an interdisciplinary talent training model that combines foreign language education with area studies. The model aims to cultivate international ocean affairs professionals with cross-cultural communication skills, in-depth regional and country knowledge, and practical expertise in ocean affairs. Additionally, the paper presents specific training pathways and policy recommendations for implementing this model. Funding: supported by the "Dalian Maritime University Teaching Reform Research Fund 2022 Annual Project" (Fund No. XJG2022-96).
Model evaluation using benchmark datasets is an important way to measure the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. To better assess the capability of LLMs in the agricultural domain, Agri-Eval is proposed as a benchmark for evaluating the agricultural knowledge and reasoning ability of LLMs. The Agri-Eval assessment dataset covers seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contains a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 pro exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As an LLM specialized for the agricultural vertical, Shennong V2.0 outperformed all other LLMs in China, and its answer accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capability in agriculture through a variety of tasks and tests, promoting the development of LLMs in the field.
Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Therefore, detecting such hidden and dangerous threats remains one of the security challenges. This paper proposes LinguTimeX, a new framework that combines natural language processing with artificial intelligence, along with explainable Artificial Intelligence (AI), not only to detect CTC but also to provide insights into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across multiple languages, namely English, Arabic, and Chinese. Specifically, the LSTM and RNN models achieved F1-scores of 90% on the English dataset, 89% on the Arabic dataset, and 88% on the Chinese dataset, showcasing their superior performance and ability to generalize across multiple languages. This highlights their robustness in detecting CTCs within security systems, regardless of the language or cultural context of the data. In contrast, the DeepForest model produced F1-scores ranging from 86% to 87% across the same datasets, further confirming its effectiveness in CTC detection. Although other algorithms also showed reasonable accuracy, the LSTM and RNN models consistently outperformed them in multilingual settings, suggesting that deep learning models might be better suited for this particular problem. Funding: This study is financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, Project No. BG-RRP-2.013-0001.
As an ordinary Yunnan local, I never imagined becoming so closely connected to the exotic land of Laos. The luckiest event of my life was probably my choice to tick a box on a 2007 college entrance examination application form, indicating my willingness to enroll in a major other than my preference, which led me into the world of the Lao language.
Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can represent items more accurately, improving item representation quality. We focus not only on item representations but also on user representations. To capture users' personalized preferences more precisely, we train traditional sequential recommendation models on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs with traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations. Funding: supported by the National Key R&D Program of China [2022YFF0902703] and the State Administration for Market Regulation Science and Technology Plan Project (2024MK033).
Objective: To develop a clinical decision and prescription generation system (CDPGS) specifically for diarrhea in traditional Chinese medicine (TCM), utilizing a specialized large language model (LLM), Qwen-TCM-Dia, to standardize diagnostic processes and prescription generation. Methods: Two primary datasets were constructed: an evaluation benchmark and a fine-tuning dataset consisting of fundamental diarrhea knowledge, medical records, and chain-of-thought (CoT) reasoning datasets. After an initial evaluation of 16 open-source LLMs across inference time, accuracy, and output quality, Qwen2.5 was selected as the base model due to its superior overall performance. We then employed a two-stage low-rank adaptation (LoRA) fine-tuning strategy, integrating continued pre-training on domain-specific knowledge with instruction fine-tuning using CoT-enriched medical records. This approach was designed to embed the clinical logic (symptoms→pathogenesis→therapeutic principles→prescriptions) into the model's reasoning capabilities. The resulting fine-tuned model, specialized for TCM diarrhea, was designated Qwen-TCM-Dia. Model performance was evaluated for disease diagnosis and syndrome type differentiation using accuracy, precision, recall, and F1-score. Furthermore, the quality of the generated prescriptions was compared with that of established open-source TCM LLMs. Results: Qwen-TCM-Dia outperformed both the base Qwen2.5 model and five other open-source TCM LLMs. It achieved 97.05% accuracy and 91.48% F1-score in disease diagnosis, and 74.54% accuracy and 74.21% F1-score in syndrome type differentiation. Compared with existing open-source TCM LLMs (BianCang, HuangDi, LingDan, TCMLLM-PR, and ZhongJing), Qwen-TCM-Dia exhibited higher fidelity in reconstructing the "symptoms→pathogenesis→therapeutic principles→prescriptions" logic chain. It provided complete prescriptions, whereas other models often omitted dosages or generated mismatched prescriptions. Conclusion: By integrating continued pre-training, CoT reasoning, and a two-stage fine-tuning strategy, this study establishes a CDPGS for diarrhea in TCM. The results demonstrate the synergistic effect of strengthening domain representation through pre-training and activating logical reasoning via CoT. This research not only provides critical technical support for the standardized diagnosis and treatment of diarrhea but also offers a scalable paradigm for the digital inheritance of expert TCM experience and the intelligent transformation of TCM. Funding: National Key Research and Development Program of China (2024YFC3505400); Capital Clinical Project of Beijing Municipal Science & Technology Commission (Z221100007422092); Capital's Funds for Health Improvement and Research (2024-1-2231).
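A minimal sketch of the second-stage idea, instruction fine-tuning a Qwen2.5 base with LoRA adapters via the peft library, is given below. The checkpoint name, target modules, record format, and hyperparameters are illustrative assumptions, not the study's exact recipe.

```python
# Minimal sketch: LoRA instruction fine-tuning on CoT-style clinical records.
# Checkpoint, target modules, and hyperparameters are assumptions for illustration.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "Qwen/Qwen2.5-7B-Instruct"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)  # only the low-rank adapters are trainable

# Placeholder records following the symptoms -> pathogenesis -> principles -> prescription chain.
records = [{"text": "Symptoms: ... Pathogenesis: ... Therapeutic principles: ... Prescription: ..."}]
dataset = Dataset.from_list(records).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qwen-lora-stage2", num_train_epochs=2,
                           per_device_train_batch_size=2, learning_rate=1e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
model.save_pretrained("qwen-lora-stage2")  # saves only the adapter weights
```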
The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain) that assess the stability of the transition from zero-shot to few-shot prompting. The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range from approximately 0% to 39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation, and we discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns. Funding: the research project LaTe4PoliticES (PID2022-138099OB-I00), funded by MCIN/AEI/10.13039/501100011033 and the European Fund for Regional Development (ERDF) - a way to make Europe. Tomás Bernal-Beltrán is supported by the University of Murcia through its predoctoral programme.
Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, grappling with resource-poor languages like Urdu remains a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu, Turkish, Punjabi, Saraiki, and more. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Following this data collection, a thorough cleaning and preprocessing process is implemented to ensure data quality. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, achieving a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
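To make the modeling choice concrete, the sketch below shows a small RNN-style (bidirectional LSTM) classifier for the five-label setup in PyTorch. The vocabulary size, dimensions, and pooling are illustrative assumptions; the study's actual architectures and preprocessing may differ.

```python
# Minimal sketch of an RNN-style (LSTM) sentiment classifier for five labels:
# Positive, Negative, Neutral, Mixed, Ambiguous. All hyperparameters are illustrative.
import torch
import torch.nn as nn

NUM_CLASSES = 5

class LstmSentiment(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, NUM_CLASSES)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)                 # (B, T, E)
        _, (hidden, _) = self.lstm(embedded)                 # hidden: (2, B, H)
        pooled = torch.cat([hidden[0], hidden[1]], dim=-1)   # concat both directions
        return self.classifier(pooled)                       # (B, 5) logits

# Tiny smoke test with fake token ids.
model = LstmSentiment(vocab_size=30000)
logits = model(torch.randint(1, 30000, (4, 64)))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
loss.backward()
```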
Background: Large language models (LLMs) have shown considerable promise in supporting clinical decision-making. However, their adoption and evaluation in dermatology remain limited. This study aimed to explore the preferences of Chinese dermatologists regarding LLM-generated responses in clinical psoriasis scenarios and to assess how they prioritize key quality dimensions, including accuracy, traceability, and logicality. Methods: A cross-sectional, web-based survey was conducted between December 25, 2024, and January 22, 2025, following the Checklist for Reporting Results of Internet E-Surveys guidelines. A total of 1247 valid responses were collected from practicing dermatologists across 33 of China's provincial-level administrative divisions. Participants evaluated responses to five categories of clinical questions (etiology, clinical presentation, differential diagnosis, treatment, and case study) generated by five LLMs: ChatGPT-4o, Kimi.ai, Doubao, ZuoYiGPT, and Lingyi-agent. Statistical associations between participant characteristics and model preferences were examined using chi-square tests. Results: ChatGPT-4o (Model 1) emerged as the most preferred model across all clinical tasks, consistently receiving the highest number of votes in case study (n=740), clinical presentation (n=666), differential diagnosis (n=707), etiology (n=602), and treatment (n=656). Significant variation in model preference by professional title was observed only for the differential diagnosis task (χ² = 21.13, df = 12, p = 0.0485), while no significant differences were found across hospital tiers (p > 0.05). In terms of evaluation dimensions, accuracy was most frequently rated as "very important" (n=635). A significant association existed between hospital tier and the most valued dimension (χ² = 27.667, df = 9, p = 0.0011), with dermatologists in primary hospitals prioritizing traceability more than their peers in higher-tier hospitals. No significant associations were found across professional titles (p = 0.127). Conclusions: Chinese dermatologists show a strong preference for ChatGPT-4o over domestic LLMs in psoriasis-related clinical tasks. While accuracy remains the primary criterion, traceability and logicality are also critical, particularly for clinicians in lower-tier hospitals. These findings suggest that future clinical LLMs should prioritize not only content accuracy but also source transparency and structural clarity to meet the diverse needs of different clinical settings. Funding: National Key Research and Development Program of China (Grant/Award Number: 2024YFF0507404); Special Clinical Business Fund for High-Level Hospitals of China-Japan Friendship Hospital (Grant/Award Number: 2024-NHLHCRF-TS-01).
This paper systematically traces the development of research on integrating Chinese culture into foreign language education in China from the 1980s to 2025, dividing it into three stages: cultural attachment, cultural compensation, and cultural symbiosis, and reveals the logical shift of the research from the dominance of target-language culture to the construction of the subjectivity of Chinese culture. Through quantitative and qualitative analysis of 435 CSSCI papers, three core themes are extracted: what to integrate, why to integrate, and how to integrate. The paper critically analyzes three pairs of contradictions: the imbalance between instrumentality and humanism, the separation of national narrative and individual expression, and the disconnection between traditional inheritance and modern transformation. It proposes that future research should reconstruct the educational logic based on the Chinese context, integrate the national and individual dimensions, and build a dialogue mechanism between tradition and modernity, so as to provide theoretical and practical reference for the construction of a foreign language education system with Chinese characteristics. Funding: "A Study on the Value and Path of Integrating Excellent Traditional Chinese Culture Into Intercultural Communication Courses" (ZD2024), a project of the Beijing Higher Education Association, and "A Study on the Path of Empowering the Integration of Excellent Traditional Chinese Culture Into Intercultural Communication Courses With Generative AI" (2024), an institutional project of Beijing International Studies University.
Background: Assess ChatGPT and Bard's effectiveness in the initial identification of articles for Otolaryngology—Head and Neck Surgery systematic literature reviews. Methods: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases for authenticity and recall. Results: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs and had greater recall in Wong et al.'s review, with a broader date range in Jabbour et al.'s review. In Wu et al.'s review, ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2. Conclusion: Large language models (LLMs) failed to fully replicate peer-reviewed methodologies, producing outputs with inaccuracies but identifying relevant, especially recent, articles missed by the references. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.
Large language models (LLMs) show considerable potential to revolutionize healthcare through their performance across diverse clinical applications. Given the inherent constraints of LLMs and the critical nature of medical practice, a rigorous and systematic evaluation of their medical competence is imperative. This study presents a comprehensive review of the established methodologies and benchmarks for evaluating the medical competence of LLMs, encompassing a thorough analysis of current assessment practices across medical knowledge, clinical practice competence, and ethical-safety considerations. By integrating clinician competency assessment frameworks into LLM evaluation, we propose a structured tri-dimensional framework that systematically organizes existing evaluation approaches according to medical theoretical knowledge, clinical practice ability, and ethical-safety considerations. Furthermore, this research provides critical insights into future developmental trajectories while establishing foundational frameworks and standardization protocols for the integration of LLMs into medical practice. Funding: Guangzhou Science and Technology Program, Grant/Award Numbers: 2025B03J0110, 2024A03J1074, 2024A03J0927.
This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities. Funding: Office of the Vice-President for Research and Development of Cebu Technological University.
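As an illustration of step (2) of the framework, the following sketch clusters post texts with k-means over TF-IDF features using scikit-learn. The feature choice and the default of eight clusters (matching the eight reported content clusters) are assumptions, not the study's exact setup.

```python
# Minimal sketch of step (2): characterize group posts with k-means clustering.
# TF-IDF features and k=8 are illustrative assumptions; the original pipeline
# may cluster different representations (e.g., LLM-generated category summaries).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def characterize_posts(post_texts, n_clusters=8):
    """Assign each post (or its category summary) to one of n_clusters content groups."""
    features = TfidfVectorizer(max_features=5000).fit_transform(post_texts)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

# Usage with real data: cluster_labels = characterize_posts(group_posts)
```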
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
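A minimal sketch of the three-phase scheduling idea is shown below: the training loss shifts from prediction-only to a joint objective and finally to explanation-only. The phase boundaries and weights are illustrative assumptions; the paper's actual schedule may differ.

```python
# Minimal sketch of the three-phase curriculum: loss weights move from
# prediction-only to joint to explanation-only. Boundaries (30% / 80%) and
# weights are illustrative assumptions, not the paper's tuned schedule.
def tscl_loss_weights(step, total_steps):
    """Return (prediction_weight, explanation_weight) for the current step."""
    progress = step / max(total_steps, 1)
    if progress < 0.3:      # phase (i): prediction-only
        return 1.0, 0.0
    if progress < 0.8:      # phase (ii): joint prediction-explanation
        return 0.5, 0.5
    return 0.0, 1.0         # phase (iii): explanation-only refinement

def training_loss(prediction_loss, explanation_loss, step, total_steps):
    w_pred, w_expl = tscl_loss_weights(step, total_steps)
    return w_pred * prediction_loss + w_expl * explanation_loss

print(tscl_loss_weights(step=500, total_steps=1000))  # halfway through: (0.5, 0.5)
```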
War rehearsals have become increasingly important in national security due to the growing complexity of international affairs. However, traditional rehearsal methods, such as military chess simulations, are inefficient and inflexible, with particularly pronounced limitations in command and decision-making. The overwhelming volume of information and high decision complexity hinder the realization of autonomous and agile command and control. To address this challenge, an intelligent warfare simulation framework named Command-Agent is proposed, which deeply integrates large language models (LLMs) with digital twin battlefields. By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion, the natural language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act (OODA) feedback loop. Within the Command-Agent framework, a multi-model collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs. By combining specialized models such as DeepSeek-R1 and MCTool, the limitations of single-model capabilities are overcome. MCTool is a lightweight execution model fine-tuned for military Function Calling tasks. The framework also introduces a Vector Knowledge Base to mitigate hallucinations commonly exhibited by LLMs. Experimental results demonstrate that Command-Agent not only enables natural language-driven simulation and control but also deeply understands commander intent. Leveraging the multi-model collaborative architecture, in red-blue UAV confrontations involving 2 to 8 UAVs, the integrated score improves by an average of 41.8% compared to the single-agent system (MCTool), accompanied by a 161.8% improvement in the battle loss ratio. Furthermore, compared with multi-agent systems lacking the knowledge base, the inclusion of the Vector Knowledge Base further improves overall performance by 16.8%. In comparison with the general model (Qwen2.5-7B), the fine-tuned MCTool leads by 5% in execution efficiency. The proposed Command-Agent therefore introduces a novel perspective on the military command system and offers a feasible solution for intelligent battlefield decision-making.
基金supported by China Undergraduate Innovation Training Program[Grant No.202410699184]Humanities and Social Sciences Research Project funded by the Ministry of Education of China[Grant No.23YJAZH139].
文摘Machine translation of low-resource languages(LRLs)has long been hindered by limited corpora and linguistic complexity.This review summarizes key developments,from traditional methods to recent progress with large language models(LLMs),while highlighting ongoing challenges such as data bottlenecks,biases,fairness,and computational costs.Finally,it discusses future directions,including efficient parameter fine-tuning,multimodal translation,and community-driven corpus construction,providing insights for advancing LRL translation research.
基金Chang Gung University and Chang Gung Memorial Hospital under project number NERPD4Q0021.
文摘Text clustering is an important task because of its vital role in NLP-related tasks.However,existing research on clustering is mainly based on the English language,with limited work on low-resource languages,such as Urdu.Low-resource language text clustering has many drawbacks in the form of limited annotated collections and strong linguistic diversity.Theprimary aim of this paper is twofold:(1)By introducing a clustering dataset namedUNC-2025 comprises 100k Urdu news documents,and(2)a detailed empirical standard of Large Language Model(LLM)improved clusteringmethods for Urdu text.We explicitly evaluate the behavior of the 11multilingual and Urdu-specific embeddings on 3 different clustering algorithms.We carefully evaluated our performance based on a set of internal and external measurements of validity.We discover the best configuration of the mBERT embedding with the HDBSCAN algorithm that attains a new state-of-the-art performance with a high score of external validity of 0.95.This new LLM method has created a new strong standard of Urdu text clustering.Importantly,the results confirm the strength and high scalability of the LLM-generated embeddings towards the ability to generalise the fine,subtle semantics needed to discover topics in low-resource settings and open the door to novel NLP applications in underrepresented languages.
文摘Creating a parallel corpus for machine translation is a challenging and time-consuming task,especially in a linguistically diverse country like the Philippines,with 185 languages.Although a wealth of text is available,annotated data is scarce,particularly for languages like Bikol.Bikol is one of the major languages in the Philippines;however,its underrepresentation in the digital sphere is attributed to the absence of annotated data.This study outlines the development process of BFParCo,a proposed gold standard dataset for the Bikol and Filipino parallel corpus.The corpus underwent refinement through manual phrase alignment,translation,and evaluation.Subsequently,T5 and mT5 transformer models were fine-tuned with the parallel corpus and were evaluated using the BLEU metric.The results showed a notable improvement in Bilingual Evaluation Understudy(BLEU)score after fine-tuning,with an increase of 60.68 in BIK→FIL and 58.93 in FIL→BIK translations.Additionally,human evaluators comprehensively assessed the fine-tuned models'results using Multidimensional Quality Metrics and Scalar Quality Metrics error taxonomies.The fine-tuned models then were made publicly accessible through Hugging Face.This study represents a significant stride in advancing machine translation tools for Bikol and Filipino languages.
文摘The work in this paper is based on primary research on how to obtain informed consent to medical treatment and or procedure among patients;this study was carried out in Papua New Guinea in both urban and rural health settings across customs,cultures,and languages in two provinces,on the basis of qualitative interviews with healthcare professionals including doctors,nurses,other healthcare workers,patients,and traditional healers.We emphasize the views of consent with participants of customs,cultural,and languages regarding informed consent.There are factors between peoples of differing circumstances which can greatly alter how they view consent.Some groups would involve people in the decision-making process that may not traditionally be involved in the decision making of a medical decision.Other groups may dislike certain medical procedures as in Papua New Guinea(PNG).And certain people have different views on what should be disclosed of the patient’s condition.Customs,cultures,and languages are common phenomena which continue to affect the daily lives of many thousands of people.It is unclear in PNG about the characteristics of customs,culture,and language on health care because there is no published information on informed consent and issues that affect the making of informed consent.
基金supported by“Dalian Maritime University Teaching Reform Research Fund 2022 Annual Project”(Fund No.XJG2022-96).
文摘This paper proposes an interdisciplinary talent training model that combines foreign language education with area studies.The model aims to cultivate international ocean affairs professionals with cross-cultural communication skills,in-depth regional and country knowledge,and practical expertise in ocean affairs.Additionally,the paper presents specific training pathways and policy recommendations for implementing this model.
文摘Model evaluation using benchmark datasets is an important method to measure the capability of large language models(LLMs)in specific domains,and it is mainly used to assess the knowledge and reasoning abilities of LLMs.Therefore,in order to better assess the capability of LLMs in the agricultural domain,Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture.The assessment dataset used in Agri-Eval covered seven major disciplines in the agricultural domain:crop science,horticulture,plant protection,animal husbandry,forest science,aquaculture science,and grass science,and contained a total of 2283 questions.Among domestic general-purpose LLMs,DeepSeek R1 performed best with an accuracy rate of 75.49%.In the realm of international general-purpose LLMs,Gemini 2.0 pro exp 0205 standed out as the top performer,achieving an accuracy rate of 74.28%.As an LLMs in agriculture vertical,Shennong V2.0 outperformed all the LLMs in China,and the answer accuracy rate of agricultural knowledge exceeded that of all the existing general-purpose LLMs.The launch of Agri-Eval helped the LLM developers to comprehensively evaluate the model's capability in the field of agriculture through a variety of tasks and tests to promote the development of the LLMs in the field of agriculture.
基金This study is financed by the European Union-NextGenerationEU,through the National Recovery and Resilience Plan of the Republic of Bulgaria,Project No.BG-RRP-2.013-0001.
文摘Covert timing channels(CTC)exploit network resources to establish hidden communication pathways,posing signi cant risks to data security and policy compliance.erefore,detecting such hidden and dangerous threats remains one of the security challenges. is paper proposes LinguTimeX,a new framework that combines natural language processing with arti cial intelligence,along with explainable Arti cial Intelligence(AI)not only to detect CTC but also to provide insights into the decision process.LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely.LinguTimeX demonstrates strong e ectiveness in detecting CTC across multiple languages;namely English,Arabic,and Chinese.Speci cally,the LSTM and RNN models achieved F1 scores of 90%on the English dataset,89%on the Arabic dataset,and 88%on the Chinese dataset,showcasing their superior performance and ability to generalize across multiple languages. is highlights their robustness in detecting CTCs within security systems,regardless of the language or cultural context of the data.In contrast,the DeepForest model produced F1-scores ranging from 86%to 87%across the same datasets,further con rming its e ectiveness in CTC detection.Although other algorithms also showed reasonable accuracy,the LSTM and RNN models consistently outperformed them in multilingual settings,suggesting that deep learning models might be better suited for this particular problem.
文摘As an ordinary Yunnan local,I never imagined becoming so closely connected to the exotic land of Laos.The luckiest event of my life was probably my choice to tick a box on a 2007 college entrance examination application form,indicating my willingness to enrollin a major other than my preference,which led me into the world of the Lao language.
基金supported by the National Key R&D Program of China[2022YFF0902703]the State Administration for Market Regulation Science and Technology Plan Project(2024MK033).
文摘Recommendation systems are key to boosting user engagement,satisfaction,and retention,particularly on media platforms where personalized content is vital.Sequential recommendation systems learn from user-item interactions to predict future items of interest.However,many current methods rely on unique user and item IDs,limiting their ability to represent users and items effectively,especially in zero-shot learning scenarios where training data is scarce.With the rapid development of Large Language Models(LLMs),researchers are exploring their potential to enhance recommendation systems.However,there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems,where items are typically indexed by IDs.Moreover,most research focuses on item representations,neglecting personalized user modeling.To address these issues,we propose a sequential recommendation framework using LLMs,called CIT-Rec,a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations.Specifically,by aligning intuitive image information with text containing semantic features,we can more accurately represent items,improving item representation quality.We focus not only on item representations but also on user representations.To more precisely capture users’personalized preferences,we use traditional sequential recommendation models to train on users’historical interaction data,effectively capturing behavioral patterns.Finally,by combining LLMs and traditional sequential recommendation models,we allow the LLM to understand linguistic semantics while capturing collaborative semantics.Extensive evaluations on real-world datasets show that our model outperforms baseline methods,effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
基金National Key Research and Development Program of China(2024YFC3505400)Capital Clinical Project of Beijing Municipal Science&Technology Commission(Z221100007422092)Capital’s Funds for Health Improvement and Research(2024-1-2231).
文摘Objective To develop a clinical decision and prescription generation system(CDPGS)specifically for diarrhea in traditional Chinese medicine(TCM),utilizing a specialized large language model(LLM),Qwen-TCM-Dia,to standardize diagnostic processes and prescription generation.Methods Two primary datasets were constructed:an evaluation benchmark and a fine-tuning dataset consisting of fundamental diarrhea knowledge,medical records,and chain-ofthought(CoT)reasoning datasets.After an initial evaluation of 16 open-source LLMs across inference time,accuracy,and output quality,Qwen2.5 was selected as the base model due to its superior overall performance.We then employed a two-stage low-rank adaptation(LoRA)fine-tuning strategy,integrating continued pre-training on domain-specific knowledge with instruction fine-tuning using CoT-enriched medical records.This approach was designed to embed the clinical logic(symptoms→pathogenesis→therapeutic principles→prescriptions)into the model’s reasoning capabilities.The resulting fine-tuned model,specialized for TCM diarrhea,was designated as Qwen-TCM-Dia.Model performance was evaluated for disease diagnosis and syndrome type differentiation using accuracy,precision,recall,and F1-score.Furthermore,the quality of the generated prescriptions was compared with that of established open-source TCM LLMs.Results Qwen-TCM-Dia achieved peak performance compared to both the base Qwen2.5 model and five other open-source TCM LLMs.It achieved 97.05%accuracy and 91.48%F1-score in disease diagnosis,and 74.54%accuracy and 74.21%F1-score in syndrome type differentiation.Compared with existing open-source TCM LLMs(BianCang,HuangDi,LingDan,TCMLLM-PR,and ZhongJing),Qwen-TCM-Dia exhibited higher fidelity in reconstructing the“symptoms→pathogenesis→therapeutic principles→prescriptions”logic chain.It provided complete prescriptions,whereas other models often omitted dosages or generated mismatched prescriptions.Conclusion By integrating continued pre-training,CoT reasoning,and a two-stage fine-tuning strategy,this study establishes a CDPGS for diarrhea in TCM.The results demonstrate the synergistic effect of strengthening domain representation through pre-training and activating logical reasoning via CoT.This research not only provides critical technical support for the standardized diagnosis and treatment of diarrhea but also offers a scalable paradigm for the digital inheritance of expert TCM experience and the intelligent transformation of TCM.
基金the research project LaTe4PoliticES(PID2022-138099OB-I00)funded by MCIN/AEI/10.13039/501100011033 and the European Fund for Regional Development(ERDF)-a way to make Europe.Tomás Bernal-Beltrán is supported by University of Murcia through the predoctoral programme.
文摘The malicious dissemination of hate speech via compromised accounts,automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern.Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources.In this paper,we compare two predominant AI-based approaches for the forensic detection of malicious hate speech:(1)finetuning encoder-only models that have been trained in Spanish and(2)In-Context Learning techniques(Zero-and Few-Shot Learning)with large-scale language models.Our approach goes beyond binary classification,proposing a comprehensive,multidimensional evaluation that labels each text by:(1)type of speech,(2)recipient,(3)level of intensity(ordinal)and(4)targeted group(multi-label).Performance is evaluated using an annotated Spanish corpus,standard metrics such as precision,recall and F1-score and stability-oriented metrics to evaluate the stability of the transition from zero-shot to few-shot prompting(Zero-to-Few Shot Retention and Zero-to-Few Shot Gain)are applied.The results indicate that fine-tuned encoder-only models(notably MarIA and BETO variants)consistently deliver the strongest and most reliable performance:in our experiments their macro F1-scores lie roughly in the range of approximately 46%–66%depending on the task.Zero-shot approaches are much less stable and typically yield substantially lower performance(observed F1-scores range approximately 0%–39%),often producing invalid outputs in practice.Few-shot prompting(e.g.,Qwen 38B,Mistral 7B)generally improves stability and recall relative to pure zero-shot,bringing F1-scores into a moderate range of approximately 20%–51%but still falling short of fully fine-tuned models.These findings highlight the importance of supervised adaptation and discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
文摘Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
基金National Key Research and Development Program of China,Grant/Award Number:2024YFF0507404Special Clinical Business Fund for High-Level Hospitals of China-Japan Friendship Hospital,Grant/Award Number:2024-NHLHCRF-TS-01。
文摘Background:Large language models(LLMs)have shown considerable promise in supporting clinical decision-making.However,their adoption and evaluation in dermatology remains limited.This study aimed to explore the preferences of Chinese dermatologists regarding LLM-generated responses in clinical psoriasis scenarios and to assess how they prioritize key quality dimensions,including accuracy,traceability,and logicality.Methods:A cross-sectional,web-based survey was conducted between December 25,2024,and January 22,2025,following the Checklist for Reporting Results of Internet E-Surveys guidelines.A total of 1247 valid responses were collected from practicing dermatologists across 33 of China's provincial-level administrative divisions.Participants evaluated responses to five categories of clinical questions(etiology,clinical presentation,differential diagnosis,treatment,and case study)generated by five LLMs:ChatGPT-4o,Kimi.ai,Doubao,ZuoYiGPT,and Lingyi-agent.Statistical associations between participant characteristics and model preferences were examined using chi-square tests.Results:ChatGPT-4o(Model 1)emerged as the most preferred model across all clinical tasks,consistently receiving the highest number of votes in case study(n=740),clinical presentation(n=666),differential diagnosis(n=707),etiology(n=602),and treatment(n=656).Significant variation in model preference by professional title was observed only for the differential diagnosis task(χ^(2)=21.13,df=12,p=0.0485),while no significant differences were found across hospital tiers(p>0.05).In terms of evaluation dimensions,accuracy was most frequently rated as“very important”(n=635).A significant association existed between hospital tier and the most valued dimension(χ^(2)=27.667,df=9,p=0.0011),with dermatologists in primary hospitals prioritizing traceability more than their peers in higher-tier hospitals.No significant associations were found across professional titles(p=0.127).Conclusions:Chinese dermatologists suggest a strong preference for ChatGPT-4o over domestic LLMs in psoriasis-related clinical tasks.While accuracy remains the primary criterion,traceability and logicality are also critical,particularly for clinicians in lower-tier hospitals.These findings suggest that future clinical LLMs should prioritize not only content accuracy but also source transparency and structural clarity to meet the diverse needs of different clinical settings.
基金“A Study on the Value and Path of Integrating Excellent Traditional Chinese Culture Into Intercultural Communication Courses”(ZD2024)a project by the Beijing Higher Education Association,as well as“A Study on the Path of Empowering the Integration of Excellent Traditional Chinese Culture Into Intercultural Communication Courses With Generative AI”(2024),an institutional project of Beijing International Studies University.
文摘This paper undertakes a systematic combing of the development of research on integrating Chinese culture into foreign language education in China from the 1980s to 2025,dividing it into three stages:cultural attachment,cultural compensation,and cultural symbiosis,and reveals the logical shift of the research from the dominance of target language culture to the construction of the subjectivity of Chinese culture.Through quantitative and qualitative analysis of 435 CSSCI papers,three core themes are extracted:what to integrate,why to integrate,and how to integrate.This paper critically analyzes three pairs of contradictions:the imbalance between instrumentality and humanism,the separation of national narrative and individual expression,and the disconnection between traditional inheritance and modern transformation.It is proposed that future research should reconstruct the educational logic based on the Chinese context,integrate the national and individual dimensions,and build a dialogue mechanism between tradition and modernity,so as to provide theoretical and practical reference for the construction of a foreign language education system with Chinese characteristics.
文摘Background:Assess ChatGPT and Bard's effectiveness in the initial identification of articles for Otolaryngology—Head and Neck Surgery systematic literature reviews.Methods:Three PRISMA-based systematic reviews(Jabbour et al.2017,Wong et al.2018,and Wu et al.2021)were replicated using ChatGPTv3.5 and Bard.Outputs(author,title,publication year,and journal)were compared to the original references and cross-referenced with medical databases for authenticity and recall.Results:Several themes emerged when comparing Bard and ChatGPT across the three reviews.Bard generated more outputs and had greater recall in Wong et al.'s review,with a broader date range in Jabbour et al.'s review.In Wu et al.'s review,ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2.Conclusion:Large language models(LLMs)failed to fully replicate peer-reviewed methodologies,producing outputs with inaccuracies but identifying relevant,especially recent,articles missed by the references.While human-led PRISMA-based reviews remain the gold standard,refining LLMs for literature reviews shows potential.
基金Guangzhou Science and Technology Program,Grant/Award Numbers:2025B03J0110,2024A03J1074,2024A03J0927。
文摘Large language models(LLMs)show considerable potential to revolutionize healthcare through their performance across diverse clinical applications.Given the inherent constraints of LLMs and the critical nature of medical practice,a rigorous and systematic evaluation of their medical competence is imperative.This study presents a comprehensive review of the established methodologies and benchmarks for evaluating the medical competence of LLMs,encompassing a thorough analysis of current assessment practices across medical knowledge,clinical practice competence,and ethical-safety considerations.By integrating clinician competency assessment frameworks into LLMs evaluation,we propose a structured tri-dimensional framework that systematically organizes existing evaluation approaches according to medical theoretical knowledge,clinical practice ability,and ethical-safety considerations.Furthermore,this research provides critical insights into future developmental trajectories while establishing foundational frameworks and standardization protocols for the integration of LLMs into medical practice.
Funding: This work was funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
Abstract: This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of the framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis reveals a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
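The TODIM step amounts to a pairwise dominance computation over expert-weighted criteria. Below is a minimal sketch of the classic TODIM formulation, not the authors' implementation; the moderation actions and criteria in the usage example are invented purely for illustration.

```python
import numpy as np

def todim(scores: np.ndarray, weights: np.ndarray, theta: float = 1.0) -> np.ndarray:
    """Rank alternatives (rows) against criteria (columns) with the classic TODIM method.

    scores  : (m, n) matrix of normalized criterion scores in [0, 1].
    weights : (n,) criterion weights summing to 1 (e.g. elicited from expert judgments).
    theta   : attenuation factor applied to losses.
    Returns the global dominance value of each alternative, scaled to [0, 1].
    """
    m, n = scores.shape
    w_rel = weights / weights.max()          # weights relative to the reference criterion
    w_sum = w_rel.sum()
    delta = np.zeros((m, m))                 # pairwise dominance of alternative i over j
    for i in range(m):
        for j in range(m):
            for c in range(n):
                diff = scores[i, c] - scores[j, c]
                if diff > 0:                 # gain
                    delta[i, j] += np.sqrt(w_rel[c] * diff / w_sum)
                elif diff < 0:               # loss, attenuated by theta
                    delta[i, j] -= np.sqrt(w_sum * (-diff) / w_rel[c]) / theta
    totals = delta.sum(axis=1)
    return (totals - totals.min()) / (totals.max() - totals.min())

# Illustrative usage: 3 moderation actions scored on 2 criteria (harm reduction, user impact)
actions = np.array([[0.9, 0.3],   # remove content
                    [0.6, 0.6],   # warn author
                    [0.8, 0.1]])  # suspend account
print(todim(actions, np.array([0.6, 0.4])))
```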
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7 to +2.6 points in accuracy and 0.8 to 1.2 points in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
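To make the three-phase schedule concrete, here is a minimal sketch of how prediction and explanation losses could be reweighted over training. The phase boundaries and weights are assumptions chosen for illustration, not the paper's reported settings.

```python
def tscl_phase_weights(step: int, total_steps: int,
                       boundaries: tuple[float, float] = (0.3, 0.8)) -> tuple[float, float]:
    """Return (prediction_weight, explanation_weight) for the current training step.

    Phase 1: prediction-only; phase 2: joint; phase 3: explanation-only.
    The boundary fractions are illustrative, not a tuned schedule.
    """
    progress = step / max(total_steps, 1)
    if progress < boundaries[0]:
        return 1.0, 0.0          # establish stable task representations
    if progress < boundaries[1]:
        return 0.5, 0.5          # align predictions with rationale generation
    return 0.0, 1.0              # refine rationale quality only

def combined_loss(pred_loss: float, expl_loss: float, step: int, total_steps: int) -> float:
    """Weighted sum of the two objectives under the curriculum schedule."""
    w_pred, w_expl = tscl_phase_weights(step, total_steps)
    return w_pred * pred_loss + w_expl * expl_loss

# Illustrative usage: show the schedule at three points in a 1000-step run
for step in (0, 500, 950):
    print(step, tscl_phase_weights(step, 1000))
```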
Abstract: War rehearsals have become increasingly important in national security due to the growing complexity of international affairs. However, traditional rehearsal methods, such as military chess simulations, are inefficient and inflexible, with particularly pronounced limitations in command and decision-making. The overwhelming volume of information and high decision complexity hinder the realization of autonomous and agile command and control. To address this challenge, an intelligent warfare simulation framework named Command-Agent is proposed, which deeply integrates large language models (LLMs) with digital twin battlefields. By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion, the natural language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act (OODA) feedback loop. Within the Command-Agent framework, a multi-model collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs. By combining specialized models such as DeepSeek-R1 and MCTool, the limitations of single-model capabilities are overcome. MCTool is a lightweight execution model fine-tuned for military Function Calling tasks. The framework also introduces a Vector Knowledge Base to mitigate the hallucinations commonly exhibited by LLMs. Experimental results demonstrate that Command-Agent not only enables natural language-driven simulation and control but also deeply understands commander intent. Leveraging the multi-model collaborative architecture, in red-blue UAV confrontations involving 2 to 8 UAVs, the integrated score improves by an average of 41.8% over the single-agent system (MCTool), accompanied by a 161.8% improvement in the battle loss ratio. Furthermore, compared with multi-agent systems lacking the knowledge base, the inclusion of the Vector Knowledge Base further improves overall performance by 16.8%. Compared with the general model (Qwen2.5-7B), the fine-tuned MCTool leads by 5% in execution efficiency. The proposed Command-Agent therefore introduces a novel perspective on the military command system and offers a feasible solution for intelligent battlefield decision-making.
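The abstract describes an OODA loop in which a reasoning model generates decisions and a lightweight execution model translates them into function calls. The skeleton below sketches only that separation of roles; all names, signatures, and the stub behaviors are hypothetical assumptions, since the Command-Agent internals are not described beyond the abstract.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observation:
    battlefield_state: dict   # fused multi-source simulation data (hypothetical schema)

def ooda_step(observe: Callable[[], Observation],
              decide: Callable[[str], str],          # reasoning LLM: state summary -> command intent
              execute: Callable[[str], list[dict]],  # lightweight execution model: intent -> function calls
              act: Callable[[list[dict]], None]) -> None:
    """One Observe-Orient-Decide-Act iteration with decision and execution decoupled."""
    obs = observe()                                  # Observe: pull the digital-twin state
    summary = f"units={obs.battlefield_state}"       # Orient: condense state for the reasoning model
    intent = decide(summary)                         # Decide: produce natural-language command intent
    calls = execute(intent)                          # translate intent into structured function calls
    act(calls)                                       # Act: apply the commands in the simulation

# Minimal stub usage; every callable here is a placeholder, not a real model
ooda_step(
    observe=lambda: Observation({"red_uav": 4, "blue_uav": 4}),
    decide=lambda s: f"intercept nearest hostile UAV given {s}",
    execute=lambda intent: [{"name": "set_waypoint", "args": {"target": "hostile_uav_1"}}],
    act=lambda calls: print("executing", calls),
)
```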