Journal Articles
111,660 articles found
The Current State, Reflections, and Prospects of Second Language Writing Research: An Analysis of the Past Decade of Articles in the Journal of Second Language Writing
1
Authors: 孙云帆, 孙玲. 《西部学刊》, 2025, Issue 5, pp. 164-168 (5 pages).
Second language (L2) writing is an important component of second language acquisition research. Using CiteSpace software, this study conducted a visualization analysis of 231 empirical research articles published in the Journal of Second Language Writing over the past decade. The findings show that L2 writing research exhibits an overall fluctuating upward trend, with a relatively stable research scale and gradually increasing attention; that no clear collaboration networks of core authors or institutions have yet formed in the field; and that research topics focus mainly on the diversification of L2 writing pedagogy, the multiple foci of L2 writing feedback, the scientific development of L2 writing assessment and testing, and the multidimensional influence of individual learner differences. On this basis, the study proposes that future development of the field should strengthen collaboration among scholars and institutions; attend to the cognitive characteristics and affective factors of individual learners' writing processes, with particular emphasis on research into adolescents' L2 learning; and expand longitudinal L2 writing research to deepen the field.
Keywords: second language writing research; Journal of Second Language Writing; visualization analysis; current status; reflections and prospects
Status of anxiety and depression among chronic heart failure patients: Factors influencing poor fluid restriction adherence (Cited by 1)
2
Authors: Yun-Tao Luo, Ai-Zhi Ou, Di-Sha Lin, Hong Li, Fang Zhou, Yue-Mei Liu, Xin-Ping Ye, Xu Deng. World Journal of Psychiatry, 2025, Issue 6, pp. 128-138 (11 pages).
BACKGROUND: Anxiety and depression are prevalent among patients with chronic heart failure (CHF) and can adversely affect treatment adherence and clinical outcomes. Poor fluid restriction adherence is a widespread challenge in the management of CHF. To effectively manage disease progression and alleviate symptoms, it is crucial to identify key influencing factors so that targeted interventions can be implemented. AIM: To investigate the status of anxiety and depression among patients with CHF and determine the factors contributing to poor fluid restriction adherence. METHODS: Three hundred CHF patients seeking medical treatment at The First Hospital of Hunan University of Traditional Chinese Medicine between June 2021 and June 2023 were included in the study. Questionnaires, including the Psychosomatic Symptom Scale, Self-Rating Anxiety Scale, Self-Rating Depression Scale, and Fluid Restriction Adherence Questionnaire, were administered to patients. Based on their anxiety and depression scores, patients were categorized into anxiety/depression and non-anxiety/depression groups, as well as fluid restriction adherence and non-adherence groups. General patient data were collected, and univariate and logistic regression analyses were conducted to determine the occurrence of depression and anxiety. Logistic regression analysis was used to identify independent factors influencing fluid restriction adherence. RESULTS: Statistically significant differences in age, New York Heart Association (NYHA) grading, marital status, educational attainment, and family support were observed between depressed and non-depressed CHF patients (P<0.05). Age, NYHA grading, marital status, educational attainment, and family support were identified as factors influencing the development of depression. The anxiety and non-anxiety groups differed statistically in gender, age, NYHA grading, smoking history, alcohol consumption history, monthly income, educational attainment, and family support (P<0.05). Gender, smoking, alcohol consumption, monthly income, and educational attainment affected anxiety in these patients. The fluid restriction adherence rate was 28.0%, and thirst sensation, anxiety, and depression were identified as independent influencing factors. CONCLUSION: CHF patients are susceptible to anxiety and depression, with multiple associated influencing factors. Moreover, anxiety and depression are independent factors that can influence fluid restriction adherence in these patients.
Keywords: chronic heart failure; anxiety; depression; fluid restriction adherence
When Software Security Meets Large Language Models: A Survey (Cited by 2)
3
Authors: Xiaogang Zhu, Wei Zhou, Qing-Long Han, Wanlun Ma, Sheng Wen, Yang Xiang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 317-334 (18 pages).
Software security poses substantial risks to our society because software has become part of our lives. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which benefit significantly from advances in deep learning technologies. Building on the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, including directions for the existing uses of LLMs and extensions from conventional deep learning research.
Keywords: large language models (LLMs); software analysis; software security; software testing
The Security of Using Large Language Models: A Survey With Emphasis on ChatGPT (Cited by 2)
4
Authors: Wei Zhou, Xiaogang Zhu, Qing-Long Han, Lin Li, Xiao Chen, Sheng Wen, Yang Xiang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 1, pp. 1-26 (26 pages).
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. Through this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.
Keywords: artificial intelligence (AI); ChatGPT; large language models (LLMs); security
Evaluating research quality with Large Language Models: An analysis of ChatGPT’s effectiveness with different settings and inputs (Cited by 1)
5
Author: Mike Thelwall. Journal of Data and Information Science, 2025, Issue 1, pp. 7-25 (19 pages).
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into the human scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
Keywords: ChatGPT; large language models; LLMs; scientometrics; research assessment
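The calibration step this abstract describes, averaging repeated ChatGPT quality scores per paper and then linearly regressing them onto human scores, can be sketched in a few lines. All numbers below are synthetic placeholders, not data from the study; only the procedure follows the abstract.

```python
# Sketch: average repeated model scores per paper, then fit a linear
# regression mapping the model scores onto the human rating scale.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Hypothetical per-paper inputs: mean model score over repeated iterations
# vs. the human quality score for the same paper (both invented here).
model_means = [2.1, 2.8, 3.4, 3.9, 4.6]
human_scores = [2.0, 2.5, 3.0, 3.5, 4.0]

a, b = fit_linear(model_means, human_scores)

def predict(model_score):
    # Convert a raw model score to the human scale.
    return a * model_score + b
```

Under this toy fit, `predict` maps a new paper's mean model score onto the human scale; the abstract reports that such a conversion is 31% more accurate than guessing.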
Assessing the possibility of using large language models in ocular surface diseases (Cited by 1)
6
Authors: Qian Ling, Zi-Song Xu, Yan-Mei Zeng, Qi Hong, Xian-Zhe Qian, Jin-Yu Hu, Chong-Gang Pei, Hong Wei, Jie Zou, Cheng Chen, Xiao-Yu Wang, Xu Chen, Zhen-Kai Wu, Yi Shao. International Journal of Ophthalmology (English edition), 2025, Issue 1, pp. 1-8 (8 pages).
AIM: To assess the possibility of using different large language models (LLMs) in ocular surface diseases by selecting five LLMs and testing their accuracy in answering specialized questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM 2, and SenseNova. METHODS: A group of experienced ophthalmology professors developed a 100-item single-choice examination on ocular surface diseases designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions. The exam includes questions on the following topics: keratitis (20 questions); keratoconus, keratomalacia, corneal dystrophy, corneal degeneration, erosive corneal ulcers, and corneal lesions associated with systemic diseases (20 questions); conjunctivitis (20 questions); trachoma, pterygium, and conjunctival tumor diseases (20 questions); and dry eye disease (20 questions). The total score of each LLM was then calculated, and their mean scores, mean correlations, variances, and confidence were compared. RESULTS: GPT-4 exhibited the highest performance among the LLMs. Comparing the average scores of the LLM group with the four human groups (chief physicians, attending physicians, regular trainees, and graduate students), all of the LLMs except ChatGPT-4 scored lower than the graduate student group, which had the lowest score among the human groups. Both ChatGPT-4 and PaLM 2 were more likely to give exact and correct answers, rarely giving an incorrect one. ChatGPT-4 showed higher credibility when answering questions, with a success rate of 59%, but gave a wrong answer 28% of the time. CONCLUSION: The GPT-4 model exhibits excellent performance in both answer relevance and confidence. PaLM 2 shows a positive correlation (up to 0.8) in answer accuracy during the exam. In terms of answer confidence, PaLM 2 is second only to GPT-4 and surpasses Claude 2, SenseNova, and GPT-3.5. Although ocular surface disease is a highly specialized discipline, GPT-4 still exhibits superior performance, suggesting that its potential for application in this field is enormous, perhaps as a valuable resource for medical students and clinicians in the future.
Keywords: ChatGPT-4.0; ChatGPT-3.5; large language models; ocular surface diseases
Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
7
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 307-325 (19 pages).
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions that optimize memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on their specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: large-scale language model; parameter-efficient fine-tuning; parameter quantization; key variables; trainable parameters; experimental analysis
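The adapter-rank trade-off this paper analyzes comes from LoRA-style fine-tuning, in which a frozen weight matrix is augmented with a trainable low-rank product so that only a small number of parameters are updated. A minimal pure-Python sketch of that idea, with toy dimensions rather than the paper's setup:

```python
# LoRA-style adapter sketch (illustrative, not the paper's code). A frozen
# weight W gains a trainable product B*A of rank r, so only
# (d_out + d_in) * r parameters are trained instead of d_out * d_in.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

d_in, d_out, r, alpha = 6, 6, 2, 4

W = [[0.1 * (i + j) for j in range(d_in)] for i in range(d_out)]  # frozen weight
A = [[0.01] * d_in for _ in range(r)]        # trainable down-projection
B = [[0.0] * r for _ in range(d_out)]        # trainable up-projection, zero-init

def forward(x):
    base = matvec(W, x)
    adapter = matvec(B, matvec(A, x))        # a no-op while B is all zeros
    return [b + (alpha / r) * a for b, a in zip(base, adapter)]

x = [1.0] * d_in
full_params = d_out * d_in                   # 36 in this toy example
adapter_params = r * d_in + d_out * r        # 24 here; far smaller at real sizes
```

With realistic dimensions (say 4096x4096 and r around 16) the adapter is a tiny fraction of the layer, which is what makes varying the rank per layer type, as the paper studies, cheap to explore.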
Cognitive Biases in Artificial Intelligence: Susceptibility of a Large Language Model to Framing Effect and Confirmation Bias
8
Authors: Li Hao, Wang You, Yang Xueling. 《心理科学》 (Journal of Psychological Science; PKU Core journal), 2025, Issue 4, pp. 892-906 (15 pages).
The rapid advancement of Artificial Intelligence (AI) and Large Language Models (LLMs) has led to their increasing integration into various domains, from text generation and translation to question-answering. However, a critical question remains: do these sophisticated models, much like humans, exhibit susceptibility to cognitive biases? Understanding the presence and nature of such biases in AI is paramount for assessing their reliability, enhancing their performance, and predicting their societal impact. This research investigates the susceptibility of Google’s Gemini 1.5 Pro and DeepSeek, two prominent LLMs, to framing effects and confirmation bias. The study designed a series of experimental trials, systematically manipulating information proportions and presentation orders to evaluate these biases. In the framing effect experiment, a genetic testing decision-making scenario was constructed. The proportion of positive and negative information (e.g., 20%, 50%, or 80% positive) and their presentation order were varied, and the models’ inclination towards undergoing genetic testing was recorded. For the confirmation bias experiment, two reports about “RoboTaxi” autonomous vehicles, one positive and one negative, were provided. The proportion of erroneous information within these reports (10%, 30%, and 50%) and their presentation order were systematically altered, and the models’ support for each report was assessed. The findings demonstrate that both Gemini 1.5 Pro and DeepSeek are susceptible to framing effects. In the genetic testing scenario, their decision-making was primarily influenced by the proportion of positive and negative information presented. When the proportion of positive information was higher, both models showed a greater inclination to recommend or proceed with genetic testing. Conversely, a higher proportion of negative information led to greater caution or a tendency not to recommend the testing. Importantly, the order in which this information was presented did not significantly influence their decisions in the framing effect scenarios. Regarding confirmation bias, the two models exhibited distinct behaviors. Gemini 1.5 Pro did not show an overall preference for either positive or negative reports. However, its judgments were significantly influenced by the order of information presentation, demonstrating a “recency effect”: it tended to support the report presented later. The proportion of erroneous information within the reports had no significant impact on Gemini 1.5 Pro’s decisions. In contrast, DeepSeek exhibited an overall confirmation bias, showing a clear preference for positive reports. Similar to Gemini 1.5 Pro, DeepSeek’s decisions were also significantly affected by the order of information presentation, while the proportion of misinformation had no significant effect. These results reveal human-like cognitive vulnerabilities in advanced LLMs, highlighting critical challenges to their reliability and objectivity in decision-making processes. Gemini 1.5 Pro’s sensitivity to presentation order and DeepSeek’s general preference for positive information, coupled with its sensitivity to order, underscore the need for careful evaluation of potential cognitive biases during the development and application of AI. The study suggests that effective measures are necessary to mitigate these biases and prevent potential negative societal impacts. Future research should include a broader range of models for comparative analysis and explore more complex interactive scenarios to further understand and address these phenomena. The findings contribute to understanding the limitations and capabilities of current AI systems, guiding their responsible development and anticipating their potential societal implications.
Keywords: artificial intelligence; large language models; cognitive bias; confirmation bias; framing effect
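The framing-effect design described above, crossing positive-information proportions with presentation orders, can be sketched as a stimulus generator. The statement texts and condition encoding are illustrative assumptions, not the study's materials:

```python
import itertools
import random

# Build stimulus sets with a controlled proportion of positive vs. negative
# statements and a controlled presentation order, mirroring the trial
# structure the abstract describes.

def build_trial(n_items, positive_ratio, order, seed=0):
    n_pos = round(n_items * positive_ratio)
    items = [("pos", f"positive statement {i}") for i in range(n_pos)]
    items += [("neg", f"negative statement {i}") for i in range(n_items - n_pos)]
    if order == "positive_first":
        items.sort(key=lambda item: item[0] != "pos")  # positives sort first
    elif order == "negative_first":
        items.sort(key=lambda item: item[0] == "pos")  # negatives sort first
    else:  # interleave randomly but reproducibly
        random.Random(seed).shuffle(items)
    return items

# The abstract's proportions (20%, 50%, 80% positive) crossed with two orders.
conditions = list(itertools.product([0.2, 0.5, 0.8],
                                    ["positive_first", "negative_first"]))
trial = build_trial(10, 0.8, "positive_first")
```

Each trial's statements would then be assembled into a prompt and the model's stated inclination recorded per condition; that querying step is omitted here.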
Exploration of augmented prompting methods for information extraction using large language models
9
Authors: Yishuo Fu, Benfeng Xu, Mingxuan Du, Quan Wang, Zhendong Mao. 《中国科学技术大学学报》 (Journal of University of Science and Technology of China; PKU Core journal), 2025, Issue 7, pp. 15-24, 14, I0001 (12 pages).
Information extraction (IE) aims to automatically identify and extract information of specific interest from raw texts. Despite the abundance of solutions based on fine-tuning pretrained language models, IE in few-shot and zero-shot scenarios remains highly challenging due to the scarcity of training data. Large language models (LLMs), on the other hand, can generalize well to unseen tasks with few-shot demonstrations or even zero-shot instructions, and have demonstrated impressive ability on a wide range of natural language understanding and generation tasks. Nevertheless, it is unclear whether such effectiveness can be replicated in IE, where the target tasks involve specialized schemas and quite abstract entity or relation concepts. In this paper, we first examine the validity of LLMs in executing IE tasks with an established prompting strategy and further propose multiple types of augmented prompting methods, including the structured fundamental prompt (SFP), the structured interactive reasoning prompt (SIRP), and the voting-enabled structured interactive reasoning prompt (VESIRP). The experimental results demonstrate that while direct prompting yields inferior performance, the proposed augmented prompting methods significantly improve extraction accuracy, achieving comparable or even better performance (e.g., zero-shot FewNERD, FewNERD-INTRA) than state-of-the-art methods that require large-scale training samples. This study represents a systematic exploration of employing instruction-following LLMs for IE. It not only establishes a performance benchmark for this novel paradigm but, more importantly, validates a practical technical pathway through the proposed prompt enhancement methods, offering a viable solution for efficient IE in low-resource settings.
Keywords: prompt learning; natural language processing; few-shot information extraction; zero-shot information extraction
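The voting idea suggested by the paper's VESIRP prompt can be illustrated with a simple majority filter over repeated extraction runs. The run outputs below are hard-coded stand-ins for real LLM responses, and this implementation is an assumption about the voting step, not the paper's code:

```python
from collections import Counter

# Run the extraction prompt several times and keep only predictions that a
# majority of runs agree on, filtering out unstable one-off extractions.

def majority_vote(runs, threshold=0.5):
    """Keep (mention, type) predictions appearing in more than `threshold` of runs."""
    counts = Counter(pred for run in runs for pred in set(run))
    needed = len(runs) * threshold
    return {pred for pred, count in counts.items() if count > needed}

# Hypothetical outputs of three prompting runs over the same sentence.
runs = [
    {("Paris", "LOC"), ("Curie", "PER")},
    {("Paris", "LOC"), ("Curie", "PER"), ("radium", "MISC")},
    {("Paris", "LOC")},
]
consensus = majority_vote(runs)  # predictions supported by at least 2 of 3 runs
```

Raising `threshold` trades recall for precision: a stricter vote keeps only extractions that nearly every run reproduces.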
Robust Detection and Analysis of Smart Contract Vulnerabilities with Large Language Model Agents
10
Authors: Nishank P. Kuppa, Vijay K. Madisetti. Journal of Information Security, 2025, Issue 1, pp. 197-226 (30 pages).
Smart contracts on the Ethereum blockchain continue to revolutionize decentralized applications (dApps) by allowing for self-executing agreements. However, bad actors have continuously found ways to exploit smart contracts for personal financial gain, which undermines the integrity of the Ethereum blockchain. This paper proposes a computer program called SADA (Static and Dynamic Analyzer), a novel approach to smart contract vulnerability detection that uses multiple Large Language Model (LLM) agents to analyze and flag suspicious Solidity code in Ethereum smart contracts. SADA not only improves upon existing vulnerability detection methods but also paves the way for more secure smart contract development practices in the rapidly evolving blockchain ecosystem.
Keywords: blockchain; Ethereum; smart contracts; security; decentralized applications; Web3; cryptocurrency; large language models
The Predictive Roles of Foreign Language Anxiety, Enjoyment, and Boredom on Chinese Students’ English Achievements
11
Authors: LIN Huachun, WANG Chen. Sino-US English Teaching, 2025, Issue 5, pp. 163-173 (11 pages).
This study examines the predictive roles of foreign language classroom anxiety (FLCA), foreign language enjoyment (FLE), and foreign language boredom (FLB) in English achievement among Chinese senior high school students. Despite extensive research on anxiety in language learning, less attention has been given to boredom, and the combined effects of these three emotions on English achievement remain under-explored, particularly among high school students in China. To address these gaps, a sample of 142 students from Guangzhou was surveyed using questionnaires to assess their emotional experiences and English achievement. The research found that FLE exhibited a positive correlation with academic performance, while FLCA and FLB showed negative associations. Notably, FLE was the most significant predictor of English achievement, followed by FLCA and FLB. Gender differences were observed: male students reported significantly higher levels of environmental enjoyment, while female students experienced significantly greater communication anxiety. On this basis, this paper offers suggestions on how to enhance senior high school students’ FLE while mitigating FLCA and FLB, thereby promoting more effective and sustained English learning.
Keywords: foreign language anxiety; foreign language enjoyment; foreign language boredom; English achievement; predictive roles
Adapting High-Level Language Programming (C Language) Education in the Era of Large Language Models
12
Authors: Baokai Zu, Hongyuan Wang, Hongli Chen, Yafang Li. Journal of Contemporary Educational Research, 2025, Issue 5, pp. 264-269 (6 pages).
With the widespread application of large language models (LLMs) in natural language processing and code generation, traditional High-Level Language Programming courses are facing unprecedented challenges and opportunities. As a core programming language for computer science majors, C remains irreplaceable due to its foundational nature and engineering adaptability. Based on the rapid development of large model technologies, this paper proposes a systematic reform design for C language teaching, focusing on teaching objectives, content structure, teaching methods, and evaluation systems. The article suggests a teaching framework centered on “human-computer collaborative programming,” integrating prompt training, AI-assisted debugging, and code generation analysis, aiming to enhance students’ problem modeling ability, programming expression skills, and AI collaboration literacy.
Keywords: large language models (LLMs); high-level language programming; C language; human-computer collaborative programming
Research on the Impact of Language Attitude on Language Variation
13
Authors: LI Jin-feng, ZHOU Yu-liang. Journal of Literature and Art Studies, 2025, Issue 3, pp. 243-247 (5 pages).
This paper empirically studies the impact of attitudes toward Mandarin and Cantonese on Cantonese variation, and finds that Mandarin-related values have a significant positive impact on Cantonese variation, while Cantonese-related emotions and values have a significant negative impact on it. The impact of Cantonese emotional attitudes on language variation is generally stronger than that of Cantonese value attitudes. Dialect-protection policies and language-popularization policies should be implemented in the same direction to maintain language diversity and promote the harmonious development of the language ecology.
Keywords: language attitude; emotional attitude; value attitude; language variation
Improving Students’ Language Proficiency Through Drama in the EFL Classroom
14
Author: LI Pei-qi. Journal of Literature and Art Studies, 2025, Issue 8, pp. 647-650 (4 pages).
This paper examines the application of drama-based pedagogy in EFL classrooms, demonstrating how script analysis, role-playing, and improvisation can effectively enhance students’ integrated language skills. The study highlights the unique advantages of dramatic texts for pronunciation training, subtext interpretation, and cultural understanding, while providing practical teaching methods including conflict scene selection and stage direction adaptation. Findings indicate that drama techniques reduce learning anxiety, boost motivation, and create authentic language contexts, serving as an effective bridge between literary study and language practice.
Keywords: drama-based pedagogy; EFL teaching; language skills enhancement; role-playing activities; authentic language context
The Development of Large Language Models in the Financial Field
15
Authors: Yanling Liu, Yun Li. Proceedings of Business and Economic Studies, 2025, Issue 2, pp. 49-54 (6 pages).
With the rapid development of natural language processing (NLP) and machine learning technology, the application of large language models (LLMs) in the financial field shows a significant growth trend. This paper systematically reviews the development status, main applications, challenges, and future directions of LLMs in the financial field. Financial language models (FinLLMs) have been successfully applied to many scenarios, such as sentiment analysis, automated trading, and risk assessment, through deep learning architectures such as BERT and Llama combined with domain-data fine-tuning. However, issues such as data privacy, model interpretability, and ethical governance still constrain their widespread application. Future research should focus on improving model performance, addressing bias, strengthening privacy protection, and establishing a sound regulatory framework to ensure the healthy development of LLMs in the financial sector.
Keywords: large language model; fintech; natural language processing; ethics of artificial intelligence
The Role of Mindfulness in Foreign Language Anxiety: A Systematic Review of Correlational and Intervention Studies
16
Authors: Hui Yang, Yijie Li. International Journal of Mental Health Promotion, 2025, Issue 9, pp. 1279-1300 (22 pages).
Background: Foreign Language Anxiety (FLA) represents a substantial affective barrier that undermines cognitive performance, motivation, and retention in language learners. Emerging evidence highlights mindfulness-based interventions as promising strategies for enhancing emotional regulation and reducing anxiety across educational contexts. This review synthesizes current research on mindfulness as a psychological intervention, aims to evaluate its efficacy in alleviating FLA, and discusses its broader implications for health-focused educational policy and practice. Methods: Following PRISMA guidelines, we systematically reviewed studies examining the relationships between mindfulness and FLA. Our search of four major databases (November 2023) initially identified 346 articles using terms like “mindfulness AND language anxiety.” After screening, 14 studies met our criteria: (1) empirical research in English on mindfulness-FLA relationships; (2) no publication date restrictions. Two independent reviewers selected studies, excluding two due to methodological limitations. Given the heterogeneity of the studies (9 correlational and 5 intervention studies), we conducted a narrative synthesis. Results: The nine non-intervention studies demonstrated that mindfulness is negatively associated with FLA, with three studies highlighting the mediating roles of self-efficacy and resilience. The five intervention studies reported inconsistent results regarding the efficacy of mindfulness-based interventions in reducing FLA. Conclusions: The findings suggest that while mindfulness holds promise as a tool to address FLA, its mechanisms and effectiveness require further investigation. This study underscores the need for rigorous research, including Randomized Controlled Trials (RCTs), to inform the evidence-based integration of mindfulness into foreign language curricula. For educational policymakers and practitioners, these insights highlight the importance of adopting mindfulness interventions cautiously, ensuring they are tailored to students’ needs and supported by evidence.
Keywords: foreign language anxiety (FLA); mindfulness; language learning; educational practice; intervention studies
A Critical Review of Methods and Challenges in Large Language Models
17
Authors: Milad Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari. Computers, Materials & Continua, 2025, Issue 2, pp. 1681-1698 (18 pages).
This critical review provides an in-depth analysis of Large Language Models (LLMs), encompassing their foundational principles, diverse applications, and advanced training methodologies. We critically examine the evolution from Recurrent Neural Networks (RNNs) to Transformer models, highlighting the significant advancements and innovations in LLM architectures. The review explores state-of-the-art techniques such as in-context learning and various fine-tuning approaches, with an emphasis on optimizing parameter efficiency. We also discuss methods for aligning LLMs with human preferences, including reinforcement learning frameworks and human feedback mechanisms. The emerging technique of retrieval-augmented generation, which integrates external knowledge into LLMs, is also evaluated. Additionally, we address the ethical considerations of deploying LLMs, stressing the importance of responsible and mindful application. By identifying current gaps and suggesting future research directions, this review provides a comprehensive and critical overview of the present state and potential advancements in LLMs. This work serves as an insightful guide for researchers and practitioners in artificial intelligence, offering a unified perspective on the strengths, limitations, and future prospects of LLMs.
Keywords: large language models; artificial intelligence; natural language processing; machine learning; generative artificial intelligence
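The retrieval-augmented generation technique evaluated in the review above can be sketched in a few lines: retrieve the passage most similar to the query, then prepend it as context to the prompt sent to a language model. The toy corpus, bag-of-words cosine scoring, and prompt template below are illustrative assumptions, not the review's own implementation.

```python
# Minimal RAG sketch: retrieve the best-matching passage, build a prompt.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    # Return the corpus passage most similar to the query.
    q = Counter(query.lower().split())
    return max(corpus, key=lambda d: cosine(q, Counter(d.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepend the retrieved passage as grounding context for the model.
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

corpus = [
    "Transformers replaced recurrent neural networks for most NLP tasks.",
    "Reinforcement learning from human feedback aligns models with preferences.",
]
prompt = build_prompt("What replaced recurrent neural networks?", corpus)
```

In a real system the bag-of-words retriever would be replaced by dense embeddings over a vector index, but the retrieve-then-prompt flow is the same.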
Effect of mobile phone applications on medication adherence among patients with coronary artery diseases:A scoping review
18
Authors: Mohamed K Seyam, Riyaz Ahamed Shaik, Mohammad Miraj, Naif S Alzahrani, Abdul Rahim Shaik, Puneeta Ajmera, Sheetal Kalra, Shaima Ali Miraj, Ghada M Shawky, Khulud Mahmood Nurani, Prashanth A. World Journal of Cardiology, 2025, Issue 11, pp. 80-91 (12 pages)
Patients with cardiovascular disease rely on medication to achieve favorable long-term clinical results. Poor adherence has been linked to a relative increase in mortality of 50%-80% as well as higher health care costs. This scoping review thus aimed to explore the evidence on the effects of mobile health care apps on medication adherence in patients with cardiovascular diseases. A comprehensive data search and extraction was done in line with the updated Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews checklist. A total of 10 studies were included in the review. The mean pooled improvement in adherence was found to be 18%, and the most effective tool was the digital therapeutics app discussed in Li et al.'s study. Smartphones and apps enhance coronary artery disease management by promoting medication compliance. Challenges include data security and smartphone usage among the elderly. Tailored apps or voice assistants offer potential solutions.
Keywords: mobile phone applications; coronary artery disease; medication adherence; digital technology; compliance
The Impact of Progressive Effect Nutritional Care on Treatment Adherence, Quality of Life, and Nutritional Status in Uremia Patients Undergoing Dialysis
19
Authors: Limin Xu, Liuping Fu, Yueting Chen, Weiwei Dai, Jianmin Yao. Journal of Clinical and Nursing Research, 2025, Issue 10, pp. 254-260 (7 pages)
Objective: To investigate the impact of progressive effect nutritional care on uremia patients undergoing dialysis. Methods: A total of 101 uremia patients undergoing dialysis admitted from January 2024 to March 2025 were selected as the study subjects and divided into two groups by lottery method. The control group (55 cases) received routine care, while the observation group (56 cases) received a combination of routine care and progressive effect nutritional care. Results: After 4 weeks of care, the observation group demonstrated higher treatment adherence (P<0.05), better quality of life (P<0.05), and improved nutritional status (P<0.05) compared to the control group. Conclusion: Progressive effect nutritional care can significantly enhance treatment adherence, quality of life, and nutritional status in uremia patients undergoing dialysis.
Keywords: nutritional status; progressive effect nutritional care; quality of life; routine care; treatment adherence; uremia
A Review of Machine Translation Techniques for Low-Resource Languages
20
Authors: PENG Cheng-xi, MA Zi-han. Journal of Literature and Art Studies, 2025, Issue 9, pp. 725-731 (7 pages)
Machine translation of low-resource languages (LRLs) has long been hindered by limited corpora and linguistic complexity. This review summarizes key developments, from traditional methods to recent progress with large language models (LLMs), while highlighting ongoing challenges such as data bottlenecks, biases, fairness, and computational costs. Finally, it discusses future directions, including efficient parameter fine-tuning, multimodal translation, and community-driven corpus construction, providing insights for advancing LRL translation research.
Keywords: low-resource languages (LRLs); machine translation; large language models (LLMs)
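The "efficient parameter fine-tuning" direction named in the abstract above can be made concrete with a back-of-the-envelope count: a LoRA-style adapter replaces updates to a full d_out × d_in weight matrix with two low-rank factors B (d_out × r) and A (r × d_in), which matters when low-resource corpora cannot support full fine-tuning. The layer dimensions and rank below are illustrative assumptions, not figures from the review.

```python
# Compare trainable-parameter counts: full fine-tuning of one weight matrix
# vs. a LoRA-style low-rank adapter over the same matrix.
def full_finetune_params(d_out: int, d_in: int) -> int:
    # Full fine-tuning updates every entry of the d_out x d_in matrix.
    return d_out * d_in

def lora_params(d_out: int, d_in: int, rank: int) -> int:
    # LoRA trains only B (d_out x rank) and A (rank x d_in);
    # the base matrix stays frozen.
    return d_out * rank + rank * d_in

full = full_finetune_params(4096, 4096)  # hypothetical transformer layer
lora = lora_params(4096, 4096, rank=8)
ratio = full // lora                      # 256x fewer trainable parameters
```

At rank 8 on a 4096 × 4096 layer the adapter trains 65,536 weights instead of 16,777,216, which is why such methods are attractive when only small in-domain parallel corpora exist.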