Objective: To address the complexity of multi-attribute decision making in evaluating aircraft cargo compartment loading plans, as well as the limitations of existing evaluation methods, namely the subjective bias caused by over-reliance on expert experience when determining weights, or the neglect of decision-maker preferences when relying solely on objective data, a hybrid weighting evaluation model that fuses subjective priors with objective, data-driven weights is proposed to provide more reasonable and reliable decision support for selecting the optimal loading plan. Methods: First, a large language model (LLM) is introduced to construct a "virtual expert committee", and carefully designed prompt engineering is used to obtain subjective weights across multiple dimensions and scenarios. Second, to address the shortcomings of the traditional entropy weighting method, which is sensitive to data distribution and has difficulty effectively distinguishing the merits of indicators, an improved data preprocessing entropy weighting method (IDPEW) is proposed; it determines objective weights by combining the discriminability of indicator values with the balance of their information entropy. Finally, the LLM-generated subjective weights and the IDPEW-computed objective weights are combined through weighted aggregation to construct a comprehensive evaluation function, with which aircraft cargo loading plans are evaluated and ranked. Results: Experiments show that when simulating expert opinion, the LLM attaches the greatest importance to "loading rate" (subjective weight 0.2250), whereas IDPEW identifies "lateral imbalance" as the most discriminative indicator in the data (objective weight 0.2481). The hybrid weighting model (α = 0.5) effectively balances subjective and objective preferences, accurately identifying the plan with the best overall performance among 24 candidates and verifying the model's stability in complex scenarios. Conclusion: The model innovatively uses an LLM as a low-cost source of "virtual experts" for prior knowledge, and improves the accuracy of objective weighting through the IDPEW method, which couples indicator discriminability with entropy balance. It overcomes the limitations of single-source weighting and provides a new paradigm for the scientific evaluation of aircraft cargo loading plans that is both interpretable and practical.
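To make the weighting pipeline concrete, the sketch below shows how entropy-based objective weights could be blended with LLM-elicited subjective weights through a mixing coefficient α and then used to rank loading plans. It uses the standard entropy weighting method as a stand-in for IDPEW, whose exact preprocessing steps are not given in the abstract; the decision matrix, subjective weights, and function names are all hypothetical.

```python
# Illustrative sketch only: blends subjective weights (e.g., elicited from an LLM
# "virtual expert committee") with objective entropy-based weights via a mixing
# coefficient alpha, then scores alternatives. The entropy step is the standard
# entropy weighting method, used as a stand-in for the paper's IDPEW.
import numpy as np

def entropy_weights(X):
    """Standard entropy weighting over a decision matrix X (rows = plans,
    columns = benefit-type indicators, already normalized to [0, 1])."""
    P = X / X.sum(axis=0, keepdims=True)                 # column-wise proportions
    P = np.clip(P, 1e-12, None)                          # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))    # normalized entropy per indicator
    d = 1.0 - e                                          # degree of divergence
    return d / d.sum()

def hybrid_scores(X, w_subjective, alpha=0.5):
    """Combine subjective and objective weights, then score each plan
    as a weighted sum of its normalized indicator values."""
    w_objective = entropy_weights(X)
    w = alpha * np.asarray(w_subjective) + (1 - alpha) * w_objective
    return X @ w, w

# Hypothetical example: 4 loading plans x 3 indicators (values are made up).
X = np.array([[0.82, 0.60, 0.75],
              [0.91, 0.40, 0.55],
              [0.70, 0.85, 0.65],
              [0.88, 0.72, 0.80]])
scores, w = hybrid_scores(X, w_subjective=[0.40, 0.35, 0.25], alpha=0.5)
print("combined weights:", np.round(w, 3))
print("best plan index:", int(np.argmax(scores)))
```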
Liver transplantation (LT) remains the optimal life-saving intervention for patients with end-stage liver disease. Despite recent advances in LT, several barriers persist, including organ allocation, donor-recipient matching, and patient education. With the growing progress of artificial intelligence, particularly large language models (LLMs) such as ChatGPT, new applications have emerged in the field of LT. Current studies demonstrating the use of ChatGPT in LT span various areas of application, from clinical settings to research and education. ChatGPT can benefit both healthcare professionals, by decreasing the time spent on non-clinical work, and LT recipients, by providing accurate information. Potential future applications include the expanding use of ChatGPT and other LLMs in LT pathology and radiology, as well as the automated creation of discharge summaries and other related paperwork. Additionally, future versions of ChatGPT may be able to provide more accurate patient education material with increased readability. Although ChatGPT presents promising applications, there are ethical and practical limitations. Key concerns include patient data privacy, information accuracy, the possibility of misinformation, and the lack of a legal framework. Healthcare providers and policymakers should collaborate to establish a controlled framework for the safe use of ChatGPT. The aim of this minireview is to summarize the current literature on ChatGPT in LT, highlighting both opportunities and limitations, and to outline possible future applications.
In the article titled "Inhibiting SHP2 reduces glycolysis, promotes microglial M1 polarization, and alleviates secondary inflammation following spinal cord injury in a mouse model," published in Neural Regeneration Research (Ding et al., 2025), the title was incorrectly presented due to an error during the language polishing process.
In recent years, large language models (LLMs) have made remarkable progress in natural language processing (NLP) and related fields, demonstrating strong language understanding and generation capabilities. In practical applications, however, LLMs still face many challenges. Among them, the hallucination problem has attracted wide attention from both academia and industry, and effectively detecting LLM hallucinations has become a key challenge for ensuring reliable, safe, and trustworthy use in text generation and other downstream tasks. This study surveys hallucination detection methods for LLMs. First, it introduces the concept of LLMs, clarifies the definition and taxonomy of hallucinations, systematically reviews the characteristics of each stage of the LLM life cycle from construction to deployment, and analyzes the mechanisms and causes of hallucination. Second, oriented toward practical application needs and taking into account differences in model transparency across task scenarios, it divides hallucination detection methods into two categories, those for white-box models and those for black-box models, and reviews and compares them in depth. It then summarizes the mainstream hallucination detection benchmarks, laying a foundation for subsequent detection work. Finally, it points out potential research directions and new challenges in LLM hallucination detection.
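As one concrete illustration of the black-box category mentioned above, the sketch below implements a simple sampling-based self-consistency check: an answer whose re-sampled counterparts frequently disagree with it is flagged as potentially hallucinated. This is not a method proposed in the survey itself; the `generate` callable, the threshold, and the exact-match comparison are illustrative assumptions.

```python
# Illustrative black-box check: re-sample the model's answer several times and
# measure agreement with the original answer; low agreement suggests hallucination.
# `generate` stands in for any text-generation API and is hypothetical here.
from typing import Callable, List

def consistency_score(question: str,
                      answer: str,
                      generate: Callable[[str], str],
                      n_samples: int = 5) -> float:
    """Fraction of re-sampled answers that exactly match the original answer.
    Real systems use softer comparisons (NLI or embedding similarity); exact
    string match keeps this sketch self-contained."""
    samples: List[str] = [generate(question).strip().lower() for _ in range(n_samples)]
    matches = sum(s == answer.strip().lower() for s in samples)
    return matches / n_samples

def flag_hallucination(question: str, answer: str,
                       generate: Callable[[str], str],
                       threshold: float = 0.5) -> bool:
    """Flag the answer as suspect when agreement falls below the threshold."""
    return consistency_score(question, answer, generate) < threshold
```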