Model evaluation using benchmark datasets is an important method for measuring the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. To better assess the capability of LLMs in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covered seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contained a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 pro exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As a vertical-domain LLM for agriculture, Shennong V2.0 outperformed all the LLMs in China, and its accuracy on agricultural knowledge questions exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in agriculture.
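Agri-Eval scores models by answer accuracy over a fixed multiple-choice question bank. A minimal sketch of that kind of evaluation loop is shown below; the file format, field names, and the `predict_choice` model wrapper are hypothetical illustrations, not the benchmark's actual interface.

```python
import json

def evaluate_accuracy(dataset_path: str, predict_choice) -> float:
    """Compute answer accuracy over a multiple-choice benchmark file.

    `predict_choice(question, options)` is any callable that queries an LLM
    and returns one option label such as "A"; it is a placeholder here.
    """
    with open(dataset_path, encoding="utf-8") as f:
        items = json.load(f)  # assumed format: [{"question", "options", "answer"}, ...]

    correct = 0
    for item in items:
        prediction = predict_choice(item["question"], item["options"])
        if prediction == item["answer"]:
            correct += 1
    return correct / len(items)

# Usage sketch: accuracy = evaluate_accuracy("agri_eval.json", my_llm_wrapper)
```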
This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of the framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis revealed a preference for content removal across all clusters, suggesting a cautious approach toward potentially harmful content. However, the framework also highlights the use of other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
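The clustering step can be illustrated with a short sketch: vectorize the LLM-labeled posts and group them by content, mirroring the eight clusters reported above. The TF-IDF features, the toy posts, and the scikit-learn pipeline are illustrative assumptions; the study's actual feature representation is not specified here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical stand-ins for posts already analyzed and labeled by an LLM.
posts = [
    "Report of targeted harassment in the comment section",
    "Post sharing unverified health misinformation",
    "Question about the group's safety guidelines",
    "Celebration of a member's cultural festival",
]

# Characterize content with TF-IDF features, then cluster; the study reports
# eight clusters, capped here only because the toy corpus is tiny.
n_clusters = min(8, len(posts))
vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)

for post, label in zip(posts, labels):
    print(label, post)
```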
AIM: To investigate the clinical characteristics and treatment outcomes, including visual function and overall survival (OS), of patients with ocular adnexal diffuse large B-cell lymphoma (OA-DLBCL). METHODS: This retrospective cohort study enrolled 29 patients diagnosed with OA-DLBCL based on histopathological biopsy between 2006 and 2023. Patients were stratified into two subgroups: primary OA-DLBCL (no prior history of lymphoma) and secondary OA-DLBCL (history of DLBCL at non-ocular adnexal sites). OS was defined as the time interval from OA-DLBCL diagnosis to death from any cause. Survival analysis was performed using the Kaplan-Meier method, and prognostic factors affecting OS were identified using multivariate Cox proportional hazards regression with a stepwise selection approach. RESULTS: The cohort included 24 patients with primary OA-DLBCL (13 males, 11 females; mean age: 61.36±18.29 years) and 5 patients with secondary OA-DLBCL (2 males, 3 females; mean age: 50.94±18.17 years). Among the primary OA-DLBCL subgroup, 12 patients (50%) presented with advanced disease (Ann Arbor stage IIIE–IV), and 16 patients (66%) were classified as T4 disease according to the tumor-node-metastasis (TNM) staging system. The mean final visual acuity was 1.72±1.10 in the primary group and 0.90±1.18 in the secondary group. The 5-year OS rate for the entire cohort was 27.7%. Multivariate analysis identified five factors significantly associated with poor survival outcomes: epiphora [adjusted hazard ratio (aHR), 36.95], atherosclerotic cardiovascular disease (aHR, 10.08), human immunodeficiency virus (HIV) infection (aHR, 12.47), M1 stage (aHR, 6.99), and secondary OA-DLBCL (aHR, 6.03; all P<0.05). The median OS was 1.68 years for primary OA-DLBCL and 1.12 years for secondary OA-DLBCL. CONCLUSION: A substantial proportion of patients with primary OA-DLBCL present with advanced-stage disease at diagnosis. Epiphora, atherosclerotic cardiovascular disease, HIV infection, M1 stage, and secondary OA-DLBCL are independent prognostic factors for poor survival outcomes. These findings emphasize the urgent need for optimized therapeutic strategies and early screening protocols to improve the management of OA-DLBCL, particularly in developing countries.
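The survival workflow described above (Kaplan-Meier estimation plus multivariate Cox regression yielding adjusted hazard ratios) can be sketched as follows, assuming the Python lifelines library; the column names and the toy data frame are placeholders, not the study's dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient table: follow-up time in years, death indicator,
# and two candidate prognostic covariates.
df = pd.DataFrame({
    "os_years":  [1.7, 0.9, 3.2, 5.1, 1.1, 2.4, 4.0, 0.6],
    "death":     [1,   1,   0,   0,   1,   1,   0,   1],
    "epiphora":  [1,   0,   0,   0,   1,   0,   0,   1],
    "secondary": [0,   1,   0,   1,   0,   0,   0,   1],
})

# Kaplan-Meier estimate of overall survival for the whole cohort.
km = KaplanMeierFitter()
km.fit(df["os_years"], event_observed=df["death"])
print(km.survival_function_)

# Multivariate Cox proportional hazards model; hazard ratios are exp(coef).
cph = CoxPHFitter()
cph.fit(df, duration_col="os_years", event_col="death")
cph.print_summary()
```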
In the era of AI, especially large models, the importance of open source has become increasingly prominent. First, open source allows innovation to avoid starting from scratch; through iterative innovation, it promotes technical exchange and learning globally. Second, the resources required for large-model R&D are difficult for a single institution to obtain, and the evaluation of general large models also requires the participation of experts from various industries. Third, without open source collaboration, it is difficult to form a unified upper-layer software ecosystem. Therefore, open source has become an important cooperation mechanism for promoting the development of AI and large models. Two cases illustrate how open source and international standards interact with each other.
The ability to generate high pressures in a large-volume press (LVP) is crucial for the study of matter under extreme conditions. Here, we have achieved ultrahigh pressures of ~60 and 50 GPa, respectively, at room temperature and at a high temperature of 1900 K, within a millimeter-sized sample volume in a Kawai-type LVP (KLVP) using hard tungsten carbide (WC) and newly designed assemblies. The introduction of electroconductive polycrystalline boron-doped diamond and dense alumina wrapped with Cu foils into a large conventional cell assembly enables the detection of resistance variations in the Fe2O3 pressure standard upon compression. The efficiency of pressure generation in the newly developed cell assembly equipped with conventional ZK10F WC anvils is significantly higher than that of conventional assemblies with some ultrahard or tapered WC anvils. Our study has enabled the routine generation of pressures exceeding 50 GPa within a millimeter-sized sample chamber, a regime previously inaccessible with traditional KLVPs. This advance in high-pressure technology not only breaks a record for pressure generation in traditional KLVPs, but also opens up new avenues for exploring the properties of the Earth's deep interior and for synthesizing novel materials at extreme pressures.
Following the groundbreaking introduction of the Transformer architecture in 2017, the development of Large Language Models (LLMs) formally commenced. In May 2020, GPT-3, with over one hundred billion parameters, entered the public eye, marking a significant milestone in LLM advancement.
Stall flutter poses great challenges to flight safety. To alleviate this problem, a steady blowing control that considers the perturbation and wake-induced vibration at a large angle of attack is developed in this paper, in which blowing is applied on both the upper and lower tail surfaces to suppress the stall flutter. The one-degree-of-freedom stall flutter is first evaluated by numerical simulation, and the equation of motion for stall flutter is solved by the Newmark-β method. Then, the stall flutter responses for five blowing speeds, i.e., 0, 4, 12, 20, and 28 m/s, over the airspeed range of 3–9 m/s are studied in detail. The stall flutter suppression mechanism can be summarized as follows: a large blowing speed injects energy into the boundary layer and enhances the high-pressure zone, which delays flow separation on the suction surface. In this way, the formation of the leading-edge separation vortex is suppressed; thus, the dynamic stall vortex is weakened and sheds earlier. In addition, the driving moment is reduced, which decreases the stall flutter amplitude. When the blowing speed is 28 m/s (stall flutter amplitude = 0.1357 rad), the amplitude decreases by 77.39% compared with the uncontrolled case (stall flutter amplitude = 0.6002 rad), which demonstrates the effectiveness of the proposed steady-blowing-based active control strategy.
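The abstract solves the one-degree-of-freedom flutter equation of motion with the Newmark-β method. A minimal sketch of that integrator for a single-DOF system m*q'' + c*q' + k*q = F(t) is given below, using the standard average-acceleration parameters (β = 1/4, γ = 1/2); the structural values and the forcing term are illustrative placeholders, not the paper's aeroelastic model.

```python
import numpy as np

def newmark_beta(m, c, k, force, q0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration for m*q'' + c*q' + k*q = force(t)."""
    q = np.zeros(n_steps + 1)
    v = np.zeros(n_steps + 1)
    a = np.zeros(n_steps + 1)
    q[0], v[0] = q0, v0
    a[0] = (force(0.0) - c * v0 - k * q0) / m

    # Effective coefficient of the implicit update (constant for a linear system).
    k_eff = m + gamma * dt * c + beta * dt**2 * k
    for n in range(n_steps):
        t_next = (n + 1) * dt
        # Predictors based on the current state.
        q_pred = q[n] + dt * v[n] + (0.5 - beta) * dt**2 * a[n]
        v_pred = v[n] + (1.0 - gamma) * dt * a[n]
        # Solve for the new acceleration, then correct velocity and displacement.
        a[n + 1] = (force(t_next) - c * v_pred - k * q_pred) / k_eff
        v[n + 1] = v_pred + gamma * dt * a[n + 1]
        q[n + 1] = q_pred + beta * dt**2 * a[n + 1]
    return q, v, a

# Illustrative 1-DOF pitch oscillator with a placeholder aerodynamic moment.
q, v, a = newmark_beta(m=1.0, c=0.05, k=40.0,
                       force=lambda t: 0.5 * np.sin(6.0 * t),
                       q0=0.0, v0=0.0, dt=1e-3, n_steps=5000)
print(q[-1])
```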
A two-dimensional large eddy simulation numerical model is proposed to study the transient vortex flow and pressure oscillation of a large-aspect-ratio solid rocket motor. The numerical model is validated through experimental data, finite element analysis, and cumulative error analysis. Numerical simulations are executed to obtain the characteristics of the vortex-acoustic coupling and the pressure oscillation. The results show that burning surface regression decreases the motor aspect ratio, increasing the corresponding natural frequency from 260 Hz to 293 Hz. The pressure oscillation phenomenon is formed by the vortex-acoustic coupling. Decreasing the corner vortex shedding intensity has a negative effect on the dimensionless amplitude of the pressure oscillation. A head cavity without injection can decrease the vortex-acoustic coupling level at the acoustic pressure antinode. The modified motor with a head cavity obtains a lower dimensionless oscillating pressure amplitude of 0.00149, compared with 0.00895 for the original motor. The aspect ratio and volume of the head cavity without injection have great effects on pressure oscillation suppression, particularly at a low aspect ratio or large volume. The reason is that the mass in the region around the acoustic pressure antinode is extracted centrally, reducing the energy contribution to the acoustic system. As the volume increases, the acoustic energy capacity increases.
Zeolites are crystalline microporous materials widely used in catalysis, adsorption, and ion exchange owing to their tunable pore structures and acid centers [1]. Traditional zeolites, however, often suffer from limitations such as restricted molecular diffusion and rapid coking, which hinder their efficiency in processing large molecules.
With the rapid development of large AI models, large decision models have further broken through the limits of human cognition and promoted the innovation of decision-making paradigms in fields as diverse as medicine and transportation. In this paper, we systematically expound the intelligent decision-making technologies and prospects driven by large AI models. Specifically, we first review the development of large AI models in recent years. Then, from the perspective of methods, we introduce important theories and technologies of large decision models, such as model architecture and model adaptation. Next, from the perspective of applications, we introduce cutting-edge applications of large decision models in various fields, such as autonomous driving and knowledge decision-making. Finally, we discuss existing challenges, such as security issues, decision bias, and the hallucination phenomenon, as well as future prospects, from the perspectives of both technology development and domain applications. We hope this review can help researchers understand the important progress of intelligent decision-making driven by large AI models.
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast datasets to learn from hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition. This transition exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
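The 7-digit base-7 addition task mentioned above is straightforward to reproduce as a data generator; the sketch below builds question/answer exemplars of that form. The prompt template is an assumption for illustration; the paper's exact formatting is not given here.

```python
import random

def to_base7(n: int) -> str:
    """Render a non-negative integer as a base-7 numeral string."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 7)
        digits.append(str(r))
    return "".join(reversed(digits))

def base7_addition_exemplar(rng: random.Random, n_digits: int = 7) -> tuple[str, str]:
    """Create one (prompt, answer) pair for n-digit base-7 addition."""
    lo, hi = 7 ** (n_digits - 1), 7 ** n_digits - 1  # range of n-digit base-7 numbers
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    prompt = f"In base 7: {to_base7(a)} + {to_base7(b)} = ?"  # hypothetical template
    return prompt, to_base7(a + b)

rng = random.Random(0)
for _ in range(3):
    print(base7_addition_exemplar(rng))
```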
Large models, such as large language models (LLMs), vision-language models (VLMs), and multimodal agents, have become key elements in artificial intelligence (AI) systems. Their rapid development has greatly improved perception, generation, and decision-making in various fields. However, their vast scale and complexity bring about new security challenges. Issues such as backdoor vulnerabilities during training, jailbreaking in multimodal reasoning, and data provenance and copyright auditing have made security a critical focus for both academia and industry.
BACKGROUND: Gastrointestinal diseases have complex etiologies and clinical presentations. An accurate diagnosis requires physicians to integrate diverse information, including medical history, laboratory test results, and imaging findings. Existing artificial intelligence-assisted diagnostic tools are limited to single-modality information, resulting in recommendations that are often incomplete and may be associated with clinical or legal risks. AIM: To develop and evaluate a collaborative multimodal large language model (LLM) framework for clinical decision-making in digestive diseases. METHODS: In this observational study, DeepGut, a multimodal LLM collaborative diagnostic framework, was developed to integrate four distinct large models into a four-tiered structure. The framework sequentially accomplishes multimodal information extraction, logical "chain" construction, diagnostic and treatment suggestion generation, and risk analysis. The model was evaluated using objective metrics, which assess the reliability and comprehensiveness of model-generated results, and subjective expert opinions, which examine the effectiveness of the framework in assisting physicians. RESULTS: The diagnostic and treatment recommendations generated by the DeepGut framework achieved exceptional performance, with a diagnostic accuracy of 97.8%, diagnostic completeness of 93.9%, treatment plan accuracy of 95.2%, and treatment plan completeness of 98.0%, significantly surpassing the capabilities of single-modal LLM-based diagnostic tools. Experts evaluating the framework commended the completeness, relevance, and logical coherence of its outputs. However, the collaborative multimodal LLM approach resulted in increased input and output token counts, leading to higher computational costs and extended diagnostic times. CONCLUSION: The framework achieves successful integration of multimodal diagnostic data, demonstrating enhanced performance enabled by multimodal LLM collaboration, which opens new horizons for the clinical application of artificial intelligence-assisted technology.
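The four-tier collaboration described above can be outlined as a simple sequential pipeline in which each tier is handled by its own model call. The `call_llm` helper, the model names, and the prompt wording below are hypothetical placeholders standing in for whichever models and prompts the framework actually uses.

```python
from typing import Callable

def deepgut_style_pipeline(case: dict, call_llm: Callable[[str, str], str]) -> dict:
    """Sequential four-tier collaboration sketch: extraction -> reasoning chain
    -> diagnosis/treatment suggestions -> risk analysis.

    `call_llm(model_name, prompt)` is a placeholder for any chat-style model API.
    """
    # Tier 1: pull structured findings out of multimodal inputs (plain-text stand-ins here).
    extraction = call_llm("extractor-model",
                          f"Extract key findings from history, labs, and imaging:\n{case}")
    # Tier 2: build an explicit logical "chain" linking findings to candidate diagnoses.
    chain = call_llm("reasoner-model",
                     f"Construct a step-by-step diagnostic reasoning chain:\n{extraction}")
    # Tier 3: turn the chain into diagnosis and treatment suggestions.
    plan = call_llm("planner-model",
                    f"Propose a diagnosis and treatment plan:\n{chain}")
    # Tier 4: audit the plan for clinical and legal risks.
    risks = call_llm("auditor-model",
                     f"List clinical/legal risks and contraindications in this plan:\n{plan}")
    return {"extraction": extraction, "chain": chain, "plan": plan, "risks": risks}

# Usage sketch: result = deepgut_style_pipeline(patient_case, my_llm_client)
```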
In recent years, deep learning has been introduced into the field of single-pixel imaging (SPI), garnering significant attention. However, conventional networks still exhibit limitations in preserving image details. To address this issue, we integrate Large Kernel Convolution (LKconv) into the U-Net framework, proposing an enhanced network structure named the U-LKconv network, which significantly improves the capability to recover image details even under low sampling conditions.
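As an illustration of the idea, the sketch below drops a large-kernel convolution block into a U-Net-style encoder stage using PyTorch. The 7×7 kernel size, channel widths, and block layout are assumptions for demonstration; the paper's actual U-LKconv design is not reproduced here.

```python
import torch
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    """Conv block with a large spatial kernel to widen the receptive field."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2  # keep the spatial size unchanged
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# One U-Net-style encoder stage built from the block above.
encoder_stage = nn.Sequential(LargeKernelBlock(1, 32), nn.MaxPool2d(2))
features = encoder_stage(torch.randn(1, 1, 64, 64))
print(features.shape)  # torch.Size([1, 32, 32, 32])
```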
Large language models (LLMs) have emerged as transformative tools in radiology artificial intelligence (AI), offering significant capabilities in areas such as image report generation, clinical decision support, and workflow optimization. The first part of this manuscript presents a comprehensive overview of the current state of LLM applications in radiology, including their historical evolution, technical foundations, and practical uses. Despite notable advances, inherent architectural constraints, such as token-level sequential processing, limit their ability to perform deep abstract reasoning and holistic contextual understanding, which are critical for fine-grained diagnostic interpretation. We provide a critical perspective on current LLMs and discuss key challenges, including model reliability, bias, and explainability, highlighting the pressing need for novel approaches to advance radiology AI. Large concept models (LCMs) represent a nascent and promising paradigm in radiology AI, designed to transcend the limitations of token-level processing by utilizing higher-order conceptual representations and multimodal data integration. The second part of this manuscript introduces the foundational principles and theoretical framework of LCMs, highlighting their potential to facilitate enhanced semantic reasoning, long-range context synthesis, and improved clinical decision-making. Critically, the core of this section is the proposal of a novel theoretical framework for LCMs, formalized and extended from our group's foundational concept-based models, the world's earliest articulation of this paradigm for medical AI. This conceptual shift has since been externally validated and propelled by the recent publication of the LCM architectural proposal by Meta AI, providing a large-scale engineering blueprint for the future development of this technology. We also outline future research directions and the transformative implications of this emerging AI paradigm for radiologic practice, aiming to provide a blueprint for advancing toward human-like conceptual understanding in AI. While challenges persist, we are at the very beginning of a new era, and it is not unreasonable to hope that future advancements will overcome these hurdles, pushing the boundaries of AI in radiology far beyond even the most state-of-the-art models of today.
0 INTRODUCTION: Due to rapid population growth and the accelerated urbanization process, the contradiction between the demand for expanding ground space and the limited scale of available land is becoming increasingly prominent. China has implemented and completed several large-scale land infilling and excavation projects (Figure 1), which have become the main way to increase land resources and expand construction land.
This article elucidates the concept of large model technology, summarizes the research status of large model technology both domestically and internationally, provides an overview of the application status of large models in vertical industries, outlines the challenges and issues confronted in applying large models in the oil and gas sector, and offers prospects for the application of large models in the oil and gas industry. Existing large models can be broadly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods such as fine-tuning and retrieval-augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations using visual/multimodal foundation models. A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces several challenges: current data quantity and quality are insufficient to support the training of large models, research and development costs are high, and algorithm autonomy and controllability are poor. The application of large models should be guided by the needs of the oil and gas business, taking it as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the building of "artificial intelligence + energy" composite teams, and boost the autonomy and controllability of large model technology.
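Retrieval-augmented generation, mentioned above as one route to domain products, can be illustrated with a minimal retrieve-then-prompt loop. The TF-IDF retriever, the toy corpus, and the prompt template are assumptions for illustration; enterprise systems typically use dense embeddings, a vector store, and an LLM call in place of the final print.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy domain corpus standing in for internal oil-and-gas documents.
documents = [
    "Well logging interpretation guidelines for sandstone reservoirs.",
    "Procedure for seismic data denoising and velocity analysis.",
    "Core analysis workflow: porosity and permeability measurement.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    """Stuff retrieved context into the prompt before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# A real system would pass build_prompt(...) to an LLM API (omitted here).
print(build_prompt("How is core porosity measured?"))
```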
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights toward bridging the gap in Human-Robot-Environment interaction.
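Combining an instruction with a robot camera frame, as the framework above does, reduces in practice to sending a multimodal prompt to a vision-capable model. The sketch below assumes the OpenAI Python SDK's chat-completions interface with image input; the model name, prompt wording, and plan format are illustrative, not the paper's actual configuration.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def plan_embodied_task(image_path: str, instruction: str) -> str:
    """Ask a vision-capable chat model for a step-by-step action plan."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative vision-capable model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Instruction: {instruction}\n"
                         "Given the robot's current camera view, list a numbered action plan."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Usage sketch: print(plan_embodied_task("camera_frame.jpg", "Put the red cup on the shelf"))
```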
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions of safety, security, and privacy in the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.