CO_(2) capture and utilization(CCU)technologies have been recognized as crucial strategies for mitigating global warming,reducing carbon emission,and promoting resource circularity.As such,the design and development o...CO_(2) capture and utilization(CCU)technologies have been recognized as crucial strategies for mitigating global warming,reducing carbon emission,and promoting resource circularity.As such,the design and development of related materials have attracted considerable research attention.Carbon-based materials,characterized by tunable pore structures,abundant active sites,high specific surface area,and excellent chemical stability,demonstrate significant potential for applications in CO_(2) capture and utilization.This review systematically analyzes the adsorption behaviors and performance variations of typical carbon materials,including activated carbon,porous carbon,graphene,and carbon nanotubes during CO_(2) capture processes.Concerning CO_(2) utilization,emphasis is placed on recent advances in the catalytic applications of carbon-based materials in key reactions such as methanation,reverse water-gas shift,dry reforming of methane,and alcohol synthesis.Moreover,the benefits and drawbacks of carbon materials in terms of CO_(2) adsorption capacity,catalytic activity,and stability are thoroughly evaluated,and their potential applications in integrated CO_(2) capture and utilization technologies are discussed.Finally,key strategies for enhancing the performance of carbonaceous materials through structural modulation and surface modification are elucidated.This review aims to provide theoretical guidance for the future development and large-scale implementation of carbon-based materials in CCU technologies.展开更多
Citrus is the world's most produced fruit.With the rapid growth of citrus cultivation and processing industries globally,the volume of by-products,including dropped fruits,defective fruits,and waste generated duri...Citrus is the world's most produced fruit.With the rapid growth of citrus cultivation and processing industries globally,the volume of by-products,including dropped fruits,defective fruits,and waste generated during processing,has surged.Consequently,resource wastage and environmental pollution due to the low utilization rate of these by-products have become increasingly prominent issues.Currently,citrus by-products are directly utilized as seasonings,tea,and traditional Chinese medicine,or for the extraction of pectin,flavonoids,carotenoids,limonoids,essential oils,synephrine,and other functional ingredients.They are also processed into ethanol,citric acid,feed,and organic fertilizer through biomass fermentation.Despite these applications,the overall utilization rate of citrus by-products remains low.Additionally,there is a lack of key technologies and core equipment,and the production of high value-added functional products is limited.The future direction for citrus by-product utilization lies in green,low-carbon,high-efficiency,and high-value comprehensive recycling.To address the serious environmental pollution and recycling challenges posed by citrus rotting,it is proposed for the first time to develop new products and mold prevention strategies throughout the entire citrus supply chain-"Planting-field management-harvesting-transportation-storage"-to achieve a circular economy approach.This strategy aims to"Take from citrus and give back to citrus"effectively preventing and reducing citrus rotting.Furthermore,it can mitigate the significant economic losses caused by fruit decay and provide insights into the high-quality development of comprehensive citrus by-product utilization.展开更多
BACKGROUND Drug utilization research has an important role in assisting the healthcare administration to know,compute,and refine the prescription whose principal objective is to enable the rational use of drugs.Resear...BACKGROUND Drug utilization research has an important role in assisting the healthcare administration to know,compute,and refine the prescription whose principal objective is to enable the rational use of drugs.Research in developing nations relating to the cost of treatment is scarce when compared with developed countries.Thus,the drug utilization research studies from developing nations are most needed,and their number has been growing.AIM To evaluate patterns of utilization of antipsychotic drugs and direct medical cost analysis in patients newly diagnosed with schizophrenia.METHODS The present study was observational in type and based on a retrospective cohort to evaluate patterns of utilization of antipsychotic drugs using World Health Organization(WHO)core prescribing indicators and anatomical therapeutic chemical/defined daily dose indicators.We also calculated direct medical costs for a period of 6 months.RESULTS This study has found that atypical antipsychotics are the mainstay of treatment for schizophrenia in every age group and subcategories of schizophrenia.The evaluation based on WHO prescribing indicators showed a low average number of drugs per prescription and low prescribing frequency of antipsychotics from the National List of Essential Medicines 2015 and the WHO Essential Medicines List 2019.The total mean drug cost of our study was 1396 Indian rupees.The total mean cost due to the investigation in our study was 1017.34 Indian rupees.Therefore,the total mean direct medical cost incurred on patients in our study was 4337.28 Indian rupees.CONCLUSION The information from the present study can be used for reviewing and updating treatment policy at the institutional level.展开更多
Among the “three data rights,” the data utilization right has been persistently overlooked, and is similar to a neglected “middle child” in the context of the data rights family. However, it is precisely during th...Among the “three data rights,” the data utilization right has been persistently overlooked, and is similar to a neglected “middle child” in the context of the data rights family. However, it is precisely during the stages of processing and utilization that data undergoes its transformations and where its economic value is ultimately created. A series of recent policy documents on treating data as a factor of production have emphasized that the building of a scientific data property rights system requires a fair and efficient mechanism for benefit distribution, which provides reasonable preference for creators of data value and use value in terms of the income generated by data elements. Constrained by the inertial thinking of property right logic, the data utilization right is often regarded as a “transitional fulcrum” wherein the holders of data resources have to authorize the operators of data products to realize data value thereby. In the future structural design and implementation of the coordination mechanism for the property right system against the backdrop of the data factor-oriented reform, the establishment of data processing and utilization as an independent right will require the implementation of two core initiatives: first, attaching importance to the independent protection of the benefit distribution;second, implementing risk regulation for data security through optimization of governance. These two initiatives will serve as the key for optimizing the data factor governance system and accelerating the release of data value.展开更多
Recommendation systems are key to boosting user engagement,satisfaction,and retention,particularly on media platforms where personalized content is vital.Sequential recommendation systems learn from user-item interact...Recommendation systems are key to boosting user engagement,satisfaction,and retention,particularly on media platforms where personalized content is vital.Sequential recommendation systems learn from user-item interactions to predict future items of interest.However,many current methods rely on unique user and item IDs,limiting their ability to represent users and items effectively,especially in zero-shot learning scenarios where training data is scarce.With the rapid development of Large Language Models(LLMs),researchers are exploring their potential to enhance recommendation systems.However,there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems,where items are typically indexed by IDs.Moreover,most research focuses on item representations,neglecting personalized user modeling.To address these issues,we propose a sequential recommendation framework using LLMs,called CIT-Rec,a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations.Specifically,by aligning intuitive image information with text containing semantic features,we can more accurately represent items,improving item representation quality.We focus not only on item representations but also on user representations.To more precisely capture users’personalized preferences,we use traditional sequential recommendation models to train on users’historical interaction data,effectively capturing behavioral patterns.Finally,by combining LLMs and traditional sequential recommendation models,we allow the LLM to understand linguistic semantics while capturing collaborative semantics.Extensive evaluations on real-world datasets show that our model outperforms baseline methods,effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.展开更多
Climate model prediction has been improved by enhancing model resolution as well as the implementation of sophisticated physical parameterization and refinement of data assimilation systems[section 6.1 in Wang et al.(...Climate model prediction has been improved by enhancing model resolution as well as the implementation of sophisticated physical parameterization and refinement of data assimilation systems[section 6.1 in Wang et al.(2025)].In relation to seasonal forecasting and climate projection in the East Asian summer monsoon season,proper simulation of the seasonal migration of rain bands by models is a challenging and limiting factor[section 7.1 in Wang et al.(2025)].展开更多
Federated Learning(FL)enables joint training over distributed devices without data exchange but is highly vulnerable to attacks by adversaries in the form of model poisoning and malicious update injection.This work pr...Federated Learning(FL)enables joint training over distributed devices without data exchange but is highly vulnerable to attacks by adversaries in the form of model poisoning and malicious update injection.This work proposes Secured-FL,a blockchain-based defensive framework that combines smart contract-based authentication,clustering-driven outlier elimination,and dynamic threshold adjustment to defend against adversarial attacks.The framework was implemented on a private Ethereum network with a Proof-of-Authority consensus algorithm to ensure tamper-resistant and auditable model updates.Large-scale simulation on the Cyber Data dataset,under up to 50%malicious client settings,demonstrates Secured-FL achieves 6%-12%higher accuracy,9%-15%lower latency,and approximately 14%less computational expense compared to the PPSS benchmark framework.Additional tests,including confusion matrices,ROC and Precision-Recall curves,and ablation tests,confirm the interpretability and robustness of the defense.Tests for scalability also show consistent performance up to 500 clients,affirming appropriateness to reasonably large deployments.These results make Secured-FL a feasible,adversarially resilient FL paradigm with promising potential for application in smart cities,medicine,and other mission-critical IoT deployments.展开更多
The rapid advancement of machine learning based tight-binding Hamiltonian(MLTB)methods has opened new avenues for efficient and accurate electronic structure simulations,particularly in large-scale systems and long-ti...The rapid advancement of machine learning based tight-binding Hamiltonian(MLTB)methods has opened new avenues for efficient and accurate electronic structure simulations,particularly in large-scale systems and long-time scenarios.This review begins with a concise overview of traditional tight-binding(TB)models,including both(semi-)empirical and first-principles approaches,establishing the foundation for understanding MLTB developments.We then present a systematic classification of existing MLTB methodologies,grouped into two major categories:direct prediction of TB Hamiltonian elements and inference of empirical parameters.A comparative analysis with other ML-based electronic structure models is also provided,highlighting the advancement of MLTB approaches.Finally,we explore the emerging MLTB application ecosystem,highlighting how the integration of MLTB models with a diverse suite of post-processing tools from linear-scaling solvers to quantum transport frameworks and molecular dynamics interfaces is essential for tackling complex scientific problems across different domains.The continued advancement of this integrated paradigm promises to accelerate materials discovery and open new frontiers in the predictive simulation of complex quantum phenomena.展开更多
Business Process Modelling(BPM)is essential for analyzing,improving,and automating the flow of information within organizations,but traditional approaches based on manual interpretation are slow,error-prone,and requir...Business Process Modelling(BPM)is essential for analyzing,improving,and automating the flow of information within organizations,but traditional approaches based on manual interpretation are slow,error-prone,and require a high level of expertise.This article proposes an innovative alternative solution that overcomes these limitations by automatically generating comprehensive Business Process Modelling and Notation(BPMN)diagrams solely from verbal descriptions of the processes to be modeled,utilizing Large Language Models(LLMs)and multimodal Artificial Intelligence(AI).Experimental results,based on video recordings of process explanations provided by an expert from an organization(in this case,the Commercial Courts of a public justice administration),demonstrate that the proposed methodology successfully enables the automatic generation of complete and accurate BPMN diagrams,leading to significant improvements in the speed,accuracy,and accessibility of process modeling.This research makes a substantial contribution to the field of business process modeling,as its methodology is groundbreaking in its use of LLMs and multimodal AI capabilities to handle different types of source material(text and video),combining several tools to minimize the number of queries and reduce the complexity of the prompts required for the automatic generation of successful BPMN diagrams.展开更多
Pathological scarring,manifested in the form of hypertrophic scars(HTS)and keloid scars(KS),represents a major clinical challenge due to its aesthetic and functional implications for patients.Understanding the molecul...Pathological scarring,manifested in the form of hypertrophic scars(HTS)and keloid scars(KS),represents a major clinical challenge due to its aesthetic and functional implications for patients.Understanding the molecular mechanisms involved in these types of scars and developing effective treatments requires the use of controlled ex-perimental models,especially animals,to overcome the limitations of clinical studies.The aim of this sistematic review is to critically analyze the animal models used in the last five years(2020-2025)for the study of pathological scars,highlighting their advantages,limitations and applicability in the development of new therapeutic strat-egies.Murine,rabbit and porcine models,as well as alternative models,offer varied perspectives on the formation and treatment of HTS and KS,with an emphasis on histological and molecular correlations with human pathology.By synthesizing recent data,the paper highlights the essential role of preclinical research in optimizing an-tifibrotic treatments and in advancing the translation of data into the clinical sphere.Overall,animal models remain essential for bridging mechanistic insights with clinical translation,supporting the development of more effective and personalized anti-scar therapies.展开更多
Carbon dioxide(CO_(2))is the main greenhouse gas(GHG)released by human activities.The substitution of fossil resources by biomass as a bio-renewable resource,has significant potential to reduce GHG emissions.The appro...Carbon dioxide(CO_(2))is the main greenhouse gas(GHG)released by human activities.The substitution of fossil resources by biomass as a bio-renewable resource,has significant potential to reduce GHG emissions.The approach to biomass,as the only true full-scale alternative to fossil resources,is progressing rapidly.Converting biomass into furanic compounds,as versatile platform chemicals for synthesizing a wide range of bio-based products is the cornerstone of sustainable technologies.The extensive body of this review combines the biomass valorization to furanic compounds by CO_(2)utilization and furanic compounds conversion by CO_(2)fixation.These processes can be strategically applied through both‘thermochemical’and‘electrochemical’pathways,by utilizing CO_(2)from the atmosphere or industrial emission point and returning it to the natural carbon cycle.In the thermochemical pathway CO_(2)acts as a carbon source(carboxylation and polymerization)or active reaction assistant in the biomass conversion(CO_(2)-assisted conversion),without altering its oxidation state,facilitating the synthesis of valuable products and polymers.Conversely,in the electrochemical pathway,CO_(2)can be used as a carbon source(electrocarboxylation)to give the corresponding carboxylic acid,or it can undergo reduction,yielding methanol,carbon monoxide(CO),formic acid,and analogous compounds,while on the other side,furanic compounds undergo oxidation yielding high-value-added chemicals.Finally,potential future research directions are suggested to promote CO_(2)utilization and fixation in the valorization of biomass-derived furanic compounds,and challenges facing further research are highlighted.展开更多
This study demonstrates a novel integration of large language models,machine learning,and multicriteria decision-making to investigate self-moderation in small online communities,a topic under-explored compared to use...This study demonstrates a novel integration of large language models,machine learning,and multicriteria decision-making to investigate self-moderation in small online communities,a topic under-explored compared to user behavior and platform-driven moderation on social media.The proposed methodological framework(1)utilizes large language models for social media post analysis and categorization,(2)employs k-means clustering for content characterization,and(3)incorporates the TODIM(Tomada de Decisão Interativa Multicritério)method to determine moderation strategies based on expert judgments.In general,the fully integrated framework leverages the strengths of these intelligent systems in a more systematic evaluation of large-scale decision problems.When applied in social media moderation,this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location.The application of this framework is demonstrated within Facebook groups.Eight distinct content clusters encompassing safety,harassment,diversity,and misinformation are identified.Analysis revealed a preference for content removal across all clusters,suggesting a cautious approach towards potentially harmful content.However,the framework also highlights the use of other moderation actions,like account suspension,depending on the content category.These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.展开更多
Objectives:The five-year survival rate for pancreatic cancer is notably low,posing a significant challenge to patient health.The primary treatments are radiotherapy and chemotherapy,sometimes combined with targeted th...Objectives:The five-year survival rate for pancreatic cancer is notably low,posing a significant challenge to patient health.The primary treatments are radiotherapy and chemotherapy,sometimes combined with targeted therapy;however,their clinical benefits are limited.Therefore,developing new models to evaluate the therapeutic potential of novel molecules is essential.Fingolimod and Dimethyl Fumarate(DMF),currently used to treat multiple sclerosis,have recently been shown to have anti-cancer effects in several preclinical tumor models.This study aims to evaluate the therapeutic potential of Fingolimod and DMF in pancreatic cancer by investigating their respective in vitro cytotoxicity and in vivo antitumor effects.Methods:In this study,we evaluated for the first time these two drugs in pancreatic preclinical models in vitro using 3D spheroid tumor models and in vivo,which are compared to two standard-of-care consisting of Gemcitabine and Erlotinib.Results:In vitro,both Fingolimod and DMF induced cytotoxicity in spheroids from two pancreatic cell lines.Additionally,Fingolimod and DMF displayed anticancer effects in two subcutaneous xenograft models using PANC-1 and CFPAC-1 cells.Conclusions:Although the responses observed with Fingolimod and DMF were similar to those of Gemcitabine and Erlotinib,these findings indicate a potential emerging interest in Fingolimod and DMF for the treatment of pancreatic cancer.However,further work is still necessary to fully characterize how these drugs affect tumor progression.展开更多
War rehearsals have become increasingly important in national security due to the growing complexity of international affairs.However,traditional rehearsal methods,such as military chess simulations,are inefficient an...War rehearsals have become increasingly important in national security due to the growing complexity of international affairs.However,traditional rehearsal methods,such as military chess simulations,are inefficient and inflexible,with particularly pronounced limitations in command and decision-making.The overwhelming volume of information and high decision complexity hinder the realization of autonomous and agile command and control.To address this challenge,an intelligent warfare simulation framework named Command-Agent is proposed,which deeply integrates large language models(LLMs)with digital twin battlefields.By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion,the natural language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act(OODA)feedback loop.Within the Command-Agent framework,a multimodel collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs.By combining specialized models such as Deep Seek-R1 and MCTool,the limitations of single-model capabilities are overcome.MCTool is a lightweight execution model fine-tuned for military Function Calling tasks.The framework also introduces a Vector Knowledge Base to mitigate hallucinations commonly exhibited by LLMs.Experimental results demonstrate that Command-Agent not only enables natural language-driven simulation and control but also deeply understands commander intent.Leveraging the multi-model collaborative architecture,during red-blue UAV confrontations involving 2 to 8 UAVs,the integrated score is improved by an average of 41.8%compared to the single-agent system(MCTool),accompanied by a 161.8%optimization in the battle loss ratio.Furthermore,when compared with multi-agent systems lacking the knowledge base,the inclusion of the Vector Knowledge Base further improves overall performance by 16.8%.In comparison with the general model(Qwen2.5-7B),the fine-tuned MCTool leads by 5%in execution efficiency.Therefore,the proposed Command-Agent introduces a novel perspective to the military command system and offers a feasible solution for intelligent battlefield decision-making.展开更多
Knowledge distillation has become a standard technique for compressing large language models into efficient student models,but existing methods often struggle to balance prediction accuracy with explanation quality.Re...Knowledge distillation has become a standard technique for compressing large language models into efficient student models,but existing methods often struggle to balance prediction accuracy with explanation quality.Recent approaches such as Distilling Step-by-Step(DSbS)introduce explanation supervision,yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation.In this work,we propose a task-structured curriculum learning(TSCL)framework that structures training into three sequential phases:(i)prediction-only,to establish stable feature representations;(ii)joint prediction-explanation,to align task outputs with rationale generation;and(iii)explanation-only,to refine the quality of rationales.This design provides a simple but effective modification to DSbS,requiring no architectural changes and adding negligible training cost.We justify the phase scheduling with ablation studies and convergence analysis,showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment.Extensive experiments on five datasets(e-SNLI,ANLI,CommonsenseQA,SVAMP,and MedNLI)demonstrate that TSCL consistently outperforms strong baselines,achieving gains of+1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L,corresponding to relative error reductions of up to 21%.Beyond lexical metrics,human evaluation and ERASERstyle faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations.Comparative training curves further reveal faster convergence and lower variance across seeds.Efficiency analysis shows less than 3%overhead in wall-clock training time and no additional inference cost,making the approach practical for realworld deployment.This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation.By separating and sequencing objectives,TSCL achieves a better balance between accuracy,stability,and explanation quality.The framework generalizes across domains,including medical NLI,and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.展开更多
Large language models(LLMs)have revolutionized AI applications across diverse domains.However,their widespread deployment has introduced critical security vulnerabilities,particularly prompt injection attacks that man...Large language models(LLMs)have revolutionized AI applications across diverse domains.However,their widespread deployment has introduced critical security vulnerabilities,particularly prompt injection attacks that manipulate model behavior through malicious instructions.Following Kitchenham’s guidelines,this systematic review synthesizes 128 peer-reviewed studies from 2022 to 2025 to provide a unified understanding of this rapidly evolving threat landscape.Our findings reveal a swift progression from simple direct injections to sophisticated multimodal attacks,achieving over 90%success rates against unprotected systems.In response,defense mechanisms show varying effectiveness:input preprocessing achieves 60%–80%detection rates and advanced architectural defenses demonstrate up to 95%protection against known patterns,though significant gaps persist against novel attack vectors.We identified 37 distinct defense approaches across three categories,but standardized evaluation frameworks remain limited.Our analysis attributes these vulnerabilities to fundamental LLM architectural limitations,such as the inability to distinguish instructions from data and attention mechanism vulnerabilities.This highlights critical research directions such as formal verification methods,standardized evaluation protocols,and architectural innovations for inherently secure LLM designs.展开更多
BACKGROUND:This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome(AAS)in the emergency department(ED)and to derive a simplifi ed version suita...BACKGROUND:This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome(AAS)in the emergency department(ED)and to derive a simplifi ed version suitable for rapid clinical application.METHODS:In this multi-center retrospective cohort study,AAS patient data from three hospitals were analyzed.The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People’s Hospital of Xinjiang Uygur Autonomous Region,with Peking University Third Hospital data serving as the external test set.Four machine learning algorithms—logistic regression(LR),multilayer perceptron(MLP),Gaussian naive Bayes(GNB),and random forest(RF)—were used to develop predictive models based on 34 early-accessible clinical variables.A simplifi ed model was then derived based on fi ve key variables(Stanford type,pericardial eff usion,asymmetric peripheral arterial pulsation,decreased bowel sounds,and dyspnea)via Least Absolute Shrinkage and Selection Operator(LASSO)regression to improve ED applicability.RESULTS:A total of 929 patients were included in the modeling cohort,and 210 were included in the external test set.Four machine learning models based on 34 clinical variables were developed,achieving internal and external validation AUCs of 0.85-0.90 and 0.73-0.85,respectively.The simplifi ed model incorporating fi ve key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78,respectively.Both models showed robust calibration and predictive stability across datasets.CONCLUSION:Both kinds of models were built based on machine learning tools,and proved to have certain prediction performance and extrapolation.展开更多
Aerodynamic performances of axial compressors are significantly affected by variation of Reynolds number in aero-engines.In the design and analysis of compressors,previous correction methods for cascades and stages ha...Aerodynamic performances of axial compressors are significantly affected by variation of Reynolds number in aero-engines.In the design and analysis of compressors,previous correction methods for cascades and stages have difficulties in predicting comprehensively Reynolds number effects on airfoils,matching and characteristics curves.This study proposes Re-correction models for loss,deviation angle and endwall blockage based on classical theories and cascade tests,and loss and deviation models show good agreement in test data of NACA65 and C4 cascades.Throughflow method considering Reynolds number effects is developed by integrating the correction models into a verified Streamline Curvature(SLC)tool.A three-stage axial compressor is investigated through SLC and CFD methods from design Reynolds number(Red=2106)to low Re=4104,and the numerical methods are validated with test data of characteristic curves and spanwise distributions at Red.With Re reduction,SLC method with correction models well predicts variation in overall performances compared with CFD calculations and Wassell's model.Streamwise and spanwise matching such as total pressure and loss distributions in SLC predictions are basically consistent with those in CFD results at near-stall points under design and low Reynolds numbers.SLC and CFD methods share similar detections of stall risks in the third stage(Stg3),and their analyses of diffusion processes deviate to some extent due to different predictions in separated endwall flow.The correction models can be adopted to consider Reynolds number effects in through-flow design and analysis of axial compressors.展开更多
Conversational recommender systems(CRSs)focus on refining preferences and providing personalized recommendations through natural language interactions and dialogue history.Large language models(LLMs)have shown outstan...Conversational recommender systems(CRSs)focus on refining preferences and providing personalized recommendations through natural language interactions and dialogue history.Large language models(LLMs)have shown outstanding performance across various domains,thereby prompting researchers to investigate their applicability in recommendation systems.However,due to the lack of task-specific knowledge and an inefficient feature extraction process,LLMs still have suboptimal performance in recommendation tasks.Therefore,external knowledge sources,such as knowledge graphs(KGs)and knowledge bases(KBs),are often introduced to address the issue of data sparsity.Compared to KGs,KBs possess higher retrieval efficiency,making them more suitable for scenarios where LLMs serve as recommenders.To this end,we introduce a novel framework integrating LLMs with KBs for enhanced retrieval generation,namely LLMKB.LLMKB initially leverages structured knowledge to create mapping dictionaries,extracting entity-relation information from heterogeneous knowledge to construct KBs.Then,LLMKB achieves the embedding calibration between user information representations and documents in KBs through retrieval model fine-tuning.Finally,LLMKB employs retrievalaugmented generation to produce recommendations based on fused text inputs,followed by post-processing.Experiment results on two public CRS datasets demonstrate the effectiveness of our framework.Our code is publicly available at the link:https://anonymous.4open.science/r/LLMKB-6FD0.展开更多
BACKGROUND Non-erosive reflux disease(NERD),the main gastroesophageal reflux subtype,features reflux symptoms without mucosal damage.Anxiety links to visceral hypersensitivity in NERD,yet mechanisms and animal models ...BACKGROUND Non-erosive reflux disease(NERD),the main gastroesophageal reflux subtype,features reflux symptoms without mucosal damage.Anxiety links to visceral hypersensitivity in NERD,yet mechanisms and animal models are unclear.AIM To establish a translational NERD rat model with anxiety comorbidity via tail clamping and study corticotropin-releasing hormone(CRH)-mediated neuroimmune pathways in visceral hypersensitivity and esophageal injury.METHODS Sprague-Dawley(SD)and Wistar rats were grouped into sham,model,and modified groups(n=10 each).The treatments for the modified groups were as follows:SD rats received ovalbumin/aluminum hydroxide suspension+acid perfusion±tail clamping(40 minutes/day for 7 days),while Wistar rats received fructose water+tail clamping.Esophageal pathology,visceral sensitivity,and behavior were assessed.Serum CRH,calcitonin gene-related peptide(CGRP),5-hydroxytryptamine(5-HT),and mast cell tryptase(MCT)and central amygdala(CeA)CRH mRNA were measured via ELISA and qRT-PCR.RESULTS Tail clamping induced anxiety,worsening visceral hypersensitivity(lower abdominal withdrawal reflex thresholds,P<0.05)and esophageal injury(dilated intercellular spaces and mitochondrial edema).Both models showed raised serum CRH,CGRP,5-HT,and MCT(P<0.01)and CeA CRH mRNA expression(P<0.01).Behavioral tests confirmed anxiety-like phenotypes.NERD-anxiety rats showed clinical-like symptom severity without erosion.CONCLUSION Tail clamping induces anxiety in NERD models,worsening visceral hypersensitivity via CRH neuroimmune dysregulation,offering a translational model and highlighting CRH as a treatment target.展开更多
基金Supported by National Key R&D Program of China(2025YFE0109700)the National Natural Science Foundation of China(52106150)。
文摘CO_(2) capture and utilization(CCU)technologies have been recognized as crucial strategies for mitigating global warming,reducing carbon emission,and promoting resource circularity.As such,the design and development of related materials have attracted considerable research attention.Carbon-based materials,characterized by tunable pore structures,abundant active sites,high specific surface area,and excellent chemical stability,demonstrate significant potential for applications in CO_(2) capture and utilization.This review systematically analyzes the adsorption behaviors and performance variations of typical carbon materials,including activated carbon,porous carbon,graphene,and carbon nanotubes during CO_(2) capture processes.Concerning CO_(2) utilization,emphasis is placed on recent advances in the catalytic applications of carbon-based materials in key reactions such as methanation,reverse water-gas shift,dry reforming of methane,and alcohol synthesis.Moreover,the benefits and drawbacks of carbon materials in terms of CO_(2) adsorption capacity,catalytic activity,and stability are thoroughly evaluated,and their potential applications in integrated CO_(2) capture and utilization technologies are discussed.Finally,key strategies for enhancing the performance of carbonaceous materials through structural modulation and surface modification are elucidated.This review aims to provide theoretical guidance for the future development and large-scale implementation of carbon-based materials in CCU technologies.
基金supported by the National Natural Science Foundation of China(82104340)。
文摘Citrus is the world's most produced fruit.With the rapid growth of citrus cultivation and processing industries globally,the volume of by-products,including dropped fruits,defective fruits,and waste generated during processing,has surged.Consequently,resource wastage and environmental pollution due to the low utilization rate of these by-products have become increasingly prominent issues.Currently,citrus by-products are directly utilized as seasonings,tea,and traditional Chinese medicine,or for the extraction of pectin,flavonoids,carotenoids,limonoids,essential oils,synephrine,and other functional ingredients.They are also processed into ethanol,citric acid,feed,and organic fertilizer through biomass fermentation.Despite these applications,the overall utilization rate of citrus by-products remains low.Additionally,there is a lack of key technologies and core equipment,and the production of high value-added functional products is limited.The future direction for citrus by-product utilization lies in green,low-carbon,high-efficiency,and high-value comprehensive recycling.To address the serious environmental pollution and recycling challenges posed by citrus rotting,it is proposed for the first time to develop new products and mold prevention strategies throughout the entire citrus supply chain-"Planting-field management-harvesting-transportation-storage"-to achieve a circular economy approach.This strategy aims to"Take from citrus and give back to citrus"effectively preventing and reducing citrus rotting.Furthermore,it can mitigate the significant economic losses caused by fruit decay and provide insights into the high-quality development of comprehensive citrus by-product utilization.
文摘BACKGROUND Drug utilization research has an important role in assisting the healthcare administration to know,compute,and refine the prescription whose principal objective is to enable the rational use of drugs.Research in developing nations relating to the cost of treatment is scarce when compared with developed countries.Thus,the drug utilization research studies from developing nations are most needed,and their number has been growing.AIM To evaluate patterns of utilization of antipsychotic drugs and direct medical cost analysis in patients newly diagnosed with schizophrenia.METHODS The present study was observational in type and based on a retrospective cohort to evaluate patterns of utilization of antipsychotic drugs using World Health Organization(WHO)core prescribing indicators and anatomical therapeutic chemical/defined daily dose indicators.We also calculated direct medical costs for a period of 6 months.RESULTS This study has found that atypical antipsychotics are the mainstay of treatment for schizophrenia in every age group and subcategories of schizophrenia.The evaluation based on WHO prescribing indicators showed a low average number of drugs per prescription and low prescribing frequency of antipsychotics from the National List of Essential Medicines 2015 and the WHO Essential Medicines List 2019.The total mean drug cost of our study was 1396 Indian rupees.The total mean cost due to the investigation in our study was 1017.34 Indian rupees.Therefore,the total mean direct medical cost incurred on patients in our study was 4337.28 Indian rupees.CONCLUSION The information from the present study can be used for reviewing and updating treatment policy at the institutional level.
文摘Among the “three data rights,” the data utilization right has been persistently overlooked, and is similar to a neglected “middle child” in the context of the data rights family. However, it is precisely during the stages of processing and utilization that data undergoes its transformations and where its economic value is ultimately created. A series of recent policy documents on treating data as a factor of production have emphasized that the building of a scientific data property rights system requires a fair and efficient mechanism for benefit distribution, which provides reasonable preference for creators of data value and use value in terms of the income generated by data elements. Constrained by the inertial thinking of property right logic, the data utilization right is often regarded as a “transitional fulcrum” wherein the holders of data resources have to authorize the operators of data products to realize data value thereby. In the future structural design and implementation of the coordination mechanism for the property right system against the backdrop of the data factor-oriented reform, the establishment of data processing and utilization as an independent right will require the implementation of two core initiatives: first, attaching importance to the independent protection of the benefit distribution;second, implementing risk regulation for data security through optimization of governance. These two initiatives will serve as the key for optimizing the data factor governance system and accelerating the release of data value.
基金supported by the National Key R&D Program of China[2022YFF0902703]the State Administration for Market Regulation Science and Technology Plan Project(2024MK033).
文摘Recommendation systems are key to boosting user engagement,satisfaction,and retention,particularly on media platforms where personalized content is vital.Sequential recommendation systems learn from user-item interactions to predict future items of interest.However,many current methods rely on unique user and item IDs,limiting their ability to represent users and items effectively,especially in zero-shot learning scenarios where training data is scarce.With the rapid development of Large Language Models(LLMs),researchers are exploring their potential to enhance recommendation systems.However,there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems,where items are typically indexed by IDs.Moreover,most research focuses on item representations,neglecting personalized user modeling.To address these issues,we propose a sequential recommendation framework using LLMs,called CIT-Rec,a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations.Specifically,by aligning intuitive image information with text containing semantic features,we can more accurately represent items,improving item representation quality.We focus not only on item representations but also on user representations.To more precisely capture users’personalized preferences,we use traditional sequential recommendation models to train on users’historical interaction data,effectively capturing behavioral patterns.Finally,by combining LLMs and traditional sequential recommendation models,we allow the LLM to understand linguistic semantics while capturing collaborative semantics.Extensive evaluations on real-world datasets show that our model outperforms baseline methods,effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
文摘Climate model prediction has been improved by enhancing model resolution as well as the implementation of sophisticated physical parameterization and refinement of data assimilation systems[section 6.1 in Wang et al.(2025)].In relation to seasonal forecasting and climate projection in the East Asian summer monsoon season,proper simulation of the seasonal migration of rain bands by models is a challenging and limiting factor[section 7.1 in Wang et al.(2025)].
文摘Federated Learning(FL)enables joint training over distributed devices without data exchange but is highly vulnerable to attacks by adversaries in the form of model poisoning and malicious update injection.This work proposes Secured-FL,a blockchain-based defensive framework that combines smart contract-based authentication,clustering-driven outlier elimination,and dynamic threshold adjustment to defend against adversarial attacks.The framework was implemented on a private Ethereum network with a Proof-of-Authority consensus algorithm to ensure tamper-resistant and auditable model updates.Large-scale simulation on the Cyber Data dataset,under up to 50%malicious client settings,demonstrates Secured-FL achieves 6%-12%higher accuracy,9%-15%lower latency,and approximately 14%less computational expense compared to the PPSS benchmark framework.Additional tests,including confusion matrices,ROC and Precision-Recall curves,and ablation tests,confirm the interpretability and robustness of the defense.Tests for scalability also show consistent performance up to 500 clients,affirming appropriateness to reasonably large deployments.These results make Secured-FL a feasible,adversarially resilient FL paradigm with promising potential for application in smart cities,medicine,and other mission-critical IoT deployments.
基金supported by the Advanced Materials-National Science and Technology Major Project(Grant No.2025ZD0618401)the National Natural Science Foundation of China(Grant No.12504285)+1 种基金the Natural Science Foundation of Jiangsu Province(Grant No.BK20250472)NFSG grant from BITS-Pilani,Dubai campus。
文摘The rapid advancement of machine learning based tight-binding Hamiltonian(MLTB)methods has opened new avenues for efficient and accurate electronic structure simulations,particularly in large-scale systems and long-time scenarios.This review begins with a concise overview of traditional tight-binding(TB)models,including both(semi-)empirical and first-principles approaches,establishing the foundation for understanding MLTB developments.We then present a systematic classification of existing MLTB methodologies,grouped into two major categories:direct prediction of TB Hamiltonian elements and inference of empirical parameters.A comparative analysis with other ML-based electronic structure models is also provided,highlighting the advancement of MLTB approaches.Finally,we explore the emerging MLTB application ecosystem,highlighting how the integration of MLTB models with a diverse suite of post-processing tools from linear-scaling solvers to quantum transport frameworks and molecular dynamics interfaces is essential for tackling complex scientific problems across different domains.The continued advancement of this integrated paradigm promises to accelerate materials discovery and open new frontiers in the predictive simulation of complex quantum phenomena.
基金funded by Fundación CajaCanarias and Fundación Bancaria“la Caixa”,grant number 2023DIG11.
文摘Business Process Modelling(BPM)is essential for analyzing,improving,and automating the flow of information within organizations,but traditional approaches based on manual interpretation are slow,error-prone,and require a high level of expertise.This article proposes an innovative alternative solution that overcomes these limitations by automatically generating comprehensive Business Process Modelling and Notation(BPMN)diagrams solely from verbal descriptions of the processes to be modeled,utilizing Large Language Models(LLMs)and multimodal Artificial Intelligence(AI).Experimental results,based on video recordings of process explanations provided by an expert from an organization(in this case,the Commercial Courts of a public justice administration),demonstrate that the proposed methodology successfully enables the automatic generation of complete and accurate BPMN diagrams,leading to significant improvements in the speed,accuracy,and accessibility of process modeling.This research makes a substantial contribution to the field of business process modeling,as its methodology is groundbreaking in its use of LLMs and multimodal AI capabilities to handle different types of source material(text and video),combining several tools to minimize the number of queries and reduce the complexity of the prompts required for the automatic generation of successful BPMN diagrams.
基金Ministry of Research,Innovation and Digitization,CCCDI-UEFISCDI,Grant/Award Number:PN-IV-P7-7.1-PED-2024-1578,within PNCDI Ⅳ.
文摘Pathological scarring,manifested in the form of hypertrophic scars(HTS)and keloid scars(KS),represents a major clinical challenge due to its aesthetic and functional implications for patients.Understanding the molecular mechanisms involved in these types of scars and developing effective treatments requires the use of controlled ex-perimental models,especially animals,to overcome the limitations of clinical studies.The aim of this sistematic review is to critically analyze the animal models used in the last five years(2020-2025)for the study of pathological scars,highlighting their advantages,limitations and applicability in the development of new therapeutic strat-egies.Murine,rabbit and porcine models,as well as alternative models,offer varied perspectives on the formation and treatment of HTS and KS,with an emphasis on histological and molecular correlations with human pathology.By synthesizing recent data,the paper highlights the essential role of preclinical research in optimizing an-tifibrotic treatments and in advancing the translation of data into the clinical sphere.Overall,animal models remain essential for bridging mechanistic insights with clinical translation,supporting the development of more effective and personalized anti-scar therapies.
基金the National Key R&D Program of China(No.2021YFC2101604)National Natural Science Foundation of China(Nos.U23A20123,22278339)+1 种基金Fujian Provincial Key Science and Technology Program of China(No.2022YZ037013)Xiamen University for the financial support.
文摘Carbon dioxide(CO_(2))is the main greenhouse gas(GHG)released by human activities.The substitution of fossil resources by biomass as a bio-renewable resource,has significant potential to reduce GHG emissions.The approach to biomass,as the only true full-scale alternative to fossil resources,is progressing rapidly.Converting biomass into furanic compounds,as versatile platform chemicals for synthesizing a wide range of bio-based products is the cornerstone of sustainable technologies.The extensive body of this review combines the biomass valorization to furanic compounds by CO_(2)utilization and furanic compounds conversion by CO_(2)fixation.These processes can be strategically applied through both‘thermochemical’and‘electrochemical’pathways,by utilizing CO_(2)from the atmosphere or industrial emission point and returning it to the natural carbon cycle.In the thermochemical pathway CO_(2)acts as a carbon source(carboxylation and polymerization)or active reaction assistant in the biomass conversion(CO_(2)-assisted conversion),without altering its oxidation state,facilitating the synthesis of valuable products and polymers.Conversely,in the electrochemical pathway,CO_(2)can be used as a carbon source(electrocarboxylation)to give the corresponding carboxylic acid,or it can undergo reduction,yielding methanol,carbon monoxide(CO),formic acid,and analogous compounds,while on the other side,furanic compounds undergo oxidation yielding high-value-added chemicals.Finally,potential future research directions are suggested to promote CO_(2)utilization and fixation in the valorization of biomass-derived furanic compounds,and challenges facing further research are highlighted.
基金funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
文摘This study demonstrates a novel integration of large language models,machine learning,and multicriteria decision-making to investigate self-moderation in small online communities,a topic under-explored compared to user behavior and platform-driven moderation on social media.The proposed methodological framework(1)utilizes large language models for social media post analysis and categorization,(2)employs k-means clustering for content characterization,and(3)incorporates the TODIM(Tomada de Decisão Interativa Multicritério)method to determine moderation strategies based on expert judgments.In general,the fully integrated framework leverages the strengths of these intelligent systems in a more systematic evaluation of large-scale decision problems.When applied in social media moderation,this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location.The application of this framework is demonstrated within Facebook groups.Eight distinct content clusters encompassing safety,harassment,diversity,and misinformation are identified.Analysis revealed a preference for content removal across all clusters,suggesting a cautious approach towards potentially harmful content.However,the framework also highlights the use of other moderation actions,like account suspension,depending on the content category.These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
基金supported by Porsolt SAS,https://www.porsolt.com/.
文摘Objectives:The five-year survival rate for pancreatic cancer is notably low,posing a significant challenge to patient health.The primary treatments are radiotherapy and chemotherapy,sometimes combined with targeted therapy;however,their clinical benefits are limited.Therefore,developing new models to evaluate the therapeutic potential of novel molecules is essential.Fingolimod and Dimethyl Fumarate(DMF),currently used to treat multiple sclerosis,have recently been shown to have anti-cancer effects in several preclinical tumor models.This study aims to evaluate the therapeutic potential of Fingolimod and DMF in pancreatic cancer by investigating their respective in vitro cytotoxicity and in vivo antitumor effects.Methods:In this study,we evaluated for the first time these two drugs in pancreatic preclinical models in vitro using 3D spheroid tumor models and in vivo,which are compared to two standard-of-care consisting of Gemcitabine and Erlotinib.Results:In vitro,both Fingolimod and DMF induced cytotoxicity in spheroids from two pancreatic cell lines.Additionally,Fingolimod and DMF displayed anticancer effects in two subcutaneous xenograft models using PANC-1 and CFPAC-1 cells.Conclusions:Although the responses observed with Fingolimod and DMF were similar to those of Gemcitabine and Erlotinib,these findings indicate a potential emerging interest in Fingolimod and DMF for the treatment of pancreatic cancer.However,further work is still necessary to fully characterize how these drugs affect tumor progression.
文摘War rehearsals have become increasingly important in national security due to the growing complexity of international affairs.However,traditional rehearsal methods,such as military chess simulations,are inefficient and inflexible,with particularly pronounced limitations in command and decision-making.The overwhelming volume of information and high decision complexity hinder the realization of autonomous and agile command and control.To address this challenge,an intelligent warfare simulation framework named Command-Agent is proposed,which deeply integrates large language models(LLMs)with digital twin battlefields.By constructing a highly realistic battlefield environment through real-time simulation and multi-source data fusion,the natural language interaction capabilities of LLMs are leveraged to lower the command threshold and to enable autonomous command through the Observe-Orient-Decide-Act(OODA)feedback loop.Within the Command-Agent framework,a multimodel collaborative architecture is further adopted to decouple the decision-generation and command-execution functions of LLMs.By combining specialized models such as Deep Seek-R1 and MCTool,the limitations of single-model capabilities are overcome.MCTool is a lightweight execution model fine-tuned for military Function Calling tasks.The framework also introduces a Vector Knowledge Base to mitigate hallucinations commonly exhibited by LLMs.Experimental results demonstrate that Command-Agent not only enables natural language-driven simulation and control but also deeply understands commander intent.Leveraging the multi-model collaborative architecture,during red-blue UAV confrontations involving 2 to 8 UAVs,the integrated score is improved by an average of 41.8%compared to the single-agent system(MCTool),accompanied by a 161.8%optimization in the battle loss ratio.Furthermore,when compared with multi-agent systems lacking the knowledge base,the inclusion of the Vector Knowledge Base further improves overall performance by 16.8%.In comparison with the general model(Qwen2.5-7B),the fine-tuned MCTool leads by 5%in execution efficiency.Therefore,the proposed Command-Agent introduces a novel perspective to the military command system and offers a feasible solution for intelligent battlefield decision-making.
文摘Knowledge distillation has become a standard technique for compressing large language models into efficient student models,but existing methods often struggle to balance prediction accuracy with explanation quality.Recent approaches such as Distilling Step-by-Step(DSbS)introduce explanation supervision,yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation.In this work,we propose a task-structured curriculum learning(TSCL)framework that structures training into three sequential phases:(i)prediction-only,to establish stable feature representations;(ii)joint prediction-explanation,to align task outputs with rationale generation;and(iii)explanation-only,to refine the quality of rationales.This design provides a simple but effective modification to DSbS,requiring no architectural changes and adding negligible training cost.We justify the phase scheduling with ablation studies and convergence analysis,showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment.Extensive experiments on five datasets(e-SNLI,ANLI,CommonsenseQA,SVAMP,and MedNLI)demonstrate that TSCL consistently outperforms strong baselines,achieving gains of+1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L,corresponding to relative error reductions of up to 21%.Beyond lexical metrics,human evaluation and ERASERstyle faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations.Comparative training curves further reveal faster convergence and lower variance across seeds.Efficiency analysis shows less than 3%overhead in wall-clock training time and no additional inference cost,making the approach practical for realworld deployment.This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation.By separating and sequencing objectives,TSCL achieves a better balance between accuracy,stability,and explanation quality.The framework generalizes across domains,including medical NLI,and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
基金supported by 2023 Higher Education Scientific Research Planning Project of China Society of Higher Education(No.23PG0408)2023 Philosophy and Social Science Research Programs in Jiangsu Province(No.2023SJSZ0993)+2 种基金Nantong Science and Technology Project(No.JC2023070)Key Project of Jiangsu Province Education Science 14th Five-Year Plan(Grant No.B-b/2024/02/41)the Open Fund of Advanced Cryptography and System Security Key Laboratory of Sichuan Province(Grant No.SKLACSS-202407).
文摘Large language models(LLMs)have revolutionized AI applications across diverse domains.However,their widespread deployment has introduced critical security vulnerabilities,particularly prompt injection attacks that manipulate model behavior through malicious instructions.Following Kitchenham’s guidelines,this systematic review synthesizes 128 peer-reviewed studies from 2022 to 2025 to provide a unified understanding of this rapidly evolving threat landscape.Our findings reveal a swift progression from simple direct injections to sophisticated multimodal attacks,achieving over 90%success rates against unprotected systems.In response,defense mechanisms show varying effectiveness:input preprocessing achieves 60%–80%detection rates and advanced architectural defenses demonstrate up to 95%protection against known patterns,though significant gaps persist against novel attack vectors.We identified 37 distinct defense approaches across three categories,but standardized evaluation frameworks remain limited.Our analysis attributes these vulnerabilities to fundamental LLM architectural limitations,such as the inability to distinguish instructions from data and attention mechanism vulnerabilities.This highlights critical research directions such as formal verification methods,standardized evaluation protocols,and architectural innovations for inherently secure LLM designs.
基金supported by the special fund of the National Clinical Key Specialty Construction Program[(2022)301-2305].
文摘BACKGROUND:This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome(AAS)in the emergency department(ED)and to derive a simplifi ed version suitable for rapid clinical application.METHODS:In this multi-center retrospective cohort study,AAS patient data from three hospitals were analyzed.The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People’s Hospital of Xinjiang Uygur Autonomous Region,with Peking University Third Hospital data serving as the external test set.Four machine learning algorithms—logistic regression(LR),multilayer perceptron(MLP),Gaussian naive Bayes(GNB),and random forest(RF)—were used to develop predictive models based on 34 early-accessible clinical variables.A simplifi ed model was then derived based on fi ve key variables(Stanford type,pericardial eff usion,asymmetric peripheral arterial pulsation,decreased bowel sounds,and dyspnea)via Least Absolute Shrinkage and Selection Operator(LASSO)regression to improve ED applicability.RESULTS:A total of 929 patients were included in the modeling cohort,and 210 were included in the external test set.Four machine learning models based on 34 clinical variables were developed,achieving internal and external validation AUCs of 0.85-0.90 and 0.73-0.85,respectively.The simplifi ed model incorporating fi ve key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78,respectively.Both models showed robust calibration and predictive stability across datasets.CONCLUSION:Both kinds of models were built based on machine learning tools,and proved to have certain prediction performance and extrapolation.
基金supported by the National Science and Tech-nology Major Project of China(Nos.2017-II-0007-0021 and J2019-II-0017-0038)。
文摘Aerodynamic performances of axial compressors are significantly affected by variation of Reynolds number in aero-engines.In the design and analysis of compressors,previous correction methods for cascades and stages have difficulties in predicting comprehensively Reynolds number effects on airfoils,matching and characteristics curves.This study proposes Re-correction models for loss,deviation angle and endwall blockage based on classical theories and cascade tests,and loss and deviation models show good agreement in test data of NACA65 and C4 cascades.Throughflow method considering Reynolds number effects is developed by integrating the correction models into a verified Streamline Curvature(SLC)tool.A three-stage axial compressor is investigated through SLC and CFD methods from design Reynolds number(Red=2106)to low Re=4104,and the numerical methods are validated with test data of characteristic curves and spanwise distributions at Red.With Re reduction,SLC method with correction models well predicts variation in overall performances compared with CFD calculations and Wassell's model.Streamwise and spanwise matching such as total pressure and loss distributions in SLC predictions are basically consistent with those in CFD results at near-stall points under design and low Reynolds numbers.SLC and CFD methods share similar detections of stall risks in the third stage(Stg3),and their analyses of diffusion processes deviate to some extent due to different predictions in separated endwall flow.The correction models can be adopted to consider Reynolds number effects in through-flow design and analysis of axial compressors.
文摘Conversational recommender systems(CRSs)focus on refining preferences and providing personalized recommendations through natural language interactions and dialogue history.Large language models(LLMs)have shown outstanding performance across various domains,thereby prompting researchers to investigate their applicability in recommendation systems.However,due to the lack of task-specific knowledge and an inefficient feature extraction process,LLMs still have suboptimal performance in recommendation tasks.Therefore,external knowledge sources,such as knowledge graphs(KGs)and knowledge bases(KBs),are often introduced to address the issue of data sparsity.Compared to KGs,KBs possess higher retrieval efficiency,making them more suitable for scenarios where LLMs serve as recommenders.To this end,we introduce a novel framework integrating LLMs with KBs for enhanced retrieval generation,namely LLMKB.LLMKB initially leverages structured knowledge to create mapping dictionaries,extracting entity-relation information from heterogeneous knowledge to construct KBs.Then,LLMKB achieves the embedding calibration between user information representations and documents in KBs through retrieval model fine-tuning.Finally,LLMKB employs retrievalaugmented generation to produce recommendations based on fused text inputs,followed by post-processing.Experiment results on two public CRS datasets demonstrate the effectiveness of our framework.Our code is publicly available at the link:https://anonymous.4open.science/r/LLMKB-6FD0.
基金Supported by the National Key Specialty of Traditional Chinese Medicine(Spleen and Stomach Diseases),No.0500004National Natural Science Foundation of China,No.82205104 and No.82104850+1 种基金Hospital Capability Enhancement Project of Xiyuan Hospital,CACMS,No.XYZX0303-07the Fundamental Research Funds for the Central Public Welfare Research Institutes,Excellent Young Scientists Training Program of China Academy of Chinese Medical Sciences,No.ZZ16-YQ-002.
文摘BACKGROUND Non-erosive reflux disease(NERD),the main gastroesophageal reflux subtype,features reflux symptoms without mucosal damage.Anxiety links to visceral hypersensitivity in NERD,yet mechanisms and animal models are unclear.AIM To establish a translational NERD rat model with anxiety comorbidity via tail clamping and study corticotropin-releasing hormone(CRH)-mediated neuroimmune pathways in visceral hypersensitivity and esophageal injury.METHODS Sprague-Dawley(SD)and Wistar rats were grouped into sham,model,and modified groups(n=10 each).The treatments for the modified groups were as follows:SD rats received ovalbumin/aluminum hydroxide suspension+acid perfusion±tail clamping(40 minutes/day for 7 days),while Wistar rats received fructose water+tail clamping.Esophageal pathology,visceral sensitivity,and behavior were assessed.Serum CRH,calcitonin gene-related peptide(CGRP),5-hydroxytryptamine(5-HT),and mast cell tryptase(MCT)and central amygdala(CeA)CRH mRNA were measured via ELISA and qRT-PCR.RESULTS Tail clamping induced anxiety,worsening visceral hypersensitivity(lower abdominal withdrawal reflex thresholds,P<0.05)and esophageal injury(dilated intercellular spaces and mitochondrial edema).Both models showed raised serum CRH,CGRP,5-HT,and MCT(P<0.01)and CeA CRH mRNA expression(P<0.01).Behavioral tests confirmed anxiety-like phenotypes.NERD-anxiety rats showed clinical-like symptom severity without erosion.CONCLUSION Tail clamping induces anxiety in NERD models,worsening visceral hypersensitivity via CRH neuroimmune dysregulation,offering a translational model and highlighting CRH as a treatment target.