Journal Articles
54,735 articles found
1. Hepatitis C Patient Education: Large Language Models Show Promise in Disseminating Guidelines
Authors: Jinyan Chen, Ruijie Zhao, Chiyu He, Huigang Li, Yajie You, Zuyuan Lin, Ze Xiang, Jianyong Zhuo, Wei Shen, Zhihang Hu, Shusen Zheng, Xiao Xu, Di Lu. Journal of Clinical and Translational Hepatology, 2026, Issue 1, pp. 116–119 (4 pages).
This study evaluated the accuracy, completeness, and comprehensibility of responses from mainstream large language models (LLMs) to hepatitis C virus (HCV)-related questions, aiming to assess their performance in addressing patient queries about disease and lifestyle behaviors. The models selected were ChatGPT-4o, Gemini 2.0 Pro, Claude 3.5 Sonnet, and DeepSeek V3, with 12 questions chosen by two HCV experts from the domains of prevention, diagnosis, and treatment.
Keywords: patient queries; disease; lifestyle behaviors; large language models (LLMs); guidelines; hepatitis C; accuracy; patient education; comprehensibility
2. SDNet: A self-supervised bird recognition method based on large language models and diffusion models for improving long-term bird monitoring
Authors: Zhongde Zhang, Nan Su, Chenxun Deng, Yandong Zhao, Weiping Liu, Qiaoling Han. Avian Research, 2026, Issue 1, pp. 200–215 (16 pages).
The collection and annotation of large-scale bird datasets are resource-intensive and time-consuming processes that significantly limit the scalability and accuracy of biodiversity monitoring systems. While self-supervised learning (SSL) has emerged as a promising approach for leveraging unannotated data, current SSL methods face two critical challenges in bird species recognition: (1) long-tailed data distributions that result in poor performance on underrepresented species; and (2) domain shift issues caused by data augmentation strategies designed to mitigate class imbalance. Here we present SDNet, a novel SSL-based bird recognition framework that integrates diffusion models with large language models (LLMs) to overcome these limitations. SDNet employs LLMs to generate semantically rich textual descriptions for tail-class species by prompting the models with species taxonomy, morphological attributes, and habitat information, producing detailed natural language priors that capture fine-grained visual characteristics (e.g., plumage patterns, body proportions, and distinctive markings). These textual descriptions are subsequently used by a conditional diffusion model to synthesize new bird image samples through cross-attention mechanisms that fuse textual embeddings with intermediate visual feature representations during the denoising process, ensuring generated images preserve species-specific morphological details while maintaining photorealistic quality. Additionally, we incorporate a Swin Transformer as the feature extraction backbone, whose hierarchical window-based attention mechanism and shifted windowing scheme enable multi-scale local feature extraction that proves particularly effective at capturing fine-grained discriminative patterns (such as beak shape and feather texture) while mitigating domain shift between synthetic and original images through consistent feature representations across both data sources. SDNet is validated on both a self-constructed dataset (Bird_BXS) and a publicly available benchmark (Birds_25), demonstrating substantial improvements over conventional SSL approaches. Our results indicate that the synergistic integration of LLMs, diffusion models, and the Swin Transformer architecture contributes significantly to recognition accuracy, particularly for rare and morphologically similar species. These findings highlight the potential of SDNet for addressing fundamental limitations of existing SSL methods in avian recognition tasks and establishing a new paradigm for efficient self-supervised learning in large-scale ornithological vision applications.
Keywords: biodiversity conservation; bird intelligent monitoring; diffusion models; large-scale language models; long-tailed learning; self-supervised learning
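The description stage of SDNet summarized above is, at its core, template-driven prompting over structured species metadata (taxonomy, morphology, habitat). A minimal Python sketch of what such a prompt builder could look like; the function name, field names, template wording, and example species are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of a description-generation prompt for a tail-class
# species; all field names and the template text are assumptions.
def build_species_prompt(taxonomy, morphology, habitat):
    """Compose an LLM prompt from structured species metadata."""
    return (
        f"Describe the visual appearance of a bird with taxonomy '{taxonomy}', "
        f"morphological attributes '{morphology}', and habitat '{habitat}'. "
        "Focus on plumage patterns, body proportions, and distinctive markings."
    )

prompt = build_species_prompt(
    taxonomy="Passeriformes > Paridae > Parus major",
    morphology="yellow breast, black head stripe, ~14 cm",
    habitat="deciduous woodland",
)
```

The resulting string would be sent to an LLM, and its answer used as the text condition for the diffusion model's cross-attention.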
3. CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models
Authors: Ziyu Li, Zhen Chen, Xuejing Fu, Tong Mo, Weiping Li. Computers, Materials & Continua, 2026, Issue 3, pp. 2328–2343 (16 pages).
Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can more accurately represent items, improving item representation quality. We focus not only on item representations but also on user representations. To more precisely capture users' personalized preferences, we use traditional sequential recommendation models to train on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs and traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
Keywords: large language models; vision language models; sequential recommendation; instruction tuning
4. Recent Advances and Prospects in Research of In Vitro 3D Functional Skin Tissue Models
Authors: Li Tao, Zhang Liqing. China Detergent & Cosmetics, 2026, Issue 1, pp. 75–88 (14 pages).
With the increasing demand for understanding skin physiology and advancing regenerative medicine, in vitro three-dimensional (3D) functional skin tissue models have become vital tools in dermatological research. These models effectively mimic the complex structure and functions of human skin. This review comprehensively discusses the latest advancements in construction techniques, material selection, and applications of 3D skin models. It highlights the advantages and challenges associated with cutting-edge technologies such as layer-by-layer cell coating, 3D bioprinting, bio-spray technology, and photolithographic microfabrication in creating highly realistic skin models. Moreover, it examines the wide-ranging applications of 3D skin models, including elucidation of skin disease mechanisms, investigation of skin barrier functions, studies on skin aging and repair, hair regeneration, efficacy screening of therapeutic agents, cosmetic safety assessment, and personalized medicine. Finally, this review anticipates future trends in developing 3D skin models with greater structural and functional complexity, enhanced multifunctionality, and improved clinical translation.
Keywords: 3D skin models; tissue engineering; bioprinting; skin barrier; disease modeling; drug screening; hair regeneration; skin aging
5. Classification of Job Offers into Job Positions Using O*NET and BERT Language Models
Authors: Lino Gonzalez-Garcia, Miguel-Angel Sicilia, Elena García-Barriocanal. Computers, Materials & Continua, 2026, Issue 2, pp. 2133–2147 (15 pages).
Classifying job offers into occupational categories is a fundamental task in human resource information systems, as it improves and streamlines indexing, search, and matching between openings and job seekers. Comprehensive occupational databases such as O*NET or ESCO provide detailed taxonomies of interrelated positions that can be leveraged to align the textual content of postings with occupational categories, thereby facilitating standardization, cross-system interoperability, and access to metadata for each occupation (e.g., tasks, knowledge, skills, and abilities). In this work, we explore the effectiveness of fine-tuning existing language models (LMs) to classify job offers with occupational descriptors from O*NET. This enables a more precise assessment of candidate suitability by identifying the specific knowledge and skills required for each position, and helps automate recruitment processes by mitigating human bias and subjectivity in candidate selection. We evaluate three representative BERT-like models: BERT, RoBERTa, and DeBERTa. BERT serves as the baseline encoder-only architecture; RoBERTa incorporates advances in pretraining objectives and data scale; and DeBERTa introduces architectural improvements through disentangled attention mechanisms. The best performance was achieved with the DeBERTa model, although the other models also produced strong results, and no statistically significant differences were observed across models. We also find that these models typically reach optimal performance after only a few training epochs, and that training with smaller, balanced datasets is effective. Consequently, comparable results can be obtained with models that require fewer computational resources and less training time, facilitating deployment and practical use.
Keywords: occupational databases; job offer classification; language models; O*NET; BERT; RoBERTa; DeBERTa
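One practical takeaway from this abstract is that smaller, class-balanced training sets can be effective. A minimal sketch of class-balanced down-sampling in plain Python; the function name and toy postings are invented for illustration, not taken from the paper:

```python
import random
from collections import defaultdict

def balanced_subsample(examples, n_per_class, seed=0):
    """Down-sample (text, label) pairs so each class contributes at most
    n_per_class examples -- the 'smaller, balanced dataset' setting."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for text, label in examples:
        by_class[label].append((text, label))
    sample = []
    for label, items in sorted(by_class.items()):
        rng.shuffle(items)            # random pick within each class
        sample.extend(items[:n_per_class])
    return sample

# Toy imbalanced corpus: 30 "engineer" postings vs. 5 "nurse" postings.
data = ([(f"posting {i}", "engineer") for i in range(30)]
        + [(f"posting {i}", "nurse") for i in range(5)])
subset = balanced_subsample(data, n_per_class=5)
```

The balanced subset can then be fed to whatever fine-tuning pipeline is in use.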
6. Amelioration of behavioral and neural deficits in animal models of neurodegenerative disease by nanoformulations of curcumin and quercetin
Authors: Bridget Martinez, Philip V. Peplow. Neural Regeneration Research, 2026, Issue 8, pp. 3311–3322 (12 pages).
Neurodegenerative diseases are increasing in prevalence due largely to aging populations worldwide and improved medical care for the elderly. Currently approved drugs can reduce some of the symptoms of neurodegenerative diseases but cannot cure them. Inflammation is involved in the development and progression of neurodegenerative diseases, and oxidative stress is implicated in neurodegeneration associated with cognitive decline and age-related cognitive impairment. Polyphenols such as curcumin, quercetin, and resveratrol possess potent anti-inflammatory and antioxidant properties. Nanoformulations of curcumin and quercetin can optimize their pharmacological effects in the treatment of neurodegenerative diseases. Nanocarriers play a crucial role in delivering drugs across the blood-brain barrier, thereby lowering the risk of peripheral side effects. Various nanoforms, including nanoparticles and nanoemulsions, have been developed to enhance the bioavailability and solubility of curcumin and quercetin. The studies reviewed included 17 using curcumin nanoformulations and seven with quercetin nanoformulations, tested in widely used animal models of Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. Many of the curcumin and quercetin nanoformulations brought about improvements in learning and memory in behavioral tests of Alzheimer's disease models and were effective in reducing oxidative stress in the brain. Both nanocurcumin and nanoquercetin decreased the levels of inflammatory markers in the brain. Nanocurcumin formulations improved motor behavior, gait, and memory in Parkinson's disease models and increased dopaminergic neurons in the striatum and substantia nigra. Furthermore, nanocurcumin improved locomotor activity, memory, learning, and the number of dendrites of medium spiny neurons in Huntington's disease models. Nanocurcumin formulations decreased oxidative stress and inflammation in a model of demyelination. Several important limitations were identified in the studies reviewed, and these need to be considered in future studies. Also, clinical trials could be performed using the currently available nanoforms of curcumin and quercetin.
Keywords: animal models; behavioral deficits; curcumin; inflammation; nanoformulations; neural deficits; neurodegeneration; oxidative stress; quercetin
7. Semantic Causality Evaluation of Correlation Analysis Utilizing Large Language Models
Author: Adam Dudáš. Computers, Materials & Continua, 2026, Issue 5, pp. 2246–2269 (24 pages).
It is known that correlation does not imply causality. Some relationships identified in the analysis of data are coincidental or unknown, and some are produced by real-world causality of the situation, which is problematic, since there is a need to differentiate between these two scenarios. Until recently, the proper (semantic) causality of a relationship could be determined only by human experts from the area of expertise of the studied data. This has changed with the advance of large language models, which are often utilized as surrogates for such human experts, making the process automated and readily available to all data analysts. This motivates the main objective of this work, which is to introduce the design and implementation of a large language model-based semantic causality evaluator built on correlation analysis, together with its visual analysis model, called the Causal heatmap. After the implementation itself, the model is evaluated from three points of view: the quality of the visual model, the quality of causal evaluation based on large language models, and comparative analysis. The results reached in the study highlight the usability of large language models in this task and the potential of the proposed approach in the analysis of unknown datasets. The results of the experimental evaluation demonstrate the usefulness of the Causal heatmap method, supported by the evident highlighting of interesting relationships while suppressing irrelevant ones.
Keywords: correlation; causality; correlation analysis; large language models; visualization
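The evaluator described above starts from ordinary correlation analysis before an LLM assigns causal judgments. A dependency-free sketch of that first stage, computing a Pearson correlation matrix over named columns; the toy dataset and column names are invented, and the LLM-based causal step and the Causal heatmap itself are not reproduced:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(columns):
    """All pairwise correlations -- the input a causal-evaluation step would annotate."""
    names = list(columns)
    return {(a, b): pearson(columns[a], columns[b]) for a in names for b in names}

cols = {
    "temperature": [10, 15, 20, 25, 30],
    "ice_cream_sales": [12, 18, 24, 30, 36],  # exactly linear in temperature
}
m = correlation_matrix(cols)
```

Each off-diagonal entry would then be passed, with the two column names, to the LLM for a semantic causality verdict.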
8. Test for Varying-Coefficient Models with High-Dimensional Data
Authors: YANG Lin, GAO Yuzhao, QU Lianqiang. Journal of Systems Science & Complexity, 2026, Issue 1, pp. 203–229 (27 pages).
The authors consider the issue of hypothesis testing in varying-coefficient regression models with high-dimensional data. Utilizing kernel smoothing techniques, the authors propose a locally concerned U-statistic method to assess the overall significance of the coefficients. The authors establish that the proposed test is asymptotically normal under both the null hypothesis and local alternatives. Based on the locally concerned U-statistic, the authors further develop a globally concerned U-statistic to test whether the coefficient function is zero. A stochastic perturbation method is employed to approximate the distribution of the globally concerned test statistic. Monte Carlo simulations demonstrate the validity of the proposed test in finite samples.
Keywords: hypothesis testing; high-dimensional data; kernel smoothing; U-statistic; varying-coefficient models
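The test statistics above are built from U-statistics. As background: a generic order-2 U-statistic averages a symmetric kernel over all unordered pairs of observations, and with the kernel h(x, y) = (x - y)^2 / 2 it reduces to the unbiased sample variance. A minimal illustration of that classical construction, not the paper's locally or globally concerned statistics:

```python
from itertools import combinations

def u_statistic(data, kernel):
    """Order-2 U-statistic: average of a symmetric kernel over all
    unordered pairs of observations."""
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# With h(x, y) = (x - y)^2 / 2, the U-statistic equals the unbiased
# sample variance (ddof = 1) -- the textbook order-2 example.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
var_u = u_statistic(data, lambda x, y: (x - y) ** 2 / 2)
```

For this sample (mean 5, sum of squared deviations 32, n = 8), the unbiased variance is 32/7.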
9. When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation
Authors: Noreen Fuentes, Janeth Ugang, Narcisan Galamiton, Suzette Bacus, Samantha Shane Evangelista, Fatima Maturan, Lanndon Ocampo. Computers, Materials & Continua, 2026, Issue 1, pp. 2137–2162 (26 pages).
This study demonstrates a novel integration of large language models, machine learning, and multi-criteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems in a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
Keywords: self-moderation; user-generated content; k-means clustering; TODIM; large language models
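Step (2) of the framework is standard k-means clustering of post representations. A dependency-free sketch of Lloyd's algorithm on toy 2-D points standing in for post embeddings; the data and the deterministic first-k-points initialization are illustrative assumptions, not the paper's setup:

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=100):
    """Plain Lloyd's algorithm; the first k points serve as deterministic seeds."""
    centers = points[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sq_dist(p, centers[c]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster mean.
        new_centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters

# Two obvious groups of toy "post embeddings": near the origin and near (5, 5).
pts = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.0), (0.2, 0.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, k=2)
```

In the full pipeline, each resulting cluster would then be characterized and fed to the TODIM step for a moderation decision.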
10. Assessing Large Language Models for Early Article Identification in Otolaryngology—Head and Neck Surgery Systematic Reviews
Authors: Ajibola B. Bakare, Young Lee, Jhuree Hong, Claus-Peter Richter, Jonathan P. Kuriakose. Health Care Science, 2026, Issue 1, pp. 19–28 (10 pages).
Background: Assess ChatGPT and Bard's effectiveness in the initial identification of articles for Otolaryngology—Head and Neck Surgery systematic literature reviews. Methods: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases for authenticity and recall. Results: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs and had greater recall in Wong et al.'s review, with a broader date range in Jabbour et al.'s review. In Wu et al.'s review, ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2. Conclusion: Large language models (LLMs) failed to fully replicate peer-reviewed methodologies, producing outputs with inaccuracies but identifying relevant, especially recent, articles missed by the references. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.
Keywords: artificial intelligence; Bard; ChatGPT; large language models; systematic review
11. Therapeutic Potential of Fingolimod and Dimethyl Fumarate in Preclinical Pancreatic Cancer Models
Authors: Pauline Gousseau, Laurie Genest, Guillaume Froget, Tristan Rupp. Oncology Research, 2026, Issue 3, pp. 387–405 (19 pages).
Objectives: The five-year survival rate for pancreatic cancer is notably low, posing a significant challenge to patient health. The primary treatments are radiotherapy and chemotherapy, sometimes combined with targeted therapy; however, their clinical benefits are limited. Therefore, developing new models to evaluate the therapeutic potential of novel molecules is essential. Fingolimod and Dimethyl Fumarate (DMF), currently used to treat multiple sclerosis, have recently been shown to have anti-cancer effects in several preclinical tumor models. This study aims to evaluate the therapeutic potential of Fingolimod and DMF in pancreatic cancer by investigating their respective in vitro cytotoxicity and in vivo antitumor effects. Methods: In this study, we evaluated these two drugs for the first time in pancreatic preclinical models, in vitro using 3D spheroid tumor models and in vivo, compared against two standard-of-care treatments, Gemcitabine and Erlotinib. Results: In vitro, both Fingolimod and DMF induced cytotoxicity in spheroids from two pancreatic cell lines. Additionally, Fingolimod and DMF displayed anticancer effects in two subcutaneous xenograft models using PANC-1 and CFPAC-1 cells. Conclusions: Although the responses observed with Fingolimod and DMF were similar to those of Gemcitabine and Erlotinib, these findings indicate a potential emerging interest in Fingolimod and DMF for the treatment of pancreatic cancer. However, further work is still necessary to fully characterize how these drugs affect tumor progression.
Keywords: pancreatic cancer; preclinical models; tumor progression; Fingolimod; dimethyl fumarate
12. Task-Structured Curriculum Learning for Multi-Task Distillation: Enhancing Step-by-Step Knowledge Transfer in Language Models
Authors: Ahmet Ezgi, Aytug Onan. Computers, Materials & Continua, 2026, Issue 3, pp. 1647–1673 (27 pages).
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
Keywords: knowledge distillation; curriculum learning; language models; multi-task learning; step-by-step learning
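The three-phase curriculum above can be expressed as a per-epoch weighting of the prediction and explanation losses. A small sketch under assumed phase fractions; the 30/50/20 split and the equal 0.5/0.5 joint weighting are illustrative assumptions, not the paper's reported settings:

```python
def tscl_weights(epoch, total_epochs, frac_pred=0.3, frac_joint=0.5):
    """Return (prediction_weight, explanation_weight) for one epoch under a
    three-phase curriculum: prediction-only, joint, then explanation-only.
    Phase fractions and the 0.5/0.5 joint split are assumptions."""
    end_pred = int(total_epochs * frac_pred)               # phase (i) boundary
    end_joint = end_pred + int(total_epochs * frac_joint)  # phase (ii) boundary
    if epoch < end_pred:
        return 1.0, 0.0       # phase (i): prediction-only
    if epoch < end_joint:
        return 0.5, 0.5       # phase (ii): joint prediction-explanation
    return 0.0, 1.0           # phase (iii): explanation-only

schedule = [tscl_weights(e, total_epochs=10) for e in range(10)]
```

During training, the combined loss at each epoch would be weighted as `w_pred * L_pred + w_expl * L_expl`.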
13. Machine learning models for predicting carbonation depth in fly ash concrete: performance and interpretability insights
Authors: Arslan Qayyum Khan, Syed Ghulam Muhammad, Ali Raza, Amorn Pimanmas. Journal of Road Engineering, 2026, Issue 1, pp. 74–90 (17 pages).
This study aims to develop an accurate and robust machine learning model to predict the carbonation depth of fly ash concrete, overcoming the limitations of traditional predictive methods. Five ensemble-based models, namely adaptive boosting (AdaBoost), categorical boosting (CatBoost), gradient boosting regressor (GBR), hist gradient boosting regressor (HistGBR), and extreme gradient boosting (XGBoost), were developed and optimized using 729 high-quality dataset points incorporating seven input parameters: cement, CO₂, exposure time, water-binder ratio, fly ash, curing time, and compressive strength. Several performance evaluation metrics were used to compare the models. The GBR model emerged as the best-performing model, based on high coefficient of determination (R²) values and balanced error metrics across both validation and testing datasets. While all models performed exceptionally well on the training data, GBR demonstrated superior generalization capability, with R² values of 0.9438 on the validation set and 0.9310 on the testing set. Furthermore, its low mean squared error (MSE), root mean square error (RMSE), mean absolute error (MAE), and median absolute error (MdAE) confirmed its robustness and accuracy. Moreover, Shapley additive explanations (SHAP) analysis enhanced the interpretability of predictions, highlighting curing time and exposure time as the most critical drivers of carbonation depth.
Keywords: fly ash concrete; carbonation depth; machine learning; ensemble models; SHAP analysis
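The evaluation metrics named in this abstract (MSE, RMSE, MAE, MdAE, R²) have standard definitions and are straightforward to compute. A dependency-free sketch with an invented true/predicted pair, not data from the study:

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE, MdAE, and R^2 -- the metrics used to compare models."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    abs_sorted = sorted(abs(e) for e in errors)
    mdae = (abs_sorted[n // 2] if n % 2
            else (abs_sorted[n // 2 - 1] + abs_sorted[n // 2]) / 2)
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - sum(e * e for e in errors) / ss_tot
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae,
            "MdAE": mdae, "R2": r2}

# Invented carbonation-depth values (mm) for illustration only.
scores = regression_metrics([3.0, 5.0, 7.0, 9.0], [2.5, 5.0, 7.5, 9.0])
```

The same function applies to any model's validation and testing predictions, mirroring the paper's comparison protocol.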
14. ATLAS study: Design, athletic performance, and sex-specific regression models for muscle strength in the Greek population
Authors: Natia A. Pogosova, Despoina Brekou, Ioanna E. Gavra, Efthymia A. Katsareli, Eleni More, Panagiotis G. Symianakis, Maria Kafyra, Ioanna Panagiota Kalafati, Giannis Arnaoutis, George V. Dedoussis. Sports Medicine and Health Science, 2026, Issue 1, pp. 79–95 (17 pages).
Purpose: ATLAS is a cross-sectional study aiming to investigate environmental and genetic determinants of athletic performance in healthy Greek competitive athletes (CA). This article presents the study design, investigates the muscle strength performance (MSP) of 289 adult and teenage CA, exercisers, and physically inactive individuals (PI), and proposes predictive models of MSP for adults. Methods: Muscle maximal, speed, and explosive strength (MMS/MSS/MES) at unilateral maximal concentric flexion and extension contraction (FC/EC) were evaluated using the Biodex System 3 PRO™ at 60°/s, 180°/s, and 300°/s, while additional performance markers were assessed through field ergometric testing. Participants were interviewed about their lifestyle, dietary habits, physical activity, injury, and medical history. Body composition was assessed via bioelectrical impedance. gDNA was extracted from biochemical samples and then genotyped. Statistical analysis was conducted using IBM SPSS Statistics v21.0 and R. Results: Age, fitness, and sex impacted correlations of MSP with body composition and anthropometric measurements (p < 0.05). Among CA, females outperformed males in accuracy (p < 0.001), while males outperformed females in anaerobic power, MSP, speed, and endurance (p < 0.001). Adult CA outperformed exercisers and PI in MMS, MSS, and MES (p < 0.05). Multiple linear regression models with predictors age, FFM, body extremity, and training load explained the majority of variation in MMS (adjusted R²: 71.4%–88.9%), MSS (adjusted R²: 64.8%–78.4%), and MES (adjusted R²: 52.7%–68.4%) at EC, FC, and their mean (p < 0.001). Conclusions: Muscle-strengthening strategies should be customized according to individual fitness levels, body composition, and anthropometric measurements. The innovative sex-specific regression models assessing MMS, MSS, and MES at EC and FC provide a framework for personalizing rehabilitation and skill-specific training strategies.
Keywords: athletic performance; isokinetic dynamometer; muscle strength performance; Greek population; predictive models; body composition
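The reported models are ordinary multiple linear regressions of a strength measure on predictors such as age, fat-free mass (FFM), and training load. A NumPy sketch of the fitting step; all numbers are invented and the response is generated from known coefficients so the fit is checkable, so this is the procedure only, not ATLAS data:

```python
import numpy as np

# Invented predictor rows: (age in years, FFM in kg, training load in h/week).
X = np.array([
    [22, 60.0, 8.0],
    [25, 64.0, 10.0],
    [28, 70.0, 12.0],
    [31, 66.0, 9.0],
    [34, 72.0, 14.0],
    [27, 68.0, 11.0],
], dtype=float)

# Generate the strength response from known coefficients (intercept 5.0),
# so ordinary least squares should recover them exactly (noise-free data).
true_beta = np.array([0.5, 2.0, 3.0])
y = X @ true_beta + 5.0

Xd = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # ordinary least squares fit
```

Sex-specific models, as in the study, would simply repeat this fit on the male and female subsets separately.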
15. Information Diffusion Models and Fuzzing Algorithms for a Privacy-Aware Data Transmission Scheduling in 6G Heterogeneous ad hoc Networks
Authors: Borja Bordel Sánchez, Ramón Alcarria, Tomás Robles. Computer Modeling in Engineering & Sciences, 2026, Issue 2, pp. 1214–1234 (21 pages).
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality-of-Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy, while congestion probability remains below 0.001%, as required in 6G services.
Keywords: 6G networks; ad hoc networks; privacy; scheduling algorithms; diffusion models; fuzzing algorithms
Harnessing computational power for intelligent oncology in the age of large models: Status, challenges, and prospects
16
Authors: Kexin Xu, Yueran Xu, Qing Shi. Intelligent Oncology, 2026, No. 1, pp. 51-63 (13 pages)
The integration of large-scale foundation models (e.g., GPT series and AlphaFold) into oncology is fundamentally transforming both research methodologies and clinical practices, driven by unprecedented advancements in computational power. This review synthesizes recent progress in the application of large language models to core oncological tasks, including medical imaging analysis, genomic interpretation, and personalized treatment planning. Underpinned by advanced computational infrastructures, such as graphics processing unit/tensor processing unit clusters, heterogeneous computing, and cloud platforms, these models enable superior representation learning and generalization across multimodal data sources. This review examines how these infrastructures overcome key bottlenecks in intelligent oncology through scalable optimization strategies, including mixed-precision training, memory optimization, and heterogeneous computing. Alongside these technical advancements, the review explores pressing challenges, such as data heterogeneity, limited model interpretability, regulatory uncertainties, and the environmental impact of artificial intelligence (AI) systems. Special emphasis is placed on emerging solutions, encompassing green AI and edge computing, which offer promising approaches for low-resource deployment scenarios. Additionally, the review highlights the critical role of interdisciplinary collaboration among oncology, computer science, ethics, and policy to ensure that AI systems are not only powerful but also transparent, safe, and clinically relevant. Finally, the review outlines potential avenues for future research aimed at developing robust, scalable, and human-centered frameworks for intelligent oncology.
Keywords: Large language models; Intelligent oncology; Medical AI; Computational infrastructure; High-performance computing
PROMPTx-PE: Adaptive Optimization of Prompt Engineering Strategies for Accuracy and Robustness in Large Language Models
17
Authors: Talha Farooq Khan, Fahad Ali, Majid Hussain, Lal Khan, Hsien-Tsung Chang. Computers, Materials & Continua, 2026, No. 5, pp. 685-715 (31 pages)
The outstanding growth in applications of large language models (LLMs) underscores the need for adaptive and efficient prompt engineering strategies. Existing methods are often not adaptable, robust, or streamlined across different domains. This study introduces a prompt optimization framework, named PROMPTx-PE, designed to deliver greater accuracy and robustness on LLM-based tasks. The proposed system features a prompt selection scheme informed by reinforcement learning, a contextual layer, and a dynamic weighting module regulated by Lyapunov-based stability guidelines. PROMPTx-PE dynamically balances exploration and exploitation of the prompt space based on real-time feedback and a multi-objective reward design. Extensive testing on both benchmark (GLUE, SuperGLUE) and domain-specific datasets (Healthcare-QA and Industrial-NER) demonstrates a best performance of 89.4% and strong robustness at under 3% additional computational expense. The results confirm the effectiveness, consistency, and scalability of PROMPTx-PE as a platform for adaptive prompt engineering in current LLM applications.
Keywords: Prompt engineering; large language models; adaptive optimization; robustness; multi-objective optimization; reinforcement learning; natural language processing
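The reinforcement-learning-driven prompt selection that the PROMPTx-PE abstract describes can be illustrated as a multi-armed bandit over prompt templates. This is a minimal sketch under invented assumptions, not the paper's algorithm: the template names, their mean reward rates, and the epsilon-greedy rule are all hypothetical stand-ins for the framework's Lyapunov-regulated selection scheme.

```python
import random

random.seed(42)

# Hypothetical templates with hypothetical mean reward (accuracy) rates
templates = {
    "zero_shot": 0.55,
    "few_shot":  0.72,
    "cot":       0.81,
}

counts = {t: 0 for t in templates}    # times each template was tried
values = {t: 0.0 for t in templates}  # running mean reward per template
eps = 0.1                             # exploration probability

for step in range(2000):
    if random.random() < eps:
        t = random.choice(list(templates))   # explore a random template
    else:
        t = max(values, key=values.get)      # exploit the current best
    # Simulated Bernoulli reward; in practice this would be an LLM evaluation
    reward = 1.0 if random.random() < templates[t] else 0.0
    counts[t] += 1
    values[t] += (reward - values[t]) / counts[t]  # incremental mean update

best = max(values, key=values.get)
```

In a real pipeline the simulated reward would be replaced by scoring the LLM's answer under the chosen template, and the fixed epsilon by an adaptive exploration schedule.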
Noisy data-driven identification for errors-in-variables MISO Hammerstein nonlinear models
18
Authors: Jie Hou, Haoran Wang, Penghua Li, Hao Su. Control Theory and Technology, 2026, No. 1, pp. 111-126 (16 pages)
In this paper, we consider a multiple-input single-output (MISO) Hammerstein system whose inputs and output are disturbed by unknown Gaussian white measurement noises. The parameter estimation of such a system is a typical errors-in-variables (EIV) nonlinear system identification problem. This paper proposes a bias-correction least squares (BCLS) identification method to compute a consistent estimate of EIV MISO Hammerstein systems from noisy data. To obtain unbiased parameter estimates, the analytical expression of the estimation bias of the standard least squares (LS) algorithm is derived first; this bias is a function of the noise variances. A recursive algorithm is then proposed to estimate the unknown noise variances from noisy data. Finally, based on the bias estimation scheme, the bias caused by the correlation between the input-output signals exciting the true system and the corresponding measurement noise is removed, resulting in unbiased parameter estimates of the EIV MISO Hammerstein system. The performance of the proposed method is demonstrated through a simulation example and a chemical continuously stirred tank reactor (CSTR) system.
Keywords: Bias-corrected least squares; Errors-in-variables; MISO Hammerstein models; Parameter estimation; System identification
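The attenuation bias that bias-corrected least squares removes can be seen in a scalar errors-in-variables example. The sketch below is illustrative only: it assumes the input-noise variance is known a priori, whereas the paper estimates the noise variances recursively, and all numbers are invented.

```python
import random

random.seed(0)
n = 20000
b_true = 2.0
sigma_e = 0.5                     # input-noise std, assumed known in this sketch

x = [random.uniform(-1.0, 1.0) for _ in range(n)]        # true (unobserved) input
w = [xi + random.gauss(0.0, sigma_e) for xi in x]        # measured input (noisy)
y = [b_true * xi + random.gauss(0.0, 0.1) for xi in x]   # measured output (noisy)

Swy = sum(wi * yi for wi, yi in zip(w, y))
Sww = sum(wi * wi for wi in w)

b_ls = Swy / Sww                          # standard LS: attenuated toward zero
b_bcls = Swy / (Sww - n * sigma_e ** 2)   # bias-corrected LS: subtracts noise energy
```

Because the measured input `w` correlates with its own noise, standard LS underestimates the slope; subtracting the noise contribution `n * sigma_e**2` from the normal-equation term restores a consistent estimate, which is the core idea the BCLS method generalizes to MISO Hammerstein structures.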
Neural networks and econometric models:Advancing brain connectivity for Alzheimer's drug development
19
Authors: Lorenzo Pini, Paolo Pigato, Gloria Menegaz, Ilaria Boscolo Galazzo. Neural Regeneration Research, 2026, No. 7, pp. 2928-2929 (2 pages)
Advances in Alzheimer's disease (AD) research have deepened our understanding, yet the mechanisms driving its progression remain unclear. Although a range of in vivo biomarkers is now available (e.g., measurements of amyloid-beta (Aβ) and tau accumulation, the molecular hallmarks of AD; structural magnetic resonance imaging (MRI); assessments of brain metabolism; and, more recently, blood-based markers), a definitive diagnosis of AD continues to be challenging. For example, Frisoni et al.
Keywords: econometric models; amyloid-beta; Alzheimer's disease; AD research; drug development; neural networks; in vivo biomarkers; brain connectivity
Predicting potential suitable areas of Orchidaceae plants with national key reserve from Heilongjiang province in MaxEnt models
20
Authors: Weixue Zhong, Xiaoxue Wei, Yujia Yu, Xiaoqing Tang, Ye Zhang, Xinyu Huang, Xiaohui Li, Ying Liu, Dewen Li. Ecological Frontiers, 2026, No. 1, pp. 18-28 (11 pages)
The study aimed to predict the potential suitable areas of Orchidaceae plants under national key protection in Heilongjiang province, in support of plant conservation. Distribution point data for six Orchidaceae plants and 19 bioclimatic variables were selected, and the environmental factors required for modeling were screened out by Pearson correlation analysis and variance inflation factor (VIF) analysis. The potential suitable areas of the Orchidaceae plants were predicted at present and under different climate scenarios in the 2090s using a geographic information system (GIS) and the Maximum Entropy Model (MaxEnt). The prediction accuracy of the MaxEnt model was then evaluated using the AUC, TSS, and Kappa values. The results showed that: 1) The area under curve (AUC) values, true skill statistic (TSS) values, and Kappa values of the MaxEnt model predictions were above 0.9, 0.85, and 0.75, respectively. 2) Under the present climate scenario, the total suitable area of Orchidaceae plants was about 9.61×10^6 km^2, mainly distributed in Heilongjiang province. Among them, the high-suitability area of Cypripedium shanxiense S.C.Chen was the largest, and the non-suitable area of Cypripedium guttatum Sw. was the largest. 3) Under the different climate scenarios in the 2090s, the total suitable area increased slightly (9.62×10^6 km^2). Among them, Cypripedium shanxiense S.C.Chen and Gastrodiae Rhizoma both showed a trend of expansion to the southwest, and their suitable areas expanded significantly. Comprehensive factor analysis showed that temperature and precipitation were the main bioclimatic variables shaping the distribution of suitable areas, and the low-emission scenario (SSP2-4.5) will be more conducive to the survival of Orchidaceae plants.
Keywords: Orchidaceae plants; Potential suitable areas; Bioclimatic variables; MaxEnt models; National key reserve
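The Pearson-correlation screening step that precedes MaxEnt modeling can be sketched in a few lines of Python. The variable names, values, and the 0.8 threshold below are hypothetical; the paper combines this screening with VIF analysis, which is omitted here.

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def screen(variables, r_max=0.8):
    """Greedily keep variables whose |r| with every kept variable is <= r_max."""
    kept = []
    for name in variables:
        if all(abs(pearson(variables[name], variables[k])) <= r_max for k in kept):
            kept.append(name)
    return kept

# Hypothetical bioclimatic variables sampled at five occurrence points
variables = {
    "bio1":  [1.0, 2.0, 3.0, 4.0, 5.0],   # annual mean temperature
    "bio2":  [3.0, 5.0, 7.0, 9.0, 11.0],  # perfectly collinear with bio1
    "bio12": [3.0, 1.0, 4.0, 1.0, 5.0],   # annual precipitation, weakly correlated
}
print(screen(variables))  # → ['bio1', 'bio12']
```

Dropping one member of each highly correlated pair before fitting keeps the MaxEnt response curves interpretable and avoids inflating the apparent importance of redundant climate variables.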