Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
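As a minimal illustration of the adapter-rank variable at the center of this study, the sketch below implements a bare LoRA update in NumPy: a frozen weight W is augmented with a low-rank product (alpha/r)·B·A, and the trainable-parameter count scales linearly with the rank r. All shapes, values, and helper names here are hypothetical, not the paper's setup.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha):
    """y = W x + (alpha/r) * B (A x); only A and B are trainable."""
    r = A.shape[0]                       # adapter rank
    return W @ x + (alpha / r) * (B @ (A @ x))

def lora_params(d_out, d_in, r):
    """Trainable parameters contributed by one adapter (A: r x d_in, B: d_out x r)."""
    return r * d_in + d_out * r

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2                 # hypothetical layer sizes
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # zero-init up-projection: no initial shift
x = rng.standard_normal(d_in)

# With B = 0 the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha=16), W @ x)
print(lora_params(d_out, d_in, r))       # 32 trainable vs 64 frozen parameters
```

Halving r halves the adapter's parameter count, which is exactly the knob the study varies per layer type.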
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. Fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT), in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of LLaMA and BERT in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning like Woodpecker's post-remedy corrections are highlighted. The investigation's scope and methods are described, but the primary results are not. The work reveals that domain-specialized, fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing. Parameter-efficient tuning technologies like LoRA incur minimal computing cost with similar performance, demonstrating the necessity for adaptive models in multiple domains. The study aims to identify the optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs that address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Background: Sperm DNA fragmentation (sDF) has been proved to be an important parameter for predicting in vitro the potential fertility of a semen sample. Colloid centrifugation could be a suitable technique to select those donkey sperm more resistant to DNA fragmentation after thawing. Previous studies have shown that to elucidate the latent damage of the DNA molecule, sDF should be assessed dynamically, where the rate of fragmentation between treatments indicates how resistant the DNA is to iatrogenic damage. The rate of fragmentation is calculated using the slope of a linear regression equation. However, it has not been studied whether sDF dynamics fit this model. The objectives of this study were to evaluate the effect of different after-thawing centrifugation protocols on sperm DNA fragmentation and to elucidate the most accurate mathematical model (linear regression, exponential, or polynomial) for DNA fragmentation over time in frozen-thawed donkey semen. Results: After submitting post-thaw semen samples to no centrifugation (UDC), sperm washing (SW), or single layer centrifugation (SLC) protocols, sDF values after 6 h of incubation were significantly lower in SLC samples than in SW or UDC. Coefficient of determination (R²) values were significantly higher for a second-order polynomial model than for linear or exponential models. The highest values for acceleration of fragmentation (aSDF) were obtained for SW, followed by SLC and UDC. Conclusion: SLC after thawing seems to preserve DNA longevity longer in comparison to UDC and SW. Moreover, the fitting of models has shown that sDF dynamics in frozen-thawed donkey semen fit a second-order polynomial model, which implies that the fragmentation rate is not constant and fragmentation acceleration must be taken into account to elucidate hidden damage in the DNA molecule.
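The model comparison described above can be reproduced in miniature: fit linear and second-order polynomial curves to sDF-versus-time data and compare coefficients of determination, where a positive quadratic coefficient corresponds to a nonzero fragmentation acceleration (aSDF). The data below are synthetic, chosen only to show the shape of the computation, not the study's measurements.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted curve."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical sDF (%) readings over incubation time (h) with accelerating damage.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sdf = np.array([10.0, 10.5, 11.8, 14.2, 17.5, 22.1, 27.9])

lin = np.polyval(np.polyfit(t, sdf, 1), t)   # linear model: constant rate (slope)
quad_coef = np.polyfit(t, sdf, 2)            # 2nd-order model: rate + acceleration
quad = np.polyval(quad_coef, t)

# The leading (x^2) coefficient plays the role of the fragmentation acceleration.
assert quad_coef[0] > 0
```

Because the quadratic model nests the linear one, its R² is always at least as high; the interesting question, as in the study, is whether the improvement is significant.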
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is in-context learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under settings ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
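A minimal numerical sketch of the kind of system studied here: logistic growth punctuated by periodic impulsive harvesting. The simulation below is a plain Euler scheme with hypothetical parameters, not the authors' age-structured model; it only illustrates how moderate harvesting admits a positive periodic state while over-harvesting drives the population toward the trivial solution.

```python
def simulate(r, K, h, T, x0=10.0, t_end=100.0, dt=0.01):
    """Euler integration of x' = r x (1 - x/K); every T time units an
    impulsive harvest removes a fraction h of the stock."""
    x, t, next_harvest = x0, 0.0, T
    while t < t_end:
        x += dt * r * x * (1.0 - x / K)   # logistic growth step
        t += dt
        if t >= next_harvest:
            x *= (1.0 - h)                # impulsive harvest event
            next_harvest += T
    return x

# Moderate periodic harvesting settles near a positive periodic state...
x_mild = simulate(r=1.0, K=100.0, h=0.2, T=2.0)
# ...while frequent heavy harvesting drives the population toward extinction.
x_harsh = simulate(r=1.0, K=100.0, h=0.95, T=0.5)
assert x_mild > 10.0 and x_harsh < 1.0
```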
In recent years, there has been an increasing need for climate information across diverse sectors of society. This demand has arisen from the necessity to adapt to and mitigate the impacts of climate variability and change. Likewise, this period has seen a significant increase in our understanding of the physical processes and mechanisms that drive precipitation and its variability across different regions of Africa. By leveraging a large volume of climate model outputs, numerous studies have investigated the model representation of African precipitation as well as the underlying physical processes. These studies have assessed whether the physical processes are well depicted and whether the models are fit for informing mitigation and adaptation strategies. This paper provides a review of the progress in precipitation simulation over Africa in state-of-the-science climate models and discusses the major issues and challenges that remain.
Utilizing finite element analysis, the ballistic protection provided by a combination of perforated D-shaped and base armor plates, collectively referred to as radiator armor, is evaluated. ANSYS Explicit Dynamics is employed to simulate the ballistic impact of 7.62 mm armor-piercing projectiles on Aluminum AA5083-H116 and Steel Secure 500 armors, focusing on the evaluation of material deformation and penetration resistance at varying impact points. While the D-shaped armor plate alone is penetrated by the armor-piercing projectiles, the combination of the perforated D-shaped and base armor plates successfully halts penetration. A numerical model based on the finite element method is developed using software such as SolidWorks and ANSYS to analyze the interaction between the radiator armor and the bullet. The perforated design of the radiator armor is intended to maintain airflow for radiator function, with hole sizes smaller than the bullet core diameter to protect the radiator assemblies. Predictions are made regarding the brittle fracture resulting from the projectile core's bending due to asymmetric impact, and the resulting fragments failed to penetrate the perforated base armor plate. Craters are formed on the surface of the perforated D-shaped armor plate due to the impact of projectile fragments. The numerical model accurately predicts hole growth and projectile penetration upon impact with the armor, demonstrating effective protection of the radiator assemblies by the radiator armor.
Climate model prediction has been improved by enhancing model resolution as well as by implementing sophisticated physical parameterizations and refining data assimilation systems [section 6.1 in Wang et al. (2025)]. In relation to seasonal forecasting and climate projection in the East Asian summer monsoon season, proper simulation of the seasonal migration of rain bands by models remains a challenging and limiting factor [section 7.1 in Wang et al. (2025)].
Customer churn is the rate at which customers discontinue doing business with a company over a given time period. It is an essential measure for businesses to monitor, as high churn rates often indicate underlying issues with services, products, or customer experience, resulting in considerable income loss. Prediction of customer churn is a crucial task aimed at retaining customers and maintaining revenue growth. Traditional machine learning (ML) models often struggle to capture complex temporal dependencies in client behavior data. To address this, an optimized deep learning (DL) approach using a Regularized Bidirectional Long Short-Term Memory (RBiLSTM) model is proposed to mitigate overfitting and improve generalization error. The model integrates dropout, L2 regularization, and early stopping to enhance predictive accuracy while preventing over-reliance on specific patterns. Moreover, this study investigates the effect of optimization techniques on boosting the training efficiency of the developed model. Experimental results on a recent public customer churn dataset demonstrate that the trained model outperforms traditional ML models and some other DL models, such as Long Short-Term Memory (LSTM) and Deep Neural Network (DNN), in churn prediction performance and stability. The proposed approach achieves 96.1% accuracy, compared with LSTM and DNN, which attain 94.5% and 94.1% accuracy, respectively. These results confirm that the proposed approach can serve as a valuable tool for businesses to identify at-risk consumers proactively and implement targeted retention strategies.
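Of the three regularizers the abstract lists, early stopping reduces to a few lines of bookkeeping: track the best validation loss and halt once it fails to improve for a set number of epochs. The sketch below is framework-agnostic and the loss values are hypothetical; in practice the same logic is wired into Keras or PyTorch training loops as a callback.

```python
def early_stopping(val_losses, patience=3, min_delta=0.0):
    """Scan per-epoch validation losses; return (stop_epoch, best_epoch).
    Training stops once the loss has not improved by min_delta for
    `patience` consecutive epochs (weights from best_epoch are restored)."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch
    return len(val_losses) - 1, best_epoch

# Validation loss improves, then drifts up: stop 3 epochs past the minimum.
stop, best = early_stopping([0.9, 0.6, 0.5, 0.52, 0.55, 0.6, 0.7], patience=3)
assert (stop, best) == (5, 2)
```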
This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
Machine learning-assisted methods for rapid and accurate prediction of the temperature field, mushy zone, and grain size were proposed for the heating-cooling combined mold (HCCM) horizontal continuous casting of C70250 alloy plates. First, finite element simulations of casting processes were carried out with various parameters to build a dataset. Subsequently, different machine learning algorithms were employed to achieve high precision in predicting temperature fields, mushy zone locations, mushy zone inclination angles, and billet grain size. Finally, the process parameters were quickly optimized using a strategy consisting of random generation, prediction, and screening, allowing the mushy zone to be controlled to the desired target. The optimized parameters are 1234 °C for the heating mold temperature, 47 mm/min for the casting speed, and 10 L/min for the cooling water flow rate. The optimized mushy zone is located in the middle of the second heat insulation section and has an inclination angle of roughly 7°.
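The generate-predict-screen strategy mentioned above can be sketched generically: sample random process parameters, score them with a cheap predictor standing in for the trained ML model, and keep candidates whose predicted mushy-zone inclination angle is near the target. The surrogate below is a made-up linear stand-in, and all coefficients and sampling ranges are hypothetical.

```python
import random

def surrogate_angle(temp, speed, flow):
    """Hypothetical stand-in for the trained predictor: maps (heating-mold
    temperature in deg C, casting speed in mm/min, cooling-water flow in L/min)
    to a mushy-zone inclination angle in degrees."""
    return 0.03 * (temp - 1100) + 0.1 * (speed - 40) - 0.3 * (flow - 8)

def screen(target, tol, n=10000, seed=0):
    """Random generation -> prediction -> screening."""
    rng = random.Random(seed)
    keep = []
    for _ in range(n):
        temp = rng.uniform(1150, 1300)   # candidate mold temperature
        speed = rng.uniform(30, 60)      # candidate casting speed
        flow = rng.uniform(5, 15)        # candidate water flow rate
        if abs(surrogate_angle(temp, speed, flow) - target) < tol:
            keep.append((temp, speed, flow))
    return keep

candidates = screen(target=7.0, tol=0.2)   # aim near the reported ~7 deg angle
```

Because the surrogate is orders of magnitude cheaper than a finite element run, thousands of candidates can be screened before any full simulation is repeated.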
Vegetation plays an important role in the environmental transport behavior of organic pollutants; however, the different roles of crops and natural vegetation have been ignored in most previous studies. In this study, we developed the BETR-Urban-Rural-Veg model to quantitatively evaluate the influences of both natural vegetation and crops on the multimedia transport processes of phenanthrene (PHE) and benzo(a)pyrene (BaP) in mainland China. The geographic distributions of polycyclic aromatic hydrocarbon (PAH) emissions and concentrations were consistent, displaying higher levels in northern China and lower levels in southern China. Under seasonal simulations, for both natural vegetation and crops, PAH concentrations in winter and spring were 1.5- to 27-fold higher than in summer and autumn, especially for PHE. Owing to the higher leaf area index (LAI) of natural vegetation and the harvesting of crops, the filter and sequestration effect of natural vegetation was stronger than that of crops, while the seasonal changes of PAH concentrations in crops were more significant than in natural vegetation. Temperature, precipitation rates, and LAI might have important influences on seasonal concentrations and the overall persistence of PAHs. PHE was more sensitive to the impacts of seasonal environmental parameters. Under different landscape scenarios, average annual PAH concentrations in natural vegetation were always slightly higher than those in crops, and the overall persistence of BaP was greatly affected, increasing by 15.15%-16.47%. This improved model provides a useful tool for environmental management. The results of this study are expected to support land use plans and decision-making in mainland China.
Current shipping, tourism, and resource development requirements call for more accurate predictions of the Arctic sea-ice concentration (SIC). However, due to the complex physical processes involved, predicting the spatiotemporal distribution of Arctic SIC is more challenging than predicting its total extent. In this study, spatiotemporal prediction models for monthly Arctic SIC at 1- to 3-month leads are developed based on U-Net, an effective convolutional deep-learning approach. Based on explicit Arctic sea-ice-atmosphere interactions, 11 variables associated with Arctic sea-ice variations are selected as predictors, including observed Arctic SIC and atmospheric, oceanic, and heat flux variables at 1- to 3-month leads. The prediction skills for the monthly Arctic SIC of the test set (from January 2018 to December 2022) are evaluated by examining the mean absolute error (MAE) and binary accuracy (BA). Results show that the U-Net model has lower MAE and higher BA for Arctic SIC compared to two dynamic climate prediction systems (CFSv2 and NorCPM). Analysis of the relative importance of each predictor shows that the prediction accuracy relies more on the SIC at the 1-month lead, but on the surface net solar radiation flux at 2- to 3-month leads. However, dynamic models show limited prediction skill for the surface net solar radiation flux and other physical processes, especially in autumn. Therefore, the U-Net model can capture the connections among these key physical processes associated with Arctic sea ice and thus offers a significant advantage in predicting Arctic SIC.
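The two skill scores used here are simple to state: MAE is the mean absolute SIC error over the grid, and binary accuracy compares ice/no-ice masks, where SIC above 15% is the conventional ice-extent threshold. A toy sketch follows; the 2x2 field is fabricated purely to show the arithmetic.

```python
import numpy as np

def sic_scores(pred, obs, ice_threshold=0.15):
    """MAE over the grid, plus binary accuracy on the ice/no-ice masks
    (grid cells with SIC > 15% count as ice-covered)."""
    mae = float(np.mean(np.abs(pred - obs)))
    ba = float(np.mean((pred > ice_threshold) == (obs > ice_threshold)))
    return mae, ba

obs  = np.array([[0.0, 0.1], [0.6, 0.9]])    # toy observed SIC field (fractions)
pred = np.array([[0.05, 0.2], [0.5, 0.8]])   # toy predicted SIC field
mae, ba = sic_scores(pred, obs)
# One of four cells crosses the 15% threshold incorrectly, so BA = 0.75.
```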
Rural domestic sewage treatment is critical for environmental protection. This study defines the spatial pattern of villages from the perspective of rural sewage treatment and develops an integrated decision-making system to propose a sewage treatment mode and scheme suitable for local conditions. By considering the village spatial layout and terrain factors, a decision tree model of residential density and terrain type was constructed with accuracies of 76.47% and 96.00%, respectively. Combined with binary classification probability unit regression, an appropriate sewage treatment mode for each village was determined with 87.00% accuracy. The Analytic Hierarchy Process (AHP), combined with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model, formed the basis for optimal treatment process selection under different emission standards. Verification was conducted in 542 villages across three counties of the Inner Mongolia Autonomous Region, focusing on the standard effluent effect (0.3773), low investment cost (0.3196), and high-standard effluent effect (0.5115) to determine the best treatment process for the same emission standard under different needs. The annual environmental and carbon emission benefits of sewage treatment in these villages were estimated. This model matches village density, geographic features, and social development level, and provides scientific support and a theoretical basis for rural sewage treatment decision-making.
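TOPSIS itself is compact enough to sketch: normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal and anti-ideal solutions. The weights below echo two of the abstract's reported values, but the alternatives and criterion scores are invented for illustration.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns) by relative closeness
    to the ideal solution; `benefit` flags benefit vs cost criteria."""
    m = matrix / np.linalg.norm(matrix, axis=0)     # vector-normalize each criterion
    v = m * weights                                 # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                  # higher = better

# Hypothetical treatment processes scored on effluent quality (benefit
# criterion) and investment cost (cost criterion, in arbitrary units).
M = np.array([[0.9, 120.0],
              [0.7,  60.0],
              [0.5,  40.0]])
scores = topsis(M, weights=np.array([0.5115, 0.4885]), benefit=np.array([True, False]))
best = int(np.argmax(scores))   # middle process balances quality and cost here
```

In the study's pipeline, AHP supplies the criterion weights that this step consumes.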
This study explores the thin-layer convective solar drying of Marrubium vulgare L. leaves under conditions typical of sun-rich semi-arid climates. Drying experiments were conducted at three inlet-air temperatures (40 °C, 50 °C, 60 °C) and two air velocities (1.5 and 2.5 m·s^(-1)) using an indirect solar dryer with auxiliary temperature control. Moisture-ratio data were fitted with eight widely used thin-layer models and evaluated using the correlation coefficient (r), root-mean-square error (RMSE), and Akaike information criterion (AIC). A complementary heat-transfer analysis based on Reynolds and Prandtl numbers with appropriate Nusselt correlations was used to relate flow regime to drying performance, and an energy balance quantified the relative contributions of solar and auxiliary heat. The logarithmic model consistently achieved the lowest RMSE/AIC with r > 0.99 across all conditions. Higher temperature and air velocity significantly reduced drying time during the decreasing-rate period, with no constant-rate stage observed. On average, solar input supplied the large majority of the thermal demand, while the auxiliary heater compensated for short irradiance drops to maintain setpoints. These findings provide a reproducible dataset and a modelling benchmark for M. vulgare leaves, and they support the energy-aware design of hybrid solar dryers for medicinal plants in sun-rich regions.
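The model-selection criteria named above are easy to make concrete: for a least-squares fit, RMSE is the root mean squared residual and AIC can be taken as n·ln(SSE/n) + 2k, with k the number of fitted parameters. The sketch below compares a three-parameter logarithmic fit, MR = a·exp(-kt) + c, against a one-parameter Newton fit, MR = exp(-kt), on a synthetic moisture-ratio curve; all coefficients are hypothetical.

```python
import numpy as np

def rmse(y, y_hat):
    """Root-mean-square error of a fitted curve."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def aic(y, y_hat, k):
    """AIC for least-squares fits: n*ln(SSE/n) + 2k, k = fitted parameters."""
    n = len(y)
    sse = float(np.sum((y - y_hat) ** 2))
    return n * np.log(sse / n) + 2 * k

# Synthetic moisture-ratio curve over drying time (arbitrary hours).
t = np.linspace(0.0, 5.0, 11)
mr = 0.9 * np.exp(-0.8 * t) + 0.1           # "true" curve with a residual plateau
log_fit = 0.88 * np.exp(-0.78 * t) + 0.11   # logarithmic model (3 parameters)
newton_fit = np.exp(-0.6 * t)               # Newton model (1 parameter)

# The logarithmic model captures the plateau, so it wins on RMSE even
# after AIC penalizes its two extra parameters.
assert rmse(mr, log_fit) < rmse(mr, newton_fit)
```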
BACKGROUND Non-erosive reflux disease (NERD), the main gastroesophageal reflux subtype, features reflux symptoms without mucosal damage. Anxiety is linked to visceral hypersensitivity in NERD, yet the mechanisms and animal models are unclear. AIM To establish a translational NERD rat model with anxiety comorbidity via tail clamping and to study corticotropin-releasing hormone (CRH)-mediated neuroimmune pathways in visceral hypersensitivity and esophageal injury. METHODS Sprague-Dawley (SD) and Wistar rats were grouped into sham, model, and modified groups (n=10 each). The treatments for the modified groups were as follows: SD rats received ovalbumin/aluminum hydroxide suspension + acid perfusion ± tail clamping (40 minutes/day for 7 days), while Wistar rats received fructose water + tail clamping. Esophageal pathology, visceral sensitivity, and behavior were assessed. Serum CRH, calcitonin gene-related peptide (CGRP), 5-hydroxytryptamine (5-HT), and mast cell tryptase (MCT), as well as central amygdala (CeA) CRH mRNA, were measured via ELISA and qRT-PCR. RESULTS Tail clamping induced anxiety, worsening visceral hypersensitivity (lower abdominal withdrawal reflex thresholds, P<0.05) and esophageal injury (dilated intercellular spaces and mitochondrial edema). Both models showed raised serum CRH, CGRP, 5-HT, and MCT (P<0.01) and CeA CRH mRNA expression (P<0.01). Behavioral tests confirmed anxiety-like phenotypes. NERD-anxiety rats showed clinical-like symptom severity without erosion. CONCLUSION Tail clamping induces anxiety in NERD models, worsening visceral hypersensitivity via CRH neuroimmune dysregulation, offering a translational model and highlighting CRH as a treatment target.
The moving morphable component (MMC) topology optimization method, a typical explicit topology optimization method, has attracted wide attention. In the MMC topology optimization framework, the surrogate material model is currently the main approach for finite element analysis, and its effectiveness has been fully confirmed. However, the surrogate material model suffers from accuracy problems when dealing with boundary elements, which can affect the topology optimization results. In this study, a boundary element reconstruction (BER) model is proposed based on the surrogate material model under the MMC topology optimization framework to improve the accuracy of topology optimization. The proposed BER model reconstructs the boundary elements by refining the local meshes and obtaining new nodes in the boundary elements. The density of the boundary elements is then recalculated using the new node information, which is more accurate than in the original model. Based on the new density of the boundary elements, the material properties and volume information of the boundary elements are updated. Compared with other finite element analysis methods, the BER model is simple and feasible and can improve computational accuracy. Finally, the effectiveness and superiority of the proposed method are verified by comparing it with the optimization results of the original surrogate material model through several numerical examples.
AIM: To build a functional generalized estimating equation (GEE) model to detect glaucomatous visual field progression and compare the performance of the proposed method with that of commonly employed algorithms. METHODS: In total, 716 eyes of 716 patients with primary open angle glaucoma (POAG) with at least 5 reliable 24-2 test results and 2 years of follow-up were selected. The functional GEE model was used to detect perimetric progression in the training dataset (501 eyes). In the testing dataset (215 eyes), progression was evaluated using the functional GEE model, mean deviation (MD) and visual field index (VFI) rates of change, Advanced Glaucoma Intervention Study (AGIS) and Collaborative Initial Glaucoma Treatment Study (CIGTS) scores, and pointwise linear regression (PLR). RESULTS: The proposed method showed the highest proportion of eyes detected as progressing (54.4%), followed by the VFI rate (34.4%), PLR (23.3%), and MD rate (21.4%). The CIGTS and AGIS scores had lower proportions of eyes detected as progressing (7.9% and 5.1%, respectively). The time to detection of progression was significantly shorter for the proposed method than for the other algorithms (adjusted P≤0.019). The VFI rate displayed moderate pairwise agreement with the proposed method (k=0.47). CONCLUSION: The functional GEE model shows the highest proportion of eyes detected as perimetric progression and the shortest time to detect perimetric progression in patients with POAG.
Funding: supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy better than shrinking that of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
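To make the parameter trade-off concrete, the following sketch computes LoRA adapter sizes for the two layer types. The hidden and intermediate dimensions are illustrative LLaMA-7B-like assumptions, not values taken from the study.

```python
# Illustrative sketch: LoRA adds r * (d_in + d_out) trainable parameters per
# adapted weight matrix. Dimensions below are LLaMA-7B-like assumptions,
# not values reported in the study.

def lora_params(d_in, d_out, rank):
    """Trainable parameters of one LoRA pair (A: d_in x r, B: r x d_out)."""
    return rank * (d_in + d_out)

HIDDEN, INTER = 4096, 11008  # hypothetical hidden / MLP intermediate sizes

def layer_budget(rank_attn, rank_mlp):
    # Self-attention: q, k, v, o projections, each HIDDEN x HIDDEN.
    attn = 4 * lora_params(HIDDEN, HIDDEN, rank_attn)
    # MLP: gate and up (HIDDEN -> INTER) plus down (INTER -> HIDDEN).
    mlp = 2 * lora_params(HIDDEN, INTER, rank_mlp) + lora_params(INTER, HIDDEN, rank_mlp)
    return attn + mlp

base = layer_budget(rank_attn=16, rank_mlp=16)
# Halving the MLP rank (the larger layers, which the study finds more robust)
# frees more parameters than halving the attention rank.
save_mlp = base - layer_budget(16, 8)
save_attn = base - layer_budget(8, 16)
```

With these assumed sizes, halving the MLP rank saves 362,496 parameters per layer versus 262,144 for attention, which is why targeting the larger layers is the more effective budget cut.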
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52306126, 22350710788, 12432010, 11988102, and 92270203) and the Xplore Prize.
Abstract: Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. Fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
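The orchestration described in this abstract can be caricatured as a verify-generate-run-correct loop. The sketch below uses invented stubs in place of the paper's LLM agents; the actual implementation is in the linked repository.

```python
# Hypothetical sketch of the verify -> generate -> run -> correct loop the
# abstract describes. Every function below is a stub standing in for an
# LLM-backed agent; none of this is the paper's actual code.

def verify_input(description):
    # stub: a real agent would check the case description for completeness
    return bool(description.strip())

def generate_config(description, feedback=""):
    # stub: a real agent would emit OpenFOAM dictionaries
    return "config(%s | fix=%s)" % (description, feedback)

def run_simulation(config):
    # stub: fails on the first attempt so the correction path is exercised
    if "fix=diverged" in config:
        return True, ""
    return False, "diverged"

def solve(description, max_iters=3):
    if not verify_input(description):
        return None
    feedback = ""
    for _ in range(max_iters):
        config = generate_config(description, feedback)
        ok, error = run_simulation(config)
        if ok:
            return config
        feedback = error  # route the solver error back to the generator
    return None

result = solve("lid-driven cavity, Re=100")
```

The point of the structure is that the error message, not a human, closes the loop: each failed run becomes feedback for the next configuration attempt.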
Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the LLaMA large language model and the BERT NLP model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Abstract: A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods such as parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks addressing ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning, such as Woodpecker's post-remedy corrections, are highlighted. Each investigation's scope, aims, and methods are described. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP advance remote sensing. Parameter-efficient tuning technologies like LoRA incur minimal computing cost with comparable performance, demonstrating the necessity for adaptive models across multiple domains. The goals are to identify optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs that address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Funding: partially supported by grants RZ2009-00006-00-00 (Instituto Nacional de Investigacion y Tecnología Agraria y Alimentaria, Ministerio de Ciencia e Innovación, Spain) and AGL-2013-42726-R (Secretaria de Estado de Investigacion, Desarrollo e Innovacion, Ministerio de Economia y Competitividad, Spain); also supported by a Ph.D. fellowship from the ceiA3 (Andalucia, Spain), with funding provided by Banco Santander through its Global Division, Santander Universidades, and funded by the Swedish Foundation for Equine Research, Stockholm, Sweden (H14-47-008).
Abstract: Background: Sperm DNA fragmentation (sDF) has been proved to be an important parameter for predicting in vitro the potential fertility of a semen sample. Colloid centrifugation could be a suitable technique to select those donkey sperm more resistant to DNA fragmentation after thawing. Previous studies have shown that to elucidate the latent damage of the DNA molecule, sDF should be assessed dynamically, where the rate of fragmentation between treatments indicates how resistant the DNA is to iatrogenic damage. The rate of fragmentation is calculated using the slope of a linear regression equation. However, it has not been studied whether sDF dynamics fit this model. The objectives of this study were to evaluate the effect of different after-thawing centrifugation protocols on sperm DNA fragmentation and to elucidate the most accurate mathematical model (linear regression, exponential, or polynomial) for DNA fragmentation over time in frozen-thawed donkey semen. Results: After submitting post-thaw semen samples to no centrifugation (UDC), sperm washing (SW), or single layer centrifugation (SLC) protocols, sDF values after 6 h of incubation were significantly lower in SLC samples than in SW or UDC. Coefficient of determination (R²) values were significantly higher for a second-order polynomial model than for linear or exponential models. The highest values for acceleration of fragmentation (aSDF) were obtained for SW, followed by SLC and UDC. Conclusion: SLC after thawing seems to preserve DNA longevity longer in comparison to UDC and SW. Moreover, the fine-tuning of models has shown that sDF dynamics in frozen-thawed donkey semen fit a second-order polynomial model, which implies that the fragmentation rate is not constant and fragmentation acceleration must be taken into account to elucidate hidden damage in the DNA molecule.
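The model-comparison step can be illustrated as follows: fit linear and second-order polynomial models to sDF-versus-time data and compare R² values. The data points are synthetic, generated from an assumed quadratic trend, not the study's measurements.

```python
# Sketch of the model comparison: linear vs. second-order polynomial fit of
# sDF over incubation time. Data are synthetic (an assumed quadratic trend).

def polyfit(t, y, deg):
    """Least-squares polynomial fit (coefficients a0..a_deg) via normal equations."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in t) for j in range(n)] for i in range(n)]
    b = [sum(yk * x ** i for x, yk in zip(t, y)) for i in range(n)]
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * n
    for i in reversed(range(n)):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, n))) / A[i][i]
    return a

def r_squared(t, y, a):
    pred = [sum(c * x ** i for i, c in enumerate(a)) for x in t]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

t = [0, 1, 2, 3, 4, 5, 6]                           # incubation time, h
y = [5.0, 5.33, 6.12, 7.37, 9.08, 11.25, 13.88]     # % sDF, quadratic trend

lin, quad = polyfit(t, y, 1), polyfit(t, y, 2)
r2_lin, r2_quad = r_squared(t, y, lin), r_squared(t, y, quad)
aSDF = 2 * quad[2]   # fragmentation acceleration = second derivative
```

On accelerating data the quadratic R² is markedly higher than the linear one, and the acceleration term (here aSDF) is exactly the "hidden damage" quantity a linear slope cannot capture.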
Funding: This work is part of the research projects LaTe4PoliticES (PID2022-138099OBI00), funded by MICIU/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF) - A Way of Making Europe, and LT-SWM (TED2021-131167B-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. Mr. Ronghao Pan is supported by the Programa Investigo grant, funded by the Region of Murcia, the Spanish Ministry of Labour and Social Economy, and the European Union-NextGenerationEU under the "Plan de Recuperación, Transformación y Resiliencia (PRTR)."
Abstract: Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: the ability to receive instructions in natural language or task demonstrations and generate expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs in settings ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the encoder-decoder model Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
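The few-shot setting can be sketched as simple prompt assembly; the instruction wording and demonstrations below are invented placeholders, not the paper's actual prompts.

```python
# Sketch of few-shot prompt construction for sexism detection, in the spirit
# of the abstract's in-context learning setup. The instruction and the
# demonstrations are invented placeholders, not the paper's prompts.

def build_prompt(instruction, demonstrations, test_instance):
    lines = [instruction, ""]
    for text, label in demonstrations:          # retrieved demonstrations
        lines += ["Text: " + text, "Label: " + label, ""]
    lines += ["Text: " + test_instance, "Label:"]
    return "\n".join(lines)

demos = [("You throw like a girl.", "sexist"),
         ("Great save by the keeper!", "not sexist")]
prompt = build_prompt("Classify the text as sexist or not sexist.",
                      demos, "Some test post to classify.")
```

The model is then asked to continue the prompt after the trailing "Label:", so no gradient update is needed; only the retrieved demonstrations change between test instances.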
Funding: Supported by the National Natural Science Foundation of China (12261018) and the Universities Key Laboratory of Mathematical Modeling and Data Mining in Guizhou Province (2023013).
Abstract: In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
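The impulsive dynamics can be sketched numerically: logistic growth between pulses, with a fraction of the population removed at each harvesting instant. Parameter values are illustrative, not those analyzed in the paper.

```python
# Numerical sketch of a logistic population with periodic impulsive harvesting:
# dx/dt = r x (1 - x/K) between pulses; at t = n*T a fraction h is removed.
# All parameter values are illustrative, not the paper's.

def simulate(r=1.0, K=100.0, h=0.3, T=1.0, x0=10.0, pulses=50, steps=200):
    x, traj = x0, []
    dt = T / steps
    for _ in range(pulses):
        for _ in range(steps):            # Euler integration between pulses
            x += dt * r * x * (1 - x / K)
        x *= (1 - h)                      # impulsive harvest at t = n*T
        traj.append(x)                    # record post-harvest population
    return traj

traj = simulate()
# The trajectory stays bounded (below K) and settles onto a positive
# periodic orbit, mirroring the boundedness/stability results of the paper.
```

Consecutive post-harvest values converge geometrically, which is the discrete-map picture behind the stability of the positive periodic solution.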
Funding: The authors thank the World Climate Research Programme (WCRP), Climate Variability and Predictability (CLIVAR), and Global Energy and Water Exchanges (GEWEX) for facilitating the coordination of African monsoon research; acknowledge support from the Center for Earth System Modeling, Analysis, and Data at the Pennsylvania State University; and acknowledge the support of the Office of Science of the U.S. Department of Energy Biological and Environmental Research as part of the Regional & Global Model Analysis (RGMA) program area.
Abstract: In recent years, there has been an increasing need for climate information across diverse sectors of society. This demand has arisen from the necessity to adapt to and mitigate the impacts of climate variability and change. Likewise, this period has seen a significant increase in our understanding of the physical processes and mechanisms that drive precipitation and its variability across different regions of Africa. By leveraging a large volume of climate model outputs, numerous studies have investigated the model representation of African precipitation as well as the underlying physical processes. These studies have assessed whether the physical processes are well depicted and whether the models are fit for informing mitigation and adaptation strategies. This paper provides a review of the progress in precipitation simulation over Africa in state-of-the-science climate models and discusses the major issues and challenges that remain.
Abstract: Utilizing finite element analysis, the ballistic protection provided by a combination of perforated D-shaped and base armor plates, collectively referred to as radiator armor, is evaluated. ANSYS Explicit Dynamics is employed to simulate the ballistic impact of 7.62 mm armor-piercing projectiles on Aluminum AA5083-H116 and Steel Secure 500 armors, focusing on the evaluation of material deformation and penetration resistance at varying impact points. While the D-shaped armor plate alone is penetrated by the armor-piercing projectiles, the combination of the perforated D-shaped and base armor plates successfully halts penetration. A numerical model based on the finite element method is developed using software such as SolidWorks and ANSYS to analyze the interaction between the radiator armor and the bullet. The perforated design of the radiator armor is intended to maintain airflow for radiator function, with hole sizes smaller than the bullet core diameter to protect the radiator assemblies. Predictions are made regarding the brittle fracture resulting from the bending of the projectile core due to asymmetric impact, and the resulting fragments fail to penetrate the perforated base armor plate. Craters are formed on the surface of the perforated D-shaped armor plate by the impact of projectile fragments. The numerical model accurately predicts hole growth and projectile penetration upon impact with the armor, demonstrating effective protection of the radiator assemblies by the radiator armor.
Abstract: Climate model prediction has been improved by enhancing model resolution as well as by implementing sophisticated physical parameterizations and refining data assimilation systems [section 6.1 in Wang et al. (2025)]. In relation to seasonal forecasting and climate projection in the East Asian summer monsoon season, proper simulation of the seasonal migration of rain bands by models is a challenging and limiting factor [section 7.1 in Wang et al. (2025)].
Abstract: Customer churn is the rate at which customers discontinue doing business with a company over a given time period. Monitoring churn is essential for businesses, as high churn rates often indicate underlying issues with services, products, or customer experience, resulting in considerable income loss. Predicting customer churn is therefore a crucial task aimed at retaining customers and maintaining revenue growth. Traditional machine learning (ML) models often struggle to capture complex temporal dependencies in client behavior data. To address this, an optimized deep learning (DL) approach using a Regularized Bidirectional Long Short-Term Memory (RBiLSTM) model is proposed to mitigate overfitting and improve generalization error. The model integrates dropout, L2 regularization, and early stopping to enhance predictive accuracy while preventing over-reliance on specific patterns. Moreover, this study investigates the effect of optimization techniques on boosting the training efficiency of the developed model. Experimental results on a recent public customer churn dataset demonstrate that the trained model outperforms traditional ML models and other DL models, such as Long Short-Term Memory (LSTM) and Deep Neural Network (DNN), in churn prediction performance and stability. The proposed approach achieves 96.1% accuracy, compared with LSTM and DNN, which attain 94.5% and 94.1% accuracy, respectively. These results confirm that the proposed approach can serve as a valuable tool for businesses to proactively identify at-risk consumers and implement targeted retention strategies.
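Of the three regularizers named in the abstract, early stopping is the easiest to show in isolation. The sketch below is a generic implementation with illustrative patience settings, not the study's training code.

```python
# Minimal sketch of the early-stopping logic used alongside dropout and L2
# regularization in the described RBiLSTM training. Patience and loss values
# are illustrative, not the study's settings.

class EarlyStopping:
    def __init__(self, patience=3, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0   # improvement: reset
        else:
            self.bad_epochs += 1                       # plateau or worse
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=2)
losses = [0.80, 0.55, 0.50, 0.52, 0.51, 0.53]  # validation loss plateaus
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
```

Training halts at the epoch index where the validation loss has failed to improve for `patience` consecutive epochs, which is the mechanism that prevents the over-reliance on training patterns the abstract mentions.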
Funding: funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
Abstract: This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
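Step (2) of the framework, k-means content clustering, can be sketched on toy two-dimensional embeddings; real inputs would be LLM-derived post representations.

```python
# Sketch of step (2), k-means content clustering, on toy 2-D "embeddings".
# Real inputs would be LLM-derived post vectors; the points are invented.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(cs) / len(pts) for cs in zip(*pts))

def kmeans(points, k, iters=20):
    centers = [points[i] for i in range(k)]   # deterministic init for the sketch
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center for every point
        assign = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        # update step: recompute each center from its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = mean(members)
    return assign, centers

points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # one content cluster
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # another, far away
assign, centers = kmeans(points, 2)
```

With well-separated groups the assignment stabilizes after a few iterations; in the framework each resulting cluster (e.g., harassment, misinformation) then becomes an alternative-defining category for the TODIM stage.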
Funding: financially supported by the National Key Research and Development Program of China (No. 2023YFB3812601), the National Natural Science Foundation of China (No. 51925401), and the Young Elite Scientists Sponsorship Program by CAST, China (No. 2022QNRC001).
Abstract: Machine learning-assisted methods for rapid and accurate prediction of the temperature field, mushy zone, and grain size were proposed for the heating-cooling combined mold (HCCM) horizontal continuous casting of C70250 alloy plates. First, finite element simulations of casting processes were carried out with various parameters to build a dataset. Subsequently, different machine learning algorithms were employed to achieve high precision in predicting temperature fields, mushy zone locations, mushy zone inclination angles, and billet grain size. Finally, the process parameters were quickly optimized using a strategy consisting of random generation, prediction, and screening, allowing the mushy zone to be controlled to the desired target. The optimized parameters are 1234℃ for the heating mold temperature, 47 mm/min for the casting speed, and 10 L/min for the cooling water flow rate. The optimized mushy zone is located in the middle of the second heat insulation section and has an inclination angle of roughly 7°.
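The random generation, prediction, and screening strategy can be sketched as follows; the surrogate function is an invented stand-in for the paper's trained predictor of mushy-zone inclination angle.

```python
# Sketch of the "random generation -> prediction -> screening" optimization
# strategy. surrogate_angle is a hypothetical smooth response surface, NOT the
# paper's trained ML model; parameter ranges are illustrative.

import random

def surrogate_angle(temp, speed, flow):
    # invented stand-in returning a mushy-zone inclination angle in degrees
    return 7 + 0.01 * (temp - 1234) - 0.05 * (speed - 47) + 0.1 * (flow - 10)

def optimize(target=7.0, tol=0.2, n=5000, seed=0):
    rng = random.Random(seed)
    # 1) random generation of candidate process parameters
    candidates = [(rng.uniform(1150, 1300),   # heating mold temperature, C
                   rng.uniform(30, 60),       # casting speed, mm/min
                   rng.uniform(5, 15))        # cooling water flow, L/min
                  for _ in range(n)]
    # 2) prediction with the (surrogate) model
    preds = [(p, surrogate_angle(*p)) for p in candidates]
    # 3) screening against the desired mushy-zone target
    return [p for p, a in preds if abs(a - target) <= tol]

kept = optimize()
```

Because the surrogate is cheap to evaluate, thousands of candidates can be screened in milliseconds, which is what makes this generate-and-filter loop fast compared with rerunning finite element simulations.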
Funding: supported by the National Natural Science Foundation of China (Nos. 42107420, U23A20157, and U1910207) and the Shanxi Province Science Foundation for Young Scholars (No. 20210302124363).
Abstract: Vegetation plays an important role in the environmental transport behavior of organic pollutants; however, the different roles of crops and natural vegetation have been ignored in most previous studies. In this study, we developed the BETR-Urban-Rural-Veg model to quantitatively evaluate the influences of both natural vegetation and crops on the multimedia transport processes of phenanthrene (PHE) and benzo(a)pyrene (BaP) in mainland China. The geographic distributions of polycyclic aromatic hydrocarbon (PAH) emissions and concentrations were consistent, displaying higher levels in northern China and lower levels in southern China. Under seasonal simulations, for both natural vegetation and crops, PAH concentrations in winter and spring were 1.5- to 27-fold higher than in summer and autumn, especially for PHE. Owing to the higher leaf area index (LAI) of natural vegetation and the harvesting of crops, the filter and sequestration effect of natural vegetation was stronger than that of crops, while the seasonal changes of PAH concentrations in crops were more significant than in natural vegetation. Temperature, precipitation rates, and LAI might have important influences on the seasonal concentrations and overall persistence of PAHs. PHE was more sensitive to the impacts of seasonal environmental parameters. Under different landscape scenarios, average annual PAH concentrations in natural vegetation were always slightly higher than those in crops, and the overall persistence of BaP was greatly affected, increasing by 15.15%-16.47%. This improved model provides a useful tool for environmental management. The results of this study are expected to support land use plans and decision-making in mainland China.
Funding: supported by the National Key Research and Development Program of China [grant number 2022YFE0106800]; an Innovation Group Project of the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) [grant number 311024001]; a project supported by the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) [grant number SML2023SP209]; a Research Council of Norway funded project (MAPARC) [grant number 328943]; the Nansen Center's basic institutional funding [grant number 342624]; and high-performance computing support from the School of Atmospheric Science at Sun Yat-sen University.
Abstract: Current shipping, tourism, and resource development requirements call for more accurate predictions of the Arctic sea-ice concentration (SIC). However, due to the complex physical processes involved, predicting the spatiotemporal distribution of Arctic SIC is more challenging than predicting its total extent. In this study, spatiotemporal prediction models for monthly Arctic SIC at 1- to 3-month leads are developed based on U-Net, an effective convolutional deep-learning approach. Based on explicit Arctic sea-ice-atmosphere interactions, 11 variables associated with Arctic sea-ice variations are selected as predictors, including observed Arctic SIC and atmospheric, oceanic, and heat flux variables at 1- to 3-month leads. The prediction skill for the monthly Arctic SIC of the test set (from January 2018 to December 2022) is evaluated by examining the mean absolute error (MAE) and binary accuracy (BA). Results show that the U-Net model has lower MAE and higher BA for Arctic SIC compared to two dynamic climate prediction systems (CFSv2 and NorCPM). Analysis of the relative importance of each predictor shows that prediction accuracy relies more on the SIC at the 1-month lead, but on the surface net solar radiation flux at 2- to 3-month leads. However, dynamic models show limited prediction skill for surface net solar radiation flux and other physical processes, especially in autumn. Therefore, the U-Net model can capture the connections among the key physical processes associated with Arctic sea ice and thus offers a significant advantage in predicting Arctic SIC.
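The two skill metrics are straightforward to state in code. The 15% ice/no-ice threshold assumed here for binary accuracy is a common convention, not necessarily the study's choice, and the grid-cell values are synthetic.

```python
# Sketch of the two evaluation metrics from the abstract: mean absolute error
# on SIC fields and binary accuracy after thresholding. The 15% cutoff is a
# common ice/no-ice convention (an assumption here); data are synthetic.

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def binary_accuracy(pred, obs, thresh=0.15):
    hits = sum((p >= thresh) == (o >= thresh) for p, o in zip(pred, obs))
    return hits / len(obs)

obs  = [0.00, 0.10, 0.45, 0.80, 1.00]   # synthetic grid-cell SIC values
pred = [0.05, 0.20, 0.40, 0.75, 0.95]

m  = mae(pred, obs)
ba = binary_accuracy(pred, obs)
```

The second cell illustrates why both metrics are reported: its error is small (0.10) yet it flips the ice/no-ice class, so MAE and BA capture different kinds of failure.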
Funding: supported by the Central Government Guiding Local Science and Technology Development Fund Project (No. 2024SZY0343); the Joint Research Program for Ecological Conservation and High Quality Development of the Yellow River Basin (No. 2022-YRUC-01-050205); the Higher Education Scientific Research Project of Inner Mongolia Autonomous Region (No. NJZZ23078); the project of the Inner Mongolia "Prairie Talents" Engineering Innovation and Entrepreneurship Talent Team; the Major Projects of Erdos Science and Technology (No. 2022EEDSKJZDZX015); and the Innovation Team of the Inner Mongolia Academy of Science and Technology (No. CXTD2023-01-016).
Abstract: Rural domestic sewage treatment is critical for environmental protection. This study defines the spatial pattern of villages from the perspective of rural sewage treatment and develops an integrated decision-making system to propose sewage treatment modes and schemes suitable for local conditions. By considering village spatial layout and terrain factors, a decision tree model of residential density and terrain type was constructed, with accuracies of 76.47% and 96.00%, respectively. Combined with binary classification probability unit regression, an appropriate sewage treatment mode for each village was determined with 87.00% accuracy. The Analytic Hierarchy Process (AHP), combined with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model, formed the basis for optimal treatment process selection under different emission standards. Verification was conducted in 542 villages across three counties of the Inner Mongolia Autonomous Region, focusing on standard effluent effect (0.3773), low investment cost (0.3196), and high-standard effluent effect (0.5115) to determine the best treatment process for the same emission standard under different needs. The annual environmental and carbon emission benefits of sewage treatment in these villages were estimated. This model matches village density, geographic features, and social development level, and provides scientific support and a theoretical basis for rural sewage treatment decision-making.
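The TOPSIS step of the selection procedure can be sketched as follows; the decision matrix, weights, and criteria are invented placeholders, not the study's data.

```python
# Sketch of the TOPSIS ranking step used for treatment-process selection.
# The decision matrix, weights, and criterion directions are invented
# placeholders, not the study's data (AHP would normally supply the weights).

def topsis(matrix, weights, benefit):
    m = len(weights)
    # vector-normalize each column, then apply the criterion weights
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(m)]
    V = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    cols = list(zip(*V))
    # ideal/anti-ideal points: max for benefit criteria, min for cost criteria
    ideal = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(m)]
    worst = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(m)]
    scores = []
    for row in V:
        d_pos = sum((v - u) ** 2 for v, u in zip(row, ideal)) ** 0.5
        d_neg = sum((v - u) ** 2 for v, u in zip(row, worst)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))   # relative closeness to ideal
    return scores

# columns: effluent quality (benefit), investment cost (cost), O&M cost (cost)
matrix  = [[0.90, 120, 8], [0.80, 80, 6], [0.95, 150, 10]]
weights = [0.5, 0.3, 0.2]
benefit = [True, False, False]
scores = topsis(matrix, weights, benefit)
best = scores.index(max(scores))
```

With this illustrative weighting the cheaper process ranks first; changing the weights, as the study does for different emission standards, reorders the alternatives without touching the algorithm.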
Abstract: This study explores the thin-layer convective solar drying of Marrubium vulgare L. leaves under conditions typical of sun-rich semi-arid climates. Drying experiments were conducted at three inlet-air temperatures (40℃, 50℃, 60℃) and two air velocities (1.5 and 2.5 m·s^(-1)) using an indirect solar dryer with auxiliary temperature control. Moisture-ratio data were fitted with eight widely used thin-layer models and evaluated using the correlation coefficient (r), root-mean-square error (RMSE), and Akaike information criterion (AIC). A complementary heat-transfer analysis based on Reynolds and Prandtl numbers with appropriate Nusselt correlations was used to relate flow regime to drying performance, and an energy balance quantified the relative contributions of solar and auxiliary heat. The logarithmic model consistently achieved the lowest RMSE/AIC with r > 0.99 across all conditions. Higher temperature and air velocity significantly reduced drying time during the decreasing-rate period, with no constant-rate stage observed. On average, solar input supplied the large majority of the thermal demand, while the auxiliary heater compensated for short irradiance drops to maintain setpoints. These findings provide a reproducible dataset and a modelling benchmark for M. vulgare leaves, and they support energy-aware design of hybrid solar dryers for medicinal plants in sun-rich regions.
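The heat-transfer side can be illustrated with one common correlation, the laminar flat-plate relation Nu = 0.664 Re^0.5 Pr^(1/3); the air properties and characteristic length below are assumptions for illustration, not the paper's reported values.

```python
# Worked sketch of the heat-transfer analysis: Reynolds and Prandtl numbers
# for the drying air and a laminar flat-plate Nusselt correlation. Air
# properties near 50 C and the 0.1 m length are illustrative assumptions.

rho, mu, k_air, cp = 1.09, 1.96e-5, 0.028, 1007.0  # air at ~50 C (approx.)
L = 0.1                                            # characteristic length, m

def h_conv(velocity):
    Re = rho * velocity * L / mu          # Reynolds number
    Pr = cp * mu / k_air                  # Prandtl number (~0.7 for air)
    Nu = 0.664 * Re ** 0.5 * Pr ** (1 / 3)   # laminar flat plate, Re < 5e5
    return Nu * k_air / L                 # convective coefficient, W/(m^2 K)

h_low, h_high = h_conv(1.5), h_conv(2.5)
# Raising the air velocity from 1.5 to 2.5 m/s increases h by the factor
# (2.5/1.5)^0.5, consistent with the faster drying observed at higher flow.
```

Because h scales with the square root of velocity under this correlation, the velocity increase buys roughly a 29% gain in convective transfer, a useful rule of thumb when sizing the dryer fan.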
Funding: Supported by the National Key Specialty of Traditional Chinese Medicine (Spleen and Stomach Diseases), No. 0500004; the National Natural Science Foundation of China, No. 82205104 and No. 82104850; the Hospital Capability Enhancement Project of Xiyuan Hospital, CACMS, No. XYZX0303-07; and the Fundamental Research Funds for the Central Public Welfare Research Institutes, Excellent Young Scientists Training Program of China Academy of Chinese Medical Sciences, No. ZZ16-YQ-002.
Abstract: BACKGROUND Non-erosive reflux disease (NERD), the main gastroesophageal reflux subtype, features reflux symptoms without mucosal damage. Anxiety is linked to visceral hypersensitivity in NERD, yet the mechanisms and animal models are unclear. AIM To establish a translational NERD rat model with anxiety comorbidity via tail clamping and to study corticotropin-releasing hormone (CRH)-mediated neuroimmune pathways in visceral hypersensitivity and esophageal injury. METHODS Sprague-Dawley (SD) and Wistar rats were grouped into sham, model, and modified groups (n=10 each). The treatments for the modified groups were as follows: SD rats received ovalbumin/aluminum hydroxide suspension + acid perfusion ± tail clamping (40 minutes/day for 7 days), while Wistar rats received fructose water + tail clamping. Esophageal pathology, visceral sensitivity, and behavior were assessed. Serum CRH, calcitonin gene-related peptide (CGRP), 5-hydroxytryptamine (5-HT), and mast cell tryptase (MCT), as well as central amygdala (CeA) CRH mRNA, were measured via ELISA and qRT-PCR. RESULTS Tail clamping induced anxiety, worsening visceral hypersensitivity (lower abdominal withdrawal reflex thresholds, P<0.05) and esophageal injury (dilated intercellular spaces and mitochondrial edema). Both models showed raised serum CRH, CGRP, 5-HT, and MCT (P<0.01) and elevated CeA CRH mRNA expression (P<0.01). Behavioral tests confirmed anxiety-like phenotypes. NERD-anxiety rats showed clinical-like symptom severity without erosion. CONCLUSION Tail clamping induces anxiety in NERD models, worsening visceral hypersensitivity via CRH neuroimmune dysregulation, offering a translational model and highlighting CRH as a treatment target.
Funding: supported by the Science and Technology Research Project of Henan Province (242102241055); the Industry-University-Research Collaborative Innovation Base on Automobile Lightweight of "Science and Technology Innovation in Central Plains" (2024KCZY315); and the Opening Fund of the State Key Laboratory of Structural Analysis, Optimization and CAE Software for Industrial Equipment (GZ2024A03-ZZU).
Abstract: The moving morphable component (MMC) topology optimization method, a typical explicit topology optimization method, has attracted wide attention. In the MMC topology optimization framework, the surrogate material model is currently the main approach for finite element analysis, and its effectiveness has been fully confirmed. However, the surrogate material model suffers accuracy problems when dealing with boundary elements, which affect the topology optimization results. In this study, a boundary element reconstruction (BER) model is proposed, based on the surrogate material model under the MMC topology optimization framework, to improve the accuracy of topology optimization. The proposed BER model reconstructs the boundary elements by refining the local meshes and obtaining new nodes in the boundary elements. The density of each boundary element is then recalculated using the new node information, which is more accurate than in the original model. Based on the new density of the boundary elements, the material properties and volume information of the boundary elements are updated. Compared with other finite element analysis methods, the BER model is simple and feasible and improves computational accuracy. Finally, the effectiveness and superiority of the proposed method are verified by comparing it with the optimization results of the original surrogate material model through several numerical examples.
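The core idea of the reconstruction, recomputing a boundary element's density from refined sample points, can be sketched with a single circular component; the component and element geometry below are invented for illustration.

```python
# Sketch of the boundary-element reconstruction idea: refine a boundary
# element into sub-points and recompute its density as the solid fraction
# indicated by the component's topology description function (TDF). The
# circular component and the element geometry are invented for illustration.

def tdf(x, y, cx=0.0, cy=0.0, r=1.0):
    """Positive inside the (circular) component, negative outside."""
    return r * r - (x - cx) ** 2 - (y - cy) ** 2

def element_density(x0, y0, size, n):
    """Fraction of n x n refined sample points with tdf >= 0."""
    inside = 0
    for i in range(n):
        for j in range(n):
            x = x0 + (i + 0.5) * size / n
            y = y0 + (j + 0.5) * size / n
            inside += tdf(x, y) >= 0
    return inside / (n * n)

# A square element straddling the circle boundary near (1, 0):
coarse = element_density(0.93, -0.05, 0.1, 1)  # single-point estimate: 0 or 1
fine   = element_density(0.93, -0.05, 0.1, 8)  # refined estimate in (0, 1)
```

The coarse estimate snaps the whole element to solid, while refinement recovers an intermediate density; this smoother boundary description is what the BER model feeds back into the material properties and volume bookkeeping.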
Funding: Supported by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (No. HR20C0026); the National Research Foundation of Korea (NRF) (No. RS-2023-00247504); and the Patient-Centered Clinical Research Coordinating Center, funded by the Ministry of Health & Welfare, Republic of Korea (No. HC19C0276).
Abstract: AIM: To build a functional generalized estimating equation (GEE) model to detect glaucomatous visual field progression and to compare the performance of the proposed method with that of commonly employed algorithms. METHODS: A total of 716 eyes of 716 patients with primary open angle glaucoma (POAG) with at least 5 reliable 24-2 test results and 2 years of follow-up were selected. The functional GEE model was used to detect perimetric progression in the training dataset (501 eyes). In the testing dataset (215 eyes), progression was evaluated using the functional GEE model, mean deviation (MD) and visual field index (VFI) rates of change, Advanced Glaucoma Intervention Study (AGIS) and Collaborative Initial Glaucoma Treatment Study (CIGTS) scores, and pointwise linear regression (PLR). RESULTS: The proposed method showed the highest proportion of eyes detected as progressing (54.4%), followed by the VFI rate (34.4%), PLR (23.3%), and MD rate (21.4%). The CIGTS and AGIS scores had lower proportions of eyes detected as progressing (7.9% and 5.1%, respectively). The time to detection of progression was significantly shorter for the proposed method than for the other algorithms (adjusted P≤0.019). The VFI rate displayed moderate pairwise agreement with the proposed method (k=0.47). CONCLUSION: The functional GEE model shows the highest proportion of eyes detected as showing perimetric progression and the shortest time to detect perimetric progression in patients with POAG.
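The PLR comparator can be sketched as an ordinary-least-squares slope test per visual-field location; the -1 dB/year cutoff and the sensitivity series below are illustrative assumptions, not the study's exact criterion.

```python
# Sketch of the pointwise linear regression (PLR) comparator: a visual-field
# location is flagged as progressing when its sensitivity declines faster
# than a slope cutoff. The -1 dB/year cutoff and the series are illustrative
# assumptions, not the study's exact criterion (which also tests significance).

def ols_slope(t, y):
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    return sxy / sxx   # dB per year

def plr_progressing(t, y, cutoff=-1.0):
    return ols_slope(t, y) < cutoff

years     = [0.0, 0.5, 1.0, 1.5, 2.0]          # follow-up visits
stable    = [30.1, 29.9, 30.2, 30.0, 29.8]     # sensitivities, dB
worsening = [30.0, 29.0, 27.9, 27.2, 26.1]
```

Applied per test location, this flags the worsening series (slope about -1.9 dB/year) but not the stable one; the functional GEE model instead pools such pointwise trends across the whole field, which is why it detects progression earlier.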