Funding: Fujian Science Foundation for Outstanding Youth (2023J06039); National Natural Science Foundation of China (Grant Nos. 41977259, U2005205, 41972268); Independent Research Project of the Technology Innovation Center for Monitoring and Restoration Engineering of Ecological Fragile Zone in Southeast China (KY-090000-04-2022-019).
Abstract: Shotcrete is one of the common solutions for shallow sliding. It works by forming a high-strength protective layer and cementing the loose soil particles on the slope surface to prevent shallow sliding. However, the solidification time of conventional cement paste is long when shotcrete is used to treat cohesionless soil landslides. The idea of reinforcing slopes with polyurethane-solidified soil (i.e., a mixture of polyurethane and sand) was therefore proposed. Model tests and finite element analysis were carried out to study the effectiveness of the proposed method for the emergency treatment of cohesionless soil landslides. Surcharge loading on the crest of the slope was applied step by step until a landslide was triggered, so as to test and compare the stability and bearing capacity of slope models under different conditions. The simulated slope displacements were relatively close to the measured results, and the simulated deformation characteristics agreed well with the observed phenomena, which verifies the accuracy of the numerical method. Under surcharge loading on the crest, the unreinforced slope slid when the loading exceeded 30 kPa, presenting a failure mode of local instability and collapse in the shallow layer at the slope top. The reinforced slope remained stable even when the surcharge loading reached 48 kPa, and its displacement was reduced by more than 95%. Overall, this study verifies the effectiveness of polyurethane in the emergency treatment of cohesionless soil landslides, a method with broad application prospects in the field of geological disasters concerning the safety of people's lives.
Funding: National Key R&D Program of China (No. 2021YFB0301200); National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions that optimize memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy better than shrinking that of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
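As a rough illustration of finding (1), the sketch below wraps frozen linear layers in plain PyTorch LoRA adapters whose rank varies by layer type; the layer shapes and rank values are illustrative assumptions (LLaMA-7B-like dimensions), not settings taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank adapter:
    y = W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Per-layer ranks following the paper's finding: a large MLP projection
# tolerates a small adapter, while a smaller attention projection gets more.
mlp_proj = LoRALinear(nn.Linear(4096, 11008), rank=4)
attn_proj = LoRALinear(nn.Linear(4096, 4096), rank=16)
print(sum(p.numel() for p in mlp_proj.parameters() if p.requires_grad))
```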
Abstract: DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions, pneumococcal infection and no infection, and a mixed-effects model was constructed. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed, with GSEA used to biologically validate the preliminary results. The final dataset consists of 20 groups of gene expression data from pneumococcal infection; the analysis categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
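A minimal sketch of the modeling step, assuming a hypothetical long-format table with columns expr, condition, and array (the column names and file are invented for illustration; the paper's preprocessing, permutation tests, and GSEA validation are omitted):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format expression data: one row per (gene, sample),
# with the measured expression, the infection condition, and the chip ID.
df = pd.read_csv("expression_long.csv")

# Mixed-effects model: fixed effect for infection status, random
# intercept per array to absorb chip-level variation.
model = smf.mixedlm("expr ~ condition", data=df, groups=df["array"])
result = model.fit()
print(result.summary())
```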
Abstract: With the rapid development of generative artificial intelligence technologies, represented by large language models, university-level computer science education is undergoing a critical transition from knowledge-based instruction to competency-oriented teaching. A postgraduate student competency evaluation model can serve as a framework to organize and guide both teaching and research activities at the postgraduate level, and a number of relevant research efforts have already been conducted in this area. Graduate education plays a vital role not only as a continuation and enhancement of undergraduate education but also as essential preparation for future research endeavors. An analysis of the acceptance of competency evaluation models refers to assessing how various stakeholders perceive the importance of different components within the model. Investigating the degree of acceptance among diverse groups, such as current undergraduate students, current postgraduate students, graduates with less than three years of work experience, and those with more than three years of work experience, can offer valuable insights for improving and optimizing postgraduate education and training practices.
Funding: Health and Medical Research Fund, Hong Kong (11220386, 12230246).
Abstract: Background: With the rapid development of artificial intelligence (AI), large language models (LLMs) have emerged as a potent tool for invigorating ophthalmology across clinical, educational, and research fields, and their accuracy and reliability have been repeatedly tested. This bibliometric analysis aims to provide an overview of research on LLMs in ophthalmology from both thematic and geographical perspectives. Methods: All existing and highly cited LLM-related ophthalmology research papers published in English up to 24 April 2025 were sourced from Scopus, PubMed, and Web of Science. The characteristics of these publications, including publication output, authors, journals, countries, institutions, citations, and research domains, were analyzed using the Biblioshiny and VOSviewer software. Results: A total of 277 articles from 1,459 authors and 89 journals were included in this study. Although relevant publications began to appear in 2019, a significant increase started in 2023. He M and Shi D are the most prolific authors, while Investigative Ophthalmology & Visual Science stands out as the most prominent journal. Most of the top-publishing countries are high-income economies, with the USA taking the lead, and the University of California is the leading institution. VOSviewer identified 5 clusters in the keyword co-occurrence analysis, indicating that current research focuses on the clinical applications of LLMs, particularly in diagnosis and patient education. Conclusions: While LLMs have demonstrated effectiveness in retaining knowledge, their accuracy in image-based diagnosis remains limited. Therefore, future research should investigate fine-tuning strategies and domain-specific adaptations to close this gap. Although research on the applications of LLMs in ophthalmology is still in its early stages, it holds significant potential for advancing the field.
Abstract: Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models that demonstrate impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments, where computing resources, memory, and energy availability are restricted, presents a unique set of challenges that require innovative solutions. To empower sentiment analysis in these environments, we address this crucial need by leveraging lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by such environments. By distilling knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper explores the performance of multiple lightweight pre-trained models on sentiment analysis tasks specific to such environments and provides insights into their viability for practical deployment.
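As a sketch of how such a lightweight model is deployed in practice, the snippet below runs an off-the-shelf distilled sentiment classifier with Hugging Face transformers; the SST-2 DistilBERT checkpoint stands in for any of the architectures compared in the paper:

```python
from transformers import pipeline

# DistilBERT fine-tuned on SST-2: a small checkpoint suitable for
# memory- and compute-constrained deployment.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The battery life is great, but the screen is dim."))
```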
Funding: Natural Science Foundation of Fujian Province (2022J011177, 2024J01903); Key Project of the Fujian Provincial Education Department (JZ230054).
Abstract: In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic effect and safety, and it is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection over covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through simulation studies. Finally, we demonstrate the method by analyzing data from the PA.3 trial, further illustrating its practicality.
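A minimal sketch of the penalized-regression idea on synthetic data, using scikit-learn's L1-penalized median regression as a stand-in for the paper's estimator (the longitudinal structure, subgroup threshold, and interaction terms are omitted):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))                     # covariates and biomarkers
beta = np.array([1.5, -2.0] + [0.0] * (p - 2))  # sparse ground truth
y = X @ beta + rng.standard_t(df=3, size=n)     # heavy-tailed noise

# L1-penalized median regression: the sparsity penalty drives irrelevant
# coefficients toward zero, mimicking the variable selection step.
model = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs")
model.fit(X, y)
print(np.round(model.coef_, 2))
```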
Funding: National Natural Science Foundation of China (Grant No. 52277072).
Abstract: Wide-band oscillations have become a significant issue limiting the development of wind power. Both large-signal and small-signal analyses require extensive model derivation. Moreover, the large number and high order of wind turbines have driven the development of simplified models, whose applicability remains controversial. In this paper, a wide-band oscillation analysis method based on the average-value model (AVM) is proposed for wind farms (WFs). A novel linearization analysis framework is developed, leveraging the continuous-time characteristics of the AVM and MATLAB/Simulink's built-in linearization tools. This significantly reduces modeling complexity and computational costs while maintaining model fidelity. Additionally, an object-based initial-value estimation method for state variables is introduced, which, when combined with steady-state point-solving tools, greatly reduces the computational effort required for equilibrium-point solving in batch linearization analysis. The proposed method is validated in both doubly fed induction generator (DFIG)-based and permanent magnet synchronous generator (PMSG)-based WFs. Furthermore, a comprehensive analysis is conducted for the first time to examine the impact of the machine-side system on the stability of the non-fully controlled PMSG-based WF.
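The core of such a framework is linearizing the nonlinear average-value dynamics at a solved equilibrium and inspecting the eigenvalues; the toy sketch below does this with a finite-difference Jacobian (the placeholder dynamics are invented, and in the paper's workflow Simulink's built-in linearization tools would play this role):

```python
import numpy as np

def f(x, u):
    # Placeholder nonlinear dynamics dx/dt = f(x, u); a real model would
    # come from the wind farm's average-value equations.
    return np.array([x[1], -2.0 * x[0] - 0.3 * x[1] + u[0]])

def jacobian(f, x0, u0, eps=1e-6):
    """Finite-difference state Jacobian A = df/dx at an operating point."""
    n = len(x0)
    A = np.zeros((n, n))
    fx = f(x0, u0)
    for i in range(n):
        xp = x0.copy()
        xp[i] += eps
        A[:, i] = (f(xp, u0) - fx) / eps
    return A

x_eq, u_eq = np.zeros(2), np.zeros(1)  # equilibrium from a steady-state solve
A = jacobian(f, x_eq, u_eq)
print(np.linalg.eigvals(A))  # oscillation modes; negative real parts = stable
```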
Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model Meta AI (LLaMA) and the Bidirectional Encoder Representations from Transformers (BERT) NLP model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expression in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of LLaMA and BERT in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance on sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
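A sketch of the few-shot setup: a prompt with a handful of labeled examples steers an instruction-tuned LLaMA-style model without gradient updates. The reviews and labels below are invented for illustration:

```python
# Hypothetical few-shot prompt for airline-review sentiment classification.
FEW_SHOT_PROMPT = """Classify the sentiment of each airline review as positive or negative.

Review: "Boarding was fast and the crew was friendly."
Sentiment: positive

Review: "Two-hour delay and no updates from the gate staff."
Sentiment: negative

Review: "{review}"
Sentiment:"""

def build_prompt(review: str) -> str:
    return FEW_SHOT_PROMPT.format(review=review)

print(build_prompt("Lost my luggage but customer service resolved it quickly."))
```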
Funding: Project of the China Geological Survey (No. DD20220954); Open Funding Project of the Key Laboratory of Groundwater Sciences and Engineering, Ministry of Natural Resources (No. SK202301-4); Open Foundation of the Key Laboratory of Coupling Process and Effect of Natural Resources Elements (No. 2022KFKTC009); Yanzhao Shanshui Science and Innovation Fund of the Langfang Integrated Natural Resources Survey Center, China Geological Survey (No. YZSSJJ202401-001).
Abstract: Pingquan City, the origin of five rivers, serves as the core water conservation zone for the Beijing-Tianjin-Hebei region and exemplifies the characteristics of small watersheds in hilly areas. In recent years, excessive mining and intensified human activities have severely disrupted the local ecosystem, creating an urgent need for ecological vulnerability assessment to enhance water conservation functions. This study employed the sensitivity-resilience-pressure model, integrating various data sources, including regional background, hydro-meteorological data, field investigations, remote sensing analysis, and socio-economic data. The weights of the model indices were determined using an entropy weighting model that combines principal component analysis and the analytic hierarchy process. Using the ArcGIS platform, the spatial distribution and driving forces of ecological vulnerability in 2020 were analyzed, providing valuable insights for regional ecological restoration. The results indicated that the overall Ecological Vulnerability Index (EVI) was 0.389, signifying moderate ecological vulnerability, with significant variation between watersheds. The Daling River Basin had a high EVI, with ecological vulnerability primarily at levels IV and V, indicating high ecological pressure, whereas the Laoniu River Basin had a low EVI, reflecting minimal ecological pressure. Soil type was identified as the primary driving factor, followed by elevation, temperature, and soil erosion as secondary factors. It is recommended to focus on key regions and critical factors while conducting comprehensive monitoring and assessment to ensure the long-term success of ecological management efforts.
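The entropy weighting step can be summarized in a few lines; the sketch below computes entropy weights from a toy indicator matrix and combines them into an EVI by weighted summation (the PCA/AHP combination and the real indicator set are omitted):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indicators with more dispersion across
    samples receive larger weights. X: (n_samples, n_indicators),
    oriented so that larger values mean higher vulnerability."""
    P = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)  # min-max normalize
    P = (P + 1e-12) / (P + 1e-12).sum(axis=0)           # column proportions
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    return (1 - entropy) / (1 - entropy).sum()

X = np.random.rand(50, 6)  # e.g., 50 grid cells, 6 vulnerability indicators
w = entropy_weights(X)
evi = X @ w                # weighted sum -> Ecological Vulnerability Index
print(w.round(3), evi[:5].round(3))
```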
Funding: National Natural Science Foundation of China (Grant Nos. 12475003 and 11705284); Natural Science Foundation of Beijing Municipality (Grant Nos. 1232022 and 1212007).
Abstract: In this paper, the N-soliton solutions for the massive Thirring model (MTM) in laboratory coordinates are analyzed via the Riemann-Hilbert (RH) approach. The direct scattering analysis, including the analyticity, symmetries, and asymptotic behaviors of the Jost solutions as |λ|→∞ and λ→0, is given. Considering that the scattering coefficients have simple zeros, the matrix RH problem, reconstruction formulas, and corresponding trace formulas are also derived. Further, the N-soliton solutions in the reflectionless case are obtained explicitly in the form of determinants. The propagation characteristics of one-soliton solutions and the interaction properties of two-soliton solutions are discussed. In particular, the asymptotic expressions of two-soliton solutions as |t|→∞ are obtained, which show that the velocities and amplitudes of the asymptotic solitons do not change before and after interaction except for position shifts. In addition, three types of bound states for two-soliton solutions are presented under certain parametric conditions.
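For reference, one standard laboratory-coordinate form of the MTM, as commonly written in RH-based studies (sign and normalization conventions vary between papers, so this should be read as indicative rather than as the exact system used here):

```latex
\begin{aligned}
  \mathrm{i}\,(u_t + u_x) + v + |v|^{2}u &= 0,\\
  \mathrm{i}\,(v_t - v_x) + u + |u|^{2}v &= 0,
\end{aligned}
```

where u and v denote the two components of the spinor field.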
Funding: National Natural Science Foundation of China (Grant Nos. 42404017, 42122025, and 42174030).
Abstract: GNSS time series analysis provides an effective method for research on the Earth's surface deformation, and it can be divided into two parts: deterministic models and stochastic models. The former can be described by several parameters, such as polynomial terms, periodic terms, offsets, and post-seismic models. The latter contains stochastic noise, whose characterization is affected by the detection of the former parameters. If too few parameters are assumed, modeling errors will occur and adversely affect the analysis results. In this study, we propose a processing strategy in which the commonly used first-order polynomial term is replaced with the order that best fits each GNSS time series of the Crustal Movement Observation Network of China (CMONOC) stations. Initially, we use the Bayesian Information Criterion (BIC) to identify the best order within the range of 1-4 during fitting with the white noise plus power-law noise (WN+PL) model. Then, we compare the first order and the optimal order in terms of their effect on the deterministic models of the GNSS time series, including the velocity and its uncertainty and the amplitudes and initial phases of the annual signals. The results indicate that the first-order polynomial is often not the optimal choice for GNSS time series. The root mean square (RMS) reduction rates of almost all station components are positive, which means the new fitting with the optimal-order polynomial helps to reduce the RMS of the residual series. Most stations maintain a velocity difference (VD) within ±1 mm/yr, with percentages of 85.6%, 81.9%, and 63.4% in the North, East, and Up components, respectively. As for the annual signals, the numbers of stations with an amplitude difference (AD) within ±0.2 mm are 242, 239, and 200 in the three components, accounting for 99.6%, 98.4%, and 82.3%, respectively. These findings remind us that detection of the optimal-order polynomial is necessary when we aim to acquire an accurate understanding of crustal movement features.
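A compact sketch of the order-selection step: fit polynomials of order 1-4 (plus annual and semi-annual sinusoids) by least squares and keep the order with the smallest BIC. For brevity this toy version assumes white residuals rather than the WN+PL noise model used in the study:

```python
import numpy as np

def best_poly_order(t, y, max_order=4):
    """Return the polynomial order (1..max_order) minimizing the BIC of a
    least-squares fit that also includes annual and semi-annual terms."""
    n, best = len(y), (np.inf, 1)
    for order in range(1, max_order + 1):
        cols = [t**k for k in range(order + 1)]
        for period in (1.0, 0.5):  # years: annual and semi-annual signals
            cols += [np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        bic = n * np.log(rss / n) + A.shape[1] * np.log(n)
        if bic < best[0]:
            best = (bic, order)
    return best[1]

t = np.linspace(0, 10, 1200)  # ~10 years of positions (time in years)
y = 2.0 * t + 0.1 * t**2 + 3 * np.sin(2 * np.pi * t) \
    + np.random.normal(0, 1, t.size)
print(best_poly_order(t, y))  # expected: 2
```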
Abstract: The efficient market hypothesis in traditional financial theory struggles to explain the short-term irrational fluctuations in the A-share market, where investor sentiment fluctuations often serve as the core driver of abnormal stock price movements. Traditional sentiment measurement methods suffer from limitations such as lag, high misjudgment rates, and an inability to distinguish confounding factors. To more accurately explore the dynamic correlation between investor sentiment and stock price fluctuations, this paper proposes a sentiment analysis framework based on large language models (LLMs). By constructing continuous sentiment scoring factors and integrating them with a long short-term memory (LSTM) deep learning model, we analyze the correlation between investor sentiment and stock price fluctuations. Empirical results indicate that sentiment factors based on large language models can generate an annualized excess return of 9.3% in the CSI 500 index domain. The LSTM stock price prediction model incorporating sentiment features achieves a mean absolute percentage error (MAPE) as low as 2.72%, significantly outperforming traditional models. Through this analysis, we aim to provide quantitative references for optimizing investment decisions and preventing market risks.
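A minimal sketch of the prediction side: an LSTM over daily feature windows in which an LLM-derived sentiment score is simply one more input channel (shapes and features are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    """LSTM regressor over daily windows of [return, volume, sentiment]."""
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, days, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict from the last time step

x = torch.rand(8, 20, 3)               # 8 samples of 20-day windows
print(SentimentLSTM()(x).shape)        # torch.Size([8, 1])
```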
Funding: Heilongjiang Provincial Natural Science Foundation of China (LH2021F009).
Abstract: Anti-jamming performance evaluation has recently received significant attention. For Link-16, anti-jamming performance evaluation and selection of the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, combining grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of establishing the anti-jamming performance evaluation indicator system of Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Secondly, the cloud model, a qualitative-quantitative concept transformation model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and identifies the best anti-jamming technology. Finally, simulation results show that the proposed evaluation model achieves feasible and practical evaluation, which opens up a novel way for research on anti-jamming performance evaluation of Link-16.
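A small sketch of the GRA step: score each candidate anti-jamming scheme by the grey relational grade of its indicator series against an ideal reference (the indicator values are invented, and equal weights stand in for the AHP/EWM combined weights):

```python
import numpy as np

def grey_relational_grades(X, ref, rho=0.5):
    """Deng's grey relational grade of each row of X against a reference
    series. X: (n_schemes, n_indicators); rho is the resolution coefficient."""
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # normalize columns
    rn = (ref - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
    delta = np.abs(Xn - rn)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean(axis=1)  # equal weights; AHP/EWM weights would go here

X = np.array([[0.8, 0.6, 0.9],   # indicator scores of three candidate schemes
              [0.5, 0.9, 0.7],
              [0.9, 0.4, 0.6]])
print(grey_relational_grades(X, ref=X.max(axis=0)).round(3))
```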
Funding: State University Research Excellence (SURE), SERB, GOI (Grant/Award Number: SUR/2022/001557).
Abstract: Freshwater, essential for civilization, faces risk from untreated effluents discharged by industries, agriculture, urban areas, and other sources. Increasing demand and abstraction of freshwater worsen the pollution scenario further. Hence, water quality analysis (WQA) is an important task for researchers and policymakers seeking to maintain sustainability and public health. This study aims to gather and discuss the methods used for WQA by researchers, focusing on their advantages and limitations. Simultaneously, this study compares different WQA methods, discussing their trends and future directions. Publications from the past decade on WQA are reviewed, and insights are explored to aggregate them into particular categories. Three major approaches are recognized: water quality indexing, water quality modeling (WQM), and artificial intelligence-based WQM. The different methodologies adopted to execute these three approaches are presented in this study, leading to a comparative discussion. Researchers have used statistical operations and soft computing techniques to combat subjectivity error in indexing. To achieve better results, WQMs are being modified to incorporate the physical processes influencing water quality more robustly. The utilization of artificial intelligence was primarily restricted to conventional networks, but in the last 5 years, applications of deep learning have increased rapidly and exhibited good results through the hybridization of feature extraction and time series modeling. Overall, this study is a valuable resource for researchers dedicated to WQA.
Funding: 2023 Youth Fund for Humanities and Social Sciences Research, Ministry of Education of the People's Republic of China (Grant No. 23YJC740004).
Abstract: Based on the BERTopic model, this paper combines qualitative and quantitative methods to explore the reception of Can Xue's translated works by analyzing readers' book reviews posted on Goodreads and Lovereading. We first collected book reviews from these two well-known websites using Python. Through topic analysis of these reviews, we identified recurring topics, including details of her translated works and appreciation of their translation quality. Then, employing sentiment and content analysis methods, the paper explored readers' emotional attitudes and specific thoughts toward Can Xue and her translated works. The findings revealed that, among the 408 reviews, although the reception of Can Xue's translated works was relatively positive, the current level of attention and recognition remains insufficient. Based on these results, the paper derives valuable insights into translation and dissemination, such as adjusting translation and dissemination strategies, so that the global reach of Chinese literature and culture can be better facilitated.
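A minimal sketch of the topic-modeling step with the BERTopic library (the reviews shown are invented placeholders; in practice the fitted corpus must be large enough for BERTopic's UMAP/HDBSCAN stages to run):

```python
from bertopic import BERTopic

# Placeholder reviews; the real corpus would be the 408 scraped reviews.
reviews = [
    "The translation preserves the dreamlike quality of the prose.",
    "Hard to follow, but the imagery is unforgettable.",
    # ... remaining reviews loaded from the Goodreads / Lovereading scrapes
]

topic_model = BERTopic(min_topic_size=5)    # small corpora need a low threshold
topics, probs = topic_model.fit_transform(reviews)
print(topic_model.get_topic_info().head())  # recurring topics across reviews
```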
Abstract: The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author. After further discussions and a thorough review of the research contributions, it was agreed that his significant contributions to the foundational aspects of the research warranted recognition, and he has now been added as a co-author.
Funding: National Natural Science Foundation of China (Grant No. U2139208).
Abstract: Natural soil generally exhibits significant transverse isotropy (TI) due to weathering and sedimentation, meaning that horizontal moduli differ from their vertical counterpart; the TI mechanical model is therefore more appropriate for actual situations. Although soil exhibits material nonlinearity under earthquake excitation, existing research on the TI medium is limited to linear soil behavior and neglects the nonlinear response of TI sites. A 2D equivalent linear model for a layered TI half-space subjected to seismic waves is derived in the transformed wavenumber domain using the exact dynamic stiffness matrix of the TI medium. This study introduces a method for determining the effective shear strain of TI sites under oblique wave incidence, and further describes a systematic study of the effects of TI parameters and soil nonlinearity on site responses. Numerical results indicate that the seismic responses of the TI medium differ significantly from those of isotropic sites and that the responses are highly dependent on the TI parameters, particularly in nonlinear cases, while also being sensitive to the incident angle and excitation intensity. Moreover, the differences in peak acceleration and waveform for various TI materials may be amplified by strong nonlinearity. The study provides valuable insights for improving the accuracy of seismic response analysis in engineering applications.
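The equivalent linear idea itself is iterative and model-agnostic; a generic sketch follows (the hyperbolic degradation curve, the 0.65 effective-strain ratio, and the toy strain solver are common conventions and stand-ins, not the paper's TI formulation):

```python
import numpy as np

def equivalent_linear(G0, gamma_ref, strain_solver, tol=1e-3, max_iter=30):
    """Iterate the shear modulus against a degradation curve until the
    effective shear strain converges. strain_solver(G) must return the
    site's peak shear strain for modulus G."""
    G = G0
    for _ in range(max_iter):
        gamma_eff = 0.65 * strain_solver(G)          # effective-strain ratio
        G_new = G0 / (1.0 + gamma_eff / gamma_ref)   # hyperbolic degradation
        if abs(G_new - G) / G < tol:
            return G_new, gamma_eff
        G = G_new
    return G, gamma_eff

# Toy stand-in for the wave solver: softer soil -> larger strain.
solver = lambda G: 2e-3 * (30e6 / G) ** 0.5
print(equivalent_linear(G0=30e6, gamma_ref=1e-3, strain_solver=solver))
```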
Funding: National Key Research and Development Program of China (No. 2022YFC3701205); National Natural Science Foundation of China (No. 41975173); Science and Technology Development Fund of the Chinese Academy of Meteorological Sciences (No. 2021KJ011).
Abstract: In recent years, incidents of simultaneous exceedance of PM2.5 and O3 concentrations, termed PM2.5 and O3 co-pollution events, have frequently occurred in China. This study conducted atmospheric circulation analysis on two typical co-pollution events in Beijing, occurring from July 22 to July 28, 2019, and from April 25 to May 2, 2020. These events were categorized into a pre-trough southerly airflow type (Type 1) and a post-trough northwest flow type (Type 2). Subsequently, sensitivity analyses using the GRAPES-CUACE adjoint model were performed to quantify the contributions of precursor emissions from Beijing and surrounding areas to PM2.5 and O3 concentrations in Beijing for the two types of co-pollution. The results indicated that the spatiotemporal distribution of sensitive source regions varied among circulation types. Primary PM2.5 (PPM2.5) emissions from Hebei contributed the most to the 24-hour average PM2.5 peak concentration (41.6%-45.4%), followed by Beijing emissions (31%-35.7%). The maximum daily 8-hour average ozone peak concentration was primarily influenced by emissions from Hebei and Beijing, with contribution ratios of 32.8%-44.8% and 29%-42.1%, respectively. Additionally, NOx emissions were the main contributors in Type 1, while NOx and VOCs emissions contributed similarly in Type 2. Iterative emission reduction experiments for the two types of co-pollution indicated that Type 1 required emission reductions only in NOx (52.4%-71.8%) and VOCs (14.1%-33.8%). In contrast, Type 2 required combined emission reductions in NOx (37.0%-65.1%), VOCs (30.7%-56.2%), and PPM2.5 (31%-46.9%). This study provides a reference for controlling co-pollution events and improving air quality in Beijing.
Funding: Guangdong Province Introduced Innovative R&D Team of Big Data-Mathematical Earth Sciences and Extreme Geological Events (Grant No. 2021ZT09H399); National Natural Science Foundation of China (Grant Nos. 42430111, 42050103).
Abstract: Investigations of the physical attributes of oceans, including parameters such as heat flow and bathymetry, have garnered substantial attention and are particularly valuable for examining Earth's thermal structures and dynamic processes. Nevertheless, classical plate cooling models exhibit disparities when predicting observed heat flow and seafloor depth for extremely young and old lithosphere. Furthermore, a comprehensive analysis of global heat flow predictions and regional ocean heat flow or bathymetry data with physical models has been lacking. In this study, we employed power-law models derived from the singularity theory of fractal density to fit the latest ocean heat flow and bathymetry data. Notably, power-law models offer distinct advantages over traditional plate cooling models: they show robust self-similarity, scale invariance, or scaling properties, and they provide a better fit to observed data. The outcomes of our singularity analysis of heat flow and bathymetry across diverse oceanic regions are broadly consistent with the global ocean spreading rate model. In addition, we applied the similarity method to predict a higher-resolution (0.1°×0.1°) global heat flow map based on the most recent heat flow data and geological/geophysical observables refined through linear correlation analysis. Regions displaying significant disparities between predicted and observed heat flow are closely linked to hydrothermal vent fields and active structures. Finally, combining the actual bathymetry and predicted heat flow with the power-law models allows quantitative and comprehensive detection of anomalous regions of ocean subsidence and heat flow that deviate from traditional plate cooling models. These anomalous regions show different degrees of anisotropy, providing new ideas and clues for further analysis of ocean topography and the hydrothermal circulation of mid-ocean ridges.
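As a pocket illustration of the power-law idea: half-space cooling predicts heat flow q ∝ t^(-1/2), and a generalized power law q = C·t^(-α) can be fit by linear regression in log-log space. The numbers below are illustrative, not the study's data:

```python
import numpy as np

age = np.array([2, 5, 10, 20, 40, 80, 120], dtype=float)     # Myr (illustrative)
q = np.array([340, 215, 150, 107, 76, 54, 44], dtype=float)  # mW/m^2

# Fit log q = log C - alpha * log t, i.e. a straight line in log-log space.
slope, intercept = np.polyfit(np.log(age), np.log(q), 1)
alpha, C = -slope, np.exp(intercept)
print(f"q ~ {C:.0f} * t^(-{alpha:.2f}) mW/m^2")  # alpha near 0.5 = half-space cooling
```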