Shotcrete is one of the common solutions for shallow sliding. It works by forming a high-strength protective layer that cements the loose soil particles on the slope surface to prevent shallow sliding. However, the solidification time of conventional cement paste is long when shotcrete is used to treat cohesionless soil landslides. The idea of reinforcing slopes with polyurethane-solidified soil (i.e., a mixture of polyurethane and sand) was proposed. Model tests and finite element analysis were carried out to study the effectiveness of the proposed new method for the emergency treatment of cohesionless soil landslides. Surcharge loading on the crest of the slope was applied step by step until a landslide was triggered, so as to test and compare the stability and bearing capacity of slope models under different conditions. The simulated slope displacements were relatively close to the measured results, and the simulated slope deformation characteristics were in good agreement with the observed phenomena, which verifies the accuracy of the numerical method. Under surcharge loading on the crest of the slope, the unreinforced slope slid when the surcharge loading exceeded 30 kPa, presenting a failure mode of local instability and collapse in the shallow layer at the slope top. The reinforced slope remained stable even when the surcharge loading reached 48 kPa, and its displacement was reduced by more than 95%. Overall, this study verifies the effectiveness of polyurethane in the emergency treatment of cohesionless soil landslides, a method that should have broad application prospects in the field of geological disasters concerning people's safety.
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same discount in trainable parameters, reducing the trainable parameters in a larger layer is more effective at preserving fine-tuning accuracy than in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
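The layer-size finding can be illustrated with simple LoRA parameter arithmetic. The sketch below is not the paper's code; the layer dimensions are assumptions typical of a 7B-class transformer, chosen only to show why halving the rank frees far more parameters in a larger layer:

```python
# Sketch: a rank-r LoRA adapter adds matrices A (r x d_in) and B (d_out x r),
# so one adapter contributes r * (d_in + d_out) trainable parameters.
def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

# Hypothetical dimensions (assumptions, not values from the paper):
mlp_big = lora_params(4096, 11008, r=16)     # larger MLP projection
attn_small = lora_params(4096, 4096, r=16)   # smaller attention projection

# Halving the rank removes many more trainable parameters from the
# larger layer, which is where accuracy is reportedly best preserved.
saved_mlp = mlp_big - lora_params(4096, 11008, r=8)
saved_attn = attn_small - lora_params(4096, 4096, r=8)
```

For the same rank reduction, the MLP adapter sheds nearly twice the parameters of the attention adapter, matching the paper's observation that cutting rank in larger layers is the more efficient trade.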
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge it currently faces is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1176 genes from the white mouse gene expression dataset were chosen under two experimental conditions, pneumococcal infection and no infection, and a mixed-effects model was constructed. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed; the preliminary results were then biologically validated using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
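A permutation test of the kind the abstract mentions can be sketched as follows. This is a generic illustration with made-up expression values for a single gene, not the paper's data or its mixed-effects model:

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=42):
    """Two-sided permutation test on the difference of group means."""
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign condition labels
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_perm  # fraction of label shuffles at least as extreme

# Made-up expression values for one gene under the two conditions:
infected = [5.1, 4.8, 5.4, 5.0, 5.2]
control = [3.0, 2.7, 3.2, 2.9, 3.1]
p = permutation_test(infected, control)
```

With groups this well separated, very few random label assignments reproduce the observed mean difference, so the p-value is small.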
With the rapid development of generative artificial intelligence technologies, represented by large language models, university-level computer science education is undergoing a critical transition from knowledge-based instruction to competency-oriented teaching. A postgraduate student competency evaluation model can serve as a framework to organize and guide both teaching and research activities at the postgraduate level, and a number of relevant research efforts have already been conducted in this area. Graduate education plays a vital role not only as a continuation and enhancement of undergraduate education but also as essential preparation for future research endeavors. An analysis of the acceptance of competency evaluation models refers to the assessment of how various stakeholders perceive the importance of different components within the model. Investigating the degree of acceptance among diverse groups, such as current undergraduate students, current postgraduate students, graduates with less than three years of work experience, and those with more than three years of work experience, can offer valuable insights for improving and optimizing postgraduate education and training practices.
The dynamic, heterogeneous nature of Edge computing in the Internet of Things (Edge-IoT) and Industrial IoT (IIoT) networks brings unique and evolving cybersecurity challenges. This study maps cyber threats in Edge-IoT/IIoT environments to MITRE's Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework and introduces a lightweight, data-driven scoring model that enables rapid identification and prioritization of attacks. Inspired by the Factor Analysis of Information Risk model, the proposed scoring model integrates four key metrics: Common Vulnerability Scoring System (CVSS)-based severity scoring, Cyber Kill Chain-based difficulty estimation, Deep Neural Network-driven detection scoring, and frequency analysis based on dataset prevalence. By aggregating these indicators, the model generates comprehensive risk profiles, facilitating actionable prioritization of threats. The robustness and stability of the scoring model are validated through non-parametric correlation analysis using Spearman's and Kendall's rank correlation coefficients, demonstrating consistent performance across diverse scenarios. The approach culminates in a prioritized attack ranking that provides actionable guidance for risk mitigation and resource allocation in Edge-IoT/IIoT security operations. By leveraging real-world data to align MITRE ATT&CK techniques with CVSS metrics, the framework offers a standardized and practically applicable solution for consistent threat assessment in operational settings. The proposed lightweight scoring model delivers rapid and reliable results under dynamic cyber conditions, facilitating timely identification of attack scenarios and prioritization of response strategies. Our systematic integration of established taxonomies with data-driven indicators strengthens practical risk management and supports strategic planning in next-generation IoT deployments. Ultimately, this work advances adaptive threat modeling for Edge/IIoT ecosystems and establishes a robust foundation for evidence-based prioritization in emerging cyber-physical infrastructures.
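An aggregation of the four indicators into a single risk score could look like the weighted sum below. This is a sketch only; the weights, normalization, and the example attack profile are illustrative assumptions, not the paper's calibrated values:

```python
def risk_score(cvss, difficulty, detection, frequency,
               weights=(0.4, 0.2, 0.2, 0.2)):
    """Aggregate four normalized [0, 1] indicators into one risk score.

    cvss       : CVSS base severity rescaled from 0-10 to 0-1
    difficulty : 1 = low kill-chain effort for the attacker
    detection  : 1 = hard for the DNN detector to flag
    frequency  : 1 = very common in the observed dataset
    """
    indicators = (cvss, difficulty, detection, frequency)
    assert all(0.0 <= v <= 1.0 for v in indicators)
    assert abs(sum(weights) - 1.0) < 1e-9  # weights form a convex combination
    return sum(w * v for w, v in zip(weights, indicators))

# Hypothetical Edge-IoT attack profile (all numbers are made up):
ddos = risk_score(cvss=7.5 / 10, difficulty=0.8, detection=0.3, frequency=0.9)
```

Scores computed this way fall in [0, 1], so ranking attacks reduces to sorting by the aggregate, which is what makes the model cheap enough for rapid prioritization.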
Background: With the rapid development of artificial intelligence (AI), large language models (LLMs) have emerged as a potent tool for invigorating ophthalmology across clinical, educational, and research fields, and their accuracy and reliability have been tested. This bibliometric analysis aims to provide an overview of research on LLMs in ophthalmology from both thematic and geographical perspectives. Methods: All existing and highly cited LLM-related ophthalmology research papers published in English up to 24 April 2025 were sourced from Scopus, PubMed, and Web of Science. The characteristics of these publications, including publication output, authors, journals, countries, institutions, citations, and research domains, were analyzed using Biblioshiny and VOSviewer software. Results: A total of 277 articles from 1,459 authors and 89 journals were included in this study. Although relevant publications began to appear in 2019, there was a significant increase starting from 2023. He M and Shi D are the most prolific authors, while Investigative Ophthalmology & Visual Science stands out as the most prominent journal. Most of the top-publishing countries are high-income economies, with the USA taking the lead, and the University of California is the leading institution. VOSviewer identified 5 clusters in the keyword co-occurrence analysis, indicating that current research focuses on the clinical applications of LLMs, particularly in diagnosis and patient education. Conclusions: While LLMs have demonstrated effectiveness in retaining knowledge, their accuracy in image-based diagnosis remains limited. Therefore, future research should investigate fine-tuning strategies and domain-specific adaptations to close this gap. Although research on the applications of LLMs in ophthalmology is still in its early stages, it holds significant potential for advancing the field.
Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models that demonstrate impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments, where computing resources, memory, and energy availability are restricted, presents a unique set of challenges that require innovative solutions. To empower sentiment analysis in these settings, we leverage lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by such environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper explores the performance of multiple lightweight pre-trained models in sentiment analysis tasks specific to such environments and provides insights into their viability for practical deployment.
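The knowledge distillation these lightweight models rely on can be sketched with the temperature-softened KL objective. This is a generic illustration of the distillation loss, not any particular model's training code, and the logits below are invented:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, flattened by higher temperatures."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions:
    the student is penalized for diverging from the teacher's soft labels."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical sentiment logits (positive / neutral / negative):
teacher = [3.2, 0.1, -1.5]
aligned = distillation_kl(teacher, [3.0, 0.2, -1.4])   # student close to teacher
diverged = distillation_kl(teacher, [-1.0, 0.5, 2.0])  # student far from teacher
```

Minimizing this loss pushes a small student toward the teacher's full output distribution rather than just its hard labels, which is how the compressed BERT variants retain much of the larger models' accuracy.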
In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments and can improve therapeutic effect and safety, making it of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
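The quantile regression at the core of such methods minimizes the check (pinball) loss. A minimal sketch of the loss itself (the standard formulation, not the paper's penalized estimator):

```python
def pinball_loss(y, y_hat, tau):
    """Check (pinball) loss for quantile level tau in (0, 1).
    Under-prediction is weighted by tau, over-prediction by 1 - tau."""
    residual = y - y_hat
    return tau * residual if residual >= 0 else (tau - 1) * residual

# Asymmetric penalties: at tau = 0.9, under-predicting by one unit
# costs nine times as much as over-predicting by one unit.
under = pinball_loss(10.0, 9.0, tau=0.9)   # residual +1
over = pinball_loss(9.0, 10.0, tau=0.9)    # residual -1
```

Minimizing this loss over a dataset yields the conditional tau-quantile, which is why sweeping tau characterizes the whole response distribution rather than just its mean.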
Wide-band oscillations have become a significant issue limiting the development of wind power. Both large-signal and small-signal analyses require extensive model derivation. Moreover, the large number and high order of wind turbines have driven the development of simplified models, whose applicability remains controversial. In this paper, a wide-band oscillation analysis method based on the average-value model (AVM) is proposed for wind farms (WFs). A novel linearization analysis framework is developed, leveraging the continuous-time characteristics of the AVM and MATLAB/Simulink's built-in linearization tools. This significantly reduces modeling complexity and computational costs while maintaining model fidelity. Additionally, an object-based initial value estimation method for state variables is introduced, which, when combined with steady-state point-solving tools, greatly reduces the computational effort required for equilibrium point solving in batch linearization analysis. The proposed method is validated in both doubly fed induction generator (DFIG)-based and permanent magnet synchronous generator (PMSG)-based WFs. Furthermore, a comprehensive analysis is conducted for the first time to examine the impact of the machine-side system on the stability of the non-fully controlled PMSG-based WF.
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model Meta AI (LLaMA) and the NLP model BERT (Bidirectional Encoder Representations from Transformers), in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of LLaMA and BERT in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Pingquan City, the origin of five rivers, serves as the core water conservation zone for the Beijing-Tianjin-Hebei region and exemplifies the characteristics of small watersheds in hilly areas. In recent years, excessive mining and intensified human activities have severely disrupted the local ecosystem, creating an urgent need for ecological vulnerability assessment to enhance water conservation functions. This study employed the sensitivity-resilience-pressure model, integrating various data sources, including regional background, hydro-meteorological data, field investigations, remote sensing analysis, and socio-economic data. The weights of the model indices were determined using an entropy weighting model that combines principal component analysis and the analytic hierarchy process. Using the ArcGIS platform, the spatial distribution and driving forces of ecological vulnerability in 2020 were analyzed, providing valuable insights for regional ecological restoration. The results indicated that the overall Ecological Vulnerability Index (EVI) was 0.389, signifying moderate ecological vulnerability, with significant variation between watersheds. The Daling River Basin had a high EVI, with ecological vulnerability primarily at levels IV and V, indicating high ecological pressure, whereas the Laoniu River Basin had a low EVI, reflecting minimal ecological pressure. Soil type was identified as the primary driving factor, followed by elevation, temperature, and soil erosion as secondary factors. It is recommended to focus on key regions and critical factors while conducting comprehensive monitoring and assessment to ensure the long-term success of ecological management efforts.
We analyzed accident factors in a 2020 ship collision that occurred off Kii Oshima Island using SHELL model analysis and examined corresponding collision prevention measures. The SHELL model is a framework for identifying accident factors related to human abilities and characteristics, hardware, software, and the environment. Beyond assessing the accident factors in each element, we also examined the interrelationships between humans and each element. This study highlights the importance of (1) training to enhance situational awareness, (2) improving decision-making skills, and (3) establishing structured decision-making procedures to prevent maritime collision accidents. Additionally, we considered safety measures through (4) hardware enhancements and (5) environmental measures. Furthermore, implementing measures grounded in (6) prediction is deemed effective for preventing accidents. This study identified accident factors through prediction alongside the SHELL model analysis and proposed countermeasures based on the findings. By applying such predictions, more countermeasures can be derived, which, when combined strategically, can significantly aid in preventing maritime collision accidents.
In this paper, the N-soliton solutions for the massive Thirring model (MTM) in laboratory coordinates are analyzed via the Riemann-Hilbert (RH) approach. The direct scattering, including the analyticity, symmetries, and asymptotic behaviors of the Jost solutions as |λ| → ∞ and λ → 0, is given. Considering that the scattering coefficients have simple zeros, the matrix RH problem, reconstruction formulas, and corresponding trace formulas are also derived. Further, the N-soliton solutions in the reflectionless case are obtained explicitly in the form of determinants. The propagation characteristics of one-soliton solutions and the interaction properties of two-soliton solutions are discussed. In particular, the asymptotic expressions of two-soliton solutions as |t| → ∞ are obtained, which show that the velocities and amplitudes of the asymptotic solitons do not change through interaction, apart from position shifts. In addition, three types of bound states for two-soliton solutions are presented under certain parametric conditions.
This research presents an advanced study on the modeling and stability analysis of electro-hydraulic control modules used in intelligent chassis systems. First, a comprehensive nonlinear mathematical model of the electro-hydraulic power-shift system is developed, incorporating pipeline characteristics through impedance analysis and examining coupling effects between the pilot solenoid valve, main valve, and pipeline. The model's accuracy is then validated through experimental testing, demonstrating high precision and minimal model errors. A comparative analysis between simulation data (both with and without pipeline characteristics) and experimental results reveals that the model considering pipeline parameters aligns more closely with experimental data, highlighting its superior accuracy. The research further explores the influence of key factors on system stability, including the damping coefficient, feedback cavity orifice diameter, spring stiffness, pipeline length, and pipeline diameter. Significant findings include the critical impact of the damping coefficient, orifice diameter, and pipeline length on stability, while spring stiffness has a minimal effect. These findings provide valuable insights for optimizing electro-hydraulic control modules in intelligent chassis systems, with practical implications for automotive and construction machinery applications.
The efficient market hypothesis of traditional financial theory struggles to explain the short-term irrational fluctuations in the A-share market, where investor sentiment fluctuations often serve as the core driver of abnormal stock price movements. Traditional sentiment measurement methods suffer from limitations such as lag, high misjudgment rates, and an inability to distinguish confounding factors. To more accurately explore the dynamic correlation between investor sentiment and stock price fluctuations, this paper proposes a sentiment analysis framework based on large language models (LLMs). By constructing continuous sentiment scoring factors and integrating them with a long short-term memory (LSTM) deep learning model, we analyze the correlation between investor sentiment and stock price fluctuations. Empirical results indicate that sentiment factors based on large language models can generate an annualized excess return of 9.3% in the CSI 500 index domain. The LSTM stock price prediction model incorporating sentiment features achieves a mean absolute percentage error (MAPE) as low as 2.72%, significantly outperforming traditional models. Through this analysis, we aim to provide quantitative references for optimizing investment decisions and preventing market risks.
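The MAPE metric used to report the 2.72% prediction error is the mean absolute percentage error. A minimal sketch of the standard formula, with made-up prices rather than the paper's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, expressed in percent."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical closing prices vs. model predictions:
actual = [10.0, 10.5, 11.0, 10.8]
predicted = [10.2, 10.4, 11.3, 10.7]
err = mape(actual, predicted)
```

Because each error is scaled by the actual price, MAPE is comparable across stocks at different price levels, which is why it is a common headline metric for price-prediction models.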
GNSS time series analysis provides an effective method for research on the Earth's surface deformation, and it can be divided into two parts: deterministic models and stochastic models. The former can be described by several parameters, such as polynomial terms, periodic terms, offsets, and post-seismic models. The latter contains stochastic noises, whose detection can be affected by the former parameters. If not enough parameters are assumed, modeling errors will occur and adversely affect the analysis results. In this study, we propose a processing strategy in which the commonly used first-order polynomial term is replaced with different orders to better fit GNSS time series of Crustal Movement Observation Network of China (CMONOC) stations. Initially, we use the Bayesian Information Criterion (BIC) to identify the best order within the range of 1-4 during the fitting process using the white noise plus power-law noise (WN+PL) model. Then, we compare the first-order and the optimal-order polynomials in terms of their effect on deterministic models in GNSS time series, including the velocity and its uncertainty, and the amplitudes and initial phases of the annual signals. The results indicate that the first-order polynomial is not always the best choice for GNSS time series. The root mean square (RMS) reduction rates of almost all station components are positive, which means the new fitting with the optimal-order polynomial helps to reduce the RMS of residual series. Most stations maintain a velocity difference (VD) within ±1 mm/yr, with percentages of 85.6%, 81.9%, and 63.4% in the North, East, and Up components, respectively. As for annual signals, the numbers of stations with an amplitude difference (AD) within ±0.2 mm are 242, 239, and 200 in the three components, accounting for 99.6%, 98.4%, and 82.3%, respectively. This finding reminds us that detection of the optimal-order polynomial is necessary when we aim to acquire an accurate understanding of crustal movement features.
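BIC-based order selection of the kind described can be sketched as below. The BIC formula is the standard one for a least-squares fit; the candidate RSS values and the number of observations are made up for illustration:

```python
import math

def bic(rss, n, k):
    """Bayesian Information Criterion for a least-squares fit with
    n observations, k estimated parameters, and residual sum of squares rss."""
    return n * math.log(rss / n) + k * math.log(n)

def best_order(rss_by_order, n):
    """Pick the polynomial order minimizing BIC; order p costs p + 1
    coefficients, so extra orders must earn their complexity penalty."""
    return min(rss_by_order, key=lambda p: bic(rss_by_order[p], n, p + 1))

# Hypothetical RSS values for one station component (n = 3000 daily solutions):
rss = {1: 420.0, 2: 310.0, 3: 309.5, 4: 309.2}
order = best_order(rss, n=3000)
```

Here the big RSS drop from order 1 to 2 outweighs the penalty for one extra coefficient, while the marginal gains at orders 3 and 4 do not, so BIC selects order 2.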
Accurate quantification of carbon and water flux dynamics in arid and semi-arid ecosystems is a critical scientific challenge for regional carbon neutrality assessments and sustainable water resource management. In this study, we developed a multi-flux global sensitivity discriminant index (D_sen) by integrating the Biome-BGCMuSo model with eddy covariance flux observations. This index was combined with a Bayesian optimization algorithm to conduct parameter optimization. The results demonstrated that: (1) Sensitivity analysis identified 13 highly sensitive parameters affecting carbon and water fluxes. Among these, the canopy light extinction coefficient (k) and the fraction of leaf N in Rubisco (FLNR) exhibited significantly higher sensitivity to carbon fluxes (GPP, NEE, Reco; D_sen > 10%) than to water flux (ET), highlighting the strong dependence of carbon cycle simulations on vegetation physiological parameters. (2) The Bayesian optimization framework efficiently converged 30 parameter spaces within 50 iterations, markedly improving carbon flux simulation accuracy. The Kling-Gupta efficiency (KGE) values for Gross Primary Production (GPP), Net Ecosystem Exchange (NEE), and Total Respiration (Reco) increased by 44.94%, 69.23%, and 123%, respectively. The optimization prioritized highly sensitive parameters, underscoring the necessity of parameter sensitivity stratification. (3) The optimized model effectively reproduced carbon sink characteristics in mountain meadows during the growing season (cumulative NEE = -375 g C/m²). It revealed synergistic carbon-water flux interactions governed by coupled photosynthesis-stomatal pathways and identified substrate supply limitations on heterotrophic respiration. This study proposes a novel multi-flux sensitivity index and an efficient optimization framework, elucidating the coupling mechanisms between vegetation physiological regulation (k, FLNR) and environmental stressors (VPD, SWD) in carbon-water cycles. The methodology offers a practical approach for arid ecosystem model optimization and provides theoretical insights for grassland management through canopy structure regulation and water-use efficiency enhancement.
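The Kling-Gupta efficiency used to report the accuracy gains can be sketched with its standard 2009 formulation. The flux series below are invented for illustration, not the study's observations:

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 minus the Euclidean distance of
    (correlation r, variability ratio alpha, bias ratio beta) from (1, 1, 1)."""
    n = len(sim)
    mean_s, mean_o = sum(sim) / n, sum(obs) / n
    std_s = math.sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    r = cov / (std_s * std_o)          # correlation component
    alpha = std_s / std_o              # variability ratio
    beta = mean_s / mean_o             # bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [2.1, 3.4, 5.0, 4.2, 3.8]   # hypothetical observed GPP
sim = [2.0, 3.6, 4.7, 4.4, 3.5]   # hypothetical simulated GPP
score = kge(sim, obs)
```

A perfect simulation gives KGE = 1; separating correlation, variability, and bias is what makes KGE a stricter diagnostic than a single correlation coefficient for flux models.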
Anti-jamming performance evaluation has recently received significant attention. For Link-16, anti-jamming performance evaluation and selection of the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, which combines grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. First, on the basis of establishing the anti-jamming performance evaluation indicator system of Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Second, the cloud model, a qualitative-quantitative concept transformation model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and identifies the best anti-jamming technology. Finally, simulation results show that the proposed evaluation model achieves feasible and practical evaluation, opening a novel way for research on anti-jamming performance evaluation of Link-16.
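The entropy weight method (EWM) half of the combined weighting can be sketched as follows. This is a generic EWM implementation; the indicator matrix is made up and does not reflect the paper's Link-16 indicator system:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: rows are alternatives, columns are indicators
    (already normalized to positive values). Indicators whose values vary
    more across alternatives carry more information and get larger weights."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        # normalized Shannon entropy of the column (0 * log 0 treated as 0)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        raw.append(1 - e)  # divergence: high when the column discriminates
    s = sum(raw)
    return [w / s for w in raw]

# Hypothetical normalized scores: 3 jamming schemes x 2 indicators, where
# indicator 0 discriminates strongly between schemes and indicator 1 barely.
w = entropy_weights([[0.9, 0.50],
                     [0.1, 0.52],
                     [0.5, 0.51]])
```

The near-constant second indicator receives almost no weight, which is exactly the objectivity EWM contributes when blended with the subjective AHP weights.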
BACKGROUND Neck pain, a primary symptom of cervical spondylosis, affects patients' physical and mental health, reducing their quality of life. Pain and emotional state interact; however, their longitudinal interrelationship remains unclear. In this study, we applied a dual-trajectory model to assess how neck pain and emotional state evolve together over time and how clinical interventions, particularly acupuncture, influence these trajectories. AIM To investigate the longitudinal relationship between neck pain and emotional state in patients with cervical spondylosis. METHODS This prospective cohort study included 472 patients with cervical spondylosis from eight Chinese hospitals. Participants received acupuncture or medication and were followed up at baseline and at 1, 2, 4, 6, and 8 weeks. Neck pain and emotional distress were assessed using the Northwick Park Neck Pain Questionnaire (NPQ) and the affective subscale of the Short-Form McGill Pain Questionnaire (SF-MPQ), respectively. Group-based trajectory models and dual-trajectory analysis were used to identify and correlate pain-emotion trajectories. Multivariate logistic regression identified predictors of group membership. RESULTS Three trajectory groups were identified for both NPQ and SF-MPQ scores (low, medium, and high). Higher NPQ trajectory was associated with older age (OR = 1.058, P < 0.001) and was significantly reduced by acupuncture (OR = 0.382, P < 0.001). Similarly, acupuncture lowered the odds of high SF-MPQ trajectory membership (OR = 0.336, P < 0.001), while age increased them (OR = 1.037, P < 0.001). Dual-trajectory analysis revealed bidirectional associations: 69.1% of patients with low NPQ scores had low SF-MPQ scores, and 42.6% of patients with high SF-MPQ scores also had high NPQ scores. Gender was a predictor of medium SF-MPQ trajectory (OR = 1.629, P = 0.094). Occupation and education levels differed significantly across the trajectory groups (P < 0.05). CONCLUSION Over time, neck pain and emotional distress are closely associated in patients with cervical spondylosis. Acupuncture significantly alleviates both outcomes, while older age is a risk factor. Integrated approaches to pain and emotional management are encouraged.
reshwater essential for civilization faces risk from untreated effluents discharged by industries,agriculture,urban areas,and other sources.Increasing demand and abstraction of freshwater deteriorate the pollution sce...reshwater essential for civilization faces risk from untreated effluents discharged by industries,agriculture,urban areas,and other sources.Increasing demand and abstraction of freshwater deteriorate the pollution scenario more.Hence,water quality analysis(WQA)is an important task for researchers and policymakers to maintain sustainability and public health.This study aims to gather and discuss the methods used for WQA by the researchers,focusing on their advantages and limitations.Simultaneously,this study compares different WQA methods,discussing their trends and future directions.Publications from the past decade on WQA are reviewed,and insights are explored to aggregate them in particular categories.Three major approaches,namely—water quality indexing,water quality modeling(WQM)and artificial intelligence-based WQM,are recognized.Different methodologies adopted to execute these three approaches are presented in this study,which leads to formulate a comparative discussion.Using statistical operations and soft computing techniques have been done by researchers to combat the subjectivity error in indexing.To achieve better results,WQMs are being modified to incorporate the physical processes influencing water quality more robustly.The utilization of artificial intelligence was primarily restricted to conventional networks,but in the last 5 years,implications of deep learning have increased rapidly and exhibited good results with the hybridization of feature extracting and time series modeling.Overall,this study is a valuable resource for researchers dedicated to WQA.展开更多
Funding: the Fujian Science Foundation for Outstanding Youth (2023J06039); the National Natural Science Foundation of China (Grant Nos. 41977259, U2005205, 41972268); the Independent Research Project of Technology Innovation Center for Monitoring and Restoration Engineering of Ecological Fragile Zone in Southeast China (KY-090000-04-2022-019).
Abstract: Shotcrete is one of the common solutions for shallow sliding. It works by forming a protective layer with high strength and cementing the loose soil particles on the slope surface to prevent shallow sliding. However, the solidification time of conventional cement paste is long when shotcrete is used to treat cohesionless soil landslides. The idea of reinforcing slopes with polyurethane-solidified soil (i.e., a mixture of polyurethane and sand) was proposed. Model tests and finite element analysis were carried out to study the effectiveness of the proposed new method for the emergency treatment of cohesionless soil landslides. Surcharge loading on the crest of the slope was applied step by step until a landslide was triggered, so as to test and compare the stability and bearing capacity of slope models under different conditions. The simulated slope displacements were relatively close to the measured results, and the simulated slope deformation characteristics were in good agreement with the observed phenomena, which verifies the accuracy of the numerical method. Under surcharge loading on the crest of the slope, the unreinforced slope slid when the surcharge loading exceeded 30 kPa, presenting a failure mode of local instability and collapse at the shallow layer of the slope top. The reinforced slope remained stable even when the surcharge loading reached 48 kPa. The displacement of the reinforced slope was reduced by more than 95%. Overall, this study verifies the effectiveness of polyurethane in the emergency treatment of cohesionless soil landslides, and the method should have broad application prospects in the field of geological disasters concerning people's lives and safety.
Funding: supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
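The rank-sensitivity finding above has a simple arithmetic intuition: a rank-r LoRA adapter on a d_in x d_out weight matrix contributes r(d_in + d_out) trainable parameters, so the same rank represents a smaller fraction of a wide MLP projection than of a smaller attention projection. A minimal sketch, using illustrative layer dimensions that are not taken from the paper's models:

```python
# Sketch: fraction of a layer's weights made trainable by a LoRA adapter
# of rank r on a d_in x d_out weight matrix (hypothetical dimensions).

def lora_params(d_in, d_out, r):
    """Trainable parameters of a rank-r adapter: A (d_in x r) plus B (r x d_out)."""
    return r * (d_in + d_out)

def trainable_fraction(d_in, d_out, r):
    """Adapter parameters as a fraction of the frozen layer's parameters."""
    return lora_params(d_in, d_out, r) / (d_in * d_out)

# A wide MLP projection vs. a smaller self-attention projection:
mlp_frac = trainable_fraction(4096, 11008, r=8)
attn_frac = trainable_fraction(4096, 4096, r=8)

print(f"MLP adapter fraction:  {mlp_frac:.5f}")
print(f"Attn adapter fraction: {attn_frac:.5f}")
```

At equal rank, the adapter is a relatively larger share of the smaller layer, which is one way to read why cutting rank hurts attention layers more than MLP layers.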
Abstract: DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. In terms of data selection, 1176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions: pneumococcal infection and no infection, and constructing a mixed-effects model. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed to biologically validate the preliminary results using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
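The permutation-test step mentioned above can be sketched in miniature. This is an illustrative stand-in for a two-group comparison of mean expression; the expression values and group sizes are invented for demonstration and are not from the paper's dataset:

```python
import random
import statistics

def perm_test_mean_diff(group_a, group_b, n_perm=2000, seed=42):
    """Two-sided permutation p-value for the difference in group means:
    shuffle the pooled labels and count permuted differences at least as
    extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Toy expression values for one gene under the two conditions:
infected = [2.1, 2.4, 2.2, 2.6, 2.3]
control = [1.0, 1.2, 0.9, 1.1, 1.3]
print(perm_test_mean_diff(infected, control))
```

Because the labels are exchangeable under the null hypothesis, this procedure makes no distributional assumption about the expression values.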
Abstract: With the rapid development of generative artificial intelligence technologies, represented by large language models, university-level computer science education is undergoing a critical transition from knowledge-based instruction to competency-oriented teaching. A postgraduate student competency evaluation model can serve as a framework to organize and guide both teaching and research activities at the postgraduate level. A number of relevant research efforts have already been conducted in this area. Graduate education plays a vital role not only as a continuation and enhancement of undergraduate education but also as essential preparation for future research endeavors. An analysis of the acceptance of competency evaluation models refers to the assessment of how various stakeholders perceive the importance of different components within the model. Investigating the degree of acceptance among diverse groups, such as current undergraduate students, current postgraduate students, graduates with less than three years of work experience, and those with more than three years of work experience, can offer valuable insights for improving and optimizing postgraduate education and training practices.
Funding: supported by the "Regional Innovation System & Education (RISE)" program through the Seoul RISE Center, funded by the Ministry of Education (MOE) and the Seoul Metropolitan Government (2025-RISE-01-018-05), and supported by Quad Miners Corp.
Abstract: The dynamic, heterogeneous nature of Edge computing in the Internet of Things (Edge-IoT) and Industrial IoT (IIoT) networks brings unique and evolving cybersecurity challenges. This study maps cyber threats in Edge-IoT/IIoT environments to the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework by MITRE and introduces a lightweight, data-driven scoring model that enables rapid identification and prioritization of attacks. Inspired by the Factor Analysis of Information Risk model, our proposed scoring model integrates four key metrics: Common Vulnerability Scoring System (CVSS)-based severity scoring, Cyber Kill Chain-based difficulty estimation, Deep Neural Network-driven detection scoring, and frequency analysis based on dataset prevalence. By aggregating these indicators, the model generates comprehensive risk profiles, facilitating actionable prioritization of threats. Robustness and stability of the scoring model are validated through non-parametric correlation analysis using Spearman's and Kendall's rank correlation coefficients, demonstrating consistent performance across diverse scenarios. The approach culminates in a prioritized attack ranking that provides actionable guidance for risk mitigation and resource allocation in Edge-IoT/IIoT security operations. By leveraging real-world data to align MITRE ATT&CK techniques with CVSS metrics, the framework offers a standardized and practically applicable solution for consistent threat assessment in operational settings. The proposed lightweight scoring model delivers rapid and reliable results under dynamic cyber conditions, facilitating timely identification of attack scenarios and prioritization of response strategies. Our systematic integration of established taxonomies with data-driven indicators strengthens practical risk management and supports strategic planning in next-generation IoT deployments. Ultimately, this work advances adaptive threat modeling for Edge/IIoT ecosystems and establishes a robust foundation for evidence-based prioritization in emerging cyber-physical infrastructures.
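The aggregation step described above can be sketched as a weighted sum of the four normalized indicators followed by a ranking. The weights and attack scores below are illustrative placeholders, not the paper's calibrated values:

```python
# Sketch: composite risk score from four indicators normalized to [0, 1]
# (severity, difficulty, detection, frequency); weights are hypothetical.

def composite_risk(cvss, difficulty, detection, frequency,
                   weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted aggregation of four normalized risk indicators."""
    parts = (cvss, difficulty, detection, frequency)
    assert all(0.0 <= p <= 1.0 for p in parts)
    return sum(w * p for w, p in zip(weights, parts))

# Illustrative attack profiles (not from the paper's dataset):
attacks = {
    "dos_flood": composite_risk(0.9, 0.3, 0.6, 0.8),
    "mitm": composite_risk(0.7, 0.6, 0.4, 0.4),
    "firmware_tamper": composite_risk(0.8, 0.8, 0.2, 0.1),
}
ranking = sorted(attacks, key=attacks.get, reverse=True)
print(ranking)
```

The resulting ordered list is the kind of prioritized attack ranking the abstract refers to; the real model additionally validates ranking stability with Spearman's and Kendall's coefficients.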
Funding: supported by the Health and Medical Research Fund, Hong Kong (11220386, 12230246).
Abstract: Background: With the rapid development of artificial intelligence (AI), large language models (LLMs) have emerged as a potent tool for invigorating ophthalmology across clinical, educational, and research fields. Their accuracy and reliability have been tested. This bibliometric analysis aims to provide an overview of research on LLMs in ophthalmology from both thematic and geographical perspectives. Methods: All existing and highly cited LLM-related ophthalmology research papers published in English up to 24 April 2025 were sourced from Scopus, PubMed, and Web of Science. The characteristics of these publications, including publication output, authors, journals, countries, institutions, citations, and research domains, were analyzed using Biblioshiny and VOSviewer software. Results: A total of 277 articles from 1,459 authors and 89 journals were included in this study. Although relevant publications began to appear in 2019, there was a significant increase starting from 2023. He M and Shi D are the most prolific authors, while Investigative Ophthalmology & Visual Science stands out as the most prominent journal. Most of the top-publishing countries are high-income economies, with the USA taking the lead, and the University of California is the leading institution. VOSviewer identified 5 clusters in the keyword co-occurrence analysis, indicating that current research focuses on the clinical applications of LLMs, particularly in diagnosis and patient education. Conclusions: While LLMs have demonstrated effectiveness in retaining knowledge, their accuracy in image-based diagnosis remains limited. Therefore, future research should investigate fine-tuning strategies and domain-specific adaptations to close this gap. Although research on the applications of LLMs in ophthalmology is still in its early stages, it holds significant potential for advancing the field.
Abstract: Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models that have demonstrated impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments presents a unique set of challenges that require innovative solutions. Resource-constrained environments encompass scenarios where computing resources, memory, and energy availability are restricted. To empower sentiment analysis in resource-constrained environments, we address this crucial need by leveraging lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by these environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper endeavors to explore the performance of multiple lightweight pre-trained models in sentiment analysis tasks specific to such environments and provide insights into their viability for practical deployment.
Funding: Supported by the Natural Science Foundation of Fujian Province (2022J011177, 2024J01903) and the Key Project of the Fujian Provincial Education Department (JZ230054).
Abstract: In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic efficacy and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
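The quantile-regression component above minimizes the standard check (pinball) loss rather than squared error. A minimal sketch of that loss, with illustrative residuals only:

```python
# Sketch: the quantile-regression check loss rho_tau(u) = u * (tau - 1{u < 0}).
# Minimizing its sum over residuals targets the tau-th conditional quantile.

def check_loss(residual, tau):
    """Pinball loss for one residual at quantile level tau in (0, 1)."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

# tau = 0.5 weights over- and under-prediction equally (median regression):
print(check_loss(2.0, 0.5), check_loss(-2.0, 0.5))
# tau = 0.9 penalizes under-prediction (positive residuals) more heavily:
print(check_loss(2.0, 0.9), check_loss(-2.0, 0.9))
```

The asymmetry for tau far from 0.5 is what lets the method characterize the tails of the response distribution rather than only its center.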
Funding: supported by the National Natural Science Foundation of China under Grant 52277072.
Abstract: Wide-band oscillations have become a significant issue limiting the development of wind power. Both large-signal and small-signal analyses require extensive model derivation. Moreover, the large number and high order of wind turbines have driven the development of simplified models, whose applicability remains controversial. In this paper, a wide-band oscillation analysis method based on the average-value model (AVM) is proposed for wind farms (WFs). A novel linearization analysis framework is developed, leveraging the continuous-time characteristics of the AVM and MATLAB/Simulink's built-in linearization tools. This significantly reduces modeling complexity and computational costs while maintaining model fidelity. Additionally, an object-based initial value estimation method for state variables is introduced, which, when combined with steady-state point-solving tools, greatly reduces the computational effort required for equilibrium-point solving in batch linearization analysis. The proposed method is validated in both doubly fed induction generator (DFIG)-based and permanent magnet synchronous generator (PMSG)-based WFs. Furthermore, a comprehensive analysis is conducted for the first time to examine the impact of the machine-side system on the stability of the non-fully controlled PMSG-based WF.
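The core of linearization at an equilibrium point is evaluating the Jacobian of the nonlinear state equations there. As a generic numerical stand-in (not the paper's Simulink toolchain), the following sketches a central-difference Jacobian applied to a toy damped-pendulum state equation, whose eigenvalues would then govern small-signal stability:

```python
import math

def jacobian(f, x0, eps=1e-6):
    """Central-difference Jacobian of a vector field f at x0."""
    n = len(x0)
    f0 = f(x0)
    m = len(f0)
    cols = []
    for j in range(n):
        xp, xm = list(x0), list(x0)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        cols.append([(fp[i] - fm[i]) / (2 * eps) for i in range(m)])
    # Transpose columns into the m x n Jacobian matrix.
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# Toy nonlinear state equation x' = f(x) (damped pendulum), linearized at
# the equilibrium (0, 0):
def pendulum(x):
    return [x[1], -math.sin(x[0]) - 0.2 * x[1]]

A = jacobian(pendulum, [0.0, 0.0])
print(A)
```

Batch linearization, as in the paper, repeats exactly this evaluation over many operating points, which is why cheap equilibrium-point initialization matters.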
Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA model and the NLP BERT model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Funding: supported by the project of China Geological Survey (No. DD20220954); the Open Funding Project of the Key Laboratory of Groundwater Sciences and Engineering, Ministry of Natural Resources (No. SK202301-4); the Open Foundation of the Key Laboratory of Coupling Process and Effect of Natural Resources Elements (No. 2022KFKTC009); and the Yanzhao Shanshui Science and Innovation Fund of Langfang Integrated Natural Resources Survey Center, China Geological Survey (No. YZSSJJ202401-001).
Abstract: Pingquan City, the origin of five rivers, serves as the core water conservation zone for the Beijing-Tianjin-Hebei region and exemplifies the characteristics of small watersheds in hilly areas. In recent years, excessive mining and intensified human activities have severely disrupted the local ecosystem, creating an urgent need for ecological vulnerability assessment to enhance water conservation functions. This study employed the sensitivity-resilience-pressure model, integrating various data sources, including regional background, hydro-meteorological data, field investigations, remote sensing analysis, and socio-economic data. The weights of the model indices were determined using an entropy weighting model that combines principal component analysis and the analytic hierarchy process. Using the ArcGIS platform, the spatial distribution and driving forces of ecological vulnerability in 2020 were analyzed, providing valuable insights for regional ecological restoration. The results indicated that the overall Ecological Vulnerability Index (EVI) was 0.389, signifying moderate ecological vulnerability, with significant variation between watersheds. The Daling River Basin had a high EVI, with ecological vulnerability primarily at levels IV and V, indicating high ecological pressure, whereas the Laoniu River Basin had a low EVI, reflecting minimal ecological pressure. Soil type was identified as the primary driving factor, followed by elevation, temperature, and soil erosion as secondary factors. It is recommended to focus on key regions and critical factors while conducting comprehensive monitoring and assessment to ensure the long-term success of ecological management efforts.
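The entropy-weighting step used above assigns larger weights to indicators whose values vary more across assessment units. A minimal sketch of the standard entropy weight method, on a toy criteria matrix (values invented for illustration):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method over a criteria matrix (rows = assessment
    units, columns = indicators). Values are assumed positive and
    pre-normalized; weights sum to 1."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / math.log(n)
    divergences = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - entropy)  # more variation -> more weight
    total = sum(divergences)
    return [d / total for d in divergences]

# Indicator 2 varies far more across units than indicator 1:
w = entropy_weights([[0.20, 0.9], [0.30, 0.1], [0.25, 0.5]])
print(w)
```

In the study itself, this objective weighting is combined with principal component analysis and the analytic hierarchy process rather than used alone.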
Abstract: We analyzed accident factors in a 2020 ship collision case that occurred off Kii Oshima Island using the SHELL model analysis and examined corresponding collision prevention measures. The SHELL model analysis is a framework for identifying accident factors related to human abilities and characteristics, hardware, software, and the environment. Beyond assessing the accident factors in each element, we also examined the interrelationship between humans and each element. This study highlights the importance of (1) training to enhance situational awareness, (2) improving decision-making skills, and (3) establishing structured decision-making procedures to prevent maritime collision accidents. Additionally, we considered safety measures through (4) hardware enhancements and (5) environmental measures. Furthermore, to prevent accidents, implementing measures grounded in (6) predictions is deemed effective. This study identified accident factors through prediction alongside the SHELL model analysis and proposed countermeasures based on the findings. By applying these predictions, more countermeasures can be derived, which, when combined strategically, can significantly aid in preventing maritime collision accidents.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12475003 and 11705284) and by the Natural Science Foundation of Beijing Municipality (Grant Nos. 1232022 and 1212007).
Abstract: In this paper, the N-soliton solutions for the massive Thirring model (MTM) in laboratory coordinates are analyzed via the Riemann-Hilbert (RH) approach. The direct scattering, including the analyticity, symmetries, and asymptotic behaviors of the Jost solutions as |λ|→∞ and λ→0, is given. Considering that the scattering coefficients have simple zeros, the matrix RH problem, reconstruction formulas, and corresponding trace formulas are also derived. Further, the N-soliton solutions in the reflectionless case are obtained explicitly in the form of determinants. The propagation characteristics of one-soliton solutions and the interaction properties of two-soliton solutions are discussed. In particular, the asymptotic expressions of two-soliton solutions as |t|→∞ are obtained, which show that the velocities and amplitudes of the asymptotic solitons do not change before and after interaction, except for position shifts. In addition, three types of bound states for two-soliton solutions are presented under certain parametric conditions.
Funding: Supported by the Basic Product Innovation Plan for Vehicle Power Scientific Research Project (Grant No. JCCPCX201704).
Abstract: This research presents an advanced study on the modeling and stability analysis of electro-hydraulic control modules used in intelligent chassis systems. Firstly, a comprehensive nonlinear mathematical model of the electro-hydraulic power-shift system is developed, incorporating pipeline characteristics through impedance analysis and examining coupling effects between the pilot solenoid valve, main valve, and pipeline. Then, the model's accuracy is validated through experimental testing, demonstrating high precision and minimal model errors. A comparative analysis between simulation data (both with and without pipeline characteristics) and experimental results reveals that the model considering pipeline parameters aligns more closely with experimental data, highlighting its superior accuracy. The research further explores the influence of key factors on system stability, including damping coefficient, feedback cavity orifice diameter, spring stiffness, pipeline length, and pipeline diameter. Significant findings include the critical impact of damping coefficient, orifice diameter, and pipeline length on stability, while spring stiffness has a minimal effect. These findings provide valuable insights for optimizing electro-hydraulic control modules in intelligent chassis systems, with practical implications for automotive and construction machinery applications.
Abstract: The efficient market hypothesis in traditional financial theory struggles to explain the short-term irrational fluctuations in the A-share market, where investor sentiment fluctuations often serve as the core driver of abnormal stock price movements. Traditional sentiment measurement methods suffer from limitations such as lag, high misjudgment rates, and an inability to distinguish confounding factors. To more accurately explore the dynamic correlation between investor sentiment and stock price fluctuations, this paper proposes a sentiment analysis framework based on large language models (LLMs). By constructing continuous sentiment scoring factors and integrating them with a long short-term memory (LSTM) deep learning model, we analyze the correlation between investor sentiment and stock price fluctuations. Empirical results indicate that sentiment factors based on large language models can generate an annualized excess return of 9.3% in the CSI 500 index domain. The LSTM stock price prediction model incorporating sentiment features achieves a mean absolute percentage error (MAPE) as low as 2.72%, significantly outperforming traditional models. Through this analysis, we aim to provide quantitative references for optimizing investment decisions and preventing market risks.
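The MAPE figure quoted above is a standard forecast-error metric. A minimal sketch of its computation, on invented price and prediction values (not the study's data):

```python
# Sketch: mean absolute percentage error, the metric used to score the
# LSTM price predictions (illustrative numbers only).

def mape(actual, predicted):
    """MAPE in percent; actual values must be nonzero."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

prices = [10.0, 10.5, 11.0, 10.8]   # hypothetical closing prices
preds = [10.2, 10.4, 11.3, 10.6]    # hypothetical model outputs
print(f"MAPE = {mape(prices, preds):.2f}%")
```

Because MAPE normalizes each error by the actual value, it is comparable across stocks with very different price levels, which is one reason it is a common headline metric for price-prediction models.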
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42404017, 42122025, and 42174030).
Abstract: GNSS time series analysis provides an effective method for research on the Earth's surface deformation, and it can be divided into two parts: deterministic models and stochastic models. The former can be described by several parameters, such as polynomial terms, periodic terms, offsets, and post-seismic models. The latter contains stochastic noises, whose detection can be affected by the former parameters. If not enough parameters are assumed, modeling errors will occur and adversely affect the analysis results. In this study, we propose a processing strategy in which the commonly used first-order polynomial term can be replaced with different orders to better fit the GNSS time series of the Crustal Movement Observation Network of China (CMONOC) stations. Initially, we use the Bayesian Information Criterion (BIC) to identify the best order within the range of 1-4 during the fitting process using the white noise plus power-law noise (WN+PL) model. Then, we compare the first order and the optimal order in terms of their effect on the deterministic models of the GNSS time series, including the velocity and its uncertainty, and the amplitudes and initial phases of the annual signals. The results indicate that the first-order polynomial is not always sufficient for the GNSS time series. The root mean square (RMS) reduction rates of almost all station components are positive, which means the new fitting with the optimal-order polynomial helps to reduce the RMS of the residual series. Most stations maintain the velocity difference (VD) within ±1 mm/yr, with percentages of 85.6%, 81.9%, and 63.4% in the North, East, and Up components, respectively. As for annual signals, the numbers of stations whose amplitude difference (AD) remains within ±0.2 mm are 242, 239, and 200 in the three components, accounting for 99.6%, 98.4%, and 82.3%, respectively. This finding reminds us that detection of the optimal-order polynomial is necessary when we aim to acquire an accurate understanding of crustal movement features.
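The BIC-based order selection above can be illustrated in miniature: fit polynomials of orders 1-4 to a series and keep the order with the lowest BIC. This sketch uses ordinary least squares on a synthetic series with a quadratic trend (the paper's fits additionally use the WN+PL noise model, which is omitted here):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def poly_bic(t, y, order):
    """Least-squares polynomial fit of a given order via normal equations,
    scored with the Gaussian BIC: n*ln(RSS/n) + k*ln(n)."""
    m = order + 1
    X = [[ti ** j for j in range(m)] for ti in t]
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(m)] for a in range(m)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(t))) for a in range(m)]
    coef = solve(XtX, Xty)
    rss = sum((yi - sum(c * ti ** j for j, c in enumerate(coef))) ** 2
              for ti, yi in zip(t, y))
    n = len(t)
    return n * math.log(rss / n) + m * math.log(n)

# Synthetic series: quadratic trend plus a high-frequency perturbation that
# no low-order polynomial can absorb, so BIC should favor order 2.
t = [i / 10 for i in range(40)]
y = [1.0 + 0.5 * ti + 0.8 * ti ** 2 + 0.05 * math.sin(25 * ti) for ti in t]
best = min(range(1, 5), key=lambda k: poly_bic(t, y, k))
print("BIC-selected order:", best)
```

The ln(n) penalty per extra coefficient is what stops BIC from always preferring the highest order, mirroring the paper's restriction of the search to orders 1-4.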
Funding: jointly funded by the National Natural Science Foundation of China (Grant No. 42161024) and the Central Financial Forestry and Grassland Science and Technology Extension Demonstration Project (2025) (Grant No. Xin[2025]TG 09).
Abstract: Accurate quantification of carbon and water flux dynamics in arid and semi-arid ecosystems is a critical scientific challenge for regional carbon neutrality assessments and sustainable water resource management. In this study, we developed a multi-flux global sensitivity discriminant index (D_sen) by integrating the Biome-BGCMuSo model with eddy covariance flux observations. This index was combined with a Bayesian optimization algorithm to conduct parameter optimization. The results demonstrated that: (1) Sensitivity analysis identified 13 highly sensitive parameters affecting carbon and water fluxes. Among these, the canopy light extinction coefficient (k) and the fraction of leaf N in Rubisco (FLNR) exhibited significantly higher sensitivity to carbon fluxes (GPP, NEE, Reco; D_sen > 10%) compared to the water flux (ET). This highlights the strong dependence of carbon cycle simulations on vegetation physiological parameters. (2) The Bayesian optimization framework efficiently converged 30 parameter spaces within 50 iterations, markedly improving carbon flux simulation accuracy. The Kling-Gupta efficiency (KGE) values for Gross Primary Production (GPP), Net Ecosystem Exchange (NEE), and Total Respiration (Reco) increased by 44.94%, 69.23%, and 123%, respectively. The optimization prioritized highly sensitive parameters, underscoring the necessity of parameter sensitivity stratification. (3) The optimized model effectively reproduced carbon sink characteristics in mountain meadows during the growing season (cumulative NEE = -375 g C/m^2). It revealed synergistic carbon-water flux interactions governed by coupled photosynthesis-stomatal pathways and identified substrate supply limitations on heterotrophic respiration. This study proposes a novel multi-flux sensitivity index and an efficient optimization framework, elucidating the coupling mechanisms between vegetation physiological regulation (k, FLNR) and environmental stressors (VPD, SWD) in carbon-water cycles. The methodology offers a practical approach for arid ecosystem model optimization and provides theoretical insights for grassland management through canopy structure regulation and water-use efficiency enhancement.
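The Kling-Gupta efficiency used to score the flux simulations above decomposes model skill into correlation, variability, and bias components. A minimal sketch of its standard 2009 form, on invented observation values:

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the Pearson correlation, alpha the ratio of standard
    deviations (sim/obs), and beta the ratio of means (sim/obs)."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    cov = sum((a - ms) * (b - mo) for a, b in zip(sim, obs)) / n
    r = cov / (ss * so)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 2.5, 1.5]       # hypothetical observed fluxes
print(kge(obs, obs))                   # a perfect simulation scores 1
print(kge([x * 0.8 for x in obs], obs))  # a uniformly biased one scores less
```

A KGE of 1 indicates a perfect match, so the paper's percentage increases in KGE reflect how far the Bayesian optimization pushed each flux toward that ceiling.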
Funding: Heilongjiang Provincial Natural Science Foundation of China (LH2021F009).
Abstract: Anti-jamming performance evaluation has recently received significant attention. For Link-16, anti-jamming performance evaluation and the selection of optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, which combines grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of establishing the anti-jamming performance evaluation indicator system of Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weight. Secondly, the qualitative-quantitative concept transformation model, i.e., the cloud model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and identifies the best anti-jamming technology. Finally, simulation results show that the proposed evaluation model achieves feasible and practical evaluation, opening up a novel way for research on anti-jamming performance evaluation of Link-16.
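The GRA step described above can be sketched with Deng's classical grey relational grade: each candidate scheme's indicator series is compared against a reference (ideal) series, and the scheme closest to the reference gets the highest grade. The indicator values below are invented for illustration, not the paper's simulation data:

```python
def grey_relational_grades(reference, alternatives, rho=0.5):
    """Deng's grey relational grade of each alternative series against a
    reference series; rho is the conventional distinguishing coefficient."""
    deltas = [[abs(r - a) for r, a in zip(reference, alt)]
              for alt in alternatives]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))  # equal indicator weights
    return grades

# Normalized indicators: reference = ideal performance; two hypothetical
# anti-jamming schemes scored on three indicators.
ref = [1.0, 1.0, 1.0]
g = grey_relational_grades(ref, [[0.9, 0.8, 0.95], [0.6, 0.5, 0.7]])
print(g)
```

In the paper's full method, the equal weights here would be replaced by the AHP/EWM combined weights before ranking the schemes.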
Funding: Supported by the 2022 Chinese Medicine Scientific Research Project of Hebei Administration of Traditional Chinese Medicine, No. 2022157, and the 2025 Annual Scientific Research Project of Higher Education Institutions in Hebei Province, No. QN2025654.
Abstract: BACKGROUND: Neck pain, a primary symptom of cervical spondylosis, affects patients' physical and mental health, reducing their quality of life. Pain and emotional state interact; however, their longitudinal interrelationship remains unclear. In this study, we applied a dual-trajectory model to assess how neck pain and emotional state evolve together over time and how clinical interventions, particularly acupuncture, influence these trajectories. AIM: To investigate the longitudinal relationship between neck pain and emotional state in patients with cervical spondylosis. METHODS: This prospective cohort study included 472 patients with cervical spondylosis from eight Chinese hospitals. Participants received acupuncture or medication and were followed up at baseline and at 1, 2, 4, 6, and 8 weeks. Neck pain and emotional distress were assessed using the Northwick Park Neck Pain Questionnaire (NPQ) and the affective subscale of the Short-Form McGill Pain Questionnaire (SF-MPQ), respectively. Group-based trajectory models and dual-trajectory analysis were used to identify and correlate pain-emotion trajectories. Multivariate logistic regression identified predictors of group membership. RESULTS: Three trajectory groups were identified for NPQ and SF-MPQ scores (low, medium, and high). Higher NPQ trajectory was associated with older age (OR = 1.058, P < 0.001) and was significantly reduced by acupuncture (OR = 0.382, P < 0.001). Similarly, acupuncture lowered the odds of high SF-MPQ trajectory membership (OR = 0.336, P < 0.001), while age increased it (OR = 1.037, P < 0.001). Dual-trajectory analysis revealed bidirectional associations: 69.1% of patients with low NPQ had low SF-MPQ scores, and 42.6% of patients with high SF-MPQ also had high NPQ scores. Gender was a predictor for the medium SF-MPQ trajectory (OR = 1.629, P = 0.094). Occupation and education levels differed significantly across the trajectory groups (P < 0.05). CONCLUSION: Over time, neck pain and emotional distress are closely associated in patients with cervical spondylosis. Acupuncture significantly alleviates both outcomes, while age is a risk factor. Integrated approaches to pain and emotional management are encouraged.
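The odds ratios reported above come from logistic regression, where OR = exp(beta) for a one-unit change in the predictor. A small sketch of how to read them, using the abstract's reported values only to illustrate the transformation:

```python
import math

def odds_ratio(beta):
    """Odds ratio for a one-unit predictor increase in logistic regression."""
    return math.exp(beta)

# The reported acupuncture OR of 0.382 corresponds to a negative coefficient
# (protective effect): OR < 1 means lower odds of the high-pain trajectory.
beta_acupuncture = math.log(0.382)
print(f"beta = {beta_acupuncture:.3f}, OR = {odds_ratio(beta_acupuncture):.3f}")
# For age, OR = 1.058 means the odds multiply by 1.058 per additional year.
```

Because the transformation is exponential, ORs compound multiplicatively across units of the predictor, e.g. ten extra years of age scale the odds by 1.058**10.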
Funding: State University Research Excellence (SURE), SERB, GOI, Grant/Award Number: SUR/2022/001557.
Abstract: Freshwater, essential for civilization, faces risk from untreated effluents discharged by industries, agriculture, urban areas, and other sources. Increasing demand for and abstraction of freshwater deteriorate the pollution scenario further. Hence, water quality analysis (WQA) is an important task for researchers and policymakers seeking to maintain sustainability and public health. This study aims to gather and discuss the methods used for WQA by researchers, focusing on their advantages and limitations. Simultaneously, this study compares different WQA methods, discussing their trends and future directions. Publications from the past decade on WQA are reviewed, and insights are explored to aggregate them into particular categories. Three major approaches are recognized, namely water quality indexing, water quality modeling (WQM), and artificial intelligence-based WQM. The different methodologies adopted to execute these three approaches are presented in this study, leading to a comparative discussion. Researchers have used statistical operations and soft computing techniques to combat subjectivity error in indexing. To achieve better results, WQMs are being modified to incorporate the physical processes influencing water quality more robustly. The utilization of artificial intelligence was primarily restricted to conventional networks, but in the last five years, applications of deep learning have increased rapidly and exhibited good results with the hybridization of feature extraction and time-series modeling. Overall, this study is a valuable resource for researchers dedicated to WQA.