Activation pruning reduces neural network complexity by eliminating low-importance neuron activations, yet identifying the critical pruning threshold—beyond which accuracy rapidly deteriorates—remains computationally expensive and typically requires exhaustive search. We introduce a thermodynamics-inspired framework that treats activation distributions as energy-filtered physical systems and employs the free energy of activations as a principled evaluation metric. Phase-transition-like phenomena in the free-energy profile—such as extrema, inflection points, and curvature changes—yield reliable estimates of the critical pruning threshold, providing a theoretically grounded means of predicting sharp accuracy degradation. To further enhance efficiency, we propose a renormalized free energy technique that approximates full-evaluation free energy using only the activation distribution of the unpruned network. This eliminates repeated forward passes, dramatically reducing computational overhead and achieving speedups of up to 550× for MLPs. Extensive experiments across diverse vision architectures (MLP, CNN, ResNet, MobileNet, Vision Transformer) and text models (LSTM, BERT, ELECTRA, T5, GPT-2) on multiple datasets validate the generality, robustness, and computational efficiency of our approach. Overall, this work establishes a theoretically grounded and practically effective framework for activation pruning, bridging the gap between analytical understanding and efficient deployment of sparse neural networks.
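The abstract does not give the exact free-energy definition, so the following is only an illustrative sketch: activation magnitudes are treated as energy levels and a log-partition-style free energy F = -T log Σ exp(-E/T) is evaluated along a sweep of pruning thresholds, where extrema or curvature changes in the resulting profile would mark the critical threshold.

```python
import numpy as np

def free_energy(activations, temperature=1.0):
    """Free-energy-style score for an activation distribution.

    Treats activation magnitudes as energy levels and returns
    F = -T * log(sum(exp(-E / T))) (log-partition form). The paper's
    exact definition is not given in the abstract; this is a stand-in.
    """
    energies = np.abs(activations).ravel()
    # log-sum-exp for numerical stability
    m = (-energies / temperature).max()
    log_z = m + np.log(np.exp(-energies / temperature - m).sum())
    return -temperature * log_z

def prune_by_threshold(activations, threshold):
    """Zero out activations whose magnitude falls below the threshold."""
    return np.where(np.abs(activations) >= threshold, activations, 0.0)

# Sweep thresholds and inspect the free-energy profile for sharp changes
acts = np.random.default_rng(0).normal(size=1000)
profile = [free_energy(prune_by_threshold(acts, t)) for t in np.linspace(0, 2, 20)]
```

In the paper's setting this profile would be scanned for extrema or inflection points rather than re-running accuracy evaluations at every threshold.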
This study investigates in-station pressure drop mechanisms in a shale gas gathering system, providing a quantitative basis for flow system optimization. Computational fluid dynamics (CFD) simulations, based on field-measured parameters from a representative case (a shale gas platform in Sichuan, China), are conducted to analyze the flow characteristics of specific fittings and manifolds, and to quantify fitting resistance coefficients and manifold inlet interference. The resulting coefficients are integrated into a full-station gathering network model in PipeSim, which, combined with production data, enables evaluation of pressure losses and identification of equivalent pipeline blockages. The results indicate that the resistance coefficients, valid only for fittings under the studied field-specific geometries, are 0.21 for 90° elbows in the fully open position, 0.16 for gate valve passages in the fully open position, and 2.3 for globe valve passages. Manifold interference decreases with lower high-pressure inlet values, whereas inlets farther from the high-pressure side experience stronger disturbances. Notably, significant discrepancies between simulated and measured pressure drops reveal partial blockages, corresponding to effective diameter reductions of 65 mm, 38 mm, 44 mm, 38 mm, and 28 mm for Wells 1#, 3#, 5#, and 6#, respectively.
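The fitting resistance coefficients reported above plug into the standard minor-loss relation Δp = K·ρv²/2. A small sketch (the fluid density and velocity below are assumed for illustration, not taken from the study):

```python
def fitting_pressure_drop(k, density, velocity):
    """Minor (local) pressure loss across a fitting: dp = K * rho * v^2 / 2.

    k        -- dimensionless resistance coefficient of the fitting
    density  -- fluid density [kg/m^3]
    velocity -- mean flow velocity in the pipe [m/s]
    Returns the pressure drop in Pa.
    """
    return k * density * velocity ** 2 / 2.0

# Coefficients reported in the study (valid only for the studied geometries)
K_ELBOW_90 = 0.21     # fully open 90-degree elbow
K_GATE_VALVE = 0.16   # fully open gate valve passage
K_GLOBE_VALVE = 2.3   # globe valve passage

# Illustrative operating point (assumed gas density and velocity)
dp = fitting_pressure_drop(K_GLOBE_VALVE, density=55.0, velocity=8.0)
```

The much higher K of the globe valve explains why it dominates the in-station losses relative to elbows and gate valves at the same flow velocity.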
Under sustained strong stochastic impact loads, floating-supported friction plates are susceptible to fatigue cracks that propagate along the rim. The nonlinearity and randomness introduced when cracked teeth participate in the impacts significantly influence the service life and reliability of the transmission system. In this paper, an improved stiffness excitation modeling method is developed for friction plate teeth with rim cracks. It overcomes the limitations of traditional approaches, which fail to accurately assess narrow-band, large-diameter friction plate teeth with rim cracks because of constraints imposed by boundary conditions. An original dynamic impact model for the floating-supported friction plate and inner hub system is then proposed, incorporating the effects of bending-torsional-axial-tilting coupled motions on tooth mesh excitations and dynamic responses. This model addresses the limitations of conventional models that consider only bending-torsion coupling, thereby providing a more comprehensive representation of the system's multi-dimensional dynamic behavior. The effects of crack propagation depth and the number of cracked teeth on the stochastic impact characteristics and vibration responses of the system are investigated. Furthermore, finite element simulations and experimental tests are conducted to validate the cracked-tooth stiffness excitations and dynamic impact responses, respectively. The proposed model is expected to provide both a theoretical foundation and practical guidance for fault diagnosis and reliability assessment of clutch friction plates.
The northern segment of the North-South Seismic Belt is characterized by intense crustal deformation, well-developed active tectonics, and frequent strong earthquakes. Conducting a Probabilistic Seismic Hazard Analysis (PSHA) for this region is therefore important for supporting seismic fortification in major engineering projects and for formulating disaster prevention and mitigation policies. In this study, a composite seismic source model was constructed by integrating data on historical earthquakes, active faults, and paleoseismicity. A logic tree framework was employed to quantify epistemic uncertainties, enabling a systematic seismic hazard assessment of the region. To more accurately characterize the spatial heterogeneity of seismic activity, improvements were made to both the Circular Spatial Smoothing Model (CSSM) with a fixed radius and the Adaptive Spatial Smoothing Model (ASSM), with full consideration given to the spatiotemporal completeness of historical earthquake magnitudes. For the CSSM, in scenarios involving small earthquake-catalog sample sizes, the cross-validation method proposed in this study determined the optimal correlation distance more robustly than the maximum likelihood method. Performance evaluation indicates that while both models effectively characterize seismic activity, the ASSM exhibits superior overall predictive performance because it adaptively adjusts the smoothing radius according to seismic density. Significant discrepancies were observed in the Peak Ground Acceleration (PGA) results calculated for a 10% probability of exceedance in 50 years across different combinations of seismic source models. The single spatially smoothed point-source model yielded a maximum PGA of approximately 0.52 g, with high-value areas concentrated near historical epicenters, thereby significantly underestimating the hazard associated with major fault zones. When combined with the simple fault-source model, the maximum PGA increased to 0.8 g, with high-value zones distributed along faults; however, the hazard remained underestimated for faults with low slip rates that are nevertheless approaching their recurrence cycles. After introducing the time-dependent characteristic fault-source model, local PGA values for faults in the middle-to-late stages of their recurrence cycles increased by a factor of 2 to 7 relative to the single model. These results demonstrate that the characteristic fault-source model reasonably captures the time-dependence of large-earthquake recurrence, providing a more accurate assessment of imminent seismic risks. By comprehensively applying the improved spatially smoothed point-source model, the simple fault-source model, and the characteristic fault-source model, the following faults were identified as having high seismic hazard: the Huangxianggou, Zhangxian, and Tianshui segments of the Xiqinling northern edge fault; the Maqin-Maqu segment of the Dongkunlun fault; the Longriqu fault; the Maoergai fault; the Elashan fault; the Riyueshan fault; the eastern segment of the Lenglongling fault; the Maxianshan segment of the Maxianshan northern margin fault; and the Maomaoshan-Jinqianghe segment of the Laohushan-Maomaoshan fault. As these faults lie within seismic gaps or are approaching the recurrence periods of large earthquakes, they should be prioritized for seismic monitoring and for disaster prevention and mitigation efforts.
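The abstract does not specify the smoothing kernel, so the sketch below assumes the common Gaussian-kernel formulation: the CSSM corresponds to a single fixed radius applied to every event, while the ASSM assigns each event its own radius, here taken as the distance to its k-th nearest neighboring event.

```python
import numpy as np

def smoothed_rate(grid_xy, quake_xy, radii):
    """2-D Gaussian smoothing of an earthquake catalog onto a grid.

    grid_xy  -- (G, 2) grid-cell coordinates [km]
    quake_xy -- (N, 2) epicenter coordinates [km]
    radii    -- scalar (fixed-radius CSSM) or (N,) per-event radii (ASSM)
    Returns the relative activity rate at each grid cell.
    """
    radii = np.broadcast_to(np.asarray(radii, float), (len(quake_xy),))
    d2 = ((grid_xy[:, None, :] - quake_xy[None, :, :]) ** 2).sum(-1)
    kernels = np.exp(-d2 / radii ** 2) / (np.pi * radii ** 2)
    return kernels.sum(axis=1)

def adaptive_radii(quake_xy, k=2):
    """ASSM-style radii: distance from each event to its k-th nearest event."""
    d = np.sqrt(((quake_xy[:, None] - quake_xy[None, :]) ** 2).sum(-1))
    return np.sort(d, axis=1)[:, k]
```

The adaptive radius shrinks inside dense clusters and grows in sparse regions, which is exactly the density-dependent behavior the study credits for the ASSM's better predictive performance.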
Cascading failures pose a serious threat to the survivability of underwater unmanned swarm networks (UUSNs), significantly limiting their service ability in collaborative missions such as military reconnaissance and environmental monitoring. Existing failure models focus primarily on power grids and traffic systems and do not address the unique challenges of weak-communication UUSNs, in which cascading failures are a complex, dynamic process driven by the coupling of unstable acoustic channels, passive node drift, adversarial attacks, and network heterogeneity. To address these challenges, a directed weighted graph model of UUSNs is first developed, in which node positions are updated according to ocean-current-driven drift and link weights reflect the probability of successful acoustic transmission. Building on this graph model, a cascading failure model is proposed that integrates a normal-failure-recovery state-cycle mechanism, multiple attack strategies, and routing-based load redistribution. Finally, under a five-level connectivity UUSN scheme, simulations are conducted to analyze how dynamic topology, network load, node recovery delay, and attack modes jointly affect network survivability. The main findings are: (1) moderate node drift can improve survivability by activating weak links; (2) energy-based routing (BER) outperforms depth-based routing (BDR) in harsh conditions; (3) node self-recovery time is critical to network survivability; (4) traditional degree-based critical-node metrics are inadequate for weak-communication UUSNs. These results provide a theoretical foundation for designing robust survivability mechanisms in weak-communication UUSNs.
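As a hedged illustration of the routing-based load-redistribution idea only (the paper's actual model also includes acoustic link weights, node drift, and recovery cycles, all omitted here), a minimal cascade over a directed graph might look like:

```python
def cascade(capacity, load, edges, failed):
    """One illustrative cascading-failure sweep with load redistribution.

    capacity, load -- dicts mapping node -> float
    edges          -- dict mapping node -> list of downstream neighbors
    failed         -- set of initially failed (e.g. attacked) nodes
    When a node fails, its load is split evenly among surviving downstream
    neighbors (a stand-in for the paper's routing-based redistribution);
    any neighbor pushed past its capacity fails in turn.
    Returns the final set of failed nodes.
    """
    failed = set(failed)
    frontier = list(failed)
    while frontier:
        node = frontier.pop()
        targets = [n for n in edges.get(node, []) if n not in failed]
        if targets:
            share = load[node] / len(targets)
            for n in targets:
                load[n] += share
                if load[n] > capacity[n]:
                    failed.add(n)
                    frontier.append(n)
        load[node] = 0.0
    return failed
```

Even this toy version reproduces the qualitative effect studied in the paper: a single well-placed initial failure can take down every under-provisioned node downstream of it.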
Owing to intensified globalization and informatization, the structures of the urban scale hierarchy and of urban networks between cities have become increasingly intertwined, producing different spatial effects. This paper therefore analyzes the spatial interaction between the urban scale hierarchy and urban networks in China from 2019 to 2023, drawing on Baidu migration data and employing a spatial simultaneous equation model. The results reveal a significant positive spatial correlation between cities with higher hierarchy and those with greater network centrality. Within a static framework, we identify a positive interaction between urban scale hierarchy and urban network centrality, while their spatial cross-effects manifest as negative neighborhood interactions based on geographical distance and positive cross-scale interactions shaped by network connections. Within a dynamic framework, changes in urban scale hierarchy and urban networks are mutually reinforcing, thereby widening disparities within the urban hierarchy. Furthermore, an increase in a city's network centrality had a dampening effect on the population growth of neighboring cities and network-connected cities. This study enhances understanding of the spatial organisation of urban systems and offers insights for coordinated regional development.
Deep rock engineering is affected by coupled thermo-hydro-mechanical (THM)-dynamic fields, necessitating elucidation of the dynamic mechanical behavior and failure mechanisms involved. This study used a Multi-field Coupled Controlled Split Hopkinson Pressure Bar (MCC-SHPB) system to elucidate the cross-scale dynamic responses of rocks and the boundaries between failure modes under THM coupling. Impact tests were conducted on green sandstone under coupled conditions of temperature (25-80 °C), confining pressure (0-15 MPa), and seepage water pressure (0-15 MPa). Scanning electron microscopy (SEM) microstructural characterization and COMSOL Multiphysics numerical simulations were conducted, and a dynamic constitutive theoretical framework and failure-prediction methodology were established. We investigated the impact toughness index (I_t), dynamic modulus (E_d), dynamic triaxial compressive strength (TCS_d), fragmentation degree (W), and failure modes of green sandstone under thermo-confining-pressure-seepage-impact loading conditions. The key findings reveal that I_t reflects different energy regulation mechanisms across confining pressure regimes: thermal-microcrack interactions dominate at low pressure, while energy absorption prevails at high pressure. A triphasic dynamic modulus model captures stiffness evolution under energy-driven conditions, revealing cross-scale crack nucleation-propagation and fragment reorganization. The TCS_d inflection point signifies a shift in energy dissipation, causing nonlinear degradation of the skeleton's bearing capacity. A critical criterion based on W was established to distinguish between the two failure modes and predict the initiation of unstable failure. Numerical simulations elucidated the effects of inertia-dominated crack propagation and stress wave interference, validating the critical criterion and the predictive accuracy of the theoretical model during cross-scale failure. This study provides a theoretical foundation for assessing the dynamic stability of rock masses subjected to multi-field coupling during deep resource exploitation.
Anti-jamming performance evaluation has recently received significant attention. For Link-16, evaluating anti-jamming performance and selecting the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed that combines grey relational analysis (GRA) and the cloud model to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of an anti-jamming performance evaluation indicator system for Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Secondly, the cloud model, a qualitative-quantitative concept transformation model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and identifies the best anti-jamming technology. Finally, simulation results show that the proposed evaluation model achieves feasible and practical evaluation, opening a novel avenue for research on anti-jamming performance evaluation of Link-16.
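The abstract does not spell out the GRA formulas, so here is the conventional grey-relational-grade computation (distinguishing coefficient ρ = 0.5, indicators assumed pre-normalized), which would rank each anti-jamming scheme by its closeness to an ideal reference sequence:

```python
import numpy as np

def grey_relational_grades(reference, alternatives, rho=0.5):
    """Grey relational analysis (GRA) against a reference sequence.

    reference    -- (m,) ideal indicator values (after normalization)
    alternatives -- (n, m) normalized indicator values of n schemes
    rho          -- distinguishing coefficient, conventionally 0.5
    Returns the grey relational grade of each alternative; a higher grade
    means the scheme is closer to the ideal anti-jamming performance.
    """
    diff = np.abs(alternatives - reference)
    d_min, d_max = diff.min(), diff.max()
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)
    return coeff.mean(axis=1)
```

In the paper's full method, the per-indicator coefficients would be weighted by the AHP-EWM combined weights before averaging; a plain mean is used here for brevity.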
Shotcrete is a common solution for shallow sliding. It works by forming a high-strength protective layer that cements the loose soil particles on the slope surface to prevent shallow sliding. However, the solidification time of conventional cement paste is long when shotcrete is used to treat cohesionless-soil landslides. We therefore propose reinforcing slopes with polyurethane-solidified soil (i.e., a mixture of polyurethane and sand). Model tests and finite element analysis were carried out to study the effectiveness of the proposed method for the emergency treatment of cohesionless-soil landslides. Surcharge loading on the crest of the slope was applied step by step until a landslide was triggered, so as to test and compare the stability and bearing capacity of slope models under different conditions. The simulated slope displacements were close to the measured results, and the simulated slope deformation characteristics agreed well with the observed phenomena, which verifies the accuracy of the numerical method. Under surcharge loading on the crest, the unreinforced slope slid when the loading exceeded 30 kPa, with a failure mode of local instability and collapse in the shallow layer at the top of the slope. The reinforced slope remained stable even when the surcharge loading reached 48 kPa, and its displacement was reduced by more than 95%. Overall, this study verifies the effectiveness of polyurethane in the emergency treatment of cohesionless-soil landslides, which should have broad application prospects in geological disasters concerning the safety of people's lives.
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on their specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
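The rank-sensitivity findings are easiest to see in a minimal LoRA layer: the adapter adds r·(d_in + d_out) trainable parameters, so halving the rank r of a wide MLP projection removes far more parameters than halving it on a narrow attention projection. A sketch (the initialization and alpha/r scaling follow common LoRA practice, not necessarily the study's exact setup):

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA adapter on a frozen linear layer: y = xW + s * (xA)B.

    Only A (d_in x r) and B (r x d_out) are trainable, adding
    r * (d_in + d_out) parameters on top of the frozen base weights.
    """
    def __init__(self, weight, rank, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        d_in, d_out = weight.shape
        self.weight = weight                          # frozen base weights
        self.A = rng.normal(0, 0.01, (d_in, rank))    # trainable down-projection
        self.B = np.zeros((rank, d_out))              # trainable up-projection (zero init)
        self.scale = alpha / rank                     # common LoRA scaling

    def __call__(self, x):
        return x @ self.weight + self.scale * (x @ self.A) @ self.B

    def trainable_params(self):
        return self.A.size + self.B.size
```

With B initialized to zero the adapter starts as an exact no-op, so the base model's behavior is unchanged before fine-tuning begins.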
Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics, so as the climate changes, assessing how weather affects crop performance is essential. Among the available techniques, crop models are the most effective and widely used tools for predicting yields. Results: This study compares statistical and machine learning models in their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset (1990-2023) of yield and weather factors. Artificial neural networks (ANNs) performed best, with yield deviations within the acceptable ±10% range during both the vegetative stage (F1) and mid stage (F2). Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. In particular, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts for which yield was predicted. These differences highlight the district-specific interactions of weather factors in cotton yield formation, reflecting the individual response of each weather factor under different soils and management conditions across the major cotton-growing districts of Karnataka. Conclusions: Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield because of their ability to capture the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
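The three evaluation metrics named above have standard definitions, sketched here (EF is the Nash-Sutcliffe modelling efficiency; the acceptance limits themselves are study-specific):

```python
import numpy as np

def evaluation_metrics(observed, predicted):
    """Yield-forecast evaluation metrics.

    RMSE  -- root mean square error, in yield units
    nRMSE -- RMSE normalized by the observed mean, as a percentage
    EF    -- modelling efficiency (Nash-Sutcliffe); 1 is a perfect fit,
             values near 0 mean the model is no better than the mean.
    """
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    nrmse = 100.0 * rmse / observed.mean()
    ef = 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
    return rmse, nrmse, ef
```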
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge it currently faces is how to analyze the large amount of gene expression data generated. To address this challenge, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1,176 genes from a white mouse gene expression dataset measured under two experimental conditions were chosen: pneumococcal infection and no infection. After preprocessing the gene chip information, the data were imported into the mixed-effects model, preliminary results were calculated, and permutation tests were performed, with GSEA used to biologically validate the preliminary results. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model performed significantly better, its predictions correlating with RO observations at r = 0.87, versus r = 0.53 for the empirical model. This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest correlation yet reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it covers only one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs on ChatGPT research quality evaluations.
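The linear-regression conversion from averaged model scores to the human rating scale is a simple least-squares calibration; a sketch of such a calibration:

```python
import numpy as np

def calibrate_scores(model_scores, human_scores):
    """Fit human_score ~ a * model_score + b by ordinary least squares.

    Maps averaged LLM quality scores onto the human rating scale, as the
    study does; returns the slope, the intercept, and a predict function
    for converting new model scores.
    """
    a, b = np.polyfit(model_scores, human_scores, 1)
    return a, b, lambda s: a * np.asarray(s) + b
```

Fitted on paired (model score, human score) data, the returned predictor gives calibrated human-scale estimates for unseen articles.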
Wireless technologies and the Internet of Things (IoT) are being extensively utilized for advanced development in traditional communication systems. This evolution lowers the cost of deploying sensors at scale, changing the way devices interact and communicate in dynamic and uncertain situations. Such a constantly evolving environment poses enormous challenges to preserving a secure and lightweight IoT system, motivating the design of effective and trusted routing to support sustainable smart cities. This study proposes a Genetic Algorithm sentiment-enhanced secured optimization model, which combines big data analytics and analysis rules to evaluate user feedback. Sentiment analysis is used to assess the perception of network performance, allowing the classification of device behavior as positive, neutral, or negative. By integrating sentiment-driven insights, the IoT network adjusts its system configuration to enhance performance in terms of latency, reliability, fault tolerance, and sentiment score. According to this analysis, the proposed model categorizes device behavior as positive, neutral, or negative, facilitating real-time monitoring for crucial applications. Experimental results revealed a significant improvement in threat prevention and network efficiency under the proposed model, demonstrating its resilience for real-time IoT applications.
With the rapid development of generative artificial intelligence technologies, represented by large language models, university-level computer science education is undergoing a critical transition from knowledge-based instruction to competency-oriented teaching. A postgraduate student competency evaluation model can serve as a framework to organize and guide both teaching and research activities at the postgraduate level, and a number of relevant research efforts have already been conducted in this area. Graduate education plays a vital role not only as a continuation and enhancement of undergraduate education but also as essential preparation for future research. Analyzing the acceptance of competency evaluation models means assessing how various stakeholders perceive the importance of the model's components. Investigating the degree of acceptance among diverse groups, such as current undergraduate students, current postgraduate students, graduates with less than three years of work experience, and those with more than three years of work experience, can offer valuable insights for improving and optimizing postgraduate education and training practices.
The dynamic, heterogeneous nature of Edge computing in the Internet of Things (Edge-IoT) and Industrial IoT (IIoT) networks brings unique and evolving cybersecurity challenges. This study maps cyber threats in Edge-IoT/IIoT environments to the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework by MITRE and introduces a lightweight, data-driven scoring model that enables rapid identification and prioritization of attacks. Inspired by the Factor Analysis of Information Risk model, our proposed scoring model integrates four key metrics: Common Vulnerability Scoring System (CVSS)-based severity scoring, Cyber Kill Chain-based difficulty estimation, deep-neural-network-driven detection scoring, and frequency analysis based on dataset prevalence. By aggregating these indicators, the model generates comprehensive risk profiles, facilitating actionable prioritization of threats. The robustness and stability of the scoring model are validated through non-parametric correlation analysis using Spearman's and Kendall's rank correlation coefficients, demonstrating consistent performance across diverse scenarios. The approach culminates in a prioritized attack ranking that provides actionable guidance for risk mitigation and resource allocation in Edge-IoT/IIoT security operations. By leveraging real-world data to align MITRE ATT&CK techniques with CVSS metrics, the framework offers a standardized and practically applicable solution for consistent threat assessment in operational settings. The proposed lightweight scoring model delivers rapid and reliable results under dynamic cyber conditions, facilitating timely identification of attack scenarios and prioritization of response strategies. Our systematic integration of established taxonomies with data-driven indicators strengthens practical risk management and supports strategic planning in next-generation IoT deployments. Ultimately, this work advances adaptive threat modeling for Edge/IIoT ecosystems and establishes a robust foundation for evidence-based prioritization in emerging cyber-physical infrastructures.
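The four-metric aggregation the abstract describes can be sketched as a weighted sum over normalized indicators. This is an illustrative reconstruction, not the authors' exact formulation: the metric values, weights, and min-max normalization are assumptions, and the ATT&CK technique entries are toy data.

```python
# Hypothetical sketch of a four-indicator risk aggregation: severity
# (CVSS-like), attack difficulty (kill-chain-like), detection score,
# and dataset frequency. Weights and normalization are assumptions.

def normalize(values):
    """Min-max scale a list of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def risk_scores(techniques, weights=(0.4, 0.2, 0.2, 0.2)):
    """Aggregate the four raw indicators into one score per technique
    (higher = higher response priority)."""
    names = list(techniques)
    columns = list(zip(*(techniques[n] for n in names)))
    norm = [normalize(list(col)) for col in columns]
    scores = {}
    for i, name in enumerate(names):
        sev, dif, det, freq = (norm[k][i] for k in range(4))
        # Higher attack difficulty lowers risk, so invert that indicator.
        scores[name] = (weights[0] * sev + weights[1] * (1 - dif)
                        + weights[2] * det + weights[3] * freq)
    return scores

scores = risk_scores({
    "T1498 Network DoS":         (9.1, 2.0, 0.95, 120),
    "T1557 Adversary-in-Middle": (7.4, 6.0, 0.80, 35),
    "T1046 Network Scanning":    (5.3, 1.0, 0.99, 400),
})
top = max(scores, key=scores.get)
```

Rank-correlation checks of the kind the abstract mentions (Spearman, Kendall) would then be run over the resulting ranking under perturbed weights.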
Background: With the rapid development of artificial intelligence (AI), large language models (LLMs) have emerged as a potent tool for invigorating ophthalmology across clinical, educational, and research fields, and their accuracy and reliability have been tested. This bibliometric analysis aims to provide an overview of research on LLMs in ophthalmology from both thematic and geographical perspectives. Methods: All existing and highly cited LLM-related ophthalmology research papers published in English up to 24 April 2025 were sourced from Scopus, PubMed, and Web of Science. The characteristics of these publications, including publication output, authors, journals, countries, institutions, citations, and research domains, were analyzed using Biblioshiny and VOSviewer software. Results: A total of 277 articles from 1,459 authors and 89 journals were included in this study. Although relevant publications began to appear in 2019, there was a significant increase starting from 2023. He M and Shi D are the most prolific authors, while Investigative Ophthalmology & Visual Science stands out as the most prominent journal. Most of the top-publishing countries are high-income economies, with the USA taking the lead, and the University of California is the leading institution. VOSviewer identified 5 clusters in the keyword co-occurrence analysis, indicating that current research focuses on the clinical applications of LLMs, particularly in diagnosis and patient education. Conclusions: While LLMs have demonstrated effectiveness in retaining knowledge, their accuracy in image-based diagnosis remains limited. Therefore, future research should investigate fine-tuning strategies and domain-specific adaptations to close this gap. Although research on the applications of LLMs in ophthalmology is still in its early stages, it holds significant potential for advancing the field.
Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models, which have demonstrated impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments, where computing resources, memory, and energy availability are restricted, presents a unique set of challenges that require innovative solutions. To empower sentiment analysis in these environments, we leverage lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by such environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper explores the performance of multiple lightweight pre-trained models on sentiment analysis tasks specific to such environments and provides insights into their viability for practical deployment.
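The "distilling the knowledge from larger models into smaller ones" step behind models like DistilBERT can be sketched with the classic temperature-softened distillation objective. This is a minimal stdlib illustration of the loss, not any specific model's training code; the logits and temperature below are invented.

```python
# Minimal sketch of the knowledge-distillation objective (Hinton-style):
# the student matches the teacher's temperature-softened distribution.
# Logits and temperature are illustrative values, not from the paper.
import math

def softmax(logits, temperature=1.0):
    z = [l / temperature for l in logits]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

perfect = distillation_loss([2.0, -1.0], [2.0, -1.0])  # identical logits
loss = distillation_loss([2.0, -1.0], [0.5, 0.5])      # mismatched student
```

In practice this soft-target term is combined with the ordinary cross-entropy on hard labels; the lightweight models the paper benchmarks were trained with variations of this recipe.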
In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improve therapeutic effect and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through simulation studies. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
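The penalized quantile-regression objective the abstract combines (check loss plus a sparsity penalty) can be written down in a few lines. This is a generic sketch of that objective under assumed notation — a pinball loss at quantile tau plus an L1 penalty — not the authors' full estimator, and the toy data are invented.

```python
# Sketch of the penalized quantile-regression objective:
# mean pinball (check) loss of the residuals plus an L1 penalty
# that drives coefficients of irrelevant covariates/biomarkers to zero.

def check_loss(residual, tau):
    """Pinball loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return residual * (tau - (1.0 if residual < 0 else 0.0))

def objective(beta, rows, tau=0.5, lam=0.1):
    """Mean check loss of y - x.beta plus lam * ||beta||_1."""
    fit = sum(check_loss(y - sum(b * xi for b, xi in zip(beta, x)), tau)
              for x, y in rows) / len(rows)
    return fit + lam * sum(abs(b) for b in beta)

# Toy longitudinal-style rows: (feature vector with intercept, response).
rows = [([1.0, 2.0], 2.1), ([1.0, 0.0], 0.9), ([1.0, 1.0], 1.6)]
value = objective([1.0, 0.5], rows, tau=0.5, lam=0.1)
```

Minimizing this objective over beta (e.g. by linear programming or coordinate descent) yields the sparse coefficient estimates used for variable selection.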
Funding: Output of a research project implemented as part of the Basic Research Program at HSE University.
Abstract: Activation pruning reduces neural network complexity by eliminating low-importance neuron activations, yet identifying the critical pruning threshold, beyond which accuracy rapidly deteriorates, remains computationally expensive and typically requires exhaustive search. We introduce a thermodynamics-inspired framework that treats activation distributions as energy-filtered physical systems and employs the free energy of activations as a principled evaluation metric. Phase-transition-like phenomena in the free-energy profile, such as extrema, inflection points, and curvature changes, yield reliable estimates of the critical pruning threshold, providing a theoretically grounded means of predicting sharp accuracy degradation. To further enhance efficiency, we propose a renormalized free energy technique that approximates full-evaluation free energy using only the activation distribution of the unpruned network. This eliminates repeated forward passes, dramatically reducing computational overhead and achieving speedups of up to 550× for MLPs. Extensive experiments across diverse vision architectures (MLP, CNN, ResNet, MobileNet, Vision Transformer) and text models (LSTM, BERT, ELECTRA, T5, GPT-2) on multiple datasets validate the generality, robustness, and computational efficiency of our approach. Overall, this work establishes a theoretically grounded and practically effective framework for activation pruning, bridging the gap between analytical understanding and efficient deployment of sparse neural networks.
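The free-energy profile idea can be illustrated with a toy calculation. The mapping below — treating activation magnitudes as (negative) energy levels and evaluating F = -T log Z over the activations that survive a pruning threshold — is our own guess at one plausible formulation, not the paper's definition; the activations and temperature are invented.

```python
# Hedged sketch: a free-energy profile over pruning thresholds, with
# e_i = -|a_i| so stronger activations contribute lower energy. The
# energy mapping and temperature choice are assumptions.
import math

def free_energy(activations, threshold=0.0, temperature=1.0):
    """F(threshold) = -T * log(sum_i exp(|a_i| / T)) over activations
    whose magnitude survives the pruning threshold."""
    kept = [abs(a) for a in activations if abs(a) > threshold]
    if not kept:
        return float("inf")  # everything pruned
    T = temperature
    m = max(a / T for a in kept)  # log-sum-exp for stability
    log_z = m + math.log(sum(math.exp(a / T - m) for a in kept))
    return -T * log_z

acts = [0.05, 0.2, 1.5, 2.0, 0.01]
# Sweeping the threshold traces a monotone profile whose curvature
# changes would, in the paper's framework, flag the critical threshold.
profile = [free_energy(acts, th) for th in (0.0, 0.1, 1.0, 3.0)]
```

Under this formulation F rises as more activations are pruned, and the paper's renormalized variant would estimate the whole profile from the unpruned distribution alone, without re-running the network per threshold.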
Funding: The National Natural Science Foundation of China under Grants 52441411, 52325402, and 52274057; the Deep Earth Probe and Mineral Resources Exploration National Science and Technology Major Project under Grant 2024ZD1004302-04; and the National Key R&D Program of China under Grant 2023YFB4104200.
Abstract: This study investigates in-station pressure drop mechanisms in a shale gas gathering system, providing a quantitative basis for flow system optimization. Computational fluid dynamics (CFD) simulations, based on field-measured parameters from a representative case (a shale gas platform located in Sichuan, China), are conducted to analyze the flow characteristics of specific fittings and manifolds, and to quantify fitting resistance coefficients and manifold inlet interference. The resulting coefficients are integrated into a full-station gathering network model in PipeSim, which, combined with production data, enables evaluation of pressure losses and identification of equivalent pipeline blockages. The results indicate that the resistance coefficients, valid only for fittings under the studied field-specific geometries, are 0.21 for 90° elbows in the fully open position, 0.16 for gate valve passages in the fully open position, and 2.3 for globe valve passages. Manifold interference decreases with lower high-pressure inlet values, whereas inlets farther from the high-pressure side experience stronger disturbances. Interestingly, significant discrepancies between simulated and measured pressure drops reveal partial blockages, corresponding to effective diameter reductions of 65 mm, 38 mm, 44 mm, 38 mm, and 28 mm for Wells 1#, 3#, 5#, and 6#, respectively.
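A resistance coefficient K of the kind reported translates into a local pressure loss via the standard minor-loss formula dP = K * rho * v^2 / 2. The quick check below uses the paper's coefficients with an illustrative gas density and velocity (not field values from the study):

```python
# Back-of-the-envelope minor-loss check using the reported resistance
# coefficients. Density and velocity below are illustrative, not the
# paper's field-measured operating conditions.

def fitting_pressure_drop(k, density, velocity):
    """Local (minor) pressure loss across a fitting, in pascals:
    dP = K * rho * v^2 / 2."""
    return k * density * velocity ** 2 / 2.0

rho = 50.0   # kg/m^3, illustrative compressed shale gas density
v = 8.0      # m/s, illustrative line velocity

dp_elbow = fitting_pressure_drop(0.21, rho, v)  # fully open 90-deg elbow
dp_globe = fitting_pressure_drop(2.3, rho, v)   # globe valve passage
```

At these illustrative conditions the globe valve (K = 2.3) loses roughly eleven times more pressure than the elbow (K = 0.21), which is why globe valves dominate the in-station budget in models like the PipeSim network described.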
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52505101, 52475087, 52475089, 52365010) and the Early-Career Young Scientists and Technologists Project of Jiangxi Province (Grant No. 20252BEJ730175).
Abstract: Under sustained strong stochastic impact loads, floating-supported friction plates are susceptible to the formation of fatigue cracks that propagate along the rim. The nonlinearity and randomness introduced by the cracked teeth participating in the impacts significantly influence the service life and reliability of the transmission system. In this paper, an improved stiffness excitation modeling method is developed for friction plate teeth with rim cracks. It overcomes the limitations of traditional approaches, which fail to accurately assess narrow-band, large-diameter friction plate teeth with rim cracks due to constraints imposed by boundary conditions. Then, an original dynamic impact model for the floating-supported friction plate and inner hub system is proposed, incorporating the effects of bending-torsional-axial-tilting coupled motions on tooth mesh excitations and dynamic responses. This model addresses the limitations of conventional models that consider only bending-torsion coupling, thereby providing a more comprehensive representation of the system's multi-dimensional dynamic behavior. The effects of the crack propagation depth and the number of cracked teeth on the stochastic impact characteristics and vibration responses of the system are investigated. Furthermore, finite element simulations and experimental tests are conducted to validate the cracked tooth stiffness excitations and dynamic impact responses, respectively. The proposed model is anticipated to provide both a theoretical foundation and practical guidance for fault diagnosis and reliability assessment of clutch friction plates.
Funding: Supported by the National Key R&D Program of China (No. 2022YFC3003502).
Abstract: The northern segment of the North-South Seismic Belt is characterized by intense crustal deformation, well-developed active tectonics, and frequent occurrences of strong earthquakes. Therefore, conducting a Probabilistic Seismic Hazard Analysis (PSHA) for this region is of significant importance for supporting seismic fortification in major engineering projects and formulating disaster prevention and mitigation policies. In this study, a composite seismic source model was constructed by integrating data on historical earthquakes, active faults, and paleoseismicity. Furthermore, a logic tree framework was employed to quantify epistemic uncertainties, enabling a systematic seismic hazard assessment of the region. To more accurately characterize the spatial heterogeneity of seismic activity, improvements were made to both the Circular Spatial Smoothing Model (CSSM) with a fixed radius and the Adaptive Spatial Smoothing Model (ASSM), with full consideration given to the spatiotemporal completeness of historical earthquake magnitudes. Regarding the CSSM, for scenarios involving small earthquake catalogs, the cross-validation method proposed in this study demonstrated higher robustness than the maximum likelihood method in determining the optimal correlation distance. Performance evaluation results indicate that while both models effectively characterize seismic activity, the ASSM exhibits superior overall predictive performance compared to the CSSM, owing to its ability to adaptively adjust the smoothing radius according to seismic density. Significant discrepancies were observed in the Peak Ground Acceleration (PGA) results calculated with a 10% probability of exceedance in 50 years across different combinations of seismic source models. The single spatially smoothed point-source model yielded a maximum PGA of approximately 0.52 g, with high-value areas concentrated near historical epicenters, thereby significantly underestimating the hazard associated with major fault zones. When combined with the simple fault-source model, the maximum PGA increased to 0.8 g, with high-value zones exhibiting a zonal distribution along faults; however, the risk remained underestimated for faults with low slip rates that are nevertheless approaching their recurrence cycles. Following the introduction of the time-dependent characteristic fault-source model, local PGA values for faults in the middle-to-late stages of their recurrence cycles increased by a factor of 2 to 7 compared to the single model. These results demonstrate that the characteristic fault-source model reasonably delineates the time-dependence of large earthquake recurrence, thereby providing a more accurate assessment of imminent seismic risks. By comprehensively applying the improved spatially smoothed point-source model, the simple fault-source model, and the characteristic fault-source model, the following faults within the region were identified as having high seismic hazard: the Huangxianggou, Zhangxian, and Tianshui segments of the Xiqinling northern edge fault; the Maqin-Maqu segment of the Dongkunlun fault; the Longriqu fault; the Maoergai fault; the Elashan fault; the Riyueshan fault; the eastern segment of the Lenglongling fault; the Maxianshan segment of the Maxianshan northern margin fault; and the Maomaoshan-Jinqianghe segment of the Laohushan-Maomaoshan fault. As these faults are located within seismic gaps or are approaching the recurrence periods of large earthquakes, they should be prioritized in current and future seismic monitoring as well as disaster prevention and mitigation efforts.
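The fixed-radius versus adaptive smoothing contrast (CSSM vs. ASSM) can be illustrated with a toy Gaussian kernel. This is a generic sketch of the two schemes' core difference — an adaptive bandwidth set by the distance to the k-th nearest epicenter narrows where events cluster — and not the study's calibrated models; grid cells, epicenters, and bandwidths are invented.

```python
# Illustrative sketch: fixed-bandwidth Gaussian smoothing of epicenters
# (CSSM-like) vs. an adaptive bandwidth from the k-th nearest event
# (ASSM-like). All coordinates and parameters are toy values.
import math

def gaussian_rate(cell, epicenters, bandwidth):
    """Smoothed seismic activity rate at a grid cell."""
    return sum(math.exp(-((cell[0] - x) ** 2 + (cell[1] - y) ** 2)
                        / (2 * bandwidth ** 2))
               for x, y in epicenters)

def adaptive_bandwidth(cell, epicenters, k=2):
    """ASSM-style bandwidth: distance to the k-th nearest epicenter."""
    d = sorted(math.hypot(cell[0] - x, cell[1] - y) for x, y in epicenters)
    return max(d[k - 1], 1e-6)

events = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0)]  # two clustered, one isolated
dense_cell, sparse_cell = (0.0, 0.0), (5.0, 5.0)
```

The adaptive kernel concentrates the rate where epicenters cluster and spreads it where they are sparse, which is the behavior credited with the ASSM's better predictive performance in the abstract.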
Funding: Supported in part by the National Natural Science Foundation of China (Key Program) under Grant No. 62031021.
Abstract: Cascading failures pose a serious threat to the survivability of underwater unmanned swarm networks (UUSNs), significantly limiting their service ability in collaborative missions such as military reconnaissance and environmental monitoring. Existing failure models primarily focus on power grids and traffic systems and do not address the unique challenges of weak-communication UUSNs, in which cascading failures present a complex and dynamic process driven by the coupling of unstable acoustic channels, passive node drift, adversarial attacks, and network heterogeneity. To address these challenges, a directed weighted graph model of UUSNs is first developed, in which node positions are updated according to ocean-current-driven drift and link weights reflect the probability of successful acoustic transmission. Building on this graph model, a cascading failure model is proposed that integrates a normal-failure-recovery state-cycle mechanism, multiple attack strategies, and routing-based load redistribution. Finally, under a five-level connectivity UUSN scheme, simulations are conducted to analyze how dynamic topology, network load, node recovery delay, and attack modes jointly affect network survivability. The main findings are: (1) moderate node drift can improve survivability by activating weak links; (2) energy-based routing (BER) outperforms depth-based routing (BDR) in harsh conditions; (3) node self-recovery time is critical to network survivability; (4) traditional degree-based critical node metrics are inadequate for weak-communication UUSNs. These results provide a theoretical foundation for designing robust survivability mechanisms in weak-communication UUSNs.
Funding: Under the auspices of the National Natural Science Foundation of China (Nos. 42371222, 41971167) and the Fundamental Scientific Research Funds of Central China Normal University (No. CCNU24ZZ120).
Abstract: Owing to intensified globalization and informatization, the structures of the urban scale hierarchy and the urban networks between cities have become increasingly intertwined, resulting in different spatial effects. Therefore, this paper analyzes the spatial interaction between the urban scale hierarchy and urban networks in China from 2019 to 2023, drawing on Baidu migration data and employing a spatial simultaneous equation model. The results reveal a significant positive spatial correlation between cities with higher hierarchy and those with greater network centrality. Within a static framework, we identify a positive interaction between urban scale hierarchy and urban network centrality, while their spatial cross-effects manifest as negative neighborhood interactions based on geographical distance and positive cross-scale interactions shaped by network connections. Within a dynamic framework, changes in urban scale hierarchy and urban networks are mutually reinforcing, thereby widening disparities within the urban hierarchy. Furthermore, an increase in a city's network centrality has a dampening effect on the population growth of neighboring cities and network-connected cities. This study enhances understanding of the spatial organisation of urban systems and offers insights for coordinated regional development.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12272411 and 42007259).
Abstract: Deep rock engineering is affected by coupled thermo-hydro-mechanical (THM)-dynamic fields, necessitating the elucidation of dynamic mechanical behavior and failure mechanisms. This study utilized a Multi-field Coupled Controlled Split Hopkinson Pressure Bar (MCC-SHPB) system to elucidate the cross-scale dynamic responses of rocks and the boundaries of failure modes under THM coupling. Impact tests were conducted on green sandstone under coupled conditions of temperature (25-80 °C), confining pressure (0-15 MPa), and seepage water pressure (0-15 MPa). Scanning electron microscopy (SEM) microstructural characterization and COMSOL Multiphysics numerical simulations were conducted, and a dynamic constitutive theoretical framework and failure-prediction methodology were established. We investigated the impact toughness index (I_t), dynamic modulus (E_d), dynamic triaxial compressive strength (TCS_d), fragmentation degree (W), and failure modes of green sandstone under thermo-confining-pressure-seepage-impact loading conditions. The key findings reveal that I_t reflects different energy regulation mechanisms across confining pressure regimes: thermal-microcrack interactions dominate at low pressure, and energy absorption prevails at high pressure. A triphasic dynamic modulus model captures stiffness evolution under energy-driven conditions, revealing cross-scale crack nucleation-propagation and fragment reorganization. The TCS_d inflection point signifies shifts in energy dissipation, causing nonlinear degradation of the skeleton bearing capacity. A critical criterion based on W was established to distinguish between the two failure modes and predict the initiation of unstable failure. Numerical simulations were used to elucidate the effects of inertia-dominated crack propagation and stress wave interference, validating the critical criterion and the predictive accuracy of the theoretical model during cross-scale failure. This study provides a theoretical foundation for assessing the dynamic stability of rock masses subjected to multi-field coupling during deep resource exploitation.
Funding: Heilongjiang Provincial Natural Science Foundation of China (LH2021F009).
Abstract: Anti-jamming performance evaluation has recently received significant attention. For Link-16, evaluating anti-jamming performance and selecting the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, which combines grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of establishing the anti-jamming performance evaluation indicator system of Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Secondly, the qualitative-quantitative concept transformation model, i.e., the cloud model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and assesses the best anti-jamming technology. Finally, simulation results show that the proposed evaluation model achieves feasible and practical evaluation, opening up a novel way for research on anti-jamming performance evaluation of Link-16.
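The entropy weight method (EWM) mentioned above is a standard, fully data-driven weighting scheme: indicators whose values vary more across alternatives carry more information and receive larger weights. The sketch below implements the textbook EWM; the decision matrix is toy data, not Link-16 indicators.

```python
# Sketch of the entropy weight method (EWM): column-wise normalized
# entropy converted to divergence-based weights. Toy decision matrix;
# rows = alternatives (e.g. jamming schemes), columns = indicators.
import math

def entropy_weights(matrix):
    """All matrix entries must be positive."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Normalized Shannon entropy of the indicator across alternatives.
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1 - e)  # low entropy -> high information weight
    s = sum(divergences)
    return [d / s for d in divergences]

# Indicator 1 discriminates strongly; indicator 2 is nearly constant.
w = entropy_weights([[0.9, 0.50],
                     [0.1, 0.51],
                     [0.5, 0.50]])
```

In the paper's method these objective EWM weights are then linearly combined with the subjective AHP weights to form the combined indicator weights.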
Funding: The authors acknowledge financial support from the Fujian Science Foundation for Outstanding Youth (2023J06039), the National Natural Science Foundation of China (Grant Nos. 41977259, U2005205, 41972268), and the Independent Research Project of the Technology Innovation Center for Monitoring and Restoration Engineering of Ecological Fragile Zone in Southeast China (KY-090000-04-2022-019).
Abstract: Shotcrete is one of the common solutions for shallow sliding. It works by forming a high-strength protective layer that cements the loose soil particles on the slope surface to prevent shallow sliding. However, the solidification time of conventional cement paste is long when shotcrete is used to treat cohesionless soil landslides. The idea of reinforcing slopes with polyurethane-solidified soil (i.e., a mixture of polyurethane and sand) was therefore proposed. Model tests and finite element analysis were carried out to study the effectiveness of the proposed method for the emergency treatment of cohesionless soil landslides. Surcharge loading on the crest of the slope was applied step by step until a landslide was triggered, so as to test and compare the stability and bearing capacity of slope models under different conditions. The simulated slope displacements were relatively close to the measured results, and the simulated slope deformation characteristics were in good agreement with the observed phenomena, which verifies the accuracy of the numerical method. Under surcharge loading on the crest of the slope, the unreinforced slope slid when the loading exceeded 30 kPa, presenting a failure mode of local instability and collapse at the shallow layer of the slope top. The reinforced slope remained stable even when the surcharge loading reached 48 kPa, and its displacement was reduced by more than 95%. Overall, this study verifies the effectiveness of polyurethane in the emergency treatment of cohesionless soil landslides, which should have broad application prospects in the field of geological disasters concerning the safety of people's lives.
Funding: Supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on their specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
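The adapter-rank trade-off behind finding (1) is easy to quantify: LoRA replaces a d_out x d_in weight update with two factors B (d_out x r) and A (r x d_in). The quick comparison below uses illustrative layer shapes (roughly LLaMA-like MLP and attention projections), which are assumptions rather than dimensions from the paper.

```python
# Why LoRA shrinks the trainable footprint, and why large layers tolerate
# small ranks: trainable-parameter fractions for two layer shapes.
# The layer dimensions are illustrative, not from the paper's models.

def full_params(d_out, d_in):
    """Parameters of a dense d_out x d_in weight update."""
    return d_out * d_in

def lora_params(d_out, d_in, rank):
    """Parameters of the low-rank factors B (d_out x r) and A (r x d_in)."""
    return d_out * rank + rank * d_in

# A "large" MLP up-projection vs. a "small" square attention projection.
mlp_fraction = lora_params(4096, 11008, rank=8) / full_params(4096, 11008)
attn_fraction = lora_params(4096, 4096, rank=8) / full_params(4096, 4096)
```

At the same rank, the larger MLP layer trains a smaller fraction of its parameters than the attention layer, consistent with the abstract's observation that larger layers can get away with smaller adapters.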
Funding: Funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project (fund number: No. ASC/FASAL/KT-11/01/HQ-2010).
Abstract: Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics, so as the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields. Results: This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with yield deviations within an acceptable ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts for which yield was predicted. These differences highlight the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response of each weather factor under the different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions: Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield owing to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
Abstract: DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge it currently faces is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1,176 genes from a white mouse gene expression dataset were chosen under two experimental conditions, pneumococcal infection and no infection, and a mixed-effects model was constructed. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed, with GSEA used to biologically validate the preliminary results. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
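The permutation-test step above can be sketched generically: shuffle the condition labels many times and count how often the shuffled group-mean difference is at least as extreme as the observed one. This is a minimal stdlib illustration of the idea, not the paper's pipeline; the expression values, test statistic, and iteration count are invented.

```python
# Minimal permutation test on a single gene's expression values:
# the p-value is the fraction of label shuffles whose group-mean gap
# matches or exceeds the observed gap. Toy data, illustrative only.
import random

def mean_diff(values, labels):
    a = [v for v, g in zip(values, labels) if g == "infected"]
    b = [v for v, g in zip(values, labels) if g == "control"]
    return sum(a) / len(a) - sum(b) / len(b)

def permutation_p(values, labels, n_iter=2000, seed=0):
    rng = random.Random(seed)
    observed = abs(mean_diff(values, labels))
    shuffled = list(labels)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(shuffled)
        if abs(mean_diff(values, shuffled)) >= observed:
            hits += 1
    return hits / n_iter

expr = [5.1, 4.9, 5.3, 2.0, 2.2, 1.9]        # well-separated toy gene
labels = ["infected"] * 3 + ["control"] * 3
p = permutation_p(expr, labels)
```

With only six samples, at most 2 of the 20 possible labelings reach the observed gap, so p hovers around 0.1 here; real microarray analyses run this per gene and then correct for multiple testing.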
Funding: Supported by the Project of Stable Support for Youth Team in Basic Research Field, CAS (Grant No. YSBR-018); the National Natural Science Foundation of China (Grant Nos. 42188101, 42130204); the B-type Strategic Priority Program of CAS (Grant No. XDB41000000); the National Natural Science Foundation of China (NSFC) Distinguished Overseas Young Talents Program; the Innovation Program for Quantum Science and Technology (2021ZD0300301); and the Open Research Project of Large Research Infrastructures of CAS, "Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project". The project was also supported by the National Key Laboratory of Deep Space Exploration (Grant No. NKLDSE2023A002); the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection (Grant No. APKLIUD23KF01); and the China National Space Administration (CNSA) pre-research Project on Civil Aerospace Technologies (Nos. D010305, D010301).
Abstract: Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance than the empirical model, with a correlation coefficient of r = 0.87 between its predictions and RO observations, compared with r = 0.53 for the empirical model. This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Abstract: Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotions. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs on ChatGPT research quality evaluations.
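The linear-regression calibration step in the findings — fitting a line that maps average ChatGPT scores onto the human quality scale — can be sketched with ordinary least squares. The score pairs below are invented for illustration; the study's actual data and scales are not reproduced here.

```python
# Sketch of calibrating LLM scores to a human scale with ordinary
# least squares (y = a + b * x). The score pairs are toy data.

def fit_line(xs, ys):
    """Closed-form OLS slope and intercept for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

gpt = [2.1, 2.8, 3.0, 3.6, 3.9]   # mean ChatGPT scores (illustrative)
human = [1, 2, 2, 3, 4]           # human-scale quality scores (illustrative)
a, b = fit_line(gpt, human)
predicted = [a + b * x for x in gpt]
```

A useful sanity property of OLS is that the fitted line passes through the mean point, so the mean of the calibrated scores equals the mean human score on the training pairs.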
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-01011.
Abstract: Wireless technologies and the Internet of Things (IoT) are being extensively utilized for advanced development in traditional communication systems. This evolution lowers the cost of the extensive use of sensors, changing the way devices interact and communicate in dynamic and uncertain situations. Such a constantly evolving environment presents enormous challenges to preserving a secure and lightweight IoT system, and therefore motivates the design of effective and trusted routing to support sustainable smart cities. This study proposes a Genetic Algorithm-based, sentiment-enhanced, secured optimization model that combines big data analytics and analysis rules to evaluate user feedback. Sentiment analysis is utilized to assess the perception of network performance, allowing the classification of device behavior as positive, neutral, or negative. By integrating sentiment-driven insights, the IoT network adjusts its system configurations to enhance performance, drawing on network behavior in terms of latency, reliability, fault tolerance, and sentiment score. According to the analysis, the proposed model categorizes device behavior as positive, neutral, or negative, facilitating real-time monitoring for crucial applications. Experimental results revealed a significant improvement in threat prevention and network efficiency, demonstrating the model's resilience for real-time IoT applications.
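The three-way labeling of device behavior described above can be pictured as a simple thresholding of a numeric sentiment score. The thresholds and function below are assumptions for illustration; the abstract does not specify how the cut-offs are chosen:

```python
# Map a sentiment score in [-1, 1] to the three device-behavior categories
# used by the model (threshold values are hypothetical).
def classify_behavior(sentiment_score, neg_t=-0.2, pos_t=0.2):
    if sentiment_score >= pos_t:
        return "positive"
    if sentiment_score <= neg_t:
        return "negative"
    return "neutral"

labels = [classify_behavior(s) for s in (-0.6, 0.0, 0.8)]
```

In the proposed model such labels would then feed back into the genetic-algorithm search over network configurations.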
Abstract: With the rapid development of generative artificial intelligence technologies, represented by large language models, university-level computer science education is undergoing a critical transition from knowledge-based instruction to competency-oriented teaching. A postgraduate student competency evaluation model can serve as a framework to organize and guide both teaching and research activities at the postgraduate level, and a number of relevant research efforts have already been conducted in this area. Graduate education plays a vital role not only as a continuation and enhancement of undergraduate education but also as essential preparation for future research endeavors. Analyzing the acceptance of competency evaluation models means assessing how various stakeholders perceive the importance of different components within the model. Investigating the degree of acceptance among diverse groups, such as current undergraduate students, current postgraduate students, graduates with less than three years of work experience, and those with more than three years of work experience, can offer valuable insights for improving and optimizing postgraduate education and training practices.
Funding: Supported by the "Regional Innovation System & Education (RISE)" program through the Seoul RISE Center, funded by the Ministry of Education (MOE) and the Seoul Metropolitan Government (2025-RISE-01-018-05), and supported by Quad Miners Corp.
Abstract: The dynamic, heterogeneous nature of Edge computing in the Internet of Things (Edge-IoT) and Industrial IoT (IIoT) networks brings unique and evolving cybersecurity challenges. This study maps cyber threats in Edge-IoT/IIoT environments to MITRE's Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework and introduces a lightweight, data-driven scoring model that enables rapid identification and prioritization of attacks. Inspired by the Factor Analysis of Information Risk model, the proposed scoring model integrates four key metrics: Common Vulnerability Scoring System (CVSS)-based severity scoring, Cyber Kill Chain-based difficulty estimation, Deep Neural Network-driven detection scoring, and frequency analysis based on dataset prevalence. By aggregating these indicators, the model generates comprehensive risk profiles, facilitating actionable prioritization of threats. The robustness and stability of the scoring model are validated through non-parametric correlation analysis using Spearman's and Kendall's rank correlation coefficients, demonstrating consistent performance across diverse scenarios. The approach culminates in a prioritized attack ranking that provides actionable guidance for risk mitigation and resource allocation in Edge-IoT/IIoT security operations. By leveraging real-world data to align MITRE ATT&CK techniques with CVSS metrics, the framework offers a standardized and practically applicable solution for consistent threat assessment in operational settings. The proposed lightweight scoring model delivers rapid and reliable results under dynamic cyber conditions, facilitating timely identification of attack scenarios and prioritization of response strategies. This systematic integration of established taxonomies with data-driven indicators strengthens practical risk management and supports strategic planning in next-generation IoT deployments. Ultimately, this work advances adaptive threat modeling for Edge/IIoT ecosystems and establishes a robust foundation for evidence-based prioritization in emerging cyber-physical infrastructures.
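The two building blocks of the abstract above, aggregating four indicators into a risk score and checking ranking stability with Spearman's rank correlation, can be sketched in a few lines. The weights, metric values, and attack names below are hypothetical; the paper does not publish its exact weighting:

```python
# Weighted aggregation of four normalized indicators into one risk score,
# plus Spearman's rho for comparing two rankings (no tie handling).
def risk_score(severity, difficulty, detection, frequency,
               weights=(0.4, 0.2, 0.2, 0.2)):
    metrics = (severity, difficulty, detection, frequency)
    return sum(w * m for w, m in zip(weights, metrics))

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula, assuming no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical (severity, difficulty, detection, frequency) tuples in [0, 1].
attacks = {"DDoS": (0.9, 0.3, 0.8, 0.7), "MitM": (0.6, 0.6, 0.5, 0.3)}
scores = {name: risk_score(*m) for name, m in attacks.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

Validating with both Spearman's and Kendall's coefficients, as the study does, guards against the ranking being an artifact of one particular aggregation choice.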
Funding: Supported by the Health and Medical Research Fund, Hong Kong (11220386, 12230246).
Abstract: Background: With the rapid development of artificial intelligence (AI), large language models (LLMs) have emerged as a potent tool for invigorating ophthalmology across clinical, educational, and research fields, and their accuracy and reliability have been tested. This bibliometric analysis aims to provide an overview of research on LLMs in ophthalmology from both thematic and geographical perspectives. Methods: All existing and highly cited LLM-related ophthalmology research papers published in English up to 24 April 2025 were sourced from Scopus, PubMed, and Web of Science. The characteristics of these publications, including publication output, authors, journals, countries, institutions, citations, and research domains, were analyzed using the Biblioshiny and VOSviewer software. Results: A total of 277 articles from 1,459 authors and 89 journals were included in this study. Although relevant publications began to appear in 2019, there was a significant increase starting from 2023. He M and Shi D are the most prolific authors, while Investigative Ophthalmology & Visual Science stands out as the most prominent journal. Most of the top-publishing countries are high-income economies, with the USA taking the lead, and the University of California is the leading institution. VOSviewer identified 5 clusters in the keyword co-occurrence analysis, indicating that current research focuses on the clinical applications of LLMs, particularly in diagnosis and patient education. Conclusions: While LLMs have demonstrated effectiveness in retaining knowledge, their accuracy in image-based diagnosis remains limited. Therefore, future research should investigate fine-tuning strategies and domain-specific adaptations to close this gap. Although research on the applications of LLMs in ophthalmology is still in its early stages, it holds significant potential for advancing the field.
Abstract: Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models, which have demonstrated impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments, i.e., scenarios where computing resources, memory, and energy availability are restricted, presents a unique set of challenges that require innovative solutions. To empower sentiment analysis in such environments, we address this need by leveraging lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by these environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper explores the performance of multiple lightweight pre-trained models on sentiment analysis tasks specific to such environments and provides insights into their viability for practical deployment.
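The knowledge-distillation step mentioned above, by which models like DistilBERT are derived from larger teachers, hinges on the temperature-softened softmax: the student is trained to match the teacher's softened output distribution rather than hard labels. A minimal sketch of that one ingredient (not any specific model's code):

```python
# Temperature-softened softmax: higher temperature flattens the teacher's
# output distribution, exposing more of its "dark knowledge" to the student.
from math import exp

def soft_targets(logits, temperature=2.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

sharp = soft_targets([2.0, 0.0], temperature=1.0)  # close to a hard label
soft = soft_targets([2.0, 0.0], temperature=4.0)   # flatter distribution
```

The student's loss then typically combines cross-entropy against these soft targets with the usual hard-label loss.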
Funding: Supported by the Natural Science Foundation of Fujian Province (2022J011177, 2024J01903) and the Key Project of the Fujian Provincial Education Department (JZ230054).
Abstract: In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic efficacy and safety, and is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection among covariates and biomarkers. The effectiveness of the proposed methodology for both variable selection and parameter estimation is verified through simulation studies. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
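The quantile-regression machinery mentioned above is built on the check (pinball) loss, which the method minimizes, with sparsity penalties added for variable selection. A minimal sketch of that loss alone, under the standard definition (not the paper's full penalized estimator):

```python
# Check (pinball) loss for quantile level tau: residuals above the predicted
# quantile are weighted by tau, residuals below by (1 - tau).
def pinball_loss(y_true, y_pred, tau=0.5):
    """Average check loss; tau = 0.5 recovers half the mean absolute error."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        r = y - q
        total += tau * r if r >= 0 else (tau - 1) * r
    return total / len(y_true)
```

Minimizing this loss over a grid of tau values is what lets the method characterize the whole conditional distribution of the response rather than only its mean.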