Journal Articles
61,554 articles found
1. Semantic Causality Evaluation of Correlation Analysis Utilizing Large Language Models
Authors: Adam Dudáš. Computers, Materials & Continua, 2026, Issue 5, pp. 2246-2269 (24 pages)
It is known that correlation does not imply causality. Some relationships identified in the analysis of data are coincidental or unknown, while others are produced by real-world causality, which is problematic, since there is a need to differentiate between these two scenarios. Until recently, the proper (semantic) causality of a relationship could be determined only by human experts in the domain of the studied data. This has changed with the advance of large language models, which are often utilized as surrogates for such human experts, making the process automated and readily available to all data analysts. This motivates the main objective of this work: to introduce the design and implementation of a large language model-based semantic causality evaluator built on correlation analysis, together with its visual analysis model, called the Causal heatmap. After the implementation itself, the model is evaluated from three points of view: the quality of the visual model, the quality of the causal evaluation based on large language models, and a comparative analysis. The results reached in the study highlight the usability of large language models in this task and the potential of the proposed approach in the analysis of unknown datasets. The experimental evaluation demonstrates the usefulness of the Causal heatmap method, which clearly highlights interesting relationships while suppressing irrelevant ones.
Keywords: correlation; causality; correlation analysis; large language models; visualization
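The screening step described above, keeping only strongly correlated attribute pairs before any causal judgment is requested from the language model, can be sketched in a few lines. This is a minimal illustration with invented data and an illustrative threshold; the LLM call itself and the Causal heatmap rendering are out of scope here:

```python
import numpy as np

def candidate_pairs(data, names, threshold=0.7):
    """Return attribute pairs whose |Pearson correlation| exceeds the
    threshold: the shortlist that would be handed to the language model
    for a semantic (causal vs. coincidental) verdict."""
    corr = np.corrcoef(data, rowvar=False)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                pairs.append((names[i], names[j], round(float(corr[i, j]), 3)))
    return pairs

# Toy dataset: "b" is driven by "a", while "c" is independent noise.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
data = np.column_stack([a,
                        2 * a + rng.normal(scale=0.1, size=200),
                        rng.normal(size=200)])
print(candidate_pairs(data, ["a", "b", "c"]))
```

Only the (a, b) pair survives the screen; the coincidental pairs involving "c" are suppressed before any expensive model call.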
2. Error Analysis of Geomagnetic Field Reconstruction Model Using Negative Learning for Seismic Anomaly Detection
Authors: Nur Syaiful Afrizal, Khairul Adib Yusof, Lokman Hakim Muhamad, Nurul Shazana Abdul Hamid, Mardina Abdullah, Mohd Amiruddin Abd Rahman, Syamsiah Mashohor, Masashi Hayakawa. Computers, Materials & Continua, 2026, Issue 2, pp. 1338-1353 (16 pages)
Detecting geomagnetic anomalies preceding earthquakes is a challenging yet promising area of research that has gained increasing attention in recent years. This study introduces a novel reconstruction-based modeling approach enhanced by negative learning, employing a Bidirectional Long Short-Term Memory (BiLSTM) network explicitly trained to accurately reconstruct non-seismic geomagnetic signals while intentionally amplifying reconstruction errors for seismic signals. By penalizing the model for accurately reconstructing seismic anomalies, the negative learning approach effectively magnifies the differences between normal and anomalous data. This strategic differentiation enhances the sensitivity of the BiLSTM network, enabling improved detection of subtle geomagnetic anomalies that may serve as earthquake precursors. Experimental validation demonstrated statistically significantly higher reconstruction errors for seismic signals compared to non-seismic signals, confirmed through the Mann-Whitney U test with a p-value of 0.0035 for Root Mean Square Error (RMSE). These results provide compelling evidence of the enhanced anomaly detection capability achieved through negative learning. Unlike traditional classification-based methods, negative learning explicitly encourages sensitivity to subtle precursor signals embedded within complex geomagnetic data, establishing a robust basis for further development of reliable earthquake prediction methods.
Keywords: error analysis; geomagnetic field; BiLSTM model; negative learning; earthquake precursor
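The statistical check reported above is a one-sided Mann-Whitney U test on per-window reconstruction errors. A small sketch with synthetic RMSE values (the numbers and sample sizes are invented for illustration; the paper's p-value of 0.0035 comes from its own geomagnetic data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical per-window RMSE values: after negative-learning training,
# reconstruction error is expected to be larger on seismic windows.
rmse_nonseismic = rng.normal(loc=1.0, scale=0.2, size=60)
rmse_seismic = rng.normal(loc=1.6, scale=0.3, size=20)

# One-sided test: are seismic reconstruction errors stochastically greater?
stat, p = mannwhitneyu(rmse_seismic, rmse_nonseismic, alternative="greater")
print(f"U = {stat:.1f}, p-value = {p:.2e}")
```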
3. Impedance Modeling and Stability Analysis of LCC-HVDC Transmission System
Authors: Bo Zhang, Xiong Du, Shangning Tan, Junliang Liu, Haijiao Wang, Yiding Jin. CSEE Journal of Power and Energy Systems, 2026, Issue 1, pp. 366-376 (11 pages)
Interaction between the converter and the grid may lead to harmonic oscillations. The impedance-based method is an effective way to deal with this stability issue. In this study, the impedance-based method is used to investigate the small-signal stability of a cascaded 12-pulse line-commutated converter-based high-voltage direct current (LCC-HVDC) transmission system. In the modeling part, the impedance models of the single rectifier and the inverter are established respectively, taking into account the effect of frequency coupling, which improves the accuracy of the models. Based on these, the AC impedance models of the cascaded LCC-HVDC transmission system are established on both the rectifier and inverter sides. In the stability analysis part, the stability of the system is analyzed under different working conditions. The simulation results reveal that the established impedance model can properly represent the stability of the system. The findings of this study provide a theoretical reference for the stability design and oscillation suppression strategy of LCC-HVDC transmission systems and LCC interconnected systems.
Keywords: frequency coupling; LCC; impedance modeling; stability analysis
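In the impedance-based method, the grid-converter interconnection is commonly judged through the ratio of grid impedance to converter impedance (the minor loop gain), with oscillation risk where the ratio crosses the unit circle near 180° phase. A toy numerical check with made-up lumped impedances, not the paper's LCC-HVDC models:

```python
import numpy as np

# Toy minor-loop-gain check: the interconnection is assessed through
# L(jw) = Z_grid(jw) / Z_converter(jw); oscillation risk is flagged where
# |L| >= 1 with phase near 180 degrees. The impedances below are
# illustrative placeholders only.
w = 2 * np.pi * np.logspace(0, 3, 400)       # 1 Hz .. 1 kHz
Z_grid = 1j * w * 5e-3                       # inductive grid (L = 5 mH)
Z_conv = 50 - 1j * 1e3 / w                   # resistive-capacitive converter side

L = Z_grid / Z_conv                          # minor loop gain
phase = np.abs(np.degrees(np.angle(L)))
risk = (np.abs(L) >= 1.0) & (phase > 150)
print("max |L| =", round(float(np.abs(L).max()), 3),
      "| risky frequencies:", int(risk.sum()))
```

Under these illustrative parameters the loop gain stays inside the unit circle, so no oscillation risk is flagged.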
4. A Deterministic and Stochastic Fractional-Order Model for Computer Virus Propagation with Caputo-Fabrizio Derivative: Analysis, Numerics, and Dynamics
Authors: Najat Almutairi, Mohammed Messaoudi, Faisal Muteb K. Almalki, Sayed Saber. Computer Modeling in Engineering & Sciences, 2026, Issue 3, pp. 806-843 (38 pages)
This paper introduces a novel fractional-order model based on the Caputo-Fabrizio (CF) derivative for analyzing computer virus propagation in networked environments. The model partitions the computer population into four compartments: susceptible, latently infected, breaking-out, and antivirus-capable systems. By employing the CF derivative, which uses a nonsingular exponential kernel, the framework effectively captures memory-dependent and nonlocal characteristics intrinsic to cyber systems, aspects inadequately represented by traditional integer-order models. Under Lipschitz continuity and boundedness assumptions, the existence and uniqueness of solutions are rigorously established via fixed-point theory. We develop a tailored two-step Adams-Bashforth numerical scheme for the CF framework and prove its second-order accuracy. Extensive numerical simulations across various fractional orders reveal that memory effects significantly influence virus transmission and control dynamics; smaller fractional orders produce more pronounced memory effects, delaying both infection spread and antivirus activation. Further theoretical analysis, including Hyers-Ulam stability and sensitivity assessments, reinforces the model's robustness and identifies key parameters governing virus dynamics. The study also extends the framework to incorporate stochastic effects through a stochastic CF formulation. These results underscore fractional-order modeling as a powerful analytical tool for developing robust and effective cybersecurity strategies.
Keywords: Caputo-Fabrizio derivative; fractional-order computer virus model; stochastic fractional dynamics; Adams-Bashforth scheme; Hyers-Ulam stability; sensitivity analysis; cyber-epidemiology; memory effects; nonsingular kernel
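In the classical integer-order case, the two-step Adams-Bashforth scheme that the authors tailor to the Caputo-Fabrizio setting reads y_{k+1} = y_k + h(3f_k - f_{k-1})/2. A quick sketch on dy/dt = -y (exact solution e^{-t}) showing the second-order error decay; the fractional kernel terms of the paper are not reproduced here:

```python
import numpy as np

def adams_bashforth2(f, y0, t0, t1, n):
    """Two-step Adams-Bashforth: y_{k+1} = y_k + h*(1.5*f_k - 0.5*f_{k-1}),
    bootstrapped with one forward-Euler step."""
    h = (t1 - t0) / n
    y = np.empty(n + 1)
    y[0] = y0
    f_prev = f(t0, y0)
    y[1] = y[0] + h * f_prev
    for k in range(1, n):
        f_k = f(t0 + k * h, y[k])
        y[k + 1] = y[k] + h * (1.5 * f_k - 0.5 * f_prev)
        f_prev = f_k
    return y

# Test problem dy/dt = -y, y(0) = 1, exact solution exp(-t).
err = {n: abs(adams_bashforth2(lambda t, y: -y, 1.0, 0.0, 1.0, n)[-1] - np.exp(-1))
       for n in (100, 200)}
print(err[100] / err[200])   # halving h divides the error by about 4 (2nd order)
```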
5. Computational Analysis of Thermal Buckling in Doubly-Curved Shells Reinforced with Origami-Inspired Auxetic Graphene Metamaterials
Authors: Ehsan Arshid. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 286-318 (33 pages)
In this work, a computational modelling and analysis framework is developed to investigate the thermal buckling behavior of doubly-curved composite shells reinforced with graphene-origami (G-Ori) auxetic metamaterials. A semi-analytical formulation based on First-Order Shear Deformation Theory (FSDT) and the principle of virtual displacements is established, and closed-form solutions are derived via Navier's method for simply supported boundary conditions. The G-Ori metamaterial reinforcements are treated as programmable constructs whose effective thermo-mechanical properties are obtained via micromechanical homogenization and incorporated into the shell model. A comprehensive parametric study examines the influence of folding geometry, dispersion arrangement, reinforcement weight fraction, curvature parameters, and elastic foundation support on the critical buckling temperature (CBT). The results reveal that, under optimal folding geometry and reinforcement alignment with principal stress trajectories, the CBT can increase by more than 150%. Furthermore, the combined effect of G-Ori reinforcement and an elastic foundation substantially enhances thermal buckling resistance. These findings establish design guidelines for architected composite shells in applications such as aerospace thermal skins, morphing structures, and thermally-responsive systems, and illustrate the potential of auxetic graphene metamaterials for multifunctional, lightweight, and thermally robust structural components.
Keywords: thermal buckling analysis; semi-analytical modelling; graphene-origami auxetic metamaterials; doubly-curved shells; elastic foundation
6. Longitudinal trajectory analysis of sepsis after laparoscopic surgery
Authors: Boming Xia, Chengqiao Jiang, Jie Yang, Suibi Yang, Bo Zhang, Zhihao Wang, Shengze Wu, Yang Wang, Qian Gao, Yucai Hong, Huiqing Ge, Zhongheng Zhang. Laparoscopic, Endoscopic and Robotic Surgery, 2026, Issue 1, pp. 34-51 (18 pages)
Objective: Sepsis exhibits remarkable heterogeneity in disease progression trajectories, and accurate identification of distinct trajectory-based phenotypes is critical for implementing personalized therapeutic strategies and prognostic assessment. However, trajectory clustering analysis of time-series clinical data poses substantial methodological challenges for researchers. This study provides a comprehensive tutorial framework demonstrating six trajectory modeling approaches integrated with proteomic analysis to guide researchers in identifying sepsis subtypes after laparoscopic surgery. Methods: This study employs simulated longitudinal data from 300 septic patients after laparoscopic surgery to demonstrate six trajectory modeling methods (group-based trajectory modeling, latent growth mixture modeling, latent transition analysis, time-varying effect modeling, K-means for longitudinal data, and agglomerative hierarchical clustering) for identifying associations between predefined sequential organ failure assessment trajectories and 25 proteomic biomarkers. Clustering performance was evaluated via multiple metrics, and a biomarker discovery pipeline integrating principal component analysis, random forests, feature selection, and receiver operating characteristic analysis was developed. Results: The six methods demonstrated varying performance in identifying trajectory structures, with each approach exhibiting distinct analytical characteristics. The performance metrics revealed differences across methods, which may inform context-specific method selection and interpretation strategies. Conclusion: This study illustrates practical implementations of trajectory modeling approaches under controlled conditions, facilitating informed method selection for clinical researchers. The inclusion of complete R code and integrated proteomics workflows offers a reproducible analytical framework connecting temporal pattern recognition to biomarker discovery. Beyond sepsis, this pipeline-oriented approach may be adapted to diverse clinical scenarios requiring longitudinal disease characterization and precision medicine applications. The comparative analysis reveals that each method has distinct strengths, providing a practical guide for clinical researchers in selecting appropriate methods based on their specific study goals and data characteristics.
Keywords: laparoscopic surgery; sepsis; longitudinal trajectory; group-based trajectory modeling; latent class analysis; phenotyping
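Of the six approaches, K-means for longitudinal data is the most compact to sketch: each patient's whole score trajectory is treated as a single observation and clustered. A toy version with two synthetic SOFA-like trajectory phenotypes, an improving and a worsening group; all values are invented, and a deterministic farthest-point initialization replaces the usual random restarts:

```python
import numpy as np

def kmeans_trajectories(X, k=2, iters=20):
    """Minimal KML-style clustering: each row of X is one patient's whole
    score trajectory. Deterministic farthest-point initialization stands
    in for the usual multiple random restarts."""
    centers = [X[0]]
    while len(centers) < k:
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

t = np.arange(7)                                   # seven daily scores
rng = np.random.default_rng(1)
improving = 10 - 1.2 * t + rng.normal(0, 0.5, (30, 7))
worsening = 4 + 1.0 * t + rng.normal(0, 0.5, (30, 7))
X = np.vstack([improving, worsening])
labels = kmeans_trajectories(X, k=2)
print(labels)
```

On this well-separated toy data the two predefined phenotypes are recovered exactly; real clinical trajectories would also require the validity metrics discussed above.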
7. Machine Learning Based Uncertain Free Vibration Analysis of Hybrid Composite Plates
Authors: Bindi Saurabh Thakkar, Pradeep Kumar Karsh. Computers, Materials & Continua, 2026, Issue 2, pp. 333-354 (22 pages)
This study investigates the uncertain dynamic characterization of hybrid composite plates by employing advanced machine-assisted finite element methodologies. Hybrid composites, widely used in aerospace, automotive, and structural applications, often face variability in material properties, geometric configurations, and manufacturing processes, leading to uncertainty in their dynamic response. To address this, three surrogate-based machine learning approaches, namely radial basis functions (RBF), multivariate adaptive regression splines (MARS), and polynomial neural networks (PNN), are integrated with a finite element framework to efficiently capture the stochastic behavior of these plates. The research focuses on predicting the first three natural frequencies under material uncertainties, which are critical to ensuring structural reliability. Monte Carlo simulation (MCS) is used as a benchmark for generating probabilistic datasets, including mean values, standard deviations, and probability density functions. The surrogate models are then trained and validated against these datasets, enabling accurate representation of uncertainty with substantially fewer samples than conventional MCS. Among the methods studied, the RBF model demonstrates superior performance, closely approximating MCS results with a reduced sample size and thereby achieving significant computational savings. The proposed framework not only reduces computational time and cost but also maintains high predictive accuracy, making it well-suited for complex engineering systems. Beyond free vibration analysis, the methodology can be extended to more sophisticated scenarios, such as forced vibration, damping effects, and nonlinear structural responses. Overall, this work presents a computationally efficient and robust approach for surrogate-based uncertainty quantification, advancing the analysis and design of hybrid composite structures under uncertainty.
Keywords: hybrid composite; surrogate model; RBF; MARS; PNN; uncertain free vibration analysis; machine learning
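The surrogate idea is to replace the expensive finite element solve inside the Monte Carlo loop with a cheap approximation fitted on a handful of solver runs. A hand-rolled Gaussian-RBF sketch of that workflow; the "FE model" below is a made-up closed-form stand-in, not the plate model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive FE solve: a made-up closed-form "first natural
# frequency" as a function of two normalized uncertain material properties.
def fe_model(x):
    return 100 + 8 * x[:, 0] - 5 * x[:, 1] + 2 * x[:, 0] * x[:, 1]

# Fit a Gaussian-kernel RBF surrogate on a small design of experiments...
X_train = rng.uniform(-1, 1, (40, 2))
y_train = fe_model(X_train)

def kernel(A, B, eps=1.0):
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-eps * d2)

w = np.linalg.solve(kernel(X_train, X_train) + 1e-6 * np.eye(40), y_train)

# ...then run the cheap surrogate inside a large Monte Carlo loop.
X_mc = rng.uniform(-1, 1, (10000, 2))
y_surr = kernel(X_mc, X_train) @ w
print(round(float(y_surr.mean()), 2), round(float(fe_model(X_mc).mean()), 2))
```

Only 40 "solver" calls feed the surrogate, yet the 10,000-sample Monte Carlo statistics track the direct evaluation closely, which is the computational saving the abstract refers to.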
8. Evolution and insights of China's environmental governance policies: An LDA-based policy text analysis
Authors: HUA Yu-chen, YANG Jia-meng, WEI Ren-jie, CHENG Xiu, LIU Zhi-yong. Ecological Economy, 2026, Issue 1, pp. 2-30 (29 pages)
China's environmental governance strategy provides a distinctive pathway for integrating sustainable development into national policy. Understanding its policy trajectory is essential for assessing China's contribution to global sustainable development and the United Nations Sustainable Development Goals (SDGs). This study constructs a comprehensive database of 425 national environmental governance policy documents issued between 1978 and 2022 and applies Latent Dirichlet Allocation (LDA) modeling to examine the evolution of policy themes and discourse. The results show that China's environmental governance has undergone four stages (initial exploration, detailed development, transformative leap, and diverse prosperity), reflecting a progressive shift toward more integrated and coordinated governance. Policy priorities have evolved from a primary focus on pollution control and energy transition to an emphasis on institutional construction and organizational reform, thereby strengthening alignment with the SDGs. This transformation is characterized by recurring developmental themes and increasingly preventive, forward-looking, and system-oriented governance approaches. Moreover, the co-evolution of policy concepts and implementation has driven a transition from localized, end-of-pipe responses to comprehensive governance frameworks, alongside a shift from normative guidance towards effectiveness-oriented policy design. By employing a data-driven text analysis approach, this study offers a systematic framework for tracing long-term policy evolution and assessing its implications for sustainable development.
Keywords: environmental governance; policy text analysis; LDA topic modeling; topic evolution; sustainable development policy; policy transformation
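The LDA step can be reproduced at miniature scale: vectorize the texts into term counts, fit a topic model, and read off each document's topic mixture. A sketch using scikit-learn with an invented six-document corpus standing in for the 425 policy documents (topics and wording are illustrative only):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Six invented snippets standing in for the policy corpus: three about
# pollution control, three about carbon markets and green finance.
docs = [
    "pollution control emission standards factory inspection",
    "emission control pollution wastewater treatment inspection",
    "pollution inspection enforcement emission factory",
    "carbon market trading green finance incentive",
    "green finance carbon credit market mechanism",
    "carbon trading incentive green market finance",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topic = lda.transform(X)          # per-document topic mixtures
print(np.round(doc_topic, 2))
```

Tracking how such topic mixtures shift across time windows is what yields the stage-wise evolution described in the abstract.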
9. Time-varying reliability analysis of a reservoir bank slope considering creep behavior and sequential Bayesian updating
Authors: Wenyu Zhuang, Qingchao Lyu, Yaoru Liu, Kai Zhang, Ting Liu, Junlei Bai, Qun Zhang. Journal of Rock Mechanics and Geotechnical Engineering, 2026, Issue 3, pp. 2104-2121 (18 pages)
The probabilistic stability evolution analysis of reservoir bank slopes is a crucial aspect of risk assessment, with core challenges including the consideration of deformation mechanisms and the accurate determination of mechanical parameters. In this study, a novel time-varying reliability analysis framework based on sequential Bayesian updating of mechanical parameters is proposed. The inverse parameters account for damage time-dependent behavior, incorporating water effects and a strain-driven softening-hardening process that depends on sliding states. The likelihood function is enhanced to simultaneously consider observation error, surrogate model prediction error, and model structural error, with the introduction of a physical penalty. Exploration of the high-dimensional parameter space is achieved via the Hamiltonian Monte Carlo (HMC) method and a physics knowledge-based time-dependent deformation surrogate model. The time-varying reliability analysis of the slope is performed using the multi-grid method. Taking a reservoir bank slope as a case study, the sequential updating of 12 mechanical parameters is conducted based on deformation time series from 16 monitoring points, thereby validating the proposed framework. The results indicate that the proposed framework effectively captures the posterior distribution of mechanical parameters, with the case slope remaining in a critically stable state after overall sliding and showing a high failure probability. Introducing model structural error can reduce parameter compensation, and a reasonable sequential updating step size can improve inversion accuracy.
Keywords: sequential Bayesian updating; probabilistic back analysis; time-varying reliability; reservoir bank slope; model structural error; surrogate model prediction error; Markov chain Monte Carlo (MCMC) method; multi-grid method
10. Interdisciplinary integration and development trends of intelligent diagnosis in traditional Chinese medicine: a topic evolution analysis
Authors: Chenggong Xie, Keying Huang, Zhengquan Du, Xinyi Huang, Bin Wang. Digital Chinese Medicine, 2026, Issue 1, pp. 43-56 (14 pages)
Objective: To systematically characterize the developmental trajectory and interdisciplinary integration of intelligent diagnosis in traditional Chinese medicine (TCM) through quantitative topic evolution analysis, we addressed the fragmentation of existing research and clarified the long-term research structure and evolutionary patterns of the field. Methods: A topic evolution analysis was performed on Chinese-language literature pertaining to intelligent diagnosis in TCM. Publications were retrieved from the China National Knowledge Infrastructure (CNKI), Wanfang Data, and the China Science and Technology Journal Database (VIP), covering the period from database inception to July 3, 2025. A hybrid segmentation approach, based on cumulative publication growth trends and inflection point detection, was applied to divide the research timeline into distinct stages. Subsequently, the latent Dirichlet allocation (LDA) model was used to extract research topics, followed by alignment and evolutionary analysis of topics across different stages. Results: A total of 3919 publications published between 2003 and 2025 were included, and the research trajectory was divided into five stages based on data-driven breakpoint detection. The field exhibited a clear evolutionary shift from early rule-based systems and tongue-pulse image and signal analysis (2006-2010), to machine-learning-based syndrome and prescription modeling (2011-2015), followed by deep-learning-driven pattern recognition and formula association (2016-2020). Since 2021, research has increasingly emphasized knowledge-graph construction, multimodal integration, and intelligent clinical decision-support systems, with recent studies (2024-2025) showing the emergence of large language models and agent-based diagnostic frameworks. Topic evolution analysis further revealed sustained cross-stage continuity in syndrome modeling and prescription association analysis, alongside the progressive consolidation of integrated intelligent diagnostic platforms. Conclusion: By identifying key technological transitions and persistent core research themes, our findings offer a structured reference framework for the design of intelligent diagnostic systems, the construction of knowledge-driven clinical decision-support tools, and the alignment of AI models with TCM diagnostic logic. Importantly, the stage-based evolutionary insights derived from this analysis can inform future methodological choices, improve model interpretability and clinical applicability, and support the translation of intelligent TCM diagnosis from experimental research to real-world clinical practice.
Keywords: traditional Chinese medicine diagnosis; artificial intelligence; interdisciplinary integration; research stage identification; topic evolution analysis; latent Dirichlet allocation model
11. Link-16 anti-jamming performance evaluation based on grey relational analysis and cloud model (Cited by: 1)
Authors: NING Xiaoyan, WANG Ying, WANG Zhenduo, SUN Zhiguo. Journal of Systems Engineering and Electronics, 2025, Issue 1, pp. 62-72 (11 pages)
Anti-jamming performance evaluation has recently received significant attention. For Link-16, anti-jamming performance evaluation and the selection of optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, combining grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of establishing the anti-jamming performance evaluation indicator system of Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Secondly, a qualitative-quantitative concept transformation model, i.e., the cloud model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16, and assesses the best anti-jamming technology. Finally, simulation results prove that the proposed evaluation model achieves feasible and practical evaluation, which opens up a novel way for research on anti-jamming performance evaluation of Link-16.
Keywords: Link-16; anti-jamming; grey relational analysis (GRA); cloud model; combination weights
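Deng's grey relational grade at the core of GRA takes only a few lines: each candidate scheme's indicator sequence is compared against an ideal reference. A sketch with invented indicator values and equal indicator weights (the paper instead combines AHP and entropy weights):

```python
import numpy as np

def grey_relational_grades(ref, alternatives, rho=0.5):
    """Grey relational grade of each alternative sequence against the
    reference sequence (Deng's GRA with resolution coefficient rho,
    equal indicator weights)."""
    diff = np.abs(alternatives - ref)            # absolute differences
    d_min, d_max = diff.min(), diff.max()
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)
    return coeff.mean(axis=1)

# Hypothetical normalized anti-jamming indicator values (rows: candidate
# anti-jamming schemes, columns: indicators); reference = ideal (all 1).
ref = np.ones(4)
alts = np.array([
    [0.9, 0.8, 0.95, 0.85],
    [0.6, 0.7, 0.65, 0.60],
    [0.8, 0.9, 0.70, 0.75],
])
grades = grey_relational_grades(ref, alts)
print(grades.round(3), "best scheme:", int(np.argmax(grades)))
```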
12. Model tests and numerical analysis of emergency treatment of cohesionless soil landslide with quick-setting polyurethane (Cited by: 1)
Authors: ZHANG Zhichao, TANG Xuefeng, HUANG Rufa, CAI Zhenjie, GAO Anhua. Journal of Mountain Science, 2025, Issue 1, pp. 110-121 (12 pages)
Shotcrete is one of the common solutions for shallow sliding. It works by forming a protective layer with high strength and cementing the loose soil particles on the slope surface to prevent shallow sliding. However, the solidification time of conventional cement paste is long when shotcrete is used to treat cohesionless soil landslides. The idea of reinforcing the slope with polyurethane-solidified soil (i.e., a mixture of polyurethane and sand) was therefore proposed. Model tests and finite element analysis were carried out to study the effectiveness of the proposed new method for the emergency treatment of cohesionless soil landslides. Surcharge loading on the crest of the slope was applied step by step until a landslide was triggered, so as to test and compare the stability and bearing capacity of slope models under different conditions. The simulated slope displacements were relatively close to the measured results, and the simulated slope deformation characteristics were in good agreement with the observed phenomena, which verifies the accuracy of the numerical method. Under surcharge loading on the crest, the unreinforced slope slid when the loading exceeded 30 kPa, presenting a failure mode of local instability and collapse at the shallow layer of the slope top. The reinforced slope remained stable even when the surcharge loading reached 48 kPa, and its displacement was reduced by more than 95%. Overall, this study verifies the effectiveness of polyurethane in the emergency treatment of cohesionless soil landslides, a method that should have broad application prospects in the field of geological disasters concerning people's safety.
Keywords: cohesionless soil landslide; polyurethane; emergency treatment; reinforcement effect; model test; finite element analysis
13. Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 307-325 (19 pages)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same discount of trainable parameters, reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: large-scale language model; parameter-efficient fine-tuning; parameter quantization; key variables; trainable parameters; experimental analysis
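The adapter-rank findings are easier to follow with the LoRA mechanics in view: a frozen (possibly quantized) weight W is augmented with a trainable low-rank update BA, so the trainable parameter count scales with the rank r rather than with the full weight size. A minimal numpy sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 8                   # adapter rank r << d

W = rng.normal(size=(d_out, d_in))           # frozen (quantized) base weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init

def forward(x, alpha=16.0):
    """Adapted layer: y = W x + (alpha / r) * B (A x)."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero the adapter starts as an exact no-op.
print(np.allclose(forward(x), W @ x))
# Trainable parameters scale with r, not with d_in * d_out.
print(r * (d_in + d_out), "adapter params vs", d_in * d_out, "full")
```

Shrinking r shrinks the adapter's capacity; the study's point is that how much capacity a layer can lose without hurting accuracy depends mainly on the layer's type and size.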
14. Gene Expression Data Analysis Based on Mixed Effects Model
Authors: Yuanbo Dai. Journal of Computer and Communications, 2025, Issue 2, pp. 223-235 (13 pages)
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge it currently faces is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. In terms of data selection, 1176 genes from a mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions, pneumococcal infection and no infection, and constructing a mixed-effects model. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed, with GSEA used to biologically validate the preliminary results. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
Keywords: mixed effects model; gene expression data analysis; gene analysis; gene chip
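The permutation-test step mentioned above can be sketched directly: repeatedly relabel the infected and uninfected samples and count how often a random split yields a mean-expression gap at least as large as the observed one. A toy version for a single gene with synthetic values (the study's data come from the 1176-gene mouse array, not from this simulation):

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference in means: how often
    does a random relabeling produce a gap at least as large as observed?"""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        gap = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += gap >= observed
    return (count + 1) / (n_perm + 1)

# Hypothetical log-expression of one gene in infected vs. uninfected mice.
rng = np.random.default_rng(1)
infected = rng.normal(2.5, 0.5, 10)
control = rng.normal(1.0, 0.5, 10)
print(round(permutation_pvalue(infected, control), 4))
```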
A Study on the Postgraduate Quality Evaluation Model and Its Recognition Analysis
15
作者 Peng Xu Xinyuan Liu Yuzhu Hu 《Journal of Contemporary Educational Research》 2025年第7期158-168,共11页
With the rapid development of generative artificial intelligence technologies,represented by large language models,university-level computer science education is undergoing a critical transition-from knowledge-based i... With the rapid development of generative artificial intelligence technologies,represented by large language models,university-level computer science education is undergoing a critical transition-from knowledge-based instruction to competency-oriented teaching.A postgraduate student competency evaluation model can serve as a framework to organize and guide both teaching and research activities at the postgraduate level.A number of relevant research efforts have already been conducted in this area.Graduate education plays a vital role not only as a continuation and enhancement of undergraduate education but also as essential preparation for future research endeavors.An analysis of the acceptance of competency evaluation models refers to the assessment of how various stakeholders perceive the importance of different components within the model.Investigating the degree of acceptance among diverse groups-such as current undergraduate students,current postgraduate students,graduates with less than three years of work experience,and those with more than three years of work experience-can offer valuable insights for improving and optimizing postgraduate education and training practices. 展开更多
Keywords: postgraduate; quality evaluation model; importance; recognition analysis
MITRE ATT&CK-Driven Threat Analysis for Edge-IoT Environment and a Quantitative Risk Scoring Model
16
Authors: Tae-hyeon Yun, Moohong Min. Computer Modeling in Engineering & Sciences, 2025, Issue 11, pp. 2707-2731 (25 pages)
The dynamic, heterogeneous nature of Edge computing in the Internet of Things (Edge-IoT) and Industrial IoT (IIoT) networks brings unique and evolving cybersecurity challenges. This study maps cyber threats in Edge-IoT/IIoT environments to MITRE's Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework and introduces a lightweight, data-driven scoring model that enables rapid identification and prioritization of attacks. Inspired by the Factor Analysis of Information Risk model, our proposed scoring model integrates four key metrics: Common Vulnerability Scoring System (CVSS)-based severity scoring, Cyber Kill Chain-based difficulty estimation, Deep Neural Network-driven detection scoring, and frequency analysis based on dataset prevalence. By aggregating these indicators, the model generates comprehensive risk profiles, facilitating actionable prioritization of threats. The robustness and stability of the scoring model are validated through non-parametric correlation analysis using Spearman's and Kendall's rank correlation coefficients, demonstrating consistent performance across diverse scenarios. The approach culminates in a prioritized attack ranking that provides actionable guidance for risk mitigation and resource allocation in Edge-IoT/IIoT security operations. By leveraging real-world data to align MITRE ATT&CK techniques with CVSS metrics, the framework offers a standardized and practically applicable solution for consistent threat assessment in operational settings. The proposed lightweight scoring model delivers rapid and reliable results under dynamic cyber conditions, facilitating timely identification of attack scenarios and prioritization of response strategies. Our systematic integration of established taxonomies with data-driven indicators strengthens practical risk management and supports strategic planning in next-generation IoT deployments. Ultimately, this work advances adaptive threat modeling for Edge/IIoT ecosystems and establishes a robust foundation for evidence-based prioritization in emerging cyber-physical infrastructures.
Keywords: MITRE ATT&CK; edge environment; IoT; threat analysis; quantitative analysis; deep neural network; CVSS; risk assessment; scoring model
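The composite-scoring idea the abstract describes, four normalized indicators aggregated into one risk score with a Spearman rank check of ranking stability, can be illustrated with a small sketch. The equal weights, the perturbed weights, and all indicator values below are assumptions; the paper's actual weighting scheme and data are not given in the abstract.

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free samples."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def risk_score(severity, difficulty, detection, frequency,
               weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted sum of four indicators, each normalized to [0, 1]."""
    return sum(w * m for w, m in
               zip(weights, (severity, difficulty, detection, frequency)))

# Hypothetical ATT&CK techniques with made-up normalized indicators.
attacks = {
    "T1110 Brute Force": (0.7, 0.4, 0.6, 0.9),
    "T1499 Endpoint DoS": (0.8, 0.5, 0.5, 0.6),
    "T1557 Adversary-in-the-Middle": (0.9, 0.8, 0.3, 0.2),
}
scores = {name: risk_score(*m) for name, m in attacks.items()}
ranking = sorted(scores, key=scores.get, reverse=True)

# Stability check: re-score under perturbed weights, compare rankings.
alt_scores = {name: risk_score(*m, weights=(0.4, 0.2, 0.2, 0.2))
              for name, m in attacks.items()}
names = list(attacks)
rho = spearman_rho([scores[n] for n in names],
                   [alt_scores[n] for n in names])
print(ranking, round(rho, 3))
```

A rho near 1 means the prioritized attack ranking is insensitive to the weight perturbation, which is the kind of stability the abstract's non-parametric correlation analysis checks for.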
Large language models in ophthalmology: a bibliometric analysis
17
Authors: Ruyue Shen, Eunice See Heng Lee, Xiaoyan Hu, Clement C. Tham, Carol Y. Cheung. Eye Science, 2025, Issue 3, pp. 222-237 (16 pages)
Background: With the rapid development of artificial intelligence (AI), large language models (LLMs) have emerged as a potent tool for invigorating ophthalmology across clinical, educational, and research fields, and their accuracy and reliability have undergone testing. This bibliometric analysis aims to provide an overview of research on LLMs in ophthalmology from both thematic and geographical perspectives. Methods: All existing and highly cited LLM-related ophthalmology research papers published in English up to 24th April 2025 were sourced from Scopus, PubMed, and Web of Science. The characteristics of these publications, including publication output, authors, journals, countries, institutions, citations, and research domains, were analyzed using Biblioshiny and VOSviewer software. Results: A total of 277 articles from 1,459 authors and 89 journals were included in this study. Although relevant publications began to appear in 2019, there was a significant increase starting from 2023. He M and Shi D are the most prolific authors, while Investigative Ophthalmology & Visual Science stands out as the most prominent journal. Most of the top-publishing countries are high-income economies, with the USA taking the lead, and the University of California is the leading institution. VOSviewer identified 5 clusters in the keyword co-occurrence analysis, indicating that current research focuses on the clinical applications of LLMs, particularly in diagnosis and patient education. Conclusions: While LLMs have demonstrated effectiveness in retaining knowledge, their accuracy in image-based diagnosis remains limited. Therefore, future research should investigate fine-tuning strategies and domain-specific adaptations to close this gap. Although research on the applications of LLMs in ophthalmology is still in its early stages, it holds significant potential for advancing the field.
Keywords: artificial intelligence; large language models; ophthalmology; bibliometric analysis
Empowering Sentiment Analysis in Resource-Constrained Environments: Leveraging Lightweight Pre-trained Models for Optimal Performance
18
Authors: V. Prema, V. Elavazhahan. Journal of Harbin Institute of Technology (New Series), 2025, Issue 1, pp. 76-84 (9 pages)
Sentiment analysis, a cornerstone of natural language processing, has witnessed remarkable advancements driven by deep learning models, which have demonstrated impressive accuracy in discerning sentiment from text across various domains. However, the deployment of such models in resource-constrained environments presents a unique set of challenges that require innovative solutions. Resource-constrained environments encompass scenarios where computing resources, memory, and energy availability are restricted. To empower sentiment analysis in resource-constrained environments, we address this crucial need by leveraging lightweight pre-trained models. These models, derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT, offer a promising solution to the resource limitations imposed by these environments. By distilling the knowledge from larger models into smaller ones and employing various optimization techniques, these lightweight models aim to strike a balance between performance and resource efficiency. This paper endeavors to explore the performance of multiple lightweight pre-trained models in sentiment analysis tasks specific to such environments and to provide insights into their viability for practical deployment.
Keywords: sentiment analysis; lightweight models; resource-constrained environment; pre-trained models
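The abstract above mentions distilling knowledge from larger models into smaller ones. The standard distillation objective can be sketched without any ML framework: a small "student" is trained to match the temperature-softened output distribution of a large "teacher". The temperature value and the example logits below are assumptions for illustration only.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so the gradient magnitude stays comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A student that matches the teacher incurs near-zero loss; a student
# that disagrees is penalized.
teacher = [3.0, 0.5, -1.0]   # e.g. logits for positive/neutral/negative
aligned = [2.9, 0.6, -1.1]
wrong = [-1.0, 0.5, 3.0]
print(distillation_loss(teacher, aligned), distillation_loss(teacher, wrong))
```

Minimizing this loss (usually combined with the ordinary hard-label loss) is what lets compact models such as DistilBERT or TinyBERT retain much of a larger model's sentiment-classification behavior.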
Subgroup Analysis of a Single-Index Threshold Penalty Quantile Regression Model Based on Variable Selection
19
Authors: QI Hui, XUE Yaxin. Wuhan University Journal of Natural Sciences, 2025, Issue 2, pp. 169-183 (15 pages)
In clinical research, subgroup analysis can help identify patient groups that respond better or worse to specific treatments, improving therapeutic effect and safety; it is of great significance in precision medicine. This article considers subgroup analysis methods for longitudinal data containing multiple covariates and biomarkers. We divide subgroups based on whether a linear combination of these biomarkers exceeds a predetermined threshold, and assess the heterogeneity of treatment effects across subgroups using the interaction between subgroups and exposure variables. Quantile regression is used to better characterize the global distribution of the response variable, and sparsity penalties are imposed to achieve variable selection of covariates and biomarkers. The effectiveness of our proposed methodology for both variable selection and parameter estimation is verified through random simulations. Finally, we demonstrate the application of this method by analyzing data from the PA.3 trial, further illustrating its practicality.
Keywords: longitudinal data; subgroup analysis; threshold model; quantile regression; variable selection
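The quantile regression mentioned in the abstract above rests on the check (pinball) loss: an asymmetric absolute error whose minimizer is the tau-quantile rather than the mean. The sketch below shows only this core loss on hypothetical data; the paper's single-index threshold structure and sparsity penalties are not modeled here.

```python
def pinball_loss(y, y_hat, tau):
    """Check loss: overshoot and undershoot are weighted asymmetrically,
    so the minimizer targets the tau-quantile of y."""
    residual = y - y_hat
    return tau * residual if residual >= 0 else (tau - 1) * residual

def empirical_quantile_fit(ys, tau, candidates):
    """The candidate minimizing average pinball loss approximates the
    empirical tau-quantile of the sample."""
    return min(candidates,
               key=lambda c: sum(pinball_loss(y, c, tau) for y in ys))

ys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
median_fit = empirical_quantile_fit(ys, 0.5, candidates=ys)
upper_fit = empirical_quantile_fit(ys, 0.9, candidates=ys)
print(median_fit, upper_fit)  # near the median and the 90th percentile
```

At tau = 0.5 the loss reduces to half the absolute error and recovers the median; larger tau values pull the fit toward the upper tail, which is how quantile regression characterizes the whole response distribution rather than its mean.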
Facilitating wide-band oscillation analysis in wind farms with a novel linearization analysis framework based on the average-value model
20
Authors: Qiufang Zhang, Yin Xu, Jinghan He. iEnergy, 2025, Issue 2, pp. 132-148 (17 pages)
Wide-band oscillations have become a significant issue limiting the development of wind power. Both large-signal and small-signal analyses require extensive model derivation. Moreover, the large number and high order of wind turbines have driven the development of simplified models, whose applicability remains controversial. In this paper, a wide-band oscillation analysis method based on the average-value model (AVM) is proposed for wind farms (WFs). A novel linearization analysis framework is developed, leveraging the continuous-time characteristics of the AVM and MATLAB/Simulink's built-in linearization tools. This significantly reduces modeling complexity and computational costs while maintaining model fidelity. Additionally, an object-based initial value estimation method for state variables is introduced, which, when combined with steady-state point-solving tools, greatly reduces the computational effort required for equilibrium point solving in batch linearization analysis. The proposed method is validated in both doubly fed induction generator (DFIG)-based and permanent magnet synchronous generator (PMSG)-based WFs. Furthermore, a comprehensive analysis is conducted for the first time to examine the impact of the machine-side system on the stability of the non-fully controlled PMSG-based WF.
Keywords: wide-band oscillation analysis; average-value model (AVM); doubly fed induction generator (DFIG); permanent magnet synchronous generator (PMSG); eigenvalue analysis; impedance analysis