Journal Literature
501,089 articles found
1. Application of an AI Earthquake Prediction Model in the Sichuan-Yunnan Region
Authors: 孟令媛, 胡峰, 臧阳, 司旭, 闫伟, 田雷, 赵小艳, 张致伟, 韩颜颜, 王月. 《地震研究》 (北大核心), 2026, No. 1, pp. 43-50
Targeting the scientific goals and key scientific questions of the China Seismic Experimental Site, an earthquake prediction model for the Sichuan-Yunnan region was constructed from regional earthquake catalogs and geophysical observation data, after partitioning the region and building a graph neural network. The model integrates about 30,000 earthquake catalog entries, three catalog-derived seismicity parameters, and data from 116 geophysical observation instruments. By combining traditional empirical prediction indicators with AI techniques, it yields a multi-source heterogeneous-data graph neural network prediction model tailored to the Sichuan-Yunnan region, supporting both short-term and medium-term earthquake prediction from different data sources. Application results show good monthly-scale prediction performance in regions CD2, CD8, and CD10, and some correspondence for annual-scale no-earthquake prediction.
Keywords: China Seismic Experimental Site; multi-source heterogeneous data; graph neural network; earthquake prediction model; Sichuan-Yunnan region
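The abstract describes a graph neural network over partitioned regions but does not disclose the architecture; as a minimal illustrative sketch (not the paper's model), one mean-aggregation message-passing step over hypothetical region nodes can look like this — the region names, adjacency, and feature values below are invented:

```python
# One round of mean-neighbor aggregation over region nodes:
# h'_v = (h_v + mean of neighbor features) / 2.
# Regions and scalar "seismicity" features are hypothetical.

def message_pass(features, edges):
    """One mean-aggregation message-passing step on an undirected graph."""
    neighbors = {v: [] for v in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    out = {}
    for v, h in features.items():
        ns = neighbors[v]
        if ns:
            m = sum(features[u] for u in ns) / len(ns)
            out[v] = (h + m) / 2
        else:
            out[v] = h  # isolated node keeps its feature
    return out

feats = {"CD2": 1.0, "CD8": 3.0, "CD10": 5.0}
edges = [("CD2", "CD8"), ("CD8", "CD10")]
print(message_pass(feats, edges))
```

Stacking several such steps (with learned weights instead of plain means) is what lets each region's representation absorb information from neighboring regions.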
2. Limb lymphedema: network pharmacology and molecular docking analysis of the mechanisms of core herbs in traditional Chinese medicine treatment
Authors: 宓宝来, 刘羽飞, 杨俏丽, 康砚澜, 袁梁, 曹建春. 《中国组织工程研究》 (北大核心), 2026, No. 18, pp. 4814-4824
Background: Limb lymphedema still lacks a specific therapy in modern medicine, and existing drugs have limited efficacy. Traditional Chinese medicine has deep experience in reducing swelling, and classical texts record rich formula experience, but divergent schools have produced scattered formulas with unclear mechanisms; in particular, a systematic review of the patterns of classical herbal formulas for limb lymphedema is lacking. Objective: To analyze the medication patterns of classical Chinese medical texts in treating limb lymphedema based on the 古今医案云平台 platform and Cytoscape, and to explore the mechanisms of the core herbs via network pharmacology and molecular docking. Methods: Anti-swelling herbal formulas for limb lymphedema recorded in the 博览医书 database up to 2024-05-01 were collected, and core herbs meeting the criteria were screened. Components and targets of the core herbs were obtained from the TCMSP database; disease targets were screened using GeneCards, TTD, and OMIM. Herb-component-target and protein-protein interaction networks were constructed; Gene Ontology and KEGG enrichment analyses were performed with Metascape, and results were verified by molecular docking. Results and conclusion: 223 formulas (containing 355 herbs) were included; compatibility and complex-network analysis identified the core herb combination 陈皮-茯苓-槟榔-白术-木香. Network pharmacology showed the core targets (e.g., TP53, SRC, AKT1) enriched in the PI3K-AKT, MAPK, HIF-1, and cancer-related pathways. Molecular docking verified strong binding of 3′,5,7-trihydroxy-4-methoxyflavone and cerevisterol to SRC and AKT1. The study suggests this core combination may treat limb lymphedema by regulating SRC, AKT1, and the above pathways.
Keywords: limb lymphedema; data mining; network pharmacology; molecular docking; 博览医书; 古今医案云平台; 陈皮; 茯苓
3. Distribution, sources, and transformation of nitrate in waters of an ion-adsorption rare earth mining area: a case study of the Zudong deposit, Jiangxi
Authors: 韦春伊, 余圣品, 白细民, 刘海燕, 王振, 葛勤, 陈功新, 周仲魁, 孙占学, 郭华明. 《地学前缘》 (北大核心), 2026, No. 1, pp. 121-134
Mining of ion-adsorption rare earth deposits causes severe nitrogen pollution of soil and water in mining areas, yet the distribution, migration and transformation, and pollution sources of nitrate (NO₃⁻-N) in waters affected by mine drainage remain poorly studied. Taking surface water and groundwater downstream of the Zudong ion-adsorption rare earth mine in southern Jiangxi as the study object, and combining hydrochemical analysis with multi-isotope techniques (δ¹⁸O-H₂O, δ¹⁵N-NO₃⁻, and δ¹⁸O-NO₃⁻), this study investigated the sources and transformation of NO₃⁻-N and used the MixSIAR model to quantify each source's contribution. Results show the waters are weakly acidic with low mineralization; surface water is mainly SO₄-Ca type, while 80% of groundwater is HCO₃-Ca type. Total nitrogen (TN), NO₃⁻-N, and ammonium (NH₄⁺-N) concentrations are markedly higher in surface water than in groundwater, indicating that its nitrogen pollution is closely tied to mining. Spatially, nitrogen sources are linked to the high-ammonium brine used in mining, and mining activities strongly affect surface-water nitrogen. Land-use patterns indicate distinct sources: surface-water nitrogen derives mainly from mine drainage in forested land, whereas groundwater nitrogen comes mainly from agricultural activity on cultivated land. Isotope compositions and fractionation-factor analysis show nitrification dominates in both surface water and groundwater. End-member analysis with δ¹⁵N-NO₃⁻, δ¹⁸O-NO₃⁻, and their reconstructed values shows that NO₃⁻-N at SW1 derives mainly from ammonium nitrogen, i.e., the high-NH₄⁺-N water discharged by mining; SW2 is jointly affected by mining and agriculture, with sources including ammonium nitrogen, soil nitrogen, and sewage and manure; groundwater derives mainly from soil nitrogen and sewage and manure. MixSIAR quantification shows mine drainage contributes on average over 50% of NO₃⁻-N in surface water near the mining area (SW1), of which 65%-94% is the direct contribution of primary NO₃⁻-N in mine drainage; farther from the mining area (SW2) the mine-drainage contribution declines to about 30%. The UI90 uncertainty analysis shows atmospheric precipitation has the most stable contribution, while mine drainage, manure and sewage, and soil nitrogen vary considerably. This study reveals the formation mechanism of nitrate pollution in ion-adsorption rare earth mining areas and provides a scientific basis for precise control of nitrogen pollution in such areas.
Keywords: rare earth mine; nitrogen pollution; nitrification; Bayesian isotope mixing model; differentiation and enrichment
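MixSIAR itself is a Bayesian mixing model (an R package); the deterministic mass balance underlying this kind of source apportionment can be sketched for the simplest two-endmember case — the δ¹⁵N values below are hypothetical, not the paper's measurements:

```python
# Linear two-endmember mixing: the mixture's isotope signature is a
# weighted average of the two source signatures, so the source-1
# fraction follows from a simple rearrangement.

def two_source_fraction(d_mix, d_src1, d_src2):
    """Fraction of source 1 under a linear two-endmember mixing model."""
    return (d_mix - d_src2) / (d_src1 - d_src2)

# Hypothetical delta-15N values (per mil): mine drainage vs. soil nitrogen.
f_mine = two_source_fraction(d_mix=8.0, d_src1=12.0, d_src2=4.0)
print(f_mine)  # 0.5
```

MixSIAR generalizes this to many sources and isotopes, with priors and fractionation terms, which is why it also yields the uncertainty intervals (UI90) cited in the abstract.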
4. A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
Authors: Thierry Mugenzi, Cahit Perkgoz. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1985-2005
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where missing data often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms (Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship) under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the obtained results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missing mechanisms
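The abstract names the three loss components but not their formulas; a hedged sketch, assuming squared-error forms, a variance-matching stability term, and invented weights `lam_noise` and `lam_var` (none of these specifics come from the paper):

```python
# Sketch of a composite imputation loss: masked MSE on missing entries,
# a noise-consistency term, and a variance term keeping the reconstruction's
# spread close to the data's. Term forms and weights are assumptions.

def composite_loss(pred, target, mask, noisy_pred, lam_noise=0.1, lam_var=0.01):
    """pred/target: reconstructions and ground truth; mask[i]=1 marks a
    missing entry; noisy_pred: reconstruction from a noise-corrupted input."""
    n_missing = max(1, sum(mask))
    # (i) masked MSE: error counted only on originally-missing entries
    mse = sum(m * (p - t) ** 2 for p, t, m in zip(pred, target, mask)) / n_missing
    # (ii) noise-aware term: reconstruction should not change under input noise
    noise = sum((p - q) ** 2 for p, q in zip(pred, noisy_pred)) / len(pred)

    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    # (iii) variance term: keep reconstruction variance near the data's
    var_term = (var(pred) - var(target)) ** 2
    return mse + lam_noise * noise + lam_var * var_term
```

In training this would be evaluated per mini-batch, with the mask taken from the artificially hidden entries so the ground truth is known.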
5. Rapid-Update Assimilation of All-Sky FY-4A/AGRI Radiances for the Analysis and Prediction of Severe Convective Weather
Authors: Peiwen ZHONG, Yuanbing WANG, Yaodeng CHEN, Xin LI. 《Advances in Atmospheric Sciences》, 2026, No. 1, pp. 213-232
High spatiotemporal resolution infrared radiances from FY-4A/AGRI (Advanced Geostationary Radiation Imager) can provide crucial information for rapidly developing severe convective weather. This study established a symmetric observation error model that differentiates between land and sea for FY-4A/AGRI all-sky assimilation, developed an all-sky assimilation scheme for FY-4A/AGRI based on hydrometeor control variables, and investigated the impacts of all-sky FY-4A/AGRI water vapor channels at different altitudes and of rapid-update assimilation at different frequencies on the assimilation and forecasting of a severe convective weather event. Results show that simultaneous assimilation of two water vapor channels can enhance precipitation forecasts compared to single-channel assimilation, mainly owing to a more accurate analysis of water vapor and hydrometeor information. Experiments with different assimilation frequencies demonstrate that hourly assimilation incorporates the high-frequency information from AGRI while reducing the spurious oscillations caused by excessively high-frequency assimilation. Hourly assimilation also reduces the incoordination among thermal, dynamical, and water vapor conditions caused by excessively fast or slow assimilation, thus improving forecast accuracy compared to the other frequencies.
Keywords: data assimilation; FY-4A; AGRI; all-sky; rapid-update
6. DriftXMiner: A Resilient Process Intelligence Approach for Safe and Transparent Detection of Incremental Concept Drift in Process Mining
Authors: Puneetha B.H, Manoj Kumar M.V, Prashanth B.S., Piyush Kumar Pareek. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1086-1118
Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
Keywords: process mining; concept drift; gradual drift; incremental drift; clustering; ensemble techniques; process model; event log
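The Cumulative Drift Scanner is described only at a high level; a standard one-sided CUSUM, which likewise accumulates directional statistical deviations until a threshold is crossed, can serve as an illustrative sketch — the slack `k`, threshold `h`, and the stream values are invented, not the paper's:

```python
# One-sided CUSUM drift scanner: accumulate positive deviations of a
# monitored statistic from its baseline mean; flag drift when the
# accumulated sum exceeds a threshold, then reset.

def cusum_drift(stream, mean0, k=0.5, h=4.0):
    """Return indices where upward drift is flagged.
    mean0: baseline mean; k: slack tolerating noise; h: alarm threshold."""
    s, alarms = 0.0, []
    for i, x in enumerate(stream):
        s = max(0.0, s + x - mean0 - k)  # accumulate only directional excess
        if s > h:
            alarms.append(i)
            s = 0.0  # restart scanning after an alarm
    return alarms

stream = [0.0] * 5 + [2.0] * 5  # statistic's mean shifts from 0 to 2
print(cusum_drift(stream, mean0=0.0))  # [7]
```

The gradual accumulation is what makes CUSUM-style scanners suited to incremental drift: no single observation is anomalous, but the sustained directional deviation is.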
7. P4LoF: Scheduling Loop-Free Multi-Flow Updates in Programmable Networks
Authors: Jiqiang Xia, Qi Zhan, Le Tian, Yuxiang Hu, Jianhua Peng. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1236-1254
The rapid growth of distributed data-centric applications and AI workloads increases demand for low-latency, high-throughput communication, necessitating frequent and flexible updates to network routing configurations. However, maintaining consistent forwarding states during these updates is challenging, particularly when rerouting multiple flows simultaneously. Existing approaches pay little attention to multi-flow updates, where improper update sequences across data plane nodes may construct deadlock dependencies. Moreover, these methods typically involve excessive control-data plane interactions, incurring significant resource overhead and performance degradation. This paper presents P4LoF, an efficient loop-free update approach that enables the controller to reroute multiple flows through minimal interactions. P4LoF first utilizes a greedy-based algorithm to generate the shortest update dependency chain for each single-flow update. These chains are then dynamically merged into a dependency graph and resolved as a Shortest Common Super-sequence (SCS) problem to produce the update sequence for the multi-flow update. To address deadlock dependencies in multi-flow updates, P4LoF builds a deadlock-fix forwarding model that leverages the flexible packet processing capabilities of the programmable data plane. Experimental results show that P4LoF reduces control-data plane interactions by at least 32.6% with modest overhead, while effectively guaranteeing loop-free consistency.
Keywords: network management; update consistency; programmable data plane; P4
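The SCS reduction can be illustrated on two single-flow update chains. The paper merges whole dependency graphs; this sketch shows only the two-chain dynamic-programming core, with switch IDs compressed to single characters (the chains below are invented):

```python
# Shortest common supersequence of two update-dependency chains:
# a minimal node-update order that respects both chains' internal order.
from functools import lru_cache

def scs(a, b):
    """Return one shortest common supersequence of strings a and b."""
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a):
            return b[j:]
        if j == len(b):
            return a[i:]
        if a[i] == b[j]:  # shared node: schedule it once for both chains
            return a[i] + solve(i + 1, j + 1)
        x = a[i] + solve(i + 1, j)
        y = b[j] + solve(i, j + 1)
        return x if len(x) <= len(y) else y
    return solve(0, 0)

# Flow 1 must update switches a->b->c; flow 2 must update a->c.
print(scs("abc", "ac"))  # "abc": one pass of updates serves both flows
```

When the two chains conflict (e.g. "ab" vs. "ba"), no order of length 2 satisfies both, which is exactly the deadlock-dependency situation the paper's deadlock-fix forwarding model handles in the data plane.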
8. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1804-1818
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% for UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting the attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalance data
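SMOTE, as used in the pipeline above, synthesizes minority-class points by interpolating between existing ones; a simplified sketch (a random partner is used instead of a true k-nearest-neighbor search, so this is an illustration of the interpolation step, not the full algorithm):

```python
# Simplified SMOTE step: a synthetic minority sample is a random convex
# combination of one minority point and another point from the same class.
import random

def smote_sample(minority, rng=None):
    """Return one synthetic point interpolated between two minority samples."""
    rng = rng or random.Random(0)
    a = rng.choice(minority)
    b = rng.choice(minority)
    t = rng.random()  # interpolation factor in [0, 1)
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

minority = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.2]]
print(smote_sample(minority, rng=random.Random(1)))
```

Because every synthetic point is a coordinate-wise convex combination of real minority samples, it always stays inside the minority class's bounding box, which is why SMOTE augments rare attack classes without inventing out-of-range feature values.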
9. Enhanced Capacity Reversible Data Hiding Based on Pixel Value Ordering in Triple Stego Images
Authors: Kim Sao Nguyen, Ngoc Dung Bui. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1571-1586
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used with multiple stego images provides good image quality but often results in low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
Keywords: RDH; reversible data hiding; PVO; RDH based on three stego images
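The paper's triple-stego, multi-bit scheme is more involved; the classic single-bit PVO step it builds on can be sketched as follows (block values and the embedded bit are illustrative):

```python
# Classic single-bit PVO embedding on one block: sort the block, look at
# the gap between the largest and second-largest pixels, and either embed
# a bit (gap == 1) or shift (gap > 1) so decoding stays unambiguous.

def pvo_embed_block(block, bit):
    """Embed at most one bit into the block's maximum pixel; returns the
    sorted stego block."""
    b = sorted(block)
    d = b[-1] - b[-2]
    if d == 1:
        b[-1] += bit   # embeddable: max absorbs the secret bit
    elif d > 1:
        b[-1] += 1     # not embeddable: shift to keep reversibility
    return b

print(pvo_embed_block([10, 12, 11, 13], 1))  # [10, 11, 12, 14]
```

The receiver re-sorts the block, reads the new gap (2 means bit 1 was embedded, 1 means bit 0), and subtracts to restore the cover exactly, which is the reversibility property RDH requires. The same logic is mirrored on the minimum side in the paper's scheme.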
10. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. 《Computers, Materials & Continua》, 2026, No. 1, pp. 247-274
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three Deep Learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (Encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
11. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa SAlaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. 《Computers, Materials & Continua》, 2026, No. 1, pp. 2274-2301
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES by combining text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine learning approach, selecting top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, where training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, emphasizing an effective trade-off between performance and computational costs. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
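Experiment 1's non-ML baseline, selecting the top-N features most correlated with essay scores, can be sketched directly; the feature names and values below are invented for illustration:

```python
# Rank candidate similarity features by the absolute Pearson correlation
# of their values with the gold essay scores, then keep the top n.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def top_n_features(features, scores, n):
    """features: name -> per-essay values; scores: gold scores per essay."""
    ranked = sorted(features, key=lambda f: -abs(pearson(features[f], scores)))
    return ranked[:n]

# Hypothetical features: TF-IDF cosine similarity and a length ratio.
feats = {"tfidf_cos": [0.1, 0.4, 0.9], "len_ratio": [0.9, 0.2, 0.3]}
print(top_n_features(feats, [1, 2, 3], 1))  # ['tfidf_cos']
```

With the top-N features fixed, the baseline score can be a simple function of them (e.g. the score of the most similar reference essay), which is what makes this a non-machine-learning reference point.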
12. Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. 《Computers, Materials & Continua》, 2026, No. 1, pp. 389-411
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge in the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
13. UGEA-LMD: A Continuous-Time Dynamic Graph Representation Enhancement Framework for Lateral Movement Detection
Authors: Jizhao Liu, Yuanyuan Shao, Shuqin Zhang, Fangfang Shan, Jun Li. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1924-1943
Lateral movement represents the most covert and critical phase of Advanced Persistent Threats (APTs), and its detection still faces two primary challenges: sample scarcity and "cold start" of new entities. To address these challenges, we propose an Uncertainty-Driven Graph Embedding-Enhanced Lateral Movement Detection framework (UGEA-LMD). First, the framework employs event-level incremental encoding on a continuous-time graph to capture fine-grained behavioral evolution, enabling newly appearing nodes to retain temporal contextual awareness even in the absence of historical interactions and thereby fundamentally mitigating the cold-start problem. Second, in the embedding space, we model the dependency structure among feature dimensions using a Gaussian copula to quantify the uncertainty distribution, and generate augmented samples with consistent structural and semantic properties through adaptive sampling, thus expanding the representation space of sparse samples and enhancing the model's generalization under sparse-sample conditions. Unlike static graph methods that cannot model temporal dependencies or data augmentation techniques that depend on predefined structures, UGEA-LMD offers both superior temporal-dynamic modeling and structural generalization. Experimental results on the large-scale LANL log dataset demonstrate that, under the transductive setting, UGEA-LMD achieves an AUC of 0.9254; even when 10% of nodes or edges are withheld during training, UGEA-LMD significantly outperforms baseline methods on metrics such as recall and AUC, confirming its robustness and generalization capability in sparse-sample and cold-start scenarios.
Keywords: Advanced Persistent Threats (APTs); lateral movement detection; continuous-time dynamic graph; data enhancement
14. A Systematic Interpretation of Information Processors' Security Protection Obligations
Author: 苏成慧. 《河北法学》 (北大核心), 2026, No. 1, pp. 120-138
The security protection obligation is, in essence, an obligation to prevent danger and risk; the security interests it protects include national security, public security, and individual security. The value that law pursues in risk prevention provides the legitimacy basis for information processors' assumption of security protection obligations. Under digital technology conditions, the scope of "information processors" is not limited to institutional entities but should also include natural persons. Information processors' security protection obligations comprise both positive and negative obligations, with specific content embodied in legal norms of different fields, natures, and levels, expressed mainly through mandatory norms. The systematic elaboration of these obligations should take the basic rights provided in the Constitution as its starting point and establish specific conduct norms within a public-law system dominated by mandatory norms. The relevant referring and linking clauses in the Civil Code serve to connect security protection norms across the public-law and private-law systems, so that these norms, as protective norms, can produce the normative effect of "presumption of negligence from illegality" within the private-law remedy system when personal information rights are harmed.
Keywords: data security protection; information processors; data security protection obligation; data security risk; rule of law for data security
15. DYRK2: a new therapeutic target for rheumatoid arthritis comorbid with osteoporosis revealed from East Asian and European populations
Authors: 吴治林, 何秦, 王枰稀, 石现, 袁松, 张骏, 王浩. 《中国组织工程研究》 (北大核心), 2026, No. 6, pp. 1569-1579
Background: Studies indicate a positive association between rheumatoid arthritis and osteoporosis, but the causal relationship and underlying mechanisms remain unconfirmed. With the convergence of computer science and the life sciences, Mendelian randomization and bioinformatic analyses based on genome-wide association study (GWAS) and transcriptome sequencing data can assess causality between the two diseases, explore mechanisms, and mine therapeutic targets, supporting precision treatment of the comorbidity. Objective: To analyze the causal relationship between rheumatoid arthritis and osteoporosis using two-sample Mendelian randomization, and to mine potential comorbidity targets and targeted drugs via summary-data-based Mendelian randomization (SMR) and bioinformatic analysis. Methods: (1) GWAS data for rheumatoid arthritis, osteoporosis, and cis-expression quantitative trait loci were downloaded from the GWAS Catalog, IEU Open GWAS, FinnGen, and eQTLGen databases (Asian and European populations) for two-sample Mendelian randomization and SMR. (2) Transcriptome sequencing data for rheumatoid arthritis (GSE93272 and GSE15573) were downloaded from the GEO database for bioinformatic analysis. (3) Inverse-variance weighting served as the primary method for forward and reverse two-sample Mendelian randomization, corroborated by MR-Egger, simple mode, weighted median, and weighted mode methods. (4) SMR identified genes associated with rheumatoid arthritis and osteoporosis; intersection analysis yielded comorbidity targets, whose biological functions were validated by bioinformatic analysis and cell experiments. (5) A rheumatoid arthritis risk nomogram was constructed based on DYRK2 and validated by receiver operating characteristic, calibration, and decision curves; candidate drugs were mined from the Enrichr database and assessed by molecular docking. Results and conclusion: (1) Forward Mendelian randomization showed a significant positive causal relationship between rheumatoid arthritis and osteoporosis in all analyses except GCST90044540 and GCST90086118, which were not statistically significant. (2) Reverse Mendelian randomization showed no significant causal relationship from osteoporosis to rheumatoid arthritis. (3) SMR identified 412 and 344 genes positively associated, and 421 and 347 genes negatively associated, with rheumatoid arthritis and osteoporosis, respectively; intersection analysis yielded 26 comorbidity genes. Among them, DYRK2 is a potential therapeutic target, and subsequent bioinformatic analysis and cell experiments confirmed that DYRK2 plays an important role in the progression of both diseases. (4) The constructed nomogram showed excellent predictive performance, and four potential DYRK2-targeting drugs (undecanoic acid, metyrapone, JNJ-38877605, and ACA) were identified, with molecular docking confirming reliable targeting ability. (5) In summary, GWAS data from Asian and European populations demonstrate a genetic-level causal relationship between rheumatoid arthritis and osteoporosis; DYRK2 is a potential therapeutic target, and four small molecules are potential targeted drugs.
Keywords: rheumatoid arthritis; osteoporosis; Mendelian randomization; summary-data-based Mendelian randomization; comorbidity genes; DYRK2
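The inverse-variance weighted estimator used as the primary Mendelian randomization method combines per-SNP ratio estimates; a sketch with hypothetical summary statistics (three invented instruments, not values from the study):

```python
# IVW Mendelian randomization from summary statistics: each SNP gives a
# ratio estimate beta_out/beta_exp; estimates are combined with weights
# beta_exp^2 / se_out^2 (inverse variance of the ratio, first order).

def ivw_estimate(beta_exp, beta_out, se_out):
    """beta_exp: SNP-exposure effects; beta_out: SNP-outcome effects;
    se_out: standard errors of the SNP-outcome effects."""
    num = den = 0.0
    for bx, by, se in zip(beta_exp, beta_out, se_out):
        w = (bx / se) ** 2          # inverse-variance weight
        num += w * (by / bx)        # weighted ratio estimate
        den += w
    return num / den

# Hypothetical instruments where every SNP implies the same causal effect 0.5.
print(ivw_estimate([0.1, 0.2, 0.4], [0.05, 0.10, 0.20], [0.01, 0.01, 0.02]))
```

The companion methods listed in the abstract (MR-Egger, weighted median, mode-based estimators) differ mainly in how they weight or trim these same per-SNP ratios, which is why agreement across them is read as robustness.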
16. A knowledge-embedding-enhanced contrastive recommendation model
Authors: 谢涛, 葛慧丽, 陈宁, 汪晓锋, 李延松, 黄晓峰. 《浙江大学学报(工学版)》 (北大核心), 2026, No. 1, pp. 90-98
To mitigate the performance degradation of contrastive recommendation models caused by over-reliance on structural perturbation for data augmentation, a knowledge-embedding-enhanced contrastive recommendation model is proposed, which uses knowledge graph embeddings to guide the contrastive learning process and thereby achieve efficient item recommendation. A relation-aware knowledge aggregation module captures heterogeneous relational information in the knowledge graph to obtain knowledge embeddings, while a graph neural network encoder extracts entity representations from the user-item interaction graph. A knowledge-enhanced contrastive recommendation module then injects the knowledge embeddings into representation learning over the interaction graph, strengthening user and item embeddings and improving recommendation accuracy. Extensive experiments on three datasets (enterprise services, books, and news) show that the proposed model has clear advantages on sparse datasets: relative to the baselines KGAT and CKAN, its average improvements in Recall and NDCG exceed 20%, and compared with strong contrastive models such as KGIN, KGCL, and MGDCF it achieves an average gain of 10%, demonstrating a comprehensive performance advantage.
Keywords: recommender systems; knowledge graph; contrastive learning; data augmentation; data sparsity
17. A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson's Disease Detection with Small-Scale and Imbalanced Datasets
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1410-1432
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposed a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. For performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%-4.72% and 4.96%-5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%-57.4% and specificity by 1.04%-163% and reduces biased detection towards the majority class. Ablation experiments confirm the effectiveness of individual components. Two future research directions have also been suggested.
Keywords: convolutional neural network; data generation; deep support vector machine; feature extraction; generative artificial intelligence; imbalanced dataset; medical diagnosis; Parkinson's disease; small-scale dataset
18. Do Higher Horizontal Resolution Models Perform Better?
Author: Shoji KUSUNOKI. 《Advances in Atmospheric Sciences》, 2026, No. 1, pp. 259-262
Climate model prediction has been improved by enhancing model resolution as well as the implementation of sophisticated physical parameterization and refinement of data assimilation systems [section 6.1 in Wang et al. (2025)]. In relation to seasonal forecasting and climate projection in the East Asian summer monsoon season, proper simulation of the seasonal migration of rain bands by models is a challenging and limiting factor [section 7.1 in Wang et al. (2025)].
Keywords: model resolution; data assimilation systems; climate model; climate projection; higher horizontal resolution; seasonal forecasting; seasonal migration of rain bands
19. Deformation characteristics and stability analysis of the Da'ao ancient landslide
Authors: 刘朝海, 张静, 袁仁茂. 《地震研究》 (北大核心), 2026, No. 1, pp. 75-83
Based on field survey data and 2017-2021 PS-InSAR deformation data, time-series deformation monitoring and trigger-mechanism analysis were carried out for the Da'ao ancient landslide in the Jinsha River basin, Yunnan, and its stability under earthquake and rainfall conditions was evaluated with the limit equilibrium method. The results show: (1) The Da'ao landslide is a recurrent ancient landslide with three distinct sliding events; it is currently in a stage of continuous deformation, new secondary landslides are developing, and reactivation is possible. (2) Deformation is most pronounced in the middle and at the front of the landslide body, with cumulative deformation reaching -55 mm. Based on deformation rate, the landslide body can be divided into a stable zone and a deformation zone, the latter further subdivided into accelerated (-10.6 to -5 mm/a), rapid (-5 to -2 mm/a), and slow (-2 to 0 mm/a) deformation zones. (3) The landslide exhibits interannual fluctuation significantly correlated with rainfall: a sudden increase in rainfall increases deformation, with a time lag of 4-5 months. Earthquakes are identified as another important trigger, causing instantaneous intensification of deformation. (4) External factors such as reservoir impoundment, heavy rainfall, and earthquakes will further aggravate the instability of the landslide body and may trigger more severe block disintegration and failure.
Keywords: Da'ao ancient landslide; deformation; SAR data; stability analysis; Jinsha River
20. Causal relationship between gut microbiota and amyotrophic lateral sclerosis: a sample analysis from the IEU Open GWAS database
Authors: 汪涛, 闵友江, 王敏, 王顺谱, 李乐, 张宸, 肖伟平, 余艺萍. 《中国组织工程研究》 (北大核心), 2026, No. 12, pp. 3182-3189
Background: Recent studies suggest that gut microbiota may influence the progression of amyotrophic lateral sclerosis (ALS), but the causal relationship between the two remains unclear. Objective: To explore the causal relationship between gut microbiota and ALS using Mendelian randomization. Methods: GWAS data for gut microbiota and ALS were obtained from the IEU Open GWAS database (an open database developed by the UK Medical Research Council and the genetic epidemiology unit at the University of Bristol to provide genome-wide association study data for many diseases). With gut microbiota as the exposure and ALS as the outcome, inverse-variance weighting, MR-Egger regression, weighted median, weighted mode, and simple mode methods were used to probe causality. Sensitivity analyses tested the reliability of the Mendelian randomization results, and reverse Mendelian randomization further verified the causal relationship. Results and conclusion: (1) Forward Mendelian randomization showed causal relationships between six gut microbial taxa and ALS: Bilophila (β=0.206, OR=1.229), Lachnospira (β=0.288, OR=1.333), Marvinbryantia (β=0.196, OR=1.216), Ruminococcaceae UCG010 (β=0.254, OR=1.289), and Tyzzerella 3 (β=0.128, OR=1.136) may be potential risk factors for ALS, while Enterobacter (β=-0.203, OR=0.816) may be a protective factor. (2) Sensitivity analyses found no significant heterogeneity or horizontal pleiotropy (all P > 0.05), and reverse Mendelian randomization revealed no reverse causal relationship between gut microbiota and ALS. (3) These findings provide potential biomarkers for ALS treatment and a theoretical basis for developing new microbiota-based interventions, with implications for basic medical research in China.
Keywords: gut microbiota; amyotrophic lateral sclerosis; Mendelian randomization; causal relationship; inverse-variance weighted method; genome-wide association study data