Journal Articles
359,226 articles found
Quick Evaluation of Present-Day Low-Total Organic Carbon Carbonate Source Rocks from Rock-Eval Data: Middle-Upper Ordovician in the Tabei Uplift, Tarim Basin (Cited by 5)
1
Authors: CHEN Junqing, PANG Xiongqi, YANG Haijun, PANG Hong, PANG Bo. Acta Geologica Sinica (English Edition), CAS CSCD, 2018, No. 4, pp. 1558-1573 (16 pages)
Previous studies have postulated the contribution of present-day low-total organic carbon (TOC) marine carbonate source rocks to oil accumulations in the Tabei Uplift, Tarim Basin, China. However, not all present-day low-TOC carbonates have generated and expelled hydrocarbons; distinguishing the source rocks that have already expelled sufficient hydrocarbons from those that have not is therefore crucial for source rock evaluation and resource assessment in the Tabei Uplift. Mass balance can be used to identify present-day low-TOC carbonates that result from hydrocarbon expulsion, but the process is complicated, requiring many parameters and coefficients and thus a massive data source. In this paper, we provide a quick and cost-effective method for identifying carbonate source rocks with present-day low TOC using widely available Rock-Eval data. First, we identify present-day low-TOC carbonate source rocks in typical wells according to the mass balance approach. Second, we build an optimal model to evaluate source rocks from the analysis of the rocks' characteristics and their influencing factors, reported as positive or negative values of a dimensionless index of Rock-Eval data (IR); positive IR corresponds to samples that have expelled hydrocarbons. The model replaces complicated calculations and simulation processes and could therefore be widely applicable in the evaluation of present-day low-TOC carbonates. By applying the model to the Rock-Eval dataset of the Tabei Uplift, we identify present-day low-TOC carbonate source rocks and preliminarily evaluate their contribution at an equivalent of 11.87×10^9 t of oil.
Keywords: present-day low-TOC carbonate source rocks; quick evaluation model; Rock-Eval; Tabei Uplift
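The paper's dimensionless IR index is defined in the article itself and is not reproduced here. As a hedged illustration, the sketch below computes only the standard, widely used Rock-Eval hydrogen and oxygen indices from the same pyrolysis measurements; the sample values are illustrative, not data from the study.

```python
# Standard Rock-Eval indices (illustrative sketch; the paper's own IR
# index is a separate construct not shown here).

def hydrogen_index(s2_mg_per_g, toc_wt_pct):
    """HI = 100 * S2 / TOC, in mg HC / g TOC."""
    return 100.0 * s2_mg_per_g / toc_wt_pct

def oxygen_index(s3_mg_per_g, toc_wt_pct):
    """OI = 100 * S3 / TOC, in mg CO2 / g TOC."""
    return 100.0 * s3_mg_per_g / toc_wt_pct

# Hypothetical present-day low-TOC carbonate sample.
sample = {"TOC": 0.5, "S2": 1.0, "S3": 0.25}
hi = hydrogen_index(sample["S2"], sample["TOC"])
oi = oxygen_index(sample["S3"], sample["TOC"])
print(hi, oi)  # 200.0 50.0
```

Low HI in a mature sample is one line of evidence (alongside the mass-balance approach the paper uses) that hydrocarbons have already been expelled.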
Organic Carbon Structure and Composition of Alpine Meadow Soils and a Rock-Eval Pyrolysis Study
2
Authors: ZHANG Yongli (张永利), RAN Yong (冉勇). Geochimica (地球化学), Peking University Core, 2025, No. 3, pp. 426-435 (10 pages)
Studies of the structure and composition of organic carbon in alpine meadows have increased in recent years, but no study has applied advanced solid-state 13C nuclear magnetic resonance (SS-NMR) and Rock-Eval pyrolysis together to characterize alpine meadow organic carbon. In this study, chemical separation was used to fractionate alpine meadow samples sequentially into original (OS), decarbonated (DC), demineralized (DM), lipid-free (DF), and acid-nonhydrolyzable (NHC) fractions. Elemental analysis showed organic carbon contents of 7.7%, 8.8%, 41.6%, 38.4%, and 54.2% for the OS, DC, DM, DF, and NHC fractions, with stable carbon isotope (δ^(13)C) values of −23.7‰, −25.0‰, −25.4‰, −25.1‰, and −26.5‰, respectively. Stable C and N isotopes combined with Bayesian mixing-model analysis indicated that C3-plant and microbial sources contribute 46.5% and 53.5% of alpine meadow organic carbon, respectively. 13C SS-NMR showed that aliphatic and aromatic carbon account for 55.4% and 34.5% of total organic carbon: the aliphatic carbon comprises mainly O-alkyl C (23.8%), alkyl C (19.0%), and methoxyl/N-alkyl C (OCH3/NCH, 12.6%), while the aromatic carbon comprises mainly protonated aromatic C (Arom C-H, 15.6%) and nonprotonated aromatic C (Arom C-C, 13.1%). Biomacromolecular composition analysis indicated that labile carbohydrates and proteins are the dominant components of alpine meadow organic matter, while nonprotonated char and alkyl-C-structured sporopollenin are important components of the stable organic carbon. Rock-Eval pyrolysis showed that the oxygen index (OI) correlates significantly with polar C content, carbohydrate + protein + lignin, and the alkyl C/O-alkyl C ratio (R^(2) ≥ 0.96, P < 0.01), indicating that OI can characterize the decomposition potential and humification state of labile organic carbon in alpine meadows; S2 correlates significantly and positively with biostable alkyl C and sporopollenin (R^(2) ≥ 0.96, P < 0.01), and residual carbon (RC) with nonprotonated char (R^(2) = 0.91, P < 0.05), indicating that S2 and RC can represent the structure and composition of stable organic carbon in alpine meadows.
Keywords: aliphatic carbon; aromatic carbon; isotopes; solid-state NMR (SS-NMR); Rock-Eval
Oil-source correlation and Paleozoic source rock analysis in the Siwa Basin, Western Desert: Insights from well-logs, Rock-Eval pyrolysis, and biomarker data
3
Authors: Mohamed I. Abdel-Fattah, Mohamed Reda, Mohamed Fathy, Diaa A. Saadawi, Fahad Alshehri, Mohamed S. Ahmed. Energy Geoscience, EI, 2024, No. 3, pp. 313-327 (15 pages)
Understanding the origins of potential source rocks and unraveling the connections between reservoir oils and their source formations in the Siwa Basin (Western Desert, Egypt) necessitate a thorough oil-source correlation investigation. This objective is achieved through analysis of well-log responses, Rock-Eval pyrolysis, and biomarker data. Total organic carbon across 31 samples representing Paleozoic formations in the Siwa A-1X well ranges from 0.17 wt% to 2.04 wt%, highlighting diverse levels of organic richness and the presence of both Type II and Type III kerogen. Fingerprint characteristics of eight samples from the well suggest that the Dhiffah Formation comprises a blend of terrestrial and marine organic matter, with a significant contribution from more oxidized residual organic matter and gas-prone Type III kerogen. By contrast, the Desouky and Zeitoun formations exhibit mixed organic matter indicative of a transitional environment with a pronounced marine influence in a more reducing setting, associated with Type II kerogen. Analysis of five oil samples from different wells (SIWA L-1X, SIWA R-3X, SIWA D-1X, PTAH 5X, and PTAH 6X) shows that terrestrial organic matter, augmented by considerable marine input, was deposited in an oxidizing environment and contains Type III kerogen. Geochemical scrutiny confirms the coexistence of mixed terrestrial organic matter within varying redox environments; notably, kerogen Types II and III, both known to have hydrocarbon generation potential, are uniform across all samples. These findings open prospects concerning the genesis of oil in the Jurassic Safa reservoir, suggesting potential links to Paleozoic sources or even to the Safa Member itself, and mark a substantial advance in understanding source rock dynamics and their relationship with reservoir oils in the Siwa Basin.
Keywords: biomarker data; oil-source correlation; Rock-Eval pyrolysis; source rocks; Siwa Basin
A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
4
Authors: Thierry Mugenzi, Cahit Perkgoz. Computers, Materials & Continua, 2026, No. 1, pp. 1985-2005 (21 pages)
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms (Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship) under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce imputations that are not only numerically accurate but also semantically useful, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missing mechanisms
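The three loss terms above can be sketched concretely. The paper's exact formulations and weights are not given in the abstract, so the masked MSE, noise-aware term, and variance penalty below are illustrative reconstructions under assumed forms, written with NumPy rather than a deep learning framework:

```python
import numpy as np

def composite_loss(x_true, x_recon, miss_mask, x_noisy_recon=None,
                   lam_noise=0.1, lam_var=0.01):
    """Illustrative composite imputation loss (assumed forms, not the paper's exact terms).

    miss_mask: 1 where the entry was originally missing (guided, masked MSE).
    x_noisy_recon: reconstruction from a corrupted copy of the input, if available.
    """
    # (i) masked MSE focused on the missing entries
    masked_mse = np.sum(miss_mask * (x_true - x_recon) ** 2) / max(miss_mask.sum(), 1)
    # (ii) noise-aware regularization: penalize poor reconstruction from noisy input
    noise_term = 0.0
    if x_noisy_recon is not None:
        noise_term = np.mean((x_true - x_noisy_recon) ** 2)
    # (iii) variance penalty: discourage collapsed (low-variance) reconstructions
    var_penalty = np.mean((np.var(x_recon, axis=0) - np.var(x_true, axis=0)) ** 2)
    return masked_mse + lam_noise * noise_term + lam_var * var_penalty

x = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
print(composite_loss(x, x, mask))  # 0.0 for a perfect reconstruction
```

In an actual training loop these terms would be differentiable tensors; the NumPy version only shows how the components combine.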
Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
5
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026, No. 1, pp. 1804-1818 (15 pages)
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using the KDD99 and related IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with data preprocessing that incorporates both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized, model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies, which identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and Random Forest as base learners; this layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness, and after balancing the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter; the proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalanced data
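The class-balancing step above uses standard SMOTE. As a hedged, simplified illustration of the idea (not the exact algorithm or the paper's implementation), the sketch below synthesizes new minority-class samples by interpolating between a minority sample and one of its nearest minority neighbors:

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """Simplified SMOTE-style oversampling: interpolate between minority neighbors.

    X_min: (n, d) array of minority-class samples; returns (n_new, d) synthetic samples.
    """
    rng = rng or np.random.default_rng(0)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbor
    neighbors = np.argsort(d, axis=1)[:, :k]  # k nearest minority neighbors
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                   # random minority sample
        j = neighbors[i, rng.integers(k)]     # one of its k neighbors
        gap = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
print(smote_like(X_min, 10).shape)  # (10, 2)
```

Because each synthetic point is a convex combination of two real minority samples, it stays inside the minority class's local region, which is what makes SMOTE preferable to naive duplication.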
Enhanced Capacity Reversible Data Hiding Based on Pixel Value Ordering in Triple Stego Images
6
Authors: Kim Sao Nguyen, Ngoc Dung Bui. Computers, Materials & Continua, 2026, No. 1, pp. 1571-1586 (16 pages)
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used with multiple stego images provides good image quality but often yields low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is applied on the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared with existing triple-stego RDH approaches, advancing the field of reversible steganography.
Keywords: reversible data hiding (RDH); pixel value ordering (PVO); RDH based on three stego images
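To make the PVO idea concrete, here is a toy sketch of classic single-bit PVO embedding on the maximum of a sorted block. This is the textbook scheme, not the paper's triple-stego variant (which embeds up to 14 bits per block across three stego images); it shows why the process is reversible:

```python
def pvo_embed_max(block_sorted, bit):
    """Embed one bit into the maximum of an ascending-sorted pixel block (classic PVO)."""
    b = list(block_sorted)
    d = b[-1] - b[-2]        # gap between largest and second-largest pixel
    if d == 1:
        b[-1] += bit         # embeddable bin: the max carries the secret bit
    elif d > 1:
        b[-1] += 1           # non-embeddable: shift by 1 to keep bins separable
    return b                 # d == 0: left unchanged

def pvo_extract_max(stego_sorted):
    """Return (recovered_block, extracted_bit_or_None), inverting pvo_embed_max."""
    b = list(stego_sorted)
    d = b[-1] - b[-2]
    if d == 1:               # gap unchanged: a 0 bit was embedded
        return b, 0
    if d == 2:               # gap grew by one: a 1 bit was embedded
        b[-1] -= 1
        return b, 1
    if d > 2:                # shifted block: undo the shift, no bit carried
        b[-1] -= 1
    return b, None           # d == 0: was never modified

recovered, bit = pvo_extract_max(pvo_embed_max([10, 12, 13], 1))
print(recovered, bit)  # [10, 12, 13] 1
```

Extraction maps each possible gap back to exactly one embedding action, so the cover block is always recovered bit-exactly, which is the defining property of RDH.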
Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
7
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. Computers, Materials & Continua, 2026, No. 1, pp. 247-274 (28 pages)
With the increasing emphasis on personal information protection, encryption through security protocols has become a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation across these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate class imbalance, eight different data sampling techniques were applied, and their effectiveness was comparatively analyzed using two ensemble models and three deep learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, about 4.3% higher than that for unencrypted traffic (approximately 0.94). Analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall in the UNSW-NB15 (encrypted) dataset improved by up to 23.0% and in the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments, although the extent of the improvement may vary with data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
Graph-Based Unified Settlement Framework for Complex Electricity Markets:Data Integration and Automated Refund Clearing
8
Authors: Xiaozhe Guo, Suyan Long, Ziyu Yue, Yifan Wang, Guanting Yin, Yuyang Wang, Zhaoyuan Wu. Energy Engineering, 2026, No. 1, pp. 56-90 (35 pages)
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model represents heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library supports configurable settlement rules, and a suite of modular tools (dataset generation, formula configuration, billing templates, and task scheduling) facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, using sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method supports equitable and transparent market operations and provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
Keywords: electricity market; market settlement; data model; graph database; market refund clearing
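The attribute-graph data model above can be sketched minimally with plain dictionaries. The production system presumably sits on a graph database; the node names ("meter:001", "contract:A") and relation labels here are purely illustrative assumptions:

```python
class AttributeGraph:
    """Minimal attribute graph: nodes carry free-form attributes, edges are typed."""

    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict (flexible schema)
        self.edges = []   # (src, relation, dst) triples

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, node_id, relation=None):
        """Outgoing neighbors of a node, optionally filtered by relation type."""
        return [d for s, r, d in self.edges
                if s == node_id and (relation is None or r == relation)]

g = AttributeGraph()
g.add_node("meter:001", market="spot", unit="MWh")       # hypothetical meter node
g.add_node("contract:A", price=0.42, currency="CNY")     # hypothetical contract node
g.add_edge("meter:001", "settled_under", "contract:A")
print(g.neighbors("meter:001", "settled_under"))  # ['contract:A']
```

The point of the flexible schema is that adding a new market or billing attribute only adds keys to a node's dictionary, without migrating a rigid relational schema.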
Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
9
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa S. Alaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. Computers, Materials & Continua, 2026, No. 1, pp. 2274-2301 (28 pages)
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES, combining text-based, vector-based, and embedding-based similarity measures to improve scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, with training data portions increasing from 5% to 50%; using just 10% of the data achieved near-peak performance, with an R² of 85.49%, an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
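The Experiment 1 baseline, selecting the top-N features most correlated with the essay score, can be sketched as below. The correlation measure (Pearson, here) and the toy data are assumptions for illustration; the paper's actual features are similarity measures over essays:

```python
import numpy as np

def top_n_correlated(X, y, n):
    """Rank features by |Pearson correlation| with the target and keep the top n.

    X: (samples, features) feature matrix; y: (samples,) score vector.
    Returns the indices of the n most correlated features.
    """
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(corrs)[::-1][:n]

# Toy data: feature 0 tracks the score, feature 1 is noise, feature 2 anti-tracks it.
rng = np.random.default_rng(0)
y = rng.random(100)
X = np.column_stack([y + 0.1 * rng.random(100),
                     rng.random(100),
                     -y + 0.1 * rng.random(100)])
print(top_n_correlated(X, y, 2))  # selects features 0 and 2, dropping the noise
```

Taking the absolute correlation matters: a strongly anti-correlated feature (feature 2 here) is just as predictive as a positively correlated one.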
Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
10
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026, No. 1, pp. 389-411 (23 pages)
Objective expertise evaluation of individuals, as a prerequisite for team formation, has been a long-term desideratum in large software development companies. With rapid advancements in machine learning methods and reliable data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach quantifies software developer expertise using metadata from task-tracking systems. We mathematically formalize two categories of expertise: technology-specific expertise, the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. We then automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like models to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
11
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. Medical Data Mining, 2026, No. 1, pp. 32-45 (14 pages)
With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data because of its ability to handle complex, non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements, and discuss future directions in combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
AI-driven integration of multi-omics and multimodal data for precision medicine
12
Authors: Heng-Rui Liu. Medical Data Mining, 2026, No. 1, pp. 1-2 (2 pages)
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Keywords: high-throughput transcriptomics; multi-omics; single cell; multimodal learning frameworks; foundation models; omics data modalities; emerging AI-driven precision medicine
Multimodal artificial intelligence integrates imaging,endoscopic,and omics data for intelligent decision-making in individualized gastrointestinal tumor treatment
13
Authors: Hui Nian, Yi-Bin Wu, Yu Bai, Zhi-Long Zhang, Xiao-Huang Tu, Qi-Zhi Liu, De-Hua Zhou, Qian-Cheng Du. Artificial Intelligence in Gastroenterology, 2026, No. 1, pp. 1-19 (19 pages)
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to tailor therapies effectively.
Keywords: multimodal artificial intelligence; gastrointestinal tumors; individualized therapy; intelligent diagnosis; treatment optimization; prognostic prediction; data fusion; deep learning; precision medicine
Cosmic Acceleration and the Hubble Tension from Baryon Acoustic Oscillation Data
14
Authors: Xuchen Lu, Shengqing Gao, Yungui Gong. Chinese Physics Letters, 2026, No. 1, pp. 327-332 (6 pages)
We investigate null tests of cosmic accelerated expansion using the baryon acoustic oscillation (BAO) data measured by the Dark Energy Spectroscopic Instrument (DESI), reconstructing the dimensionless Hubble parameter E(z) from the DESI BAO Alcock-Paczynski (AP) data with a Gaussian process to perform the null test. We find strong evidence of accelerated expansion from the DESI BAO AP data. By reconstructing the deceleration parameter q(z) from the DESI BAO AP data, we find that accelerated expansion persisted until z ≈ 0.7 at the 99.7% confidence level. Additionally, to provide insight into the Hubble tension problem, we propose combining the reconstructed E(z) with D_H/r_d data to derive a model-independent result r_d h = 99.8 ± 3.1 Mpc. This result is consistent with measurements from cosmic microwave background (CMB) anisotropies using the ΛCDM model. We also propose a model-independent method for reconstructing the comoving angular diameter distance D_M(z) from the distance modulus μ using SNe Ia data, combining this result with the DESI BAO D_M/r_d data to constrain the value of r_d. We find that the value of r_d derived from this model-independent method is smaller than that obtained from CMB measurements, with a significant discrepancy of at least 4.17σ. All conclusions drawn in this paper are independent of cosmological models and gravitational theories.
Keywords: baryon acoustic oscillation (BAO) data; cosmic accelerated expansion; dimensionless Hubble parameter; deceleration parameter reconstruction; null tests; Gaussian process
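The reconstructions described above rest on two standard relations (textbook cosmology, not derivations specific to this paper): the deceleration parameter follows from the reconstructed E(z) = H(z)/H_0, and the BAO observable D_H/r_d ties E(z) to the sound-horizon combination r_d h via H_0 = 100 h km s⁻¹ Mpc⁻¹:

```latex
q(z) = (1+z)\,\frac{E'(z)}{E(z)} - 1,
\qquad
\frac{D_H(z)}{r_d} = \frac{c}{H_0\, r_d\, E(z)}
\;\Longrightarrow\;
r_d h = \frac{c}{100\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
        \left[\frac{D_H(z)}{r_d}\, E(z)\right]^{-1}.
```

A null test of acceleration then asks whether q(z) < 0 is required by the data at some redshift, without assuming any dark energy model.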
A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson’s Disease Detection with Small-Scale and Imbalanced Datasets
15
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. Computers, Materials & Continua, 2026, No. 1, pp. 1410-1432 (23 pages)
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with the CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of classification bias towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. Performance is compared from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%-4.72% and 4.96%-5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%-57.4% and specificity by 1.04%-163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Keywords: convolutional neural network, data generation, deep support vector machine, feature extraction, generative artificial intelligence, imbalanced dataset, medical diagnosis, Parkinson’s disease, small-scale dataset
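The abstract's customized kernel aims to counter biased classification toward the majority (healthy) class. A minimal sketch of that class-weighting idea using scikit-learn's RBF-kernel SVM; the synthetic data, class sizes, and the `sensitivity` helper are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy dataset: 90 "healthy" (majority) vs 10 "PD" (minority)
X_majority = rng.normal(0.0, 1.0, size=(90, 4))
X_minority = rng.normal(2.5, 1.0, size=(10, 4))
X = np.vstack([X_majority, X_minority])
y = np.array([0] * 90 + [1] * 10)

# class_weight="balanced" scales each class's error penalty inversely to its
# frequency, reducing the RBF-kernel SVM's bias toward the majority class
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

def sensitivity(model, X_eval, y_eval):
    # Fraction of true PD (minority) samples recovered by the classifier
    pred = model.predict(X_eval)
    return float((pred[y_eval == 1] == 1).mean())
```

On this well-separated toy data the class-weighted model recovers essentially all minority samples; the paper's DSVM and custom kernel go further, but the inverse-frequency penalty is the core of the imbalance correction.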
The Rock-Eval Thermal Decomposition Method and Its Application in Soil Organic Carbon Research (Cited by: 3)
16
Authors: Zhang Yan, Gao Yan, Zhang Yang, Edward Gregorich, Li Xiujun, Chen Xuewen, Zhang Shixiu, Liang Aizhen. Soils and Crops (土壤与作物), 2022, Issue 3, pp. 282–289.
The stability of soil organic carbon governs the soil's carbon sequestration potential, and extracting labile and stable carbon fractions to quantitatively characterize that stability is a key scientific question in soil carbon sequestration research. Current extraction methods are diverse, spanning physical, chemical, and biological approaches, which makes results difficult to compare; they are also time-consuming, costly, and procedurally cumbersome. An efficient, reliable, and widely applicable measurement method is therefore urgently needed. Comparing the strengths and weaknesses of different thermal analysis techniques, including pyrolysis gas chromatography-mass spectrometry, thermogravimetric analysis, differential scanning calorimetry, and the Rock-Eval (RE) thermal decomposition method, the RE method is generally regarded as simple to operate, fast, low-cost, easy to interpret, and reliable; it characterizes soil organic carbon stability well and facilitates cross-study comparison. Using CiteSpace software, this paper reviews the development of the RE method in soil science and organic carbon research and summarizes its current applications and progress. We propose that the RE method can help build a comparison network of labile and stable carbon pools across land-use types, climate zones, and soil textures, improving China's soil carbon monitoring system.
Keywords: soil organic carbon, thermal decomposition, Rock-Eval, labile and stable carbon fractions
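The Rock-Eval indices most often used to interpret organic matter quality can be computed directly from the raw pyrolysis and oxidation yields. A minimal sketch of the conventional hydrogen index (HI) and oxygen index (OI) definitions; the sample values are invented for illustration:

```python
def hydrogen_index(s2_mg_hc_per_g, toc_percent):
    # HI = 100 * S2 / TOC, in mg HC per g TOC; higher HI indicates
    # more hydrogen-rich (more labile) organic matter
    return 100.0 * s2_mg_hc_per_g / toc_percent

def oxygen_index(s3_mg_co2_per_g, toc_percent):
    # OI = 100 * S3 / TOC, in mg CO2 per g TOC; higher OI indicates
    # more oxidized (typically more degraded) organic matter
    return 100.0 * s3_mg_co2_per_g / toc_percent

# Illustrative soil sample: S2 = 3.0 mg HC/g, S3 = 4.5 mg CO2/g, TOC = 1.5 %
print(hydrogen_index(3.0, 1.5), oxygen_index(4.5, 1.5))  # 200.0 300.0
```

Because both indices normalize by TOC, they allow the cross-site comparisons the review argues for, independent of absolute carbon content.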
Evaluation of Organic Matter Richness of Eocene Strata Based on Calcareous Nannofossils and Rock-Eval Analysis in North Dezful, Iran (Cited by: 1)
17
Authors: Mohammad Parandavar, Jalil Sadouni. Journal of Earth Science (SCIE, CAS, CSCD), 2021, Issue 4, pp. 1022–1034.
The hydrocarbon source potential of the Paleogene Pabdeh Formation was studied by means of organic geochemistry and the distribution of calcareous nannofossils. Based on the results, an Eocene-aged organic matter (OM)-rich interval was identified and traced across different parts of the North Dezful zone and, in part, the Abadan Plain. To characterize the OM quality and richness of the studied intervals, Rock-Eval pyrolysis and nannofossil evaluation were performed, and the geochemical data collected along selected wells were correlated to capture variations in the thickness and source potential of the OM-rich interval. Remarkable variations were identified within the depth ranges of 2480–2552 m and 2200–2210 m, attributed to the maximum increase in the growth rate of R-selected species. This increase in productivity was found to correlate well with high Rock-Eval total organic carbon (TOC) and hydrogen index (HI) values. Given that the maturity of the Pabdeh Formation in the studied area has reached the oil window, significant hydrocarbon generation (Type II kerogen) is expected, making the play economically highly promising.
Keywords: calcareous nannofossils, Rock-Eval, organic geochemistry, Paleogene, paleo-productivity, diversity, Dezful
A New Method for Hydrocarbon Loss Recovering of Rock-Eval Pyrolysis (Cited by: 1)
18
Authors: Wu Xinsong, Wang Yanbin, Zou Xiaoyong. Journal of China University of Mining and Technology, 2004, Issue 2, pp. 200–203.
Accurately recovering the hydrocarbon loss is a crucial step in reservoir evaluation by Rock-Eval pyrolysis. However, it is very difficult to determine the recovering coefficients because numerous factors cause the hydrocarbon loss. Aiming at this problem, a new method named critical point analysis is put forward in this paper. The first step of the method is to find the critical point by drawing the scatterplot of hydrocarbon content versus the ratio of the light component to the heavy component; the second step is to calculate the recovering coefficient by contrasting the pyrolysis parameters at the critical point of different sample types. The method is not only theoretically well founded but has also been applied with good results in the Huanghua depression.
Keywords: geochemistry, reservoir evaluation, Rock-Eval pyrolysis
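The two steps described in the abstract can be roughed out numerically: locate a critical point on the hydrocarbon-content versus light/heavy-ratio trend, then ratio the pyrolysis yields of two sample types there. The plateau-detection rule, the tolerance, and the synthetic data below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def critical_point(ratio, hc):
    # Step 1: sort samples by light/heavy ratio and take the first point
    # where the slope of hydrocarbon content falls near zero (the plateau)
    order = np.argsort(ratio)
    r, h = np.asarray(ratio)[order], np.asarray(hc)[order]
    slope = np.gradient(h, r)
    idx = int(np.argmax(slope < 0.05))
    return r[idx], h[idx]

def recovering_coefficient(hc_reference, hc_sample):
    # Step 2: contrast pyrolysis parameters of two sample types at the
    # critical point; the reference type is assumed better preserved
    return hc_reference / hc_sample

# Synthetic trend: hydrocarbon content rises with the ratio, then plateaus
r = np.linspace(0.0, 2.0, 21)
h = 4.0 * np.minimum(r, 1.0)
r_c, h_c = critical_point(r, h)
print(round(recovering_coefficient(8.0, 5.0), 2))  # 1.6
```

Multiplying measured pyrolysis yields by such a coefficient is the "recovery" the abstract refers to; the real method's choice of critical point is geological, not a fixed slope threshold.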
Impact of Particle Crush-Size and Weight on Rock-Eval S2,S4,and Kinetics of Shales
19
Authors: Deependra Pratap Singh, David A. Wood, Vivek Singh, Bodhisatwa Hazra, Pradeep K. Singh. Journal of Earth Science (SCIE, CAS, CSCD), 2022, Issue 2, pp. 513–524.
The Rock-Eval technique has found extensive application for source rock analysis over the last few decades. This study investigates the impact of shale particle crush-size and sample weight on key Rock-Eval measurements, viz. the S2 curve (heavier hydrocarbons released during the non-isothermal pyrolysis stage) and the S4 curve (CO2 released from oxidation of organic matter during the oxidation stage). For high and low total organic carbon (TOC) samples at different thermal maturity levels, particle crush-size has a strong influence on the Rock-Eval results, the effect being stronger in high-TOC samples. In comparison to the coarser splits, S2 and pyrolyzable carbon (PC) were higher for the finer crush sizes in all the shales studied. The S4 CO2 oxidation-curve shapes of Permian shales show contrasting signatures compared with the Paleocene-aged lignitic shale, both from Indian basins. A reduced TOC was observed with rising sample weight for a mature Permian shale from the Jharia basin, while the other shales sampled showed no significant reduction. The results indicate that the S4 CO2 curve and the S2 Tmax are strongly dependent on the type of organic matter present and its level of thermal maturity. Sample weight and particle size both influence the S2-curve shapes at different heating rates. With increasing sample weight, an increase in S2-curve magnitude was observed for shales of diverse maturities. These differences in S2-curve shape lead to substantially different kinetic distributions being fitted to these curves. These findings have significant implications for the accuracy of reaction kinetics obtained from pyrolysis experiments using different sample characteristics.
Keywords: high-TOC shale analysis, Rock-Eval pyrolysis, total organic carbon, sample specifications, thermal maturity, shale reaction kinetics, petroleum geology
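The abstract's pyrolyzable carbon (PC) and TOC quantities are linked by standard Rock-Eval carbon bookkeeping, which is why a crush-size effect on S2 propagates into PC and TOC. A minimal sketch using the conventional approximations (sample values invented):

```python
def pyrolyzable_carbon(s1, s2):
    # PC ~= 0.083 * (S1 + S2): carbon contained in the hydrocarbons
    # released during the pyrolysis stage (carbon is ~83 % of HC mass)
    return 0.083 * (s1 + s2)

def total_organic_carbon(pc, rc):
    # TOC = PC + RC, where RC is the residual carbon measured from the
    # oxidation stage (the S4 CO2 signal)
    return pc + rc

# Illustrative shale: S1 = 0.5, S2 = 11.5 mg HC/g; RC = 2.0 wt%
pc = pyrolyzable_carbon(0.5, 11.5)
print(round(total_organic_carbon(pc, 2.0), 3))  # 2.996
```

Under this bookkeeping, the finer-crush increase in S2 reported above raises PC directly, while sample-weight effects on the oxidation stage act through RC.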
IoT Empowered Early Warning of Transmission Line Galloping Based on Integrated Optical Fiber Sensing and Weather Forecast Time Series Data (Cited by: 1)
20
Authors: Zhe Li, Yun Liang, Jinyu Wang, Yang Gao. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 1171–1192.
Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, leading directly to line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT) empowered early warning method for transmission line galloping that integrates time series data from optical fiber sensing and weather forecasts. The method first applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a Back Propagation (BP) neural network, and uses the K-medoids algorithm to cluster the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional Gated Recurrent Unit (GRU) network, and closed-loop feedback based on an attention mechanism updates network parameters through gradient feedback of the loss function, enabling closed-loop training and time series prediction with the GRU model. Subsequently, considering the various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
Keywords: optical fiber sensing, multi-source data fusion, early warning of galloping, time series data, IoT, adaptive weighted learning, irregular time series perception, closed-loop attention mechanism
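The pipeline's first and last stages, weighted fusion of the two data sources and thresholding a galloping risk coefficient into warning levels, can be sketched as follows. All weights, normalization constants, and thresholds here are invented for illustration and are not the paper's values:

```python
def weighted_fusion(sensor_value, forecast_value, w_sensor=0.7):
    # Primary weighted fusion of an optical-fiber sensing reading with the
    # corresponding weather-forecast value; the paper learns the weights
    # adaptively, whereas 0.7 here is a fixed illustrative choice
    return w_sensor * sensor_value + (1.0 - w_sensor) * forecast_value

def galloping_risk(amplitude_m, icing_hours, a=0.6, b=0.4):
    # Toy risk coefficient: weighted sum of normalized predicted galloping
    # amplitude (capped at 1 m) and icing duration (capped at 24 h)
    return a * min(amplitude_m / 1.0, 1.0) + b * min(icing_hours / 24.0, 1.0)

def warning_level(risk):
    # Categorize the risk coefficient into warning grades
    if risk >= 0.8:
        return "red"
    if risk >= 0.5:
        return "orange"
    if risk >= 0.3:
        return "yellow"
    return "none"

print(warning_level(galloping_risk(0.9, 20.0)))  # red
```

The BP-fusion, clustering, and GRU prediction stages sit between these two ends; this sketch only shows how fused inputs and predictions reduce to a graded alarm.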