Journal Articles
362,233 articles found
1. A Study on the Rail Transit Economic Field and Its Interactive Development Relationship with Cities from the Temporal-Spatial Perspective
Authors: Xiaorong Wang. Proceedings of Business and Economic Studies, 2025, No. 5, pp. 11-19.
As metropolitan areas expand spatially, they encounter constraints imposed by the fixed daily time budget. Rail transit enhances transport efficiency, reduces costs, and facilitates the formation of a “transit economic field” centered on rail networks, thereby alleviating such temporal-spatial pressures. This paper adopts an integrated temporal-spatial analytical framework. Following a conceptual clarification of the transit economic field, it dissects the mechanisms through which rail transit improves mobility and examines how this field influences urban spatial patterns, temporal dynamics, and their interrelationships. It constructs a theoretical framework to explain the co-development of transit economic fields and cities, supplemented by empirical case studies. The key findings are as follows. First, the transit economic field represents a high-density development model that expands both horizontally and vertically around rail networks, mitigating temporal-spatial conflicts. Second, with rail networks as the core, the field integrates diverse spatial functions, facilitating the establishment of economic connections and stabilizing temporal-spatial relationships. Third, the transit economic field contributes to the preservation of urban natural ecosystems and enhances urban livability. Overall, this research can provide insights for promoting rail transit-oriented development transitions in large cities and urban agglomerations.
Keywords: rail transit; rail transit economic field; temporal-spatial relationship; accessibility
2. Temporal-spatial dynamic characteristics of vehicle emissions on intercity roads in Guangdong Province based on vehicle identity detection data
Authors: Hui Ding, Yongming Zhao, Shenhua Miao, Tong Chen, Yonghong Liu. Journal of Environmental Sciences (SCIE/EI/CAS/CSCD), 2023, No. 8, pp. 126-138.
Estimating intercity vehicle emissions precisely would benefit collaborative control in multiple cities. Considering the variability of emissions caused by vehicles, roads, and traffic, the 24-hour change characteristics of air pollutants (CO, HC, NOx, PM2.5) on the intercity road network of Guangdong Province, by vehicle category and road link, were revealed based on vehicle identity detection data from real-life traffic for each hour in July 2018. The results showed that the spatial diversity of emissions caused by the unbalanced economy was obvious. Vehicle emissions in the Pearl River Delta region (PRD), with its higher economic level, were approximately 1-2 times those in the non-Pearl River Delta region (non-PRD). Provincial roads with high loads became potential sources of high emissions. Therefore, emission control policies must emphasize the PRD and key roads through travel guidance to achieve greater reductions. Gasoline passenger cars, accounting for a large proportion of traffic, dominated the morning and evening peaks and were the dominant contributors to CO and HC emissions, contributing more than 50% in the daytime (7:00-23:00) and more than 26% at night (0:00-6:00). Diesel trucks made up 10% of traffic but were the dominant player at night, contributing 50%-90% of NOx and PM2.5 emissions, with a marked 24-hour pattern of more than 80% at night (23:00-5:00) and less than 60% during the daytime. Therefore, targeted control measures by time section should be set up for collaborative control. These findings provide time-varying decision support for variable vehicle emission control on a large scale.
Keywords: intercity roads; dynamic vehicle emissions; vehicle identity detection data; diesel trucks
3. Combining different climate datasets better reflects the response of warm-temperate forests to climate: a case study from Mt. Dongling, Beijing
Authors: Shengjie Wang, Haiyang Liu, Shuai Yuan, Chenxi Xu. Journal of Forestry Research, 2026, No. 2, pp. 131-143.
Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone, and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices. In some growing-season months, correlation coefficients differed significantly, in both magnitude and sign, from those based on actual data. The selection of different alternative climate datasets can lead to biased results in assessing forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. Therefore, the scientific and rational selection of alternative climate data is essential for dendroecological and climatological research.
Keywords: climate data representativeness; alternative climate data selection; response differences; deciduous broad-leaf forest; warm temperate zone
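The dataset-consistency comparison above can be sketched numerically: score each alternative climate series by its Pearson correlation with the fixed-point observations, then prefer the more consistent series. A minimal sketch with synthetic data; the series and the `consistency` helper are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def consistency(actual, alternative):
    """Pearson correlation between an alternative climate series and the
    fixed-point ('actual') observations, used as a consistency score."""
    a = (actual - actual.mean()) / actual.std()
    b = (alternative - alternative.mean()) / alternative.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
actual = rng.normal(15.0, 5.0, 120)          # e.g., monthly temperature at the site
gridded = actual + rng.normal(0, 1.0, 120)   # tracks the actual data closely
distant = rng.normal(15.0, 5.0, 120)         # unrelated series, low consistency

print(consistency(actual, gridded) > consistency(actual, distant))  # True
```

The same scores can then be used to decide which alternative dataset to correlate against the tree-ring width index.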
4. Photoacoustic-computed tomography 3D data compression method and system based on Wavelet-Transformer
Authors: Jialin Li, Tingting Li, Yiming Ma, Yi Shen, Mingjian Sun. Journal of Innovative Optical Health Sciences, 2026, No. 1, pp. 110-125.
Photoacoustic-computed tomography is a novel imaging technique that combines high absorption contrast with deep tissue penetration, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging produces significant data volumes, limiting the storage, transmission, and processing efficiency of the system. There is therefore an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic-computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method rests on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment using an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the reconstructed images reached 38.60 and 0.9583, respectively, effectively overcoming the detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost while enhancing computational efficiency and image quality. These advantages support the development of photoacoustic-computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
Keywords: photoacoustic-computed tomography; data compression; Transformer
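The wavelet "hard coding" half of such a pipeline can be illustrated with a single-level Haar transform and hard thresholding of the detail coefficients; the learned Transformer-based soft decoder is beyond a short sketch. The signal, threshold, and helper names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def haar_1d(x):
    """One level of the Haar wavelet transform: pairwise averages and details."""
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    return avg, det

def inverse_haar_1d(avg, det):
    """Exact inverse of haar_1d."""
    x = np.empty(avg.size * 2)
    x[0::2] = avg + det
    x[1::2] = avg - det
    return x

# A smooth A-scan-like signal: many detail coefficients are small.
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 5 * t)

avg, det = haar_1d(signal)
det_hard = np.where(np.abs(det) > 0.02, det, 0.0)   # hard thresholding
kept = int(np.count_nonzero(det_hard))
recon = inverse_haar_1d(avg, det_hard)

print(kept < det.size)                              # True: some details discarded
print(float(np.max(np.abs(recon - signal))) < 0.05) # True: small reconstruction error
```

Discarded coefficients are what a subsequent entropy coder would exploit; a learned decoder then restores the detail the hard stage threw away.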
5. A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
Authors: Thierry Mugenzi, Cahit Perkgoz. Computers, Materials & Continua, 2026, No. 1, pp. 1985-2005.
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce imputations that are not only numerically accurate but also semantically useful, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missing mechanisms
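The three-term composite loss can be sketched as a plain function: a masked MSE over missing entries, a penalty on how much the reconstruction changes under input noise, and a variance term. The weights and the exact form of terms (ii) and (iii) are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def composite_loss(x_true, x_recon, miss_mask, x_recon_noisy,
                   lam_noise=0.1, lam_var=0.01):
    """Composite imputation loss sketch:
    (i)  masked MSE: reconstruction error only on missing entries;
    (ii) noise-aware term: reconstruction should be stable when the
         input was corrupted with noise;
    (iii) variance penalty: keep the reconstruction's per-feature
         variance close to the data's (discourages collapsed outputs)."""
    masked_mse = np.mean(((x_true - x_recon) * miss_mask) ** 2)
    noise_term = np.mean((x_recon - x_recon_noisy) ** 2)
    var_penalty = np.mean((x_recon.var(axis=0) - x_true.var(axis=0)) ** 2)
    return masked_mse + lam_noise * noise_term + lam_var * var_penalty

rng = np.random.default_rng(1)
x_true = rng.normal(size=(8, 3))
miss_mask = (np.arange(24).reshape(8, 3) % 4 == 0).astype(float)  # 1 = was missing

perfect = composite_loss(x_true, x_true, miss_mask, x_true)
degraded = composite_loss(x_true, x_true + 0.5, miss_mask, x_true)
print(perfect < degraded)  # True
```

In training, `x_recon` and `x_recon_noisy` would be autoencoder outputs for the clean and noise-corrupted inputs; here they are stand-in arrays.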
6. Research on the Optimal Allocation of Community Elderly Care Service Resources Based on Big Data Technology
Authors: Shuying Li. Journal of Clinical and Nursing Research, 2026, No. 1, pp. 241-246.
With the accelerating aging of China's population, demand for community elderly care services has become diversified and personalized. However, problems such as an insufficient total of care service resources, uneven distribution, and prominent supply-demand contradictions have seriously affected service quality. Big data technology, with core advantages in data collection, analysis and mining, and accurate prediction, provides a new solution for the allocation of community elderly care service resources. This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimal allocation strategies such as building a big data analysis platform and accurately grasping the elderly's care needs, striving to provide operable path references for the construction of community elderly care service systems, promoting early realization of the elderly care service goal of “adequate support and proper care for the elderly,” and boosting the high-quality development of China's elderly care service industry.
Keywords: big data technology; community; elderly care; service resources
7. Multivariate Data Anomaly Detection Based on Graph Structure Learning
Authors: Haoxiang Wen, Zhaoyang Wang, Zhonglin Ye, Haixing Zhao, Maosong Sun. Computer Modeling in Engineering & Sciences, 2026, No. 1, pp. 1174-1206.
Multivariate anomaly detection plays a critical role in maintaining the stable operation of information systems. However, in existing research, multivariate data are often influenced by various factors during collection, resulting in temporal misalignment or displacement. Due to these factors, node representations carry substantial noise, which reduces the adaptability of the multivariate coupled network structure and subsequently degrades anomaly detection performance. Accordingly, this study proposes a novel multivariate anomaly detection model grounded in graph structure learning. First, a recommendation strategy is employed to identify strongly coupled variable pairs, which are then used to construct a recommendation-driven multivariate coupling network. Second, a multi-channel graph encoding layer dynamically optimizes the structural properties of the multivariate coupling network, while a multi-head attention mechanism enhances the spatial characteristics of the multivariate data. Finally, unsupervised anomaly detection is conducted using a dynamic threshold selection algorithm. Experimental results demonstrate that effectively integrating the structural and spatial features of multivariate data significantly mitigates anomalies caused by temporal dependency misalignment.
Keywords: multivariate data anomaly detection; graph structure learning; coupled network
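The final stage, unsupervised detection via a dynamic threshold, is often realized as a moving mean-plus-k-sigma rule over anomaly scores. The paper does not specify its algorithm; the rule, window, and `k` below are common choices assumed for illustration:

```python
import numpy as np

def dynamic_threshold(scores, window=20, k=3.0):
    """Flag points whose anomaly score exceeds a moving mean + k*std
    threshold computed over the preceding `window` scores."""
    scores = np.asarray(scores, dtype=float)
    flags = np.zeros(scores.size, dtype=bool)
    for i in range(scores.size):
        hist = scores[max(0, i - window):i]
        if hist.size < 5:                 # not enough history yet
            continue
        thr = hist.mean() + k * hist.std()
        flags[i] = scores[i] > thr
    return flags

rng = np.random.default_rng(2)
scores = rng.normal(0.0, 1.0, 200)        # stand-in for model anomaly scores
scores[150] = 8.0                         # injected anomaly
flags = dynamic_threshold(scores)
print(bool(flags[150]))  # True
```

Adapting the threshold to recent history lets the detector follow slow drifts in the score distribution instead of relying on one global cutoff.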
8. Constructions of Control Sequence Set for Hierarchical Access in Data Link Network
Authors: Niu Xianhua, Ma Jiabei, Zhou Enzhi, Wang Yaoxuan, Zeng Bosen, Li Zhiping. China Communications, 2026, No. 1, pp. 67-80.
As an important resource in a data link, time slots should be strategically allocated to enhance transmission efficiency and resist eavesdropping, especially given the tremendous increase in the number of nodes and diverse communication needs. It is crucial to design control sequences with robust randomness and conflict-freeness to properly address differentiated access control in the data link. In this paper, we propose a hierarchical access control scheme based on control sequences to achieve high utilization of time slots and differentiated access control. A theoretical bound on the hierarchical control sequence set is derived to characterize the constraints on the parameters of the sequence set. Moreover, two classes of optimal hierarchical control sequence sets meeting the theoretical bound are constructed, both of which enable the scheme to achieve maximum utilization of time slots. Compared with a fixed time slot allocation scheme, our scheme reduces the symbol error rate by up to 9%, indicating a significant improvement in anti-interference and anti-eavesdropping capabilities.
Keywords: control sequence; data link; hierarchical access control; theoretical bound
9. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026, No. 1, pp. 1804-1818.
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalanced data
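The balance-then-stack part of the pipeline can be sketched with scikit-learn. As a stand-in for SMOTE (which synthesizes new minority points rather than duplicating them), this sketch uses naive random oversampling, and it omits XGBoost and HHO feature selection to keep dependencies minimal; the toy data is an assumption, not an IDS dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Imbalanced toy "traffic" data (90% benign / 10% attack).
X, y = make_classification(n_samples=600, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling of the minority class until the classes balance.
minority = np.flatnonzero(y_tr == 1)
extra = np.random.default_rng(0).choice(minority,
                                        size=len(y_tr) - 2 * len(minority))
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

# Stacked ensemble: RF + SVM base learners, logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression())
stack.fit(X_bal, y_bal)
print(round(stack.score(X_te, y_te), 2))
```

The meta-learner sees the base models' out-of-fold predictions, which is what lets stacking trade off their individual biases and variances.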
10. Enhanced Capacity Reversible Data Hiding Based on Pixel Value Ordering in Triple Stego Images
Authors: Kim Sao Nguyen, Ngoc Dung Bui. Computers, Materials & Continua, 2026, No. 1, pp. 1571-1586.
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used with multiple stego images provides good image quality but often yields low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks, with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is applied on the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
Keywords: RDH; reversible data hiding; PVO; triple stego images
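The underlying PVO mechanism (predict a block's maximum from its second-largest value and expand the prediction error) can be shown in its classic one-bit-per-block form. This is a minimal sketch, not the paper's triple-stego scheme, which embeds up to 14 bits per block across three stego images:

```python
import numpy as np

def pvo_embed_block(block, bit):
    """Classic one-bit PVO embedding on the block maximum.
    Error e = max - 2nd_max: e == 1 carries a bit, e > 1 is shifted,
    e == 0 (tied maxima) carries nothing."""
    b = block.copy()
    order = np.argsort(b, kind="stable")
    i_max, i_2nd = order[-1], order[-2]
    e = b[i_max] - b[i_2nd]
    if e == 1:            # embeddable: expand the error to carry one bit
        b[i_max] += bit
    elif e > 1:           # not embeddable: shift to keep decoding unambiguous
        b[i_max] += 1
    return b

def pvo_extract_block(stego):
    """Recover the embedded bit (or None) and the original block."""
    b = stego.copy()
    order = np.argsort(b, kind="stable")
    i_max, i_2nd = order[-1], order[-2]
    e = b[i_max] - b[i_2nd]
    if e == 0:                     # tied maxima: nothing was embedded
        return None, b
    if e == 1:                     # error unchanged: bit 0
        return 0, b
    bit = 1 if e == 2 else None    # e == 2: bit 1; e >= 3: shifted, no bit
    b[i_max] -= 1
    return bit, b

block = np.array([52, 55, 50, 54])
stego = pvo_embed_block(block, 1)          # e = 55 - 54 = 1 -> embeddable
bit, recovered = pvo_extract_block(stego)
print(bit, recovered.tolist())  # 1 [52, 55, 50, 54]
```

Because embedding only ever increases the unique maximum, extraction can re-identify it, undo the change, and recover the cover block exactly, which is the reversibility property the abstract relies on.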
11. Research on the Independent Status of the Data Utilization Right
Authors: Zhang Ying. Contemporary Social Sciences, 2026, No. 1, pp. 140-155.
Among the “three data rights,” the data utilization right has been persistently overlooked, similar to a neglected “middle child” in the data rights family. However, it is precisely during the stages of processing and utilization that data undergoes its transformations and that its economic value is ultimately created. A series of recent policy documents on treating data as a factor of production have emphasized that building a scientific data property rights system requires a fair and efficient mechanism for benefit distribution, one that gives reasonable preference to creators of data value and use value in the income generated by data elements. Constrained by the inertia of property-right logic, the data utilization right is often regarded as a “transitional fulcrum” whereby holders of data resources must authorize the operators of data products to realize data value. In the future structural design and implementation of the coordination mechanism for the property right system against the backdrop of the data factor-oriented reform, establishing data processing and utilization as an independent right will require two core initiatives: first, attaching importance to the independent protection of benefit distribution; second, implementing risk regulation for data security through optimized governance. These two initiatives are key to optimizing the data factor governance system and accelerating the release of data value.
Keywords: utilization right; data property rights structure; benefit distribution; risk regulation
12. Graph-Based Unified Settlement Framework for Complex Electricity Markets: Data Integration and Automated Refund Clearing
Authors: Xiaozhe Guo, Suyan Long, Ziyu Yue, Yifan Wang, Guanting Yin, Yuyang Wang, Zhaoyuan Wu. Energy Engineering, 2026, No. 1, pp. 56-90.
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools (dataset generation, formula configuration, billing templates, and task scheduling) facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
Keywords: electricity market; market settlement; data model; graph database; market refund clearing
13. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. Computers, Materials & Continua, 2026, No. 1, pp. 247-274.
With the increasing emphasis on personal information protection, encryption through security protocols has become a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted equipment. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed, from various perspectives, using two ensemble models and three deep learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, about 4.3% higher than that of unencrypted traffic (approximately 0.94). Analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall in the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments, though the extent of the improvement may vary with data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
14. Data Processing Solutions on Low Signal-to-noise Data in Loess Plateau Area: A Case Study in Ordos Basin, China
Authors: Gao Rongtao, Cheng Yun, Tang Ziqi, Liu Zhao. CT理论与应用研究 (CT Theory and Applications), 2026, No. 1, pp. 154-162.
While the Ordos Basin is recognized for its substantial hydrocarbon exploration prospects, its rugged loess tableland terrain has made seismic exploration exceptionally challenging [1-3]. Persistent obstacles such as complex 3D survey planning, low signal-to-noise ratio raw data, inadequate near-surface velocity modeling, and imaging inaccuracy have long hindered the advancement of seismic exploration across this region. Through a problem-solving approach rooted in geological target analysis, this research systematically investigates the behavior of nodal seismometer-based high-density seismic acquisition on the loess plateau. Tailored advances in waveform enhancement and depth velocity modeling methodologies have been engineered. Field validations confirm that the optimized workflow delivers marked improvements in amplitude preservation and imaging resolution, offering novel insights for future reservoir characterization.
Keywords: loess plateau; acquisition; low signal-to-noise ratio; data processing; depth modeling
15. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa SAlaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. Computers, Materials & Continua, 2026, No. 1, pp. 2274-2301.
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES, combining text-based, vector-based, and embedding-based similarity measures to improve scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R2 of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R2 to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, with training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R2 of 85.49%, an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
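The Experiment 1 baseline (rank features by correlation with the gold scores, keep the top N) is simple to sketch. The synthetic features and the helper name below are illustrative assumptions; constructing the actual text/vector/embedding similarity features is out of scope:

```python
import numpy as np

def top_n_correlated(features, scores, n):
    """Return indices of the n features with the largest absolute
    Pearson correlation with the gold scores."""
    f = (features - features.mean(axis=0)) / features.std(axis=0)
    s = (scores - scores.mean()) / scores.std()
    corr = np.abs(f.T @ s) / len(s)
    return np.argsort(corr)[::-1][:n]

rng = np.random.default_rng(3)
scores = rng.uniform(0, 5, 100)                    # gold essay scores
informative = scores + rng.normal(0, 0.2, 100)     # tracks the score
noise = rng.normal(size=(100, 4))                  # uninformative features
features = np.column_stack([noise[:, :2], informative, noise[:, 2:]])

print(top_n_correlated(features, scores, 1))  # [2]
```

A prediction can then be formed from the selected columns alone (e.g., their rescaled mean), with no trained model, which is what makes this a non-machine-learning baseline.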
16. Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026, Issue 1, pp. 389-411 (23 pages)
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
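The task-classification step can be illustrated without a transformer: below, a TF-IDF + nearest-centroid classifier stands in for the abstract's BERT-like model, showing how task descriptions from a tracking tool might be mapped to technology-specific zones of expertise. All task texts and zone labels are invented for illustration; a real system would use contextual embeddings and far more data.

```python
# Toy stand-in for classifying task descriptions into technology zones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid

tasks = [
    "Fix NullPointerException in the Spring controller",
    "Add unit tests for the Java service layer",
    "Tune the PostgreSQL query planner for slow joins",
    "Write a SQL migration adding an index",
    "Style the React dashboard with CSS grid",
    "Fix React state bug in the login component",
]
zones = ["java", "java", "sql", "sql", "frontend", "frontend"]

vec = TfidfVectorizer().fit(tasks)                 # bag-of-words features
clf = NearestCentroid().fit(vec.transform(tasks), zones)

# Classify a new task into the closest technology zone.
new_task = ["Optimize a slow SQL query in the reporting module"]
zone = clf.predict(vec.transform(new_task))[0]
print(zone)
```

Aggregating these per-task zone labels over a developer's completed tasks then yields the technology-specific expertise profile the paper formalizes.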
17. Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. Medical Data Mining, 2026, Issue 1, pp. 32-45 (14 pages)
With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in disease classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions, including combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
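As a concrete illustration of the autoencoder idea this abstract surveys, the sketch below trains a one-hidden-layer linear autoencoder (scikit-learn's MLPRegressor fit to reconstruct its own input) on a synthetic "omics" matrix generated from three latent programs, then reads the encoder layer out as a low-dimensional representation. Real multi-omics autoencoders are deeper and multimodal; every variable here is a placeholder.

```python
# Minimal autoencoder sketch for multi-omics dimensionality reduction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# 100 samples x 50 features, driven by 3 hidden "biological programs".
programs = rng.normal(size=(100, 3))
loadings = rng.normal(size=(3, 50))
X = programs @ loadings + 0.1 * rng.normal(size=(100, 50))

# Train the network to reconstruct its input through a 3-unit bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), activation="identity",
                  max_iter=5000, random_state=0).fit(X, X)

# Encode: forward pass through the first (encoder) layer only.
latent = X @ ae.coefs_[0] + ae.intercepts_[0]
recon = ae.predict(X)
err = np.mean((X - recon) ** 2) / np.mean(X ** 2)
print(latent.shape, f"relative reconstruction error {err:.3f}")
```

The 3-dimensional `latent` matrix is the "latent pattern" the title refers to: samples that share a biological program land close together in it.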
18. Battery SOH enhanced solution: Voltage reconstruction and image recognition response to loss of data scenarios
Authors: Xinghua Liu, Linxiang Zhou, Jiaqiang Tian, Longxing Wu, Zhongbao Wei, Hany M. Hasanien, Peng Wang. Journal of Energy Chemistry, 2026, Issue 1, pp. 155-169, I0005 (16 pages)
Accurate estimation of battery health status plays a crucial role in battery management systems. However, the lack of operational data still affects the accuracy of battery state of health (SOH) estimation. For this reason, an SOH estimation method is proposed based on charging-data reconstruction combined with image processing. The charging voltage data is used to train a least squares generative adversarial network (LSGAN), which is validated under different levels of missing data. From a visual perspective, the Gramian angular field (GAF) method is applied to convert one-dimensional time-series data into image data. This method fully preserves the time-series characteristics and nonlinear evolution patterns, avoiding the difficulties and limited expressive power associated with manual feature extraction. At the same time, the Swin Transformer model is introduced to extract global structures and local details from images, enabling better capture of sequence change trends. Combined with a long short-term memory (LSTM) network, this enables accurate estimation of battery SOH. Two different types of batteries are used to validate the method. The experimental results show that the proposed method achieves good estimation accuracy under different training proportions.
Keywords: state of health; voltage data reconstruction; least squares generative adversarial network; Gramian angular field; Swin Transformer-LSTM network
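The Gramian angular field step described above is simple enough to sketch directly: a 1-D charging-voltage sequence is rescaled to [-1, 1], mapped to polar angles, and expanded into a 2-D image a vision model such as the Swin Transformer can consume. The voltage values below are synthetic placeholders, and this shows only the summation-field variant (GASF).

```python
# Gramian Angular Summation Field: 1-D series -> 2-D image.
import numpy as np

def gramian_angular_field(series):
    """Return the Gramian Angular Summation Field of a 1-D series."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))     # polar-angle encoding
    # GASF[i, j] = cos(phi_i + phi_j) preserves temporal correlations.
    return np.cos(phi[:, None] + phi[None, :])

voltage = np.linspace(3.0, 4.2, 64)            # toy charging curve
img = gramian_angular_field(voltage)
print(img.shape)
```

Because every pair (i, j) of time steps gets a cell, the image keeps the sequence's temporal structure without hand-crafted features, which is the property the abstract highlights.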
19. AI-driven integration of multi-omics and multimodal data for precision medicine
Author: Heng-Rui Liu. Medical Data Mining, 2026, Issue 1, pp. 1-2 (2 pages)
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Keywords: high-throughput transcriptomics; multi-omics; single cell; multimodal learning frameworks; foundation models; omics data modalities; emerging AI-driven precision medicine
20. Multimodal artificial intelligence integrates imaging, endoscopic, and omics data for intelligent decision-making in individualized gastrointestinal tumor treatment
Authors: Hui Nian, Yi-Bin Wu, Yu Bai, Zhi-Long Zhang, Xiao-Huang Tu, Qi-Zhi Liu, De-Hua Zhou, Qian-Cheng Du. Artificial Intelligence in Gastroenterology, 2026, Issue 1, pp. 1-19 (19 pages)
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to effectively tailor therapies.
Keywords: multimodal artificial intelligence; gastrointestinal tumors; individualized therapy; intelligent diagnosis; treatment optimization; prognostic prediction; data fusion; deep learning; precision medicine
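The fusion idea this abstract describes can be sketched at its simplest: per-modality feature vectors (imaging, endoscopy, omics) are concatenated and fed to a single predictive model. All data below are synthetic placeholders and the classifier is a deliberate simplification; a real system would use modality-specific deep encoders before fusion.

```python
# Feature-level (concatenation) fusion of three synthetic modalities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
ct_feats = rng.normal(size=(n, 16))     # e.g., radiomics from CT/MRI
endo_feats = rng.normal(size=(n, 8))    # e.g., endoscopic image features
omics_feats = rng.normal(size=(n, 32))  # e.g., genomic profile features
# Synthetic "treatment response" label driven by all three modalities.
logit = ct_feats[:, 0] + endo_feats[:, 0] + omics_feats[:, 0]
y = (logit > 0).astype(int)

# Fuse by concatenating the per-modality feature vectors.
fused = np.hstack([ct_feats, endo_feats, omics_feats])
clf = LogisticRegression(max_iter=1000).fit(fused, y)
acc = clf.score(fused, y)
print(f"training accuracy {acc:.2f}")
```

Because the label depends on all three blocks, dropping any one modality from `fused` degrades the fit, which is the basic argument for multimodal integration.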