Journal Articles
40,872 articles found
Harnessing deep learning for the discovery of latent patterns in multi-omics medical data
1
Authors: Okechukwu Paul-Chima Ugwu, Fabian C. Ogenyi, Chinyere Nkemjika Anyanwu, Melvin Nnaemeka Ugwu, Esther Ugo Alum, Mariam Basajja, Joseph Obiezu Chukwujekwu Ezeonwumelu, Daniel Ejim Uti, Ibe Michael Usman, Chukwuebuka Gabriel Eze, Simeon Ikechukwu Egba. Medical Data Mining, 2026, No. 1, pp. 32-45 (14 pages)
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
Keywords: deep learning; multi-omics integration; biomedical data mining; precision medicine; graph neural networks; autoencoders and transformers
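The survey's core theme, recovering latent structure from fused omics matrices, can be illustrated with a minimal linear sketch (not taken from the paper): z-score each omics block, concatenate features, and extract latent factors with a truncated SVD. The array names, patient count, and feature counts below are all hypothetical; a deep autoencoder would replace the linear SVD step with non-linear encoders, but the fuse-then-compress pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical omics blocks for the same 50 patients:
# 200 gene-expression features and 80 metabolite features.
expr = rng.normal(size=(50, 200))
metab = rng.normal(size=(50, 80))

def zscore(block):
    """Standardize each feature so no omics layer dominates the fusion."""
    return (block - block.mean(axis=0)) / block.std(axis=0)

# Early fusion: concatenate standardized blocks along the feature axis.
fused = np.hstack([zscore(expr), zscore(metab)])   # shape (50, 280)

# Truncated SVD: the top-k right singular vectors define latent factors,
# and U * S gives each patient's coordinates in that latent space.
U, S, Vt = np.linalg.svd(fused, full_matrices=False)
k = 5
latent = U[:, :k] * S[:k]

print(latent.shape)  # (50, 5)
```

The `latent` matrix is what downstream tasks (disease classification, biomarker ranking) would consume in place of the raw omics features.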
Strengthening Biomedical Big Data Management and Unleashing the Value of Data Elements (cited: 1)
2
Authors: Wei Zhou, Jing-Chen Zhang, De-Pei Liu. Chinese Medical Sciences Journal, 2025, No. 1, pp. 1-2, I0001 (3 pages)
On October 18, 2017, the 19th National Congress Report called for the implementation of the Healthy China Strategy. The development of biomedical data plays a pivotal role in advancing this strategy. Since the 18th National Congress of the Communist Party of China, China has vigorously promoted the integration and implementation of the Healthy China and Digital China strategies. The National Health Commission has prioritized the development of health and medical big data, issuing policies to promote standardized applications and foster innovation in "Internet + Healthcare." Biomedical data has significantly contributed to precision medicine, personalized health management, drug development, disease diagnosis, public health monitoring, and epidemic prediction capabilities.
Keywords: health and medical big data; drug development; precision medicine; disease diagnosis; biomedical data; personalized health management; standardized applications; biomedical big data
Standardizing Healthcare Datasets in China:Challenges and Strategies
3
Authors: Zheng-Yong Hu, Xiao-Lei Xiu, Jing-Yu Zhang, Wan-Fei Hu, Si-Zhu Wu. Chinese Medical Sciences Journal, 2025, No. 4, pp. 253-267, I0001 (16 pages)
Standardized datasets are foundational to healthcare informatization by enhancing data quality and unleashing the value of data elements. Using bibliometrics and content analysis, this study examines China's healthcare dataset standards from 2011 to 2025. It analyzes their evolution across types, applications, institutions, and themes, highlighting key achievements including substantial growth in quantity, optimized typology, expansion into innovative application scenarios such as health decision support, and broadened institutional involvement. The study also identifies critical challenges, including imbalanced development, insufficient quality control, and a lack of essential metadata (such as authoritative data element mappings and privacy annotations), which hampers the delivery of intelligent services. To address these challenges, the study proposes a multi-faceted strategy focused on optimizing the standard system's architecture, enhancing quality and implementation, and advancing both data governance (through authoritative tracing and privacy protection) and intelligent service provision. These strategies aim to promote the application of dataset standards, thereby fostering and securing the development of new productive forces in healthcare.
Keywords: healthcare dataset standards; data standardization; data management
Best practice for developing integrative Chinese-Western medicine databases using electronic health records
4
Authors: REN Yan, JIA Yulong, LIANG Wengxue, XU Ye, LIU Xuehong, XIONG Yiquan, JIANG Hao, ZOU Kang, SUN Xin, TAN Jing. World Journal of Integrated Traditional and Western Medicine, 2025, No. 3, pp. 174-186 (13 pages)
Objectives: Electronic health records (EHRs) offer valuable real-world data (RWD) for Chinese medicine research. However, significant methodological challenges remain in developing integrative Chinese-Western medicine (ICWM) databases. This study aims to establish a best-practice methodological framework, referred to as BRIDGE, to guide the construction of ICWM databases using EHRs. Methods: We developed the methodological framework through a comprehensive process, including a systematic literature review, synthesis of empirical experiences, thematic expert discussions, and consultation with an external panel to reach consensus. Results: The BRIDGE framework outlines six core components for ICWM-EHR database development: overall design, database architecture, data extraction and linkage, data governance, data verification, and data quality evaluation. Key data elements include variables related to the study population, treatment or exposure, outcomes, and confounders. These databases support various research applications, particularly in evaluating the effectiveness and safety of integrative therapies. To demonstrate its practical value, we developed an ICWM-EHR database on women's reproductive lifespan, encompassing 2,064,482 patients. This database captures women's health conditions across the life course, from reproductive age to older adulthood. Conclusions: The BRIDGE methodological framework provides a standardized approach to building high-quality ICWM-EHR databases. It offers a unique opportunity to strengthen the methodological rigor and real-world relevance of Chinese medicine research in integrated healthcare settings.
Keywords: Chinese-Western medicine database; electronic health records; methodological framework; database development; women's reproductive health database
Nuclear data measurement and propagation in Back-n experiments:methodologies and instrumentation
5
Authors: Min-Hao Gu, Jie-Ming Xue, Ya-Kang Li, Ping Cao, Jie Ren, Yong-Hao Chen, Wei Jiang, Han Yi, Peng Hu, Rui-Rui Fan. Nuclear Science and Techniques, 2025, No. 11, pp. 69-82 (14 pages)
This article introduces the methodologies and instrumentation for data measurement and propagation at the Back-n white neutron facility of the China Spallation Neutron Source. The Back-n facility employs backscattering techniques to generate a broad spectrum of white neutrons. Equipped with advanced detectors such as the light particle detector array and the fission ionization chamber detector, the facility achieves high-precision data acquisition through a general-purpose electronics system. Data are managed and stored in a hierarchical system supported by the National High Energy Physics Science Data Center, ensuring long-term preservation and efficient access. The data from the Back-n experiments contribute significantly to nuclear physics, reactor design, astrophysics, and medical physics, enhancing the understanding of nuclear processes and supporting interdisciplinary research.
Keywords: nuclear physics; data acquisition; data storage and management; data sharing; neutron experiments; white neutron beam
AI-Enhanced Secure Data Aggregation for Smart Grids with Privacy Preservation
6
Authors: Congcong Wang, Chen Wang, Wenying Zheng, Wei Gu. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 799-816 (18 pages)
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges for data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
Keywords: smart grid; data security; privacy protection; artificial intelligence; data aggregation
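The abstract's pairing of homomorphic encryption with meter aggregation can be sketched with a textbook Paillier cryptosystem (this is a generic illustration, not the paper's scheme): meters encrypt their readings, the aggregator multiplies ciphertexts, and the operator decrypts only the sum, never an individual reading. The primes and readings below are insecure toy values chosen for clarity.

```python
import math
import random

# Toy Paillier keypair (demo primes only; real deployments use >=2048-bit n).
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # with g = n+1, L(g^lam mod n^2) = lam mod n

def encrypt(m):
    """E(m) = g^m * r^n mod n^2 for a random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1) // n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Three smart-meter readings (kWh); their sum must stay below n.
readings = [12, 45, 30]
ciphertexts = [encrypt(m) for m in readings]

# The aggregator multiplies ciphertexts: additive homomorphism.
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n2

print(decrypt(aggregate))  # 87 = 12 + 45 + 30
```

Because each ciphertext is randomized by `r`, identical readings encrypt differently, which is what hides individual consumption from the aggregator.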
Concept of characteristics database alliance building:a case study of marine characteristic databases
7
Authors: ZHOU Li, DONG Wenjing, YANG Yi. Marine Science Bulletin, 2025, No. 1, pp. 84-96 (13 pages)
Characteristic databases in China face issues such as narrow resource coverage, low levels of standardization and normalization, and limited data sharing. To address these challenges, this paper proposes the concept of a characteristic database alliance, using marine characteristic databases as a case for feasibility analysis and discussion. The paper outlines the development path for such alliances and offers recommendations for future growth, aiming to establish a collaborative platform for the development of characteristic databases.
Keywords: characteristic databases; characteristic resources; resource construction; database alliance
A Custom Medical Image De-identification System Based on Data Privacy
8
Authors: ZHANG Jingchen, WANG Jiayang, ZHAO Yuanzhi, ZHOU Wei, LUO Wei, QIAN Qing. 数据与计算发展前沿(中英文) (Frontiers of Data and Computing), 2025, No. 3, pp. 122-135 (14 pages)
Objective: Medical imaging data has great value, but it contains a significant amount of sensitive information about patients. At present, laws and regulations regarding the de-identification of medical imaging data are not clearly defined around the world. This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs. Methods: To enhance the security of medical image data, we designed and implemented a DICOM-format medical image de-identification system on the Windows operating system. Results: Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands. The system offers both web-based online and desktop offline de-identification capabilities, enabling customization of de-identification rules and facilitating batch processing to improve efficiency. Conclusions: This medical image de-identification system robustly strengthens the stewardship of sensitive medical data, aligning with data security protection requirements while facilitating the sharing and utilization of medical image data. This approach unlocks the intrinsic value inherent in such datasets.
Keywords: de-identification system; medical image; data privacy; DICOM; data sharing
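The customizable rule engine such a system describes can be sketched in miniature (an illustration only, not the authors' implementation): each header tag is mapped to a keep/hash/remove action, with unknown tags denied by default. A plain dict stands in for a DICOM header here; a real tool would apply the same rules to pydicom datasets. All tag names, values, and the rule table are hypothetical.

```python
import hashlib

# Hypothetical DICOM-style header as a plain dict.
header = {
    "PatientName": "ZHANG^SAN",
    "PatientID": "P000123",
    "PatientBirthDate": "19700101",
    "StudyDate": "20240315",
    "Modality": "CT",
}

# Customizable rules, e.g. per-jurisdiction profiles: drop direct
# identifiers, pseudonymize the ID, keep clinically useful fields.
RULES = {
    "PatientName": "remove",
    "PatientID": "hash",
    "PatientBirthDate": "remove",
    "StudyDate": "keep",
    "Modality": "keep",
}

def deidentify(hdr, rules):
    out = {}
    for tag, value in hdr.items():
        action = rules.get(tag, "remove")   # default-deny unknown tags
        if action == "keep":
            out[tag] = value
        elif action == "hash":
            # Stable pseudonym so studies of one patient stay linkable.
            out[tag] = hashlib.sha256(value.encode()).hexdigest()[:16]
        # "remove": tag is dropped entirely
    return out

clean = deidentify(header, RULES)
print(sorted(clean))  # ['Modality', 'PatientID', 'StudyDate']
```

Swapping the `RULES` table is what makes one engine serve the differing legal standards the abstract mentions.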
AI-Ready Competency Framework for Biomedical Scientific Data Literacy
9
Authors: Zhe Wang, Zhi-Gang Wang, Wen-Ya Zhao, Wei Zhou, Sheng-Fa Zhang, Xiao-Lin Yang. Chinese Medical Sciences Journal, 2025, No. 3, pp. 203-210, I0006 (9 pages)
With the rise of data-intensive research, data literacy has become a critical capability for improving scientific data quality and achieving artificial intelligence (AI) readiness. In the biomedical domain, data are characterized by high complexity and privacy sensitivity, calling for robust and systematic data management skills. This paper reviews current trends in scientific data governance and the evolving policy landscape, highlighting persistent challenges such as inconsistent standards, semantic misalignment, and limited awareness of compliance. These issues are largely rooted in the lack of structured training and practical support for researchers. In response, this study builds on existing data literacy frameworks and integrates the specific demands of biomedical research to propose a comprehensive, lifecycle-oriented data literacy competency model with an emphasis on ethics and regulatory awareness. Furthermore, it outlines a tiered training strategy tailored to different research stages (undergraduate, graduate, and professional), offering theoretical foundations and practical pathways for universities and research institutions to advance data literacy education.
Keywords: AI-ready; scientific data management; data literacy; competency framework; FAIR principles
Pelvic Floor Dysfunction Databases:Evolution,Current Landscape,and Future Development
10
Authors: Jing-Yu Zhang, An-Ran Wang, Shuo Liang, Hong-Hui Shi, Lan Zhu, Si-Zhu Wu. Chinese Medical Sciences Journal, 2025, No. 4, pp. 295-308, I0004 (15 pages)
Pelvic floor dysfunction (PFD), including conditions such as stress urinary incontinence, pelvic organ prolapse, and fecal incontinence, significantly affects women's quality of life and their physical and mental health. With the advancement of digital medicine, the systematic collection of data and the high-quality development of database platforms have increasingly become central pillars of PFD research and management. We systematically review the developmental stages of PFD-related databases. We then conduct a comparative analysis of representative international and domestic platforms, examining key aspects including organizational structures and construction models, data sources and integration strategies, core functionalities, data quality control and standardization, data security and access management, and research applications. Finally, based on the current status of PFD database development both globally and in China, we offer recommendations to strengthen data infrastructure and guide future directions. The findings may serve as a valuable reference for the optimization of PFD databases worldwide.
Keywords: pelvic floor dysfunction; disease registry; database; data platform; disease research
Impacts of meteorological conditions on the NASM pollution data assimilation system
11
Authors: Shan Zhang, Liqun Li, Linfeng Shang, Dongji Wang, Guangtao Niu, Xuejun Guo, Xiangjun Tian. Atmospheric and Oceanic Science Letters, 2025, No. 4, pp. 61-66 (6 pages)
Since meteorological conditions are the main factor driving the transport and dispersion of air pollutants, an accurate simulation of the meteorological field directly affects the accuracy of the atmospheric chemical transport model in simulating PM_(2.5). Based on the NASM joint chemical data assimilation system, the authors quantified the impacts of different meteorological fields on the pollutant simulations and revealed the role of meteorological conditions in the accumulation, maintenance, and dissipation of heavy haze pollution. For the two heavy pollution episodes from 10 to 24 November 2018, meteorological fields were obtained from NCEP FNL and ERA5 reanalysis data, each used to drive the WRF model, to analyze the differences in the simulated PM_(2.5) concentrations. The results show that the meteorological field has a strong influence on the concentration levels and spatial distribution of the pollution simulations. The ERA5 group had relatively small simulation errors and produced more accurate PM_(2.5) results: its RMSE was 11.86 μg m^(-3) lower than that of the FNL group before assimilation, and 5.77 μg m^(-3) lower after joint assimilation. The authors used the PM_(2.5) simulation results obtained with the ERA5 data to discuss the role of the wind field and circulation pattern in the pollution process, to analyze the correlations of wind speed, temperature, relative humidity, and boundary layer height with pollutant concentrations, and to further clarify the key formation mechanism of this pollution process.
Keywords: joint data assimilation system; meteorological fields; reanalysis data; PM_(2.5) concentration
Transforming waste to value:Enhancing battery lifetime prediction using incomplete data samples
12
Authors: Xiaoang Zhai, Guohua Liu, Ting Lu, Sihui Chen, Yang Liu, Jiayu Wan, Xin Li. Journal of Energy Chemistry, 2025, No. 7, pp. 642-649 (8 pages)
The widespread use of rechargeable batteries in portable devices, electric vehicles, and energy storage systems has underscored the importance of accurately predicting their lifetimes. However, data scarcity often limits the accuracy of prediction models, a problem escalated by data incompleteness induced by issues such as sensor failures. To address these challenges, we propose a novel approach that accommodates data insufficiency by extracting additional information from incomplete data samples, which are usually discarded in existing studies. To fully unleash the predictive power of incomplete data, we investigate the Multiple Imputation by Chained Equations (MICE) method, which diversifies the training data by exploring potential data patterns. The experimental results demonstrate that the proposed method significantly outperforms the baselines in most of the considered scenarios while reducing the prediction root mean square error (RMSE) by up to 18.9%. Furthermore, we observe that incorporating incomplete data benefits the explainability of the prediction model by facilitating feature selection.
Keywords: rechargeable batteries; battery lifetime prediction; data scarcity; incomplete data utilization
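The chained-equations idea behind MICE can be sketched with a simplified, deterministic single chain (the full method adds random draws and multiple imputed datasets; this is not the paper's pipeline): start from mean imputation, then repeatedly regress each column on the others and refill its missing entries. The synthetic "battery feature" matrix and missingness rate below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-cycle battery features with correlated columns.
n = 200
x = rng.normal(size=n)
data = np.column_stack([x,
                        2 * x + rng.normal(scale=0.1, size=n),
                        -x + rng.normal(scale=0.1, size=n)])

# Knock out 10% of entries to mimic sensor dropouts.
mask = rng.random(data.shape) < 0.10
incomplete = data.copy()
incomplete[mask] = np.nan

def impute_round_robin(X, n_iter=10):
    """Chained-equations core: regress each column on the others, refill."""
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):          # start from mean imputation
        X[missing[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            obs = ~missing[:, j]
            coef, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[missing[:, j], j] = A[missing[:, j]] @ coef
    return X

imputed = impute_round_robin(incomplete)
err = np.abs(imputed[mask] - data[mask]).mean()
print(round(float(err), 3))
```

Because the columns are strongly correlated, the regression refills land far closer to the true values than mean imputation would, which is the gain the paper exploits for otherwise-discarded samples.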
Data Elements Accumulation Enabling the "Threeizations" Upgrading of Manufacturing: Theoretical Mechanism (cited: 1)
13
Authors: Hao Xie. Proceedings of Business and Economic Studies, 2025, No. 2, pp. 298-304 (7 pages)
Data production elements are driving profound transformations in the real economy across production objects, methods, and tools, generating significant economic effects such as industrial structure upgrading. This paper aims to reveal the impact mechanism of data elements on the "three transformations" (high-end, intelligent, and green) in the manufacturing sector, theoretically elucidating the intrinsic mechanisms by which data elements influence these transformations. The study finds that data elements significantly enhance the high-end, intelligent, and green levels of China's manufacturing industry. In terms of impact pathways, data elements primarily influence the development of high-tech industries and overall green technological innovation, thereby affecting the high-end, intelligent, and green transformation of the industry.
Keywords: data elements; manufacturing; high-end; intelligent; green
An open-access 90 m resolution V_(S30) dataset and map for areas affected by the January 2025 M6.8 Dingri, Xizang, China earthquake (cited: 1)
14
Authors: Jian Zhou, Li Li. Earthquake Science, 2025, No. 4, pp. 339-345 (7 pages)
In this study, we developed a high-resolution (3 arcsec, approximately 90 m) V_(S30) map and an associated open-access dataset for the 140 km × 200 km region affected by the January 2025 M6.8 Dingri, Xizang, China earthquake. This map provides a significantly finer resolution than existing V_(S30) maps, which typically use a 30 arcsec grid. The V_(S30) values were estimated using the Cokriging-based V_(S30) proxy model (SCK model), which integrates V_(S30) measurements as primary constraints and uses topographic slope as a secondary parameter. The findings indicate that the V_(S30) values range from 200 to 250 m/s in the sedimentary deposit areas near the earthquake's epicenter and from 400 to 600 m/s in the surrounding mountainous regions. This study showcases the capability of the SCK model to efficiently generate V_(S30) estimates across various spatial resolutions and demonstrates its effectiveness in producing reliable estimates in data-sparse regions.
Keywords: V_(S30); map; data; Dingri, Xizang earthquake
Automation and parallelization scheme to accelerate pulsar observation data processing
15
Authors: Xingnan Zhang, Minghui Li. Astronomical Techniques and Instruments, 2025, No. 4, pp. 226-238 (13 pages)
Previous studies aiming to accelerate data processing have focused on enhancement algorithms, using the graphics processing unit (GPU) to speed up programs, and thread-level parallelism. These methods overlook maximizing the utilization of existing central processing unit (CPU) resources and reducing human and computational time costs via process automation. Accordingly, this paper proposes a scheme, called SSM, that combines the "Srun job submission mode", the "Sbatch job submission mode", and a "Monitor function". The SSM scheme includes three main modules: data management, command management, and resource management. Its core innovations are command splitting and parallel execution. The results show that this method effectively improves CPU utilization and reduces the time required for data processing. In terms of CPU utilization, the average value of this scheme is 89%, whereas the average CPU utilizations of the "Srun job submission mode" and "Sbatch job submission mode" are significantly lower, at 43% and 52%, respectively. In terms of data-processing time, SSM testing on Five-hundred-meter Aperture Spherical radio Telescope (FAST) data requires only 5.5 h, compared with 8 h in the "Srun job submission mode" and 14 h in the "Sbatch job submission mode". In addition, tests on the FAST and Parkes datasets demonstrate the universality of the SSM scheme, which can process data from different telescopes. The compatibility of the SSM scheme with pulsar searches is verified using 2 days of observational data from the globular cluster M2, with the scheme successfully detecting all published pulsars in M2.
Keywords: astronomical data; parallel processing; PulsaR Exploration and Search TOolkit (PRESTO); CPU; FAST; Parkes
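The SSM core ideas, command splitting, parallel execution, and a monitor that collects completed jobs, can be sketched with `concurrent.futures` over placeholder commands (a generic illustration; the Slurm srun/sbatch integration and PRESTO commands of the actual scheme are omitted, and the command list below is hypothetical).

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor, as_completed

# Placeholder stand-ins for per-file search commands; a real pipeline
# would generate these from the observation file list.
commands = [[sys.executable, "-c", f"print({i} * {i})"] for i in range(8)]

def run(cmd):
    """Execute one command and capture its output (the job-submission role)."""
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return cmd, proc.stdout.strip()

# Command splitting + parallel execution: keep max_workers jobs in flight
# so CPU cores stay busy instead of walking the list serially.
results = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run, c) for c in commands]
    for fut in as_completed(futures):      # the monitor role
        cmd, out = fut.result()
        results[tuple(cmd)] = out

print(sorted(results.values(), key=int))
```

Raising `max_workers` toward the core count is the knob that moves CPU utilization from the serial baseline toward full saturation.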
Integral experiment on slab ^(nat)Pb using D-T and D-D neutron sources to validate evaluated nuclear data
16
Authors: Kuo-Zhi Xu, Yang-Bo Nie, Chang-Lin Lan, Yan-Yan Ding, Shi-Yu Zhang, Qi Zhao, Xin-Yi Pan, Jie Ren, Xi-Chao Ruan. Nuclear Science and Techniques, 2025, No. 3, pp. 119-133 (15 pages)
Lead (Pb) plays a significant role in the nuclear industry and is extensively used in radiation shielding, radiation protection, neutron moderation, radiation measurements, and various other critical functions. Consequently, the measurement and evaluation of Pb nuclear data are highly valued in nuclear scientific research, emphasizing its crucial role in the field. Using the time-of-flight (ToF) method, the neutron leakage spectra from three ^(nat)Pb samples were measured at 60° and 120° at the neutronics integral experimental facility of the China Institute of Atomic Energy (CIAE). The ^(nat)Pb sample sizes were 30 cm × 30 cm × 5 cm, 30 cm × 30 cm × 10 cm, and 30 cm × 30 cm × 15 cm. Neutron sources were generated by the Cockcroft-Walton accelerator, producing approximately 14.5 MeV and 3.5 MeV neutrons through the T(d,n)^(4)He and D(d,n)^(3)He reactions, respectively. Leakage neutron spectra were also calculated with the Monte Carlo code MCNP-4C, using the Pb isotope nuclear data from each of four libraries individually: CENDL-3.2, JEFF-3.3, JENDL-5, and ENDF/B-Ⅷ.0. By comparing the simulation and experimental results, improvements and deficiencies in the evaluated nuclear data of the Pb isotopes were analyzed. Most of the calculated results were consistent with the experimental results; however, a few areas did not fit well. In the (n,el) energy range, the simulated results from CENDL-3.2 were significantly overestimated; in the (n,inl)D and (n,inl)C energy regions, the results from CENDL-3.2 and ENDF/B-Ⅷ.0 were significantly overestimated at 120°, and the results from JENDL-5 and JEFF-3.3 were underestimated at 60° in the (n,inl)D energy region. The calculated spectra were analyzed by comparison with the experimental spectra in terms of neutron spectrum shape and C/E values. The results indicate that the theoretical simulations, using different data libraries, overestimated or underestimated the measured values in certain energy ranges. Secondary neutron energies and angular distributions in the data files are presented to explain these discrepancies.
Keywords: integral experiment; neutron leakage spectra; ^(nat)Pb; D-T and D-D neutron sources; evaluated nuclear data
Topology Data Analysis-Based Error Detection for Semantic Image Transmission with Incremental Knowledge-Based HARQ
17
Authors: Ni Fei, Li Rongpeng, Zhao Zhifeng, Zhang Honggang. China Communications, 2025, No. 1, pp. 235-255 (21 pages)
Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism, thus more effectively reaping the benefits of both SemCom and HARQ. In this paper, built on top of Swin-transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited-bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
Keywords: error detection; incremental knowledge-based HARQ; joint source-channel coding; semantic communication; Swin transformer; topological data analysis
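The intuition of topology-based error detection, comparing a topological summary of sent and received images instead of a bit-level CRC, can be shown with a deliberately crude toy (the paper's TDA machinery is far richer than this): count 4-connected foreground components of binarized patches, a rough 0th Betti number, and request re-transmission when the counts diverge. The patches below are hypothetical.

```python
from collections import deque

def components(img):
    """Count 4-connected foreground components, a crude 0th Betti number."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:        # breadth-first flood fill of one blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Hypothetical binarized patches: channel noise split one blob apart.
sent     = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
received = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

needs_retransmission = components(sent) != components(received)
print(components(sent), components(received), needs_retransmission)  # 2 3 True
```

A bit-level CRC would also flag this patch, but the topological summary is the kind of quantity that stays stable under distortions that preserve image semantics, which is the property the framework exploits.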
DH-LDA:A Deeply Hidden Load Data Attack on Electricity Market of Smart Grid
18
作者 Yunhao Yu Meiling Dizha +6 位作者 Boda Zhang Ruibin Wen FuhuaLuo Xiang Guo Junjie Song Bingdong Wang Zhenyong Zhang 《Computers, Materials & Continua》 2025年第11期3861-3877,共17页
The load profile is a key characteristic of the power grid and lies at the basis for the power flow control and generation scheduling.However,due to the wide adoption of internet-of-things(IoT)-based metering infrastr... The load profile is a key characteristic of the power grid and lies at the basis for the power flow control and generation scheduling.However,due to the wide adoption of internet-of-things(IoT)-based metering infrastructure,the cyber vulnerability of load meters has attracted the adversary’s great attention.In this paper,we investigate the vulnerability of manipulating the nodal prices by injecting false load data into the meter measurements.By taking advantage of the changing properties of real-world load profile,we propose a deeply hidden load data attack(i.e.,DH-LDA)that can evade bad data detection,clustering-based detection,and price anomaly detection.The main contributions of this work are as follows:(i)We design a stealthy attack framework that exploits historical load patterns to generate load data with minimal statistical deviation from normalmeasurements,thereby maximizing concealment;(ii)We identify the optimal time window for data injection to ensure that the altered nodal prices follow natural fluctuations,enhancing the undetectability of the attack in real-time market operations;(iii)We develop a resilience evaluation metric and formulate an optimization-based approach to quantify the electricity market’s robustness against DH-LDAs.Our experiments show that the adversary can gain profits from the electricity market while remaining undetected. 展开更多
Keywords: smart grid security, load redistribution data, electricity market, deeply hidden attack
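The stealth property described in contribution (i) — falsified load data kept within the statistical envelope of historical measurements so that z-score-style bad data detection is not triggered — can be illustrated with a minimal sketch. This is not the paper's actual attack, only a hedged toy model of the idea; the function name, the single-reading setting, and the z-score threshold are all assumptions for illustration.

```python
import numpy as np

def craft_stealthy_load(historical, target_shift, max_z=1.0):
    """Toy DH-LDA-style injection: shift the latest load reading toward an
    adversarial target while keeping it within max_z standard deviations of
    the historical mean, so a simple z-score bad-data detector stays silent."""
    mu = historical.mean()
    sigma = historical.std()
    desired = historical[-1] + target_shift
    # Clip the falsified reading into the statistically plausible band.
    lo, hi = mu - max_z * sigma, mu + max_z * sigma
    return float(np.clip(desired, lo, hi))

rng = np.random.default_rng(0)
hist = 100 + 5 * rng.standard_normal(1000)   # synthetic historical loads (MW)
fake = craft_stealthy_load(hist, target_shift=40.0)
z = abs(fake - hist.mean()) / hist.std()      # <= max_z by construction
print(round(z, 3))
```

A real attack would of course have to respect power-flow constraints and coordinate multiple meters; the clipping step here only captures the "minimal statistical deviation" intuition.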
Detecting the Lunar Wrinkle Ridges Through Deep Learning Based on DEM and Aspect Data
19
Authors: Xin Lu, Jiacheng Sun, Gaofeng Shu, Jianhui Zhao, Ning Li. Research in Astronomy and Astrophysics, 2025, No. 8, pp. 167-179 (13 pages)
Lunar wrinkle ridges are an important stress-related geological structure on the Moon, reflecting its stress state and geological activity. They provide important insights into the evolution of the Moon and are key factors influencing future lunar activity, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is a challenging task due to their complex morphology and ambiguous features, and traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network (DBR-Net) based on deep learning. The method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Comparisons with various deep learning approaches demonstrate that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges; a significant linear relationship between the length and area of the wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.
Keywords: Moon, methods: data analysis, planets and satellites: surfaces, techniques: image processing
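The core idea of fusing a DEM branch with an aspect branch via attention can be sketched in a few lines. This is not DBR-Net's Attention Complementary Feature Fusion module (whose architecture the abstract does not detail), only an assumed minimal form of attention-weighted fusion: per-pixel softmax weights decide how much each branch contributes to the fused feature map.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(dem_feat, aspect_feat):
    """Toy attention fusion of two single-channel feature maps (H, W):
    per-pixel weights over the two branches, fused as a convex combination."""
    stacked = np.stack([dem_feat, aspect_feat])   # (2, H, W)
    weights = softmax(stacked, axis=0)            # branch weights sum to 1
    return (weights * stacked).sum(axis=0)        # (H, W)

dem = np.array([[1.0, 2.0], [3.0, 4.0]])
asp = np.array([[2.0, 0.0], [1.0, 5.0]])
fused = attention_fuse(dem, asp)
```

Because the weights sum to one at every pixel, the fused value always lies between the two branch responses; a learned fusion module would instead compute the weights from trainable layers.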
Data Aggregation Point Placement and Subnetwork Optimization for Smart Grids
20
Authors: Tien-Wen Sung, Wei Li, Chao-Yang Lee, Yuzhen Chen, Qingjun Fang. Computers, Materials & Continua, 2025, No. 4, pp. 407-434 (28 pages)
To transmit customer power data collected by smart meters (SMs) to utility companies, the data must first be transmitted to the corresponding data aggregation point (DAP) of the SM. The number of DAPs installed and their installation locations greatly impact the whole network. In traditional DAP placement algorithms, the number of DAPs must be set in advance, but determining the best number of DAPs is difficult, which undoubtedly reduces the overall performance of the network. Moreover, an excessive gap between the loads of different DAPs is also an important factor affecting network quality. To address these problems, this paper proposes a DAP placement algorithm, APSSA, based on the improved affinity propagation (AP) algorithm and the sparrow search algorithm (SSA), which can select an appropriate number of DAPs to install and the corresponding installation locations according to the number of SMs and their distribution in different environments. The algorithm adds an allocation mechanism to the SSA to optimize the subnetworks. APSSA was evaluated in three different areas and compared with other DAP placement algorithms. The experimental results validate that the method can reduce network cost, shorten the average transmission distance, and reduce the load gap.
Keywords: smart grid, data aggregation point placement, network cost, average transmission distance, load gap
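The abstract's key point — that affinity propagation chooses the number of DAPs from the data instead of requiring it in advance — can be demonstrated with scikit-learn. This is only an assumed baseline sketch of the AP stage, not APSSA itself (the paper's improvements to AP and the SSA-based subnetwork optimization are not reproduced here); the synthetic meter coordinates are invented for illustration.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(42)
# Synthetic smart-meter coordinates: three spatial groups of 40 SMs each.
sms = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(40, 2))
    for c in ([0.0, 0.0], [10.0, 0.0], [5.0, 8.0])
])

# Affinity propagation selects both the number of DAPs and their
# locations (the exemplars) from the data -- no preset cluster count.
ap = AffinityPropagation(random_state=0).fit(sms)
daps = ap.cluster_centers_              # candidate DAP installation sites
loads = np.bincount(ap.labels_)         # SMs served by each DAP
load_gap = loads.max() - loads.min()    # the load-balance metric APSSA targets
print(len(daps), load_gap)
```

In the paper's pipeline, a metaheuristic (SSA with an allocation mechanism) would then refine these placements to shrink the load gap and transmission distances; here the gap is merely measured.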