Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
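The array-based versus bridge-table trade-off summarized in this abstract can be illustrated with a minimal in-memory sketch (hypothetical schema and toy data; Python dictionaries and lists stand in for the warehouse tables):

```python
# Array-based design: each fact row embeds its related dimension keys directly.
facts_array = [
    {"event_id": 1, "magnitude": 5.4, "region_ids": [10, 11]},
    {"event_id": 2, "magnitude": 6.1, "region_ids": [11]},
]

# Conventional design: a separate bridge table links facts to dimensions.
bridge = [(1, 10), (1, 11), (2, 11)]

def regions_of_fact_array(facts, event_id):
    """Fact-centric query: one scan of the fact table, no join
    (this is where the array-based design is reported to win)."""
    for f in facts:
        if f["event_id"] == event_id:
            return f["region_ids"]
    return []

def facts_of_region_bridge(bridge, facts, region_id):
    """Dimension-centric query: the bridge table makes the reverse
    lookup a direct filter (where the bridge design stays advantageous)."""
    ids = {e for e, r in bridge if r == region_id}
    return [f for f in facts if f["event_id"] in ids]

print(regions_of_fact_array(facts_array, 1))  # [10, 11]
print([f["event_id"] for f in facts_of_region_bridge(bridge, facts_array, 11)])  # [1, 2]
```

The hybrid schema proposed in the paper would keep both representations, routing each query to whichever one answers it without an extra join.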
The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. First, dung beetle optimizer-variational mode decomposition was employed to achieve adaptive decomposition of non-stationary dynamic signals and explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to achieve feature fusion of multi-source signals and accurate prediction of the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The research findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, ensuring the long-term operational safety of HSR bridges.
Translation is a crucial step in gene expression. Over the past decade, the development and application of ribosome profiling (Ribo-seq) have significantly advanced our understanding of translational regulation in vivo. However, the analysis and visualization of Ribo-seq data remain challenging. Despite the availability of various analytical pipelines, improvements in comprehensiveness, accuracy, and user-friendliness are still necessary. In this study, we develop RiboParser/RiboShiny, a robust framework for analyzing and visualizing Ribo-seq data. Building on published methods, we optimize ribosome structure-based and start/stop-based models to improve the accuracy and stability of P-site detection, even in species with a high proportion of leaderless transcripts. Leveraging these improvements, RiboParser offers comprehensive analyses, including quality control, gene-level analysis, codon-level analysis, and the analysis of Ribo-seq variants. Meanwhile, RiboShiny provides a user-friendly and adaptable platform for data visualization, facilitating deeper insights into the translational landscape. Furthermore, the integration of standardized genome annotation renders our platform universally applicable to various organisms with sequenced genomes. This framework has the potential to significantly improve the precision and efficiency of Ribo-seq data interpretation, thereby deepening our understanding of translational regulation.
With the advent of the big data era, real-time data analysis and decision-support systems have been recognized as essential tools for enhancing enterprise competitiveness and optimizing the decision-making process. This study aims to explore the development strategies of real-time data analysis and decision-support systems, and analyzes their application status and future development trends in various industries. The article first reviews the basic concepts and importance of real-time data analysis and decision-support systems, and then discusses in detail key technical aspects such as system architecture, data collection and processing, analysis methods, and visualization techniques.
In the section ‘Track decoding’ of this article, one paragraph was inadvertently omitted after the text ‘…shows the flow diagram of the Tr2-1121 track mode.’ The missing paragraph is provided below.
Cervical cancer, a leading malignancy globally, poses a significant threat to women's health, with an estimated 604,000 new cases and 342,000 deaths reported in 2020 [1]. As cervical cancer is closely linked to human papillomavirus (HPV) infection, early detection relies on HPV screening; however, late-stage prognosis remains poor, underscoring the need for novel diagnostic and therapeutic targets [2].
Semantic communication (SemCom) aims to achieve high-fidelity information delivery with low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, and thus developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism, thus more effectively reaping the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, different from the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. In terms of data selection, 1176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions: pneumococcal infection and no infection, and constructing a mixed-effects model. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed to biologically validate the preliminary results using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
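The permutation-testing step mentioned in this abstract can be sketched for a single gene as follows (toy expression values, not the mouse dataset; the actual study combines such tests with a mixed-effects model and GSEA):

```python
import random

def mean_diff(infected, control):
    # Difference of group means for one gene's expression values.
    return sum(infected) / len(infected) - sum(control) / len(control)

def permutation_p_value(infected, control, n_perm=2000, seed=0):
    """Empirical p-value: how often a random relabelling of the pooled
    samples yields a group difference at least as large as observed."""
    rng = random.Random(seed)
    observed = abs(mean_diff(infected, control))
    pooled = infected + control
    k = len(infected)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean_diff(pooled[:k], pooled[k:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Hypothetical expression values for one gene under the two conditions
infected = [8.1, 7.9, 8.4, 8.6, 8.2]
control = [6.9, 7.1, 7.0, 7.3, 6.8]
p = permutation_p_value(infected, control)
print(p < 0.05)
```

With these clearly separated toy groups the empirical p-value falls well below 0.05; in the study, such per-gene statistics feed the downstream GSEA validation.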
The fracture volume gradually changes with the depletion of fracture pressure during the production process. However, few flowback models are available so far that can estimate the fracture volume loss using pressure transient and rate transient data. The initial flowback involves producing back the fracturing fluid after hydraulic fracturing, while the second flowback involves producing back the preloading fluid injected into the parent wells before fracturing of child wells. The main objective of this research is to compare the initial and second flowback data to capture the changes in fracture volume after the production and preload processes. Such a comparison is useful for evaluating well performance and optimizing fracturing operations. We construct rate-normalized pressure (RNP) versus material balance time (MBT) diagnostic plots using both initial and second flowback data (FB1 and FB2, respectively) of six multi-fractured horizontal wells completed in the Niobrara and Codell formations in the DJ Basin. In general, the slope of the RNP plot during the FB2 period is higher than that during the FB1 period, indicating a potential loss of fracture volume from the FB1 to the FB2 period. We estimate the changes in effective fracture volume (Vef) by analyzing the changes in the RNP slope and total compressibility between these two flowback periods. Vef during FB2 is in general 3%-45% lower than that during FB1. We also compare the drive mechanisms for the two flowback periods by calculating the compaction-drive index (CDI), hydrocarbon-drive index (HDI), and water-drive index (WDI). The dominant drive mechanism during both flowback periods is CDI, but its contribution is reduced by 16% in the FB2 period. This drop is generally compensated by a relatively higher HDI during this period. The loss of effective fracture volume might be attributed to the pressure depletion in fractures, which occurs during the production period and can extend over 800 days.
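The RNP-versus-MBT diagnostic described in this abstract can be sketched with synthetic data (hypothetical pressures and rates, not the DJ Basin wells). Here RNP = (p_i - p_wf)/q and MBT = Q/q, and a steeper RNP slope in the second flowback is the signature of fracture volume loss:

```python
def rnp_mbt(p_init, p_wf, rates, dt=1.0):
    """Per-timestep (MBT, RNP) points: MBT = cumulative volume / rate,
    RNP = drawdown / rate."""
    points = []
    cum = 0.0
    for p, q in zip(p_wf, rates):
        cum += q * dt
        points.append((cum / q, (p_init - p) / q))
    return points

def slope(points):
    """Least-squares slope of RNP versus MBT (pure-Python fit)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Synthetic flowback records (hypothetical units): the second flowback
# declines faster, producing the steeper diagnostic slope.
p_init = 5000.0
fb1 = rnp_mbt(p_init, [4800, 4650, 4520, 4400], [100, 90, 82, 75])
fb2 = rnp_mbt(p_init, [4700, 4450, 4250, 4080], [80, 65, 54, 46])
print(slope(fb2) > slope(fb1))  # True: consistent with volume loss
```

The paper goes further, converting the slope change plus total compressibility into an estimate of the effective-fracture-volume change; this sketch only reproduces the diagnostic-plot step.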
This paper analyzes the advantages of legal digital currencies and explores their impact on bank big data practices. By examining bank big data collection and processing, it clarifies that legal digital currencies can enhance the efficiency of bank data processing, enrich data types, and strengthen data analysis and application capabilities. In response to future development needs, it is necessary to strengthen data collection management, enhance data processing capabilities, and innovate big data application models, providing references for bank big data practices and promoting the transformation and upgrading of the banking industry in the context of legal digital currencies.
With the acceleration of the intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data, including temperature, vibration, and pressure generated by boilers, steam turbines, and other key equipment, as well as real-time working condition data from the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation calculation results, and expert knowledge. The data fusion module combines Kalman filtering, wavelet transform, and Bayesian estimation to solve the problems of time-series alignment and dimensional differences in the data. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the computation delay can be controlled within 500 ms. The data analysis module integrates the Dymola simulation model and the AERMOD pollutant diffusion model, and supports cascade analysis of boiler combustion efficiency prediction and flue gas emission monitoring, with a system response time of less than 2 s and a data consistency verification accuracy of 99.5%.
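The Kalman-filter component of the fusion module can be illustrated with a scalar filter smoothing noisy sensor readings (hypothetical boiler-temperature values; the platform's actual filter is presumably multivariate and combined with wavelet and Bayesian stages):

```python
def kalman_1d(measurements, q=0.01, r=4.0, x0=0.0, p0=100.0):
    """Scalar Kalman filter: fuse noisy readings into a smoothed state.
    q = process noise variance, r = measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q               # predict: uncertainty grows between samples
        k = p / (p + r)      # Kalman gain: trust in the new measurement
        x += k * (z - x)     # update the state estimate
        p *= (1 - k)         # posterior uncertainty shrinks
        estimates.append(x)
    return estimates

# Hypothetical boiler temperature readings in degrees C
readings = [501.2, 498.7, 500.9, 499.3, 500.4, 500.1]
est = kalman_1d(readings, x0=readings[0], p0=10.0)
print(round(est[-1], 1))
```

The smoothed estimate settles near the true level while damping the sample-to-sample noise; the same predict/update recursion generalizes to the multi-sensor, multi-state case the platform needs.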
Air temperature is an important indicator for analyzing climate change in mountainous areas. ERA5 reanalysis air temperature data are important products that have been widely used to analyze temperature change in mountainous areas. However, the reliability of ERA5 reanalysis air temperature over the Qilian Mountains (QLM) is unclear. In this study, we evaluated the reliability of ERA5 monthly averaged reanalysis 2 m air temperature data using observations at 17 meteorological stations in the QLM from 1979 to 2017. The results showed that ERA5 reanalysis monthly averaged air temperature data have good applicability in the QLM in general (R² = 0.99). ERA5 reanalysis temperature data generally overestimated the observed temperature in the QLM. The root mean square error (RMSE) increases with elevation, showing that the reliability of ERA5 reanalysis temperature data is worse at higher elevations than at lower ones. ERA5 reanalysis temperature can capture observed warming rates well. The smallest warming rates of both observed and ERA5 reanalysis temperature are found in winter, at 0.393°C/10a and 0.360°C/10a, respectively. This study provides a reference for the application of ERA5 reanalysis monthly averaged air temperature data at different elevation ranges in the Qilian Mountains.
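The elevation-dependent RMSE comparison can be sketched as follows (hypothetical temperature pairs chosen to illustrate the reported pattern of a warm bias that grows with elevation, not the QLM station data):

```python
import math

def rmse(pairs):
    """Root mean square error between reanalysis and station temperatures."""
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / len(pairs))

# (ERA5, observed) monthly means in degrees C, grouped by elevation band.
# ERA5 sits above the observation in each pair (the reported warm bias),
# with larger discrepancies in the high band.
low_band = [(5.2, 5.0), (6.1, 5.8), (4.9, 4.8)]        # lower stations
high_band = [(-3.0, -4.5), (-2.1, -3.9), (-4.4, -6.0)]  # higher stations

print(rmse(low_band) < rmse(high_band))  # True: worse fit at high elevation
```

Grouping real station-versus-ERA5 pairs by elevation band and comparing band-wise RMSE is exactly the kind of stratified evaluation the abstract describes.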
The issue of strong noise has increasingly become a bottleneck restricting the precision and application range of electromagnetic exploration methods. Noise suppression and the extraction of effective electromagnetic response information under a strong noise background is a crucial scientific task to be addressed. To solve the noise suppression problem of the controlled-source electromagnetic method in strong interference areas, we propose a data processing approach based on complex-plane 2D k-means clustering. Based on the stability of the controlled-source signal response, clustering analysis is applied to classify the spectra of different sources and noises in multiple time segments. Identifying the power spectra with controlled-source characteristics helps to improve the quality of the controlled-source response extraction. This paper presents the principle and workflow of the proposed algorithm, and demonstrates the feasibility and effectiveness of the new algorithm through synthetic and real data examples. The results show that, compared with the conventional robust denoising method, the clustering algorithm has a stronger suppression effect on common noise, can identify high-quality signals, and improves the quality of preprocessed data for the controlled-source electromagnetic method.
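The complex-plane 2D k-means idea can be sketched by clustering (real, imaginary) spectral estimates from multiple time segments (synthetic values; the deterministic two-center seeding below is a simplification for the sketch, not the paper's algorithm):

```python
def kmeans2(points, iters=20):
    """2-means clustering of (real, imag) spectral estimates.
    Seeding: the first point, plus the point farthest from it.
    Assumes the two groups are separated enough that no cluster empties."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centers = [points[0], max(points, key=lambda p: d2(p, points[0]))]
    clusters = ([], [])
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            clusters[0 if d2(p, centers[0]) <= d2(p, centers[1]) else 1].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            for cl in clusters if cl
        ]
    return centers, clusters

# Synthetic spectra (hypothetical scale): the controlled-source response is
# stable across time segments and clusters tightly; noise segments scatter.
signal = [(1.00, 0.50), (1.02, 0.49), (0.99, 0.52), (1.01, 0.51)]
noise = [(-3.8, 2.9), (-4.2, 3.1), (-4.0, 3.3)]
centers, clusters = kmeans2(signal + noise)
print(sorted(len(c) for c in clusters))  # [3, 4]
```

The tight cluster is then taken as the controlled-source response, and the scattered segments are discarded before stacking, which is the selection step the abstract credits with improving extraction quality.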
With the rapid development of the Internet and e-commerce, e-commerce platforms have accumulated huge amounts of user behavior data. The emergence of big data technology provides a powerful means for in-depth analysis of these data and insight into user behavior patterns and preferences. This paper elaborates on the application of big data technology in the analysis of user behavior on e-commerce platforms, including the technical methods of data collection, storage, processing, and analysis, as well as specific applications in the construction of user profiles, precision marketing, personalized recommendation, and user retention and churn analysis, and discusses the challenges faced in these applications and the corresponding countermeasures. Through the study of actual cases, it demonstrates the remarkable effectiveness of big data technology in enhancing the competitiveness of e-commerce platforms and the user experience.
Objective To identify core acupoint patterns and elucidate the molecular mechanisms of acupuncture for primary depressive disorder (PDD) through data mining and network analysis. Methods A comprehensive literature search was conducted across PubMed, Embase, Ovid Technologies (OVID), Web of Science, Cochrane Library, China National Knowledge Infrastructure (CNKI), VIP Database, Wanfang Data, and SinoMed Database from database foundation to January 31, 2025, for clinical studies on acupuncture treatment of PDD. Descriptive statistics, high-frequency acupoint analysis, degree and betweenness centrality evaluation, and core acupoint prescription mining identified the predominant therapeutic combinations for PDD. Network acupuncture analysis was used to predict therapeutic targets for the core acupoint prescription. Subsequent protein-protein interaction (PPI) network and molecular complex detection (MCODE) analyses were conducted to identify key targets and functional modules. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses explored the underlying biological mechanisms of the core acupoint prescription in treating PDD. Results A total of 57 acupoint prescriptions underwent systematic analysis. The core therapeutic combination comprised Baihui (GV20), Yintang (GV29), Neiguan (PC6), Hegu (LI4), and Shenmen (HT7). Network acupuncture analysis identified 88 potential therapeutic targets (79 overlapping with PDD), while PPI network analysis revealed central regulatory nodes, including interleukin (IL)-6, IL-1β, tumor necrosis factor (TNF)-α, toll-like receptor 4 (TLR4), IL-10, brain-derived neurotrophic factor (BDNF), transforming growth factor (TGF)-β1, C-X-C motif chemokine ligand 10 (CXCL10), mitogen-activated protein kinase 3 (MAPK3), and nitric oxide synthase 1 (NOS1). MCODE-based modular analysis further elucidated three functionally coherent clusters: inflammation-homeostasis (score = 6.571), plasticity-neurotransmission (score = 3.143), and oxidative stress (score = 3.000). GO and KEGG analyses demonstrated significant enrichment of the MAPK, phosphoinositide 3-kinase/protein kinase B (PI3K/Akt), and hypoxia-inducible factor (HIF)-1 signaling pathways. These mechanistic insights suggest that the antidepressant effects are mediated through neuroinflammatory regulation, neuroplasticity restoration, and immune-oxidative stress homeostasis. Conclusion This study reveals that acupuncture alleviates depression through a multi-level mechanism, primarily involving neuroinflammation suppression, neuroplasticity enhancement, and oxidative stress regulation. These findings systematically clarify the underlying mechanisms of acupuncture's antidepressant effects and identify novel therapeutic targets for further mechanistic research.
The detection and characterization of non-metallic inclusions are essential for clean steel production. Recently, imaging analysis combined with high-dimensional data processing of metallic materials using artificial intelligence (AI)-based machine learning (ML) has developed rapidly. This technique has achieved impressive results in the field of inclusion classification in process metallurgy. The present study surveys ML modeling for inclusion prediction in advanced steels, including the detection, classification, and feature prediction of inclusions in different steel grades. Studies on clean steel with different features based on data and image analysis via ML are summarized. Regarding data analysis, the ML-based inclusion prediction methodology establishes a connection between experimental parameters and inclusion characteristics and analyzes the importance of the experimental parameters. Regarding image analysis, the focus is placed on the classification of different types of inclusions via deep learning, in comparison with data analysis. Finally, further development of inclusion analyses using ML-based methods is recommended. This work paves the way for the application of AI-based methodologies to ultraclean-steel studies from a sustainable metallurgy perspective.
Photonuclear data are increasingly used in fundamental nuclear research and technological applications. These data are generated using advanced γ-ray sources. The Shanghai Laser Electron Gamma Source (SLEGS) is a new laser Compton scattering γ-ray source at the Shanghai Synchrotron Radiation Facility. It delivers energy-tunable, quasi-monoenergetic gamma beams for high-precision photonuclear measurements. This paper presents the flat-efficiency detector (FED) array at SLEGS and its application in photoneutron cross-section measurements. The systematic uncertainty of the FED array was determined to be 3.02% through calibration with a 252Cf neutron source. Using 197Au and 159Tb as representative nuclei, we demonstrate the format and processing methodology for raw photoneutron data. The results validate SLEGS' capability for high-precision photoneutron measurements.
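The basic normalization behind a photoneutron cross-section value can be sketched as the textbook relation σ = N_n / (N_γ · n_t · ε), i.e. detected neutrons divided by incident gamma flux, target areal density, and detector efficiency (hypothetical run values, not SLEGS data; real analyses add further corrections, e.g. for multiplicity sorting and beam-profile effects):

```python
def photoneutron_cross_section(n_counts, n_gamma, areal_density, efficiency):
    """sigma = N_n / (N_gamma * n_t * eps).
    areal_density in atoms/cm^2; result returned in barns (1 b = 1e-24 cm^2)."""
    sigma_cm2 = n_counts / (n_gamma * areal_density * efficiency)
    return sigma_cm2 * 1e24

# Hypothetical run: 1.2e4 detected neutrons, 1e9 incident gammas,
# 2.5e21 atoms/cm^2 target, 38% flat detection efficiency.
sigma = photoneutron_cross_section(
    n_counts=1.2e4, n_gamma=1.0e9, areal_density=2.5e21, efficiency=0.38)
print(round(sigma, 3))  # 0.013 (barns)
```

A flat-efficiency array keeps ε nearly independent of neutron energy, which is what lets a single efficiency factor enter this normalization with a small (here, 3.02%) systematic uncertainty.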
Advanced geological prediction is a crucial means to ensure safety and efficiency in tunnel construction. However, different advanced geological forecasting methods have their own limitations, resulting in poor detection accuracy. Using multiple methods to carry out a comprehensive evaluation can effectively improve the accuracy of advanced geological prediction results. In this study, geological information is combined with the detection results of geophysical methods, including transient electromagnetic, induced polarization, and tunnel seismic prediction, to establish a comprehensive analysis method for adverse geology. First, the possible main adverse geological problems are determined according to the geological information. Subsequently, various physical parameters of the rock mass in front of the tunnel face are derived on the basis of multisource geophysical data. Finally, based on the analysis of the geological information, a multisource data fusion algorithm is used to determine the type, location, and scale of the adverse geology. Advanced geological prediction results that can provide effective guidance for tunnel construction can then be obtained.
This article focuses on the remote diagnosis and analysis of rail vehicle status based on data from the Train Control and Management System (TCMS). It first expounds on the importance of train diagnostic analysis and designs a unified TCMS data frame transmission format. Subsequently, a remote data transmission link using 4G signals and the associated data processing methods are introduced. The advantages of remote diagnosis are analyzed, and common methods such as correlation analysis, fault diagnosis, and fault prediction are explained in detail. Challenges such as data security and the balance between diagnostic accuracy and real-time performance are then discussed, along with development prospects in technological innovation, algorithm optimization, and application promotion. This research provides ideas for remote analysis and diagnosis based on TCMS data, contributing to the safe and efficient operation of rail vehicles.
The analysis of ancient genomics provides opportunities to explore human population history across both temporal and geographic dimensions (Haak et al., 2015; Wang et al., 2021, 2024). To enhance the accessibility and utility of these ancient genomic datasets, a range of databases and advanced statistical models have been developed, including the Allen Ancient DNA Resource (AADR) (Mallick et al., 2024) and AdmixTools (Patterson et al., 2012). While upstream processes such as sequencing and raw data processing have been streamlined by resources like the AADR, the downstream analysis of these datasets, encompassing population genetics inference and spatiotemporal interpretation, remains a significant challenge. The AADR provides a unified collection of published ancient DNA (aDNA) data, yet its file-based format and reliance on command-line tools, such as those in AdmixTools (Patterson et al., 2012), require advanced computational expertise for effective exploration and analysis. These requirements can present significant challenges for researchers lacking advanced computational expertise, limiting the accessibility and broader application of these valuable genomic resources.
Funding: sponsored by the National Natural Science Foundation of China (Grant No. 52178100).
Funding: Supported by the National Key Research and Development Program of China (2022YFA0912100), the National Natural Science Foundation of China (32270098 and 32470073), the Fundamental Research Funds for the Central Universities (2662024JC015), and the National Key Laboratory of Agricultural Microbiology (AML2024D02) to Z.Z.
Abstract: Translation is a crucial step in gene expression. Over the past decade, the development and application of ribosome profiling (Ribo-seq) have significantly advanced our understanding of translational regulation in vivo. However, the analysis and visualization of Ribo-seq data remain challenging. Despite the availability of various analytical pipelines, improvements in comprehensiveness, accuracy, and user-friendliness are still necessary. In this study, we develop RiboParser/RiboShiny, a robust framework for analyzing and visualizing Ribo-seq data. Building on published methods, we optimize ribosome structure-based and start/stop-based models to improve the accuracy and stability of P-site detection, even in species with a high proportion of leaderless transcripts. Leveraging these improvements, RiboParser offers comprehensive analyses, including quality control, gene-level analysis, codon-level analysis, and the analysis of Ribo-seq variants. Meanwhile, RiboShiny provides a user-friendly and adaptable platform for data visualization, facilitating deeper insights into the translational landscape. Furthermore, the integration of standardized genome annotation renders our platform universally applicable to various organisms with sequenced genomes. This framework has the potential to significantly improve the precision and efficiency of Ribo-seq data interpretation, thereby deepening our understanding of translational regulation.
Abstract: With the advent of the big data era, real-time data analysis and decision-support systems have been recognized as essential tools for enhancing enterprise competitiveness and optimizing the decision-making process. This study explores the development strategies of real-time data analysis and decision-support systems and analyzes their application status and future development trends in various industries. The article first reviews the basic concepts and importance of real-time data analysis and decision-support systems, and then discusses in detail key technical aspects such as system architecture, data collection and processing, analysis methods, and visualization techniques.
Abstract: In the section 'Track decoding' of this article, one of the paragraphs was inadvertently omitted after the text '…shows the flow diagram of the Tr2-1121 track mode.' The missing paragraph is provided below.
Funding: Supported by a project funded by the Hebei Provincial Central Guidance Local Science and Technology Development Fund (236Z7714G).
Abstract: Cervical cancer, a leading malignancy globally, poses a significant threat to women's health, with an estimated 604,000 new cases and 342,000 deaths reported in 2020 [1]. As cervical cancer is closely linked to human papillomavirus (HPV) infection, early detection relies on HPV screening; however, late-stage prognosis remains poor, underscoring the need for novel diagnostic and therapeutic targets [2].
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2024YFE0200600; in part by the National Natural Science Foundation of China under Grant 62071425; in part by the Zhejiang Key Research and Development Plan under Grant 2022C01093; in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR23F010005; in part by the National Key Laboratory of Wireless Communications Foundation under Grant 2023KP01601; and in part by the Big Data and Intelligent Computing Key Lab of CQUPT under Grant BDIC-2023-B-001.
Abstract: Semantic communication (SemCom) aims to achieve high-fidelity information delivery under low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, so developing a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. Therefore, there emerges a strong incentive to revolutionize the CRC mechanism, thus more effectively reaping the benefits of both SemCom and HARQ. In this paper, built on top of Swin Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, unlike the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which capably digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth conditions, and manifest the superiority of the TDA-based error detection method in image transmission.
Abstract: DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1,176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions, pneumococcal infection and no infection, and a mixed-effects model was constructed. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed to biologically validate the preliminary results using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
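The permutation-test validation step mentioned in this abstract can be illustrated with a minimal stdlib sketch (synthetic expression values, not the paper's data): relabel the infected/uninfected samples at random many times and ask how often a chance relabeling produces a group-mean difference at least as large as the observed one.

```python
import random

def permutation_p_value(infected, control, n_perm=2000, seed=0):
    """Two-sample permutation test on the absolute difference of means."""
    rng = random.Random(seed)
    observed = abs(sum(infected) / len(infected) - sum(control) / len(control))
    pooled = infected + control
    k = len(infected)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled samples
        diff = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Clearly separated synthetic expression levels give a small p-value.
p = permutation_p_value([5.1, 5.3, 5.0, 5.2], [3.9, 4.1, 4.0, 4.2])
```

GSEA applies the same resampling idea at the level of ranked gene sets rather than a single gene's mean difference.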
Abstract: The fracture volume gradually changes with the depletion of fracture pressure during the production process. However, few flowback models available so far can estimate the fracture volume loss using pressure transient and rate transient data. The initial flowback involves producing back the fracturing fluid after hydraulic fracturing, while the second flowback involves producing back the preloading fluid injected into parent wells before the fracturing of child wells. The main objective of this research is to compare the initial and second flowback data to capture the changes in fracture volume after the production and preload processes. Such a comparison is useful for evaluating well performance and optimizing fracturing operations. We construct rate-normalized pressure (RNP) versus material balance time (MBT) diagnostic plots using both initial and second flowback data (FB1 and FB2, respectively) of six multi-fractured horizontal wells completed in the Niobrara and Codell formations in the DJ Basin. In general, the slope of the RNP plot during the FB2 period is higher than that during the FB1 period, indicating a potential loss of fracture volume from the FB1 to the FB2 period. We estimate the changes in effective fracture volume (Vef) by analyzing the changes in the RNP slope and total compressibility between these two flowback periods. Vef during FB2 is in general 3%-45% lower than that during FB1. We also compare the drive mechanisms for the two flowback periods by calculating the compaction-drive index (CDI), hydrocarbon-drive index (HDI), and water-drive index (WDI). The dominant drive mechanism during both flowback periods is compaction drive, but its contribution is reduced by 16% in the FB2 period. This drop is generally compensated by a relatively higher HDI during this period. The loss of effective fracture volume might be attributed to the pressure depletion in fractures, which occurs during the production period and can extend over 800 days.
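The RNP-versus-MBT diagnostic used in this abstract can be sketched with synthetic numbers (units and values are illustrative, not the wells' data): RNP is drawdown normalized by rate, MBT is cumulative volume over rate, and a steeper slope is read as a smaller effective fracture volume.

```python
def rnp_mbt(p_init, pressures, rates):
    """Return (MBT, RNP) pairs: MBT = cumulative volume / rate,
    RNP = (initial pressure - flowing pressure) / rate.
    Unit time steps are assumed for the cumulative volume."""
    points, cum = [], 0.0
    for p, q in zip(pressures, rates):
        cum += q
        points.append((cum / q, (p_init - p) / q))
    return points

def slope(points):
    """Ordinary least-squares slope of RNP against MBT."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Synthetic flowback periods: the second shows a steeper RNP-MBT slope,
# the signature of reduced effective fracture volume described above.
fb1 = rnp_mbt(5000, [4800, 4700, 4650], [100, 80, 70])
fb2 = rnp_mbt(4000, [3700, 3500, 3400], [60, 40, 30])
```

Comparing `slope(fb1)` and `slope(fb2)` mirrors the paper's qualitative reading of the two diagnostic plots.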
Abstract: This paper analyzes the advantages of legal digital currencies and explores their impact on bank big data practices. By examining bank big data collection and processing, it clarifies that legal digital currencies can enhance the efficiency of bank data processing, enrich data types, and strengthen data analysis and application capabilities. To meet future development needs, it is necessary to strengthen data collection management, enhance data processing capabilities, and innovate big data application models; these measures provide references for bank big data practices and promote the transformation and upgrading of the banking industry in the context of legal digital currencies.
Abstract: With the accelerating intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data, including temperature, vibration, and pressure generated by boilers, steam turbines, and other key equipment, as well as real-time operating condition data from the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation calculation results, and expert knowledge. The data fusion module combines Kalman filtering, wavelet transform, and Bayesian estimation to solve the problems of time-series alignment and dimensional differences in the data. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the computation delay can be controlled within 500 ms. The data analysis module integrates a Dymola simulation model and the AERMOD pollutant diffusion model, and supports cascade analysis of boiler combustion efficiency prediction and flue gas emission monitoring; the system response time is less than 2 s, and the data consistency verification accuracy reaches 99.5%.
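The Kalman-filtering ingredient of the fusion module described above can be illustrated with a minimal scalar filter. The state, readings, and noise levels are illustrative assumptions (e.g. a boiler temperature tracked from noisy sensor samples), not plant data:

```python
def kalman_1d(measurements, meas_var, process_var=1e-4, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a near-constant state observed with noise."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += process_var            # predict: state assumed nearly constant
        k = p / (p + meas_var)      # Kalman gain from relative uncertainties
        x += k * (z - x)            # update toward the measurement residual
        p *= (1 - k)                # shrink the estimate variance
        estimates.append(x)
    return estimates

# Noisy readings around a true value of 500 (illustrative units).
readings = [498.2, 501.1, 499.5, 500.4, 500.0]
est = kalman_1d(readings, meas_var=4.0, x0=499.0)
```

In a multi-sensor platform, each channel would contribute a measurement update with its own `meas_var`, which is where the dynamic weight allocation the abstract mentions would enter.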
Funding: Financially supported by the National Natural Science Foundation of China (No. 41621001).
Abstract: Air temperature is an important indicator for analyzing climate change in mountainous areas. ERA5 reanalysis air temperature data are important products that have been widely used to analyze temperature change in mountainous areas. However, the reliability of ERA5 reanalysis air temperature over the Qilian Mountains (QLM) is unclear. In this study, we evaluated the reliability of ERA5 monthly averaged reanalysis 2 m air temperature data using observations at 17 meteorological stations in the QLM from 1979 to 2017. The results showed that ERA5 reanalysis monthly averaged air temperature data have good applicability in the QLM in general (R2=0.99), although they overestimate the observed temperature. The root mean square error (RMSE) increases with elevation, showing that the reliability of ERA5 reanalysis temperature data is worse at higher elevations than at lower ones. ERA5 reanalysis temperature captures observed warming rates well. The smallest warming rates of both observed and ERA5 reanalysis temperature are found in winter, at 0.393 °C/10a and 0.360 °C/10a, respectively. This study provides a reference for the application of ERA5 reanalysis monthly averaged air temperature data at different elevation ranges in the Qilian Mountains.
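The station-level evaluation behind this abstract reduces to two statistics: mean bias (model minus observation) and RMSE. A minimal sketch with synthetic monthly temperatures (degrees Celsius; not the QLM station records) shows how a warm bias like the one reported would appear:

```python
import math

def bias_rmse(obs, model):
    """Mean bias and root mean square error of model values vs observations."""
    errs = [m - o for o, m in zip(obs, model)]
    bias = sum(errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return bias, rmse

# Synthetic station observations and co-located reanalysis values.
obs  = [-8.2, -3.1, 4.5, 10.8, 15.2]
era5 = [-7.0, -2.4, 5.1, 11.5, 16.0]
b, r = bias_rmse(obs, era5)
```

A positive `b` corresponds to the overestimation the study reports; computing `r` separately per elevation band reproduces the elevation-dependence analysis.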
Funding: Supported by the National Key Research and Development Program Project of China (Grant No. 2023YFF0718003) and the Key Research and Development Plan Project of Yunnan Province (Grant No. 202303AA080006).
Abstract: Strong noise has increasingly become a bottleneck restricting the precision and application scope of electromagnetic exploration methods. Noise suppression and the extraction of effective electromagnetic response information against a strong noise background are crucial scientific tasks to be addressed. To solve the noise suppression problem of the controlled-source electromagnetic method in strong-interference areas, we propose a data processing approach based on 2D k-means clustering in the complex plane. Exploiting the stability of the controlled-source signal response, clustering analysis is applied to classify the spectra of different sources and noises across multiple time segments. Identifying the power spectra with controlled-source characteristics helps improve the quality of the extracted controlled-source response. This paper presents the principle and workflow of the proposed algorithm and demonstrates its feasibility and effectiveness through synthetic and real data examples. The results show that, compared with the conventional robust denoising method, the clustering algorithm suppresses common noise more strongly, can identify high-quality signals, and improves the quality of preprocessed data for the controlled-source electromagnetic method.
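The complex-plane clustering idea can be sketched by treating each spectral estimate as a point in the complex plane and running plain 2-means: a stable controlled-source signal forms a tight cluster, while noise segments scatter. The data, cluster count, and deterministic initialization below are illustrative assumptions, not the paper's algorithm:

```python
def kmeans_complex(points, k=2, iters=20):
    """Plain k-means on complex numbers; |z1 - z2| is the 2D distance."""
    centers = points[:k]                 # deterministic init for reproducibility
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for z in points:
            j = min(range(k), key=lambda i: abs(z - centers[i]))
            groups[j].append(z)
        # Recompute each center as its group's mean; keep it if the group emptied.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# A tight "controlled-source" cluster near 1+1j plus three scattered noise points.
signal = [complex(1.0, 1.0) + complex(dx, dy)
          for dx in (-0.05, 0.0, 0.05) for dy in (-0.05, 0.0, 0.05)]
noise = [complex(-2.0, 0.5), complex(3.0, -2.5), complex(-1.5, -2.0)]
centers, groups = kmeans_complex(signal + noise)
```

The tight cluster's members are the segments whose spectra would be kept for the controlled-source response; the scattered group is discarded as noise.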
Abstract: With the rapid development of the Internet and e-commerce, e-commerce platforms have accumulated huge amounts of user behavior data. The emergence of big data technology provides a powerful means for in-depth analysis of these data and insight into user behavior patterns and preferences. This paper elaborates on the application of big data technology in the analysis of user behavior on e-commerce platforms, including the technical methods of data collection, storage, processing, and analysis, as well as specific applications in the construction of user profiles, precision marketing, personalized recommendation, and user retention and churn analysis, and discusses the challenges faced in these applications and the corresponding countermeasures. Through the study of actual cases, it demonstrates the remarkable effectiveness of big data technology in enhancing the competitiveness of e-commerce platforms and the user experience.
Abstract: Objective To identify core acupoint patterns and elucidate the molecular mechanisms of acupuncture for primary depressive disorder (PDD) through data mining and network analysis. Methods A comprehensive literature search was conducted across PubMed, Embase, Ovid Technologies (OVID), Web of Science, Cochrane Library, China National Knowledge Infrastructure (CNKI), the VIP Database (VIP), Wanfang Data, and SinoMed, from database foundation to January 31, 2025, for clinical studies on acupuncture treatment of PDD. Descriptive statistics, high-frequency acupoint analysis, degree and betweenness centrality evaluation, and core acupoint prescription mining identified the predominant therapeutic combinations for PDD. Network acupuncture was used to predict therapeutic targets for the core acupoint prescription. Subsequent protein-protein interaction (PPI) network and molecular complex detection (MCODE) analyses were conducted to identify the key targets and functional modules. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses explored the underlying biological mechanisms of the core acupoint prescription in treating PDD. Results A total of 57 acupoint prescriptions underwent systematic analysis. The core therapeutic combination comprised Baihui (GV20), Yintang (GV29), Neiguan (PC6), Hegu (LI4), and Shenmen (HT7). Network acupuncture analysis identified 88 potential therapeutic targets (79 overlapping with PDD), while PPI network analysis revealed central regulatory nodes, including interleukin (IL)-6, IL-1β, tumor necrosis factor (TNF)-α, toll-like receptor 4 (TLR4), IL-10, brain-derived neurotrophic factor (BDNF), transforming growth factor (TGF)-β1, C-X-C motif chemokine ligand 10 (CXCL10), mitogen-activated protein kinase 3 (MAPK3), and nitric oxide synthase 1 (NOS1). MCODE-based modular analysis further elucidated three functionally coherent clusters: inflammation-homeostasis (score=6.571), plasticity-neurotransmission (score=3.143), and oxidative stress (score=3.000). GO and KEGG analyses demonstrated significant enrichment of the MAPK, phosphoinositide 3-kinase/protein kinase B (PI3K/Akt), and hypoxia-inducible factor (HIF)-1 signaling pathways. These mechanistic insights suggest that the antidepressant effects are mediated through neuroinflammatory regulation, neuroplasticity restoration, and immune-oxidative stress homeostasis. Conclusion This study reveals that acupuncture alleviates depression through a multi-level mechanism, primarily involving neuroinflammation suppression, neuroplasticity enhancement, and oxidative stress regulation. These findings systematically clarify the underlying mechanisms of acupuncture's antidepressant effects and identify novel therapeutic targets for further mechanistic research.
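The two simplest statistics behind "high-frequency acupoint analysis" and "degree centrality evaluation" can be sketched from a handful of synthetic prescriptions (three made-up prescriptions below, not the 57 analyzed in the study): count how often each acupoint appears, then count each acupoint's distinct co-occurrence partners.

```python
from collections import Counter
from itertools import combinations

# Synthetic acupoint prescriptions for illustration only.
prescriptions = [
    ["GV20", "GV29", "PC6"],
    ["GV20", "PC6", "HT7"],
    ["GV20", "LI4", "HT7"],
]

# High-frequency analysis: raw occurrence counts per acupoint.
freq = Counter(p for rx in prescriptions for p in rx)

# Co-occurrence network: one undirected edge per acupoint pair that
# appears together in at least one prescription.
edges = set()
for rx in prescriptions:
    for a, b in combinations(sorted(set(rx)), 2):
        edges.add((a, b))

# Degree centrality (unweighted): number of distinct partners per acupoint.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
```

In this toy network, GV20 is both the most frequent point and the highest-degree node, mirroring how a core prescription would stand out in the real dataset.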
Funding: Support from the National Key Research and Development Program of China (No. 2024YFB3713705) is acknowledged. Wangzhong Mu would like to acknowledge Strategic Mobility, Sweden (SSF, No. SM22-0039), the Swedish Foundation for International Cooperation in Research and Higher Education (STINT, No. IB2022-9228), and Jernkontoret (Sweden) for supporting this clean steel research. Gonghao Lian would like to acknowledge the China Scholarship Council (CSC, No. 202306080032).
Abstract: The detection and characterization of non-metallic inclusions are essential for clean steel production. Recently, imaging analysis combined with high-dimensional data processing of metallic materials using artificial intelligence (AI)-based machine learning (ML) has developed rapidly. This technique has achieved impressive results in the field of inclusion classification in process metallurgy. The present study surveys ML modeling of inclusion prediction in advanced steels, including the detection, classification, and feature prediction of inclusions in different steel grades. Studies on clean steel with different features based on data and image analysis via ML are summarized. Regarding data analysis, the ML-based inclusion prediction methodology establishes a connection between the experimental parameters and inclusion characteristics and analyzes the importance of the experimental parameters. Regarding image analysis, the focus is placed on the classification of different types of inclusions via deep learning, in comparison with data analysis. Finally, further development of inclusion analyses using ML-based methods is recommended. This work paves the way for the application of AI-based methodologies in ultra-clean steel studies from a sustainable metallurgy perspective.
Funding: Supported by the National Key Research and Development Program of China (Nos. 2022YFA1602404 and 2023YFA1606901), the National Natural Science Foundation of China (Nos. 12275338, 12388102, and U2441221), and the Key Laboratory of Nuclear Data Foundation (JCKY2022201C152).
Abstract: Photonuclear data are increasingly used in fundamental nuclear research and technological applications. These data are generated using advanced γ-ray sources. The Shanghai Laser Electron Gamma Source (SLEGS) is a new laser Compton scattering γ-ray source at the Shanghai Synchrotron Radiation Facility. It delivers energy-tunable, quasi-monoenergetic gamma beams for high-precision photonuclear measurements. This paper presents the flat-efficiency detector (FED) array at SLEGS and its application in photoneutron cross-section measurements. The systematic uncertainty of the FED array was determined to be 3.02% through calibration with a ^(252)Cf neutron source. Using ^(197)Au and ^(159)Tb as representative nuclei, we demonstrate the format and processing methodology for raw photoneutron data. The results validate SLEGS' capability for high-precision photoneutron measurements.
Funding: National Natural Science Foundation of China (Grant Nos. 42293351, 41877239, 51422904, and 51379112).
Abstract: Advanced geological prediction is a crucial means of ensuring safety and efficiency in tunnel construction. However, different advanced geological forecasting methods have their own limitations, resulting in poor detection accuracy. Using multiple methods to carry out a comprehensive evaluation can effectively improve the accuracy of advanced geological prediction results. In this study, geological information is combined with the detection results of geophysical methods, including transient electromagnetic, induced polarization, and tunnel seismic prediction, to establish a comprehensive analysis method for adverse geology. First, the possible main adverse geological problems are determined according to the geological information. Subsequently, various physical parameters of the rock mass in front of the tunnel face are derived on the basis of multisource geophysical data. Finally, based on the analysis of the geological information, a multisource data fusion algorithm is used to determine the type, location, and scale of the adverse geology. Advanced geological prediction results that can provide effective guidance for tunnel construction can then be obtained.
Abstract: This article focuses on the remote diagnosis and analysis of rail vehicle status based on data from the Train Control and Management System (TCMS). It first expounds on the importance of train diagnostic analysis and designs a unified TCMS data frame transmission format. Subsequently, a remote data transmission link using 4G signals and the associated data processing methods are introduced. The advantages of remote diagnosis are analyzed, and common methods such as correlation analysis, fault diagnosis, and fault prediction are explained in detail. Challenges such as data security and the balance between diagnostic accuracy and real-time performance are then discussed, along with development prospects in technological innovation, algorithm optimization, and application promotion. This research provides ideas for remote analysis and diagnosis based on TCMS data, contributing to the safe and efficient operation of rail vehicles.
Funding: Supported by the National Key Research and Development Program of China (2023YFC3303701-02 and 2024YFC3306701), the National Natural Science Foundation of China (T2425014 and 32270667), the Natural Science Foundation of Fujian Province of China (2023J06013), the Major Project of the National Social Science Foundation of China granted to Chuan-Chao Wang (21&ZD285), the Open Research Fund of the State Key Laboratory of Genetic Engineering at Fudan University (SKLGE-2310), and the Open Research Fund of the Forensic Genetics Key Laboratory of the Ministry of Public Security (2023FGKFKT07).
Abstract: The analysis of ancient genomics provides opportunities to explore human population history across both temporal and geographic dimensions (Haak et al., 2015; Wang et al., 2021, 2024). To enhance the accessibility and utility of these ancient genomic datasets, a range of databases and advanced statistical models have been developed, including the Allen Ancient DNA Resource (AADR) (Mallick et al., 2024) and AdmixTools (Patterson et al., 2012). While upstream processes such as sequencing and raw data processing have been streamlined by resources like the AADR, the downstream analysis of these datasets, encompassing population genetics inference and spatiotemporal interpretation, remains a significant challenge. The AADR provides a unified collection of published ancient DNA (aDNA) data, yet its file-based format and reliance on command-line tools, such as those in AdmixTools (Patterson et al., 2012), require advanced computational expertise for effective exploration and analysis. These requirements can present significant challenges for researchers lacking such expertise, limiting the accessibility and broader application of these valuable genomic resources.