In upper-grade primary school English text-based instruction, three pain points exist: students' shallow thinking, fragmented question design, and the difficulty of adapting older textbook editions. Taking the teaching of Story time in Unit 4 Then and now of the Yilin Edition English textbook for Grade 6, Volume 1, as an example, teachers can ground themselves in the textbook text and construct a five-step closed loop of "setting questions before class, introducing the question chain at the start of class, resolving the chain during class, extending the chain after class, and evaluating the chain throughout." Using big questions to draw the main line and small questions to build stepping stones can activate students' intrinsic motivation for text learning and realize the shift in English teaching from "knowledge transmission" to "competency cultivation."
With the accelerating aging of China's population, the demand for community elderly care services has shown diversified and personalized characteristics. However, problems such as insufficient total care service resources, uneven distribution, and prominent supply-demand contradictions have seriously affected service quality. Big data technology, with core advantages including data collection, analysis and mining, and accurate prediction, provides a new solution for the allocation of community elderly care service resources. This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimal allocation strategies such as building a big data analysis platform and accurately grasping the elderly's care needs, striving to provide operable path references for the construction of community elderly care service systems, promoting the early realization of the elderly care service goal of "adequate support and proper care for the elderly," and boosting the high-quality development of China's elderly care service industry.
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
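The array-based versus bridge-table trade-off described above can be illustrated with a minimal in-memory sketch. All table layouts and field names here are hypothetical, not the paper's actual schema: a fact-centric lookup reads the dimension keys directly off the fact row, while a dimension-centric lookup is served naturally by the normalized bridge table.

```python
# Illustrative sketch of a many-to-many link between earthquake facts
# and affected-region dimensions (hypothetical tables and fields).

# Array-based schema: each fact row carries its region keys inline.
facts_array = [
    {"quake_id": 1, "magnitude": 6.1, "region_ids": [10, 11]},
    {"quake_id": 2, "magnitude": 5.4, "region_ids": [11]},
]

# Bridge-table schema: the same link normalized into a separate table.
facts_bridge = [
    {"quake_id": 1, "magnitude": 6.1},
    {"quake_id": 2, "magnitude": 5.4},
]
bridge = [(1, 10), (1, 11), (2, 11)]  # (quake_id, region_id) pairs

def regions_of_quake_array(quake_id):
    """Fact-centric query: a single row lookup, no join needed."""
    for fact in facts_array:
        if fact["quake_id"] == quake_id:
            return fact["region_ids"]
    return []

def quakes_in_region_bridge(region_id):
    """Dimension-centric query: the bridge table answers it directly."""
    return [q for q, r in bridge if r == region_id]
```

The hybrid schema the paper proposes would keep both representations side by side, routing each query type to whichever layout serves it cheaply.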
Large-bore CT, with its extra-large aperture design based on the principle of X-ray tomography, can accommodate obese patients as well as special patients carrying medical devices. Its high-resolution imaging provides key anatomical evidence for the formulation and implementation of radiotherapy plans, making it an important support for the "precision" transformation of modern radiotherapy [1]. Our hospital introduced the Philips Brilliance Big Bore CT in 2017; by the end of 2024, it had completed radiotherapy positioning scans for more than 5,000 tumor patients.
The increasing number of interconnected devices and the incorporation of smart technology into contemporary healthcare systems have significantly raised the attack surface for cyber threats. Early detection of threats is both necessary and complex, as these interconnected healthcare settings generate enormous amounts of heterogeneous data. Traditional Intrusion Detection Systems (IDS), which are generally centralized and machine learning-based, often fail to address the rapidly changing nature of cyberattacks and are challenged by ethical concerns related to patient data privacy. Moreover, traditional AI-driven IDS usually face challenges in handling large-scale, heterogeneous healthcare data while ensuring data privacy and operational efficiency. To address these issues, emerging technologies such as Big Data Analytics (BDA) and Federated Learning (FL) provide a hybrid framework for scalable, adaptive intrusion detection in IoT-driven healthcare systems. Big data techniques enable the processing of large-scale, high-dimensional healthcare data, and FL can be used to train a model in a decentralized manner without transferring raw data, thereby maintaining privacy between institutions. This research proposes a privacy-preserving Federated Learning-based model that efficiently detects cyber threats in connected healthcare systems while ensuring distributed big data processing, privacy, and compliance with ethical regulations. To strengthen the reliability of the reported findings, the results were validated using cross-dataset testing and 95% confidence intervals derived from bootstrap analysis, confirming consistent performance across heterogeneous healthcare data distributions. This solution takes a significant step toward securing next-generation healthcare infrastructure by combining scalability, privacy, adaptability, and early-detection capabilities. The proposed global model achieves a test accuracy of 99.93% ± 0.03 (95% CI) and a miss-rate of only 0.07% ± 0.02, representing state-of-the-art performance in privacy-preserving intrusion detection. The proposed FL-driven IDS framework offers an efficient, privacy-preserving, and scalable solution for securing next-generation healthcare infrastructures by combining adaptability, early detection, and ethical data management.
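As a rough illustration of the decentralized training idea behind FL (not the paper's actual model or framework), the sketch below runs federated averaging on a toy one-parameter regression: each client computes a gradient step on its own local data, and the server averages only the model weights, never seeing raw records.

```python
# Toy federated averaging (FedAvg) sketch. The model y = w * x, the
# client data, and the learning rate are all invented for illustration;
# real FL systems add secure aggregation, client sampling, etc.

def local_update(w, data, lr=0.1):
    """One local gradient step on squared error for y = w * x."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights):
    """Server-side aggregation: average the client models' weights."""
    return sum(client_weights) / len(client_weights)

# Two 'institutions', each holding private data consistent with w = 2.
clients = [[(1.0, 2.0)], [(2.0, 4.0)]]

w = 0.0
for _ in range(50):  # communication rounds
    w = fed_avg([local_update(w, d) for d in clients])
# w converges toward 2.0 without any raw (x, y) pair leaving its client.
```

The privacy property shown here, that only weights cross institutional boundaries, is what lets the approach described above comply with patient-data regulations.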
The Wufeng–Longmaxi Formation derives its name from the Upper Ordovician Wufeng Formation and the Lower Silurian Longmaxi Formation, found in sequence in the Sichuan Basin. This formation hosts rich shale gas reservoirs, and its shale gas enrichment patterns are examined in this study using data from 1197 shale samples collected from 14 wells. Five basic and three key parameters, eight in all, are assessed for each sample. The five basic parameters include burial depth and the contents of four mineral types (quartz, clay, carbonate, and other minerals); the three key parameters, representing shale gas enrichment, are total organic carbon (TOC) content, porosity, and gas content. SHapley Additive exPlanations (SHAP) analysis, which originated in game theory, is used here within an interpretable machine learning framework to address issues of heterogeneous data structure, noisy relationships, and multi-objective optimization. An evaluation of the ranking, contribution values, and change conditions of these parameters offers new quantitative insights into shale gas enrichment patterns. A quantitative analysis of the relationships between datasets identifies the primary factors controlling the TOC, porosity, and gas content of shale gas reservoirs. The results show that TOC and porosity jointly influence gas content; mineral content has a significant impact on both TOC and porosity; and burial depth governs porosity, which in turn affects the conditions under which shale gas is preserved. Input parameter thresholds are also determined and provide a basis for establishing quantitative criteria to evaluate shale gas enrichment. The predictive accuracy of the model used in this study is significantly improved by the step-wise addition of two input parameters, TOC and porosity, separately and together. Thus, the game-theoretic method in big data-driven analysis uses a combination of TOC and porosity to evaluate gas content with encouraging results, suggesting that these are the key parameters indicating source rock and reservoir properties.
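The game-theoretic attribution idea behind SHAP can be shown with an exact, brute-force Shapley computation on a toy two-feature "gas content" predictor. The model and feature values below are invented for illustration; production SHAP libraries approximate these values efficiently for real models.

```python
import math
from itertools import permutations

def model(f):
    # Toy predictor with an interaction term (hypothetical, not the
    # study's model): gas_content ~ TOC + porosity + TOC * porosity
    return f["TOC"] + f["porosity"] + f["TOC"] * f["porosity"]

def shapley_values(instance, baseline):
    """Average each feature's marginal contribution over all feature
    orderings; this is the exact definition SHAP approximates."""
    names = list(instance)
    contrib = {n: 0.0 for n in names}
    for order in permutations(names):
        current = dict(baseline)
        for n in order:
            before = model(current)
            current[n] = instance[n]          # reveal feature n
            contrib[n] += model(current) - before
    total = math.factorial(len(names))
    return {n: v / total for n, v in contrib.items()}

sv = shapley_values(
    instance={"TOC": 2.0, "porosity": 1.0},
    baseline={"TOC": 0.0, "porosity": 0.0},
)
# The attributions sum to f(instance) - f(baseline) (the efficiency
# property), which is what makes SHAP rankings internally consistent.
```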
The security of the seed industry is crucial for ensuring national food security. Currently, developed countries in Europe and America, along with international seed industry giants, have entered the Breeding 4.0 era. This era integrates biotechnology, artificial intelligence (AI), and big data information technology. In contrast, China is still in a transition period between stages 2.0 and 3.0, which relies primarily on conventional selection and molecular breeding. In the context of increasingly complex international situations, accurately identifying core issues in China's seed industry innovation and seizing the frontier of international seed technology are strategically important. These efforts are essential for ensuring food security and revitalizing the seed industry. This paper systematically analyzes the characteristics of crop breeding data from artificial selection to intelligent design breeding. It explores the applications and development trends of AI and big data in modern crop breeding from several key perspectives, including high-throughput phenotype acquisition and analysis, the construction of multi-omics big data databases and management systems, AI-based multi-omics integrated analysis, and the development of intelligent breeding software tools based on biological big data and AI technology. Based on an in-depth analysis of the current status and challenges of China's seed industry technology development, we propose strategic goals and key tasks for China's new generation of AI and big data-driven intelligent design breeding. These suggestions aim to accelerate the development of an intelligence-driven crop breeding engineering system that features large-scale gene mining, efficient gene manipulation, engineered variety design, and systematized biobreeding. This study provides a theoretical basis and practical guidance for the development of China's seed industry technology.
Research into metamorphism plays a pivotal role in reconstructing the evolution of continents, particularly through the study of ancient rocks that are highly susceptible to metamorphic alteration due to multiple tectonic events. In the big data era, the establishment of new data platforms and the application of big data methods have become a focus for metamorphic rocks. Significant progress has been made in creating specialized databases, compiling comprehensive datasets, and utilizing data analytics to address complex scientific questions. However, many existing databases are inadequate for the specific requirements of metamorphic research, because a substantial amount of valuable data remains uncollected. Therefore, constructing new databases that can keep pace with the development of the data era is necessary. This article provides an extensive review of existing databases related to metamorphic rocks and discusses data-driven studies in this field. Accordingly, several crucial factors that need to be taken into consideration in the establishment of specialized metamorphic databases are identified, aiming to leverage data-driven applications to achieve broader scientific objectives in metamorphic research.
On October 18, 2017, the 19th National Congress Report called for the implementation of the Healthy China Strategy. The development of biomedical data plays a pivotal role in advancing this strategy. Since the 18th National Congress of the Communist Party of China, China has vigorously promoted the integration and implementation of the Healthy China and Digital China strategies. The National Health Commission has prioritized the development of health and medical big data, issuing policies to promote standardized applications and foster innovation in "Internet + Healthcare." Biomedical data has significantly contributed to precision medicine, personalized health management, drug development, disease diagnosis, public health monitoring, and epidemic prediction capabilities.
Deep-time Earth research plays a pivotal role in deciphering the rates, patterns, and mechanisms of Earth's evolutionary processes throughout geological history, providing essential scientific foundations for climate prediction, natural resource exploration, and sustainable planetary stewardship. To advance deep-time Earth research in the era of big data and artificial intelligence, the International Union of Geological Sciences initiated the "Deep-time Digital Earth International Big Science Program" (DDE) in 2019. At the core of this ambitious program lies the development of geoscience knowledge graphs, serving as a transformative knowledge infrastructure that enables the integration, sharing, mining, and analysis of heterogeneous geoscience big data. The DDE knowledge graph initiative has made significant strides in three critical dimensions: (1) establishing a unified knowledge structure across geoscience disciplines that ensures consistent representation of geological entities and their interrelationships through standardized ontologies and semantic frameworks; (2) developing a robust and scalable software infrastructure capable of supporting both expert-driven and machine-assisted knowledge engineering for large-scale graph construction and management; and (3) implementing a comprehensive three-tiered architecture encompassing basic, discipline-specific, and application-oriented knowledge graphs spanning approximately 20 geoscience disciplines. Through its open knowledge framework and international collaborative network, this initiative has fostered multinational research collaborations, establishing a robust foundation for next-generation geoscience research while propelling the discipline toward FAIR (Findable, Accessible, Interoperable, Reusable) data practices in deep-time Earth systems research.
This paper addresses urban sustainability challenges amid global urbanization, emphasizing the need for innovative approaches aligned with the Sustainable Development Goals. While traditional tools and linear models offer insights, they fall short of presenting a holistic view of complex urban challenges. System dynamics (SD) models, which are often utilized to provide a holistic, systematic understanding of a research subject such as the urban system, emerge as valuable tools, but data scarcity and theoretical inadequacy pose challenges. The research reviews papers on SD model applications in urban sustainability published since 2018, categorizing them based on nine key indicators. Among the reviewed papers, data limitations and model assumptions were identified as major challenges in applying SD models to urban sustainability. This led to exploring the transformative potential of big data analytics, an approach this study found to be rare in the field, to enhance the empirical foundation of SD models. Integrating big data could provide data-driven calibration, potentially improving predictive accuracy and reducing reliance on simplified assumptions. The paper concludes by advocating for new approaches that reduce assumptions and promote real-time applicable models, contributing to a comprehensive understanding of urban sustainability through the synergy of big data and SD models.
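A minimal stock-and-flow simulation shows the kind of SD structure such reviews discuss and where big data calibration would enter. The rates below are invented assumptions; a data-driven workflow would estimate them from observed urban records instead of fixing them a priori.

```python
# Toy system-dynamics sketch: one urban-population stock integrated by
# Euler's method. Inflow/outflow rates are hypothetical placeholders
# that big data calibration would replace with empirical estimates.

def simulate(stock, inflow_rate, outflow_rate, steps, dt=1.0):
    """Integrate d(stock)/dt = (inflow_rate - outflow_rate) * stock."""
    history = [stock]
    for _ in range(steps):
        net_flow = (inflow_rate - outflow_rate) * stock
        stock += net_flow * dt          # Euler step for the stock
        history.append(stock)
    return history

# Assumed rates: 3%/yr inflow (births + migration), 1%/yr outflow.
traj = simulate(stock=1_000_000, inflow_rate=0.03,
                outflow_rate=0.01, steps=10)
```

A full urban SD model couples many such stocks (housing, energy, emissions); the calibration point the paper makes is that each assumed rate above is exactly the kind of parameter real-time big data could pin down.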
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when faced with new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This study establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thereby resolving the geo-engineering evaluation problem of geothermal fields. Following the research approach of "logging big data cloud platform, unified logging learning model, large function space, knowledge learning and discovery, application," the theoretical foundations of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software, and results sharing, accuracy, speed, and complexity.
Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent necessity to optimize these analytical methods to more effectively handle and utilize the information. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access it, and how to utilize it most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time. Simultaneously, it evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
This study examines the Big Data Collection and Preprocessing course at Anhui Institute of Information Engineering, implementing a hybrid teaching reform using the Bosi Smart Learning Platform. The proposed hybrid model follows a "three-stage" and "two-subject" framework, incorporating a structured design for teaching content and assessment methods before, during, and after class. Practical results indicate that this approach significantly enhances teaching effectiveness and improves students' learning autonomy.
This paper makes an in-depth study of the status quo and trends in the development of the big health industry against the background of digitalization. It aims to explore ways to promote the transformation, upgrading, and sustainable development of the big health industry through technological innovation and policy guidance, and to provide valuable references for the government, enterprises, universities, and other stakeholders seeking to promote the high-quality development of the big health industry in the digital era.
RETRACTION: P. Goyal and R. Malviya, "Challenges and Opportunities of Big Data Analytics in Healthcare," Health Care Science 2, no. 5 (2023): 328-338, https://doi.org/10.1002/hcs2.66. The above article, published online on 4 October 2023 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal Editor-in-Chief, Zongjiu Zhang; Tsinghua University Press; and John Wiley & Sons Ltd.
Objectives: This study aimed to develop and validate a stroke risk prediction model based on machine learning (ML) and regional healthcare big data, and to determine whether it may improve prediction performance compared with the conventional Logistic Regression (LR) model. Methods: This retrospective cohort study analyzed data from the CHinese Electronic health Records Research in Yinzhou (CHERRY) platform (2015–2021). We included adults aged 18–75 who had established records before 2015. Individuals with pre-existing stroke, absence of key data, or excessive missingness (>30%) were excluded. Data on demographics, clinical measures, lifestyle factors, comorbidities, and family history of stroke were collected. Variable selection was performed in two stages: an initial screening via univariate analysis, followed by prioritization of variables based on clinical relevance and actionability, with a focus on those that are modifiable. Stroke prediction models were developed using LR and four ML algorithms: Decision Tree (DT), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Back Propagation Neural Network (BPNN). The dataset was split 7:3 into training and validation sets. Performance was assessed using receiver operating characteristic (ROC) curves, calibration, and confusion matrices, and the cutoff value was determined by Youden's index to classify risk groups. Results: The study cohort comprised 92,172 participants with 436 incident stroke cases (incidence rate: 474/100,000 person-years). Ultimately, 13 predictor variables were included. RF achieved the highest accuracy (0.935), precision (0.923), sensitivity (recall: 0.947), and F1 score (0.935). Model evaluation demonstrated superior predictive performance of the ML algorithms over conventional LR, with training/validation areas under the curve (AUC) of 0.777/0.779 (LR), 0.921/0.918 (BPNN), 0.988/0.980 (RF), 0.980/0.955 (DT), and 0.962/0.958 (XGBoost). Calibration analysis revealed a better fit for the DT, LR, and BPNN models compared with RF and XGBoost. Based on the optimal performance of the RF model, the ranking of factors in descending order of importance was: hypertension, age, diabetes, systolic blood pressure, waist circumference, high-density lipoprotein cholesterol, fasting blood glucose, physical activity, BMI, low-density lipoprotein cholesterol, total cholesterol, dietary habits, and family history of stroke. Using Youden's index as the optimal cutoff, the RF model stratified individuals into high-risk (>0.789) and low-risk (≤0.789) groups with robust discrimination. Conclusions: The ML-based prediction models demonstrated superior performance metrics compared with conventional LR, and RF is the optimal prediction model, providing an effective tool for risk stratification in primary stroke prevention in community settings.
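The Youden's-index cutoff selection used above for risk stratification can be sketched on toy predicted probabilities. The scores and labels below are invented for illustration; the study's 0.789 cutoff came from its RF model's predictions on the CHERRY cohort.

```python
# Youden's J = sensitivity + specificity - 1, maximized over candidate
# thresholds. Scores/labels here are hypothetical, not study data.

def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing Youden's index."""
    pos = sum(labels)               # number of positive (stroke) cases
    neg = len(labels) - pos         # number of negative cases
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy model outputs: higher score = higher predicted stroke risk.
scores = [0.1, 0.2, 0.35, 0.4, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   1,   1,   1]
cutoff, j = youden_cutoff(scores, labels)
```

Individuals scoring above the returned cutoff would be flagged high-risk, mirroring how the study's >0.789 / ≤0.789 groups were formed.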
With the rapid development of intelligent manufacturing, industrial big data play an increasingly crucial role in the digital transformation of enterprises. However, current industrial big data platforms still face challenges in data acquisition, processing, and visualization, including data processing inefficiencies, suboptimal storage solutions, and insufficient visualization experiences, which are often exacerbated by inherent data quality issues such as noise and outliers. To address these problems, this study proposes an industrial big data processing framework based on Flink and builds a data presentation system by combining Grafana and ECharts. The system collects data through enterprise sensors, utilizes Kafka message queues for data buffering, and uses Flink for efficient real-time data processing, incorporating foundational data cleansing techniques and strategies for mitigating common noise and anomalies. For data storage, MySQL is employed for static data, and InfluxDB is used for real-time data to improve storage efficiency. In terms of data visualization, Grafana displays real-time data, whereas ECharts is used for static data, offering users an intuitive and comprehensive data display interface. This study aims to provide an efficient and customizable industrial big data solution, with an emphasis on improving data reliability for visualization, to help enterprises monitor equipment information in real time, obtain effective information, and accelerate their intelligent transformation process.
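The cleansing stage of such a pipeline can be sketched, independently of Flink, as a simple filter over sensor readings. The thresholds and record fields below are hypothetical; in the described system this logic would run inside a Flink job fed by Kafka.

```python
# Minimal stand-in for the Kafka -> Flink cleansing step: a generator
# that drops missing and out-of-range readings. Field names and the
# plausible-value range are invented for illustration.

def cleanse(readings, lo=-40.0, hi=125.0):
    """Yield only readings with a present, in-range value."""
    for r in readings:
        if r["value"] is None:
            continue                    # dropped/missing measurement
        if not (lo <= r["value"] <= hi):
            continue                    # out-of-range outlier (noise)
        yield r

raw = [
    {"sensor": "t1", "value": 21.5},
    {"sensor": "t1", "value": 999.0},   # spike from a faulty sensor
    {"sensor": "t1", "value": None},    # lost packet
    {"sensor": "t1", "value": 22.0},
]
clean = list(cleanse(raw))
```

Because the filter is a pure record-at-a-time function, the same logic ports directly to a streaming operator, which is why range checks like this are a common first cleansing stage before data reaches InfluxDB and Grafana.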
In the era of big data, data has gradually become an important asset of enterprises, and the application of big data technology has become key to optimizing enterprise marketing management models. For enterprises, proactively embracing this development trend, relying on big data technology to effectively process and analyze data, innovating decision-making methods and operating models, and achieving efficient marketing and fine-grained management are important ways to improve market competitiveness. Therefore, the author first analyzes the empowering role of big data technology in enterprise marketing management, then discusses the difficulties faced by enterprise marketing management in the era of big data, and finally puts forward targeted improvement strategies, aiming to provide a reference for enterprises to innovate and transform their marketing management models.
文摘在小学高年级英语语篇教学中,存在学生思维浅表化、问题设计碎片化、旧版教材适配难这三个痛点。以译林版英语教材六年级上册Unit 4 Then and now中Story time的教学为例,教师立足教材文本,构建“课前定问—课初引链—课中解链—课后拓链—全程评链”的五步闭环,用大问题拉主线、小问题搭台阶,能激活学生语篇学习内驱力,实现英语教学从“知识传递”到“素养培养”的转变。
文摘With the accelerating aging process of China’s population,the demand for community elderly care services has shown diversified and personalized characteristics.However,problems such as insufficient total care service resources,uneven distribution,and prominent supply-demand contradictions have seriously affected service quality.Big data technology,with core advantages including data collection,analysis and mining,and accurate prediction,provides a new solution for the allocation of community elderly care service resources.This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects:resource allocation efficiency,service accuracy,and management intelligence.Combined with practical needs,it proposes optimal allocation strategies such as building a big data analysis platform and accurately grasping the elderly’s care needs,striving to provide operable path references for the construction of community elderly care service systems,promoting the early realization of the elderly care service goal of“adequate support and proper care for the elderly”,and boosting the high-quality development of China’s elderly care service industry.
文摘Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation.Modern seismological research produces vast volumes of heterogeneous data from seismic networks,satellite observations,and geospatial repositories,creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making.Data warehousing technologies provide a robust foundation for this purpose;however,existing earthquake-oriented data warehouses remain limited,often relying on simplified schemas,domain-specific analytics,or cataloguing efforts.This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity.The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables.A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance,while the bridge-table schema remains advantageous for dimension-centric queries.To reconcile these trade-offs,a hybrid schema is proposed that retains both representations,ensuring balanced efficiency across heterogeneous workloads.The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity,improve query performance,and support multidimensional visualization.In doing so,it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience,risk mitigation,and emergency management.
Abstract: The large-bore CT, with its extra-large aperture design based on the principle of X-ray computed tomography, can accommodate obese patients as well as special patients carrying medical devices. Its high-resolution imaging provides key anatomical evidence for the formulation and implementation of radiotherapy plans, making it an important support for the "precision" transformation of modern radiotherapy [1]. Our hospital introduced the Philips Brilliance Big Bore CT in 2017 and, by the end of 2024, had completed radiotherapy positioning scans for more than 5,000 tumor patients.
Abstract: The increasing number of interconnected devices and the incorporation of smart technology into contemporary healthcare systems have significantly raised the attack surface for cyber threats. The early detection of threats is both necessary and complex, yet these interconnected healthcare settings generate enormous amounts of heterogeneous data. Traditional Intrusion Detection Systems (IDS), which are generally centralized and machine learning-based, often fail to address the rapidly changing nature of cyberattacks and are challenged by ethical concerns related to patient data privacy. Moreover, traditional AI-driven IDS usually face challenges in handling large-scale, heterogeneous healthcare data while ensuring data privacy and operational efficiency. To address these issues, emerging technologies such as Big Data Analytics (BDA) and Federated Learning (FL) provide a hybrid framework for scalable, adaptive intrusion detection in IoT-driven healthcare systems. Big data techniques enable processing of large-scale, high-dimensional healthcare data, and FL can be used to train a model in a decentralized manner without transferring raw data, thereby maintaining privacy between institutions. This research proposes a privacy-preserving Federated Learning-based model that efficiently detects cyber threats in connected healthcare systems while ensuring distributed big data processing, privacy, and compliance with ethical regulations. To strengthen the reliability of the reported findings, the results were validated using cross-dataset testing and 95% confidence intervals derived from bootstrap analysis, confirming consistent performance across heterogeneous healthcare data distributions. This solution takes a significant step toward securing next-generation healthcare infrastructure by combining scalability, privacy, adaptability, and early-detection capabilities. The proposed global model achieves a test accuracy of 99.93% ± 0.03 (95% CI) and a miss rate of only 0.07% ± 0.02, representing state-of-the-art performance in privacy-preserving intrusion detection. The proposed FL-driven IDS framework offers an efficient, privacy-preserving, and scalable solution for securing next-generation healthcare infrastructures by combining adaptability, early detection, and ethical data management.
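The privacy mechanism named above, training locally and sharing only model parameters, can be sketched generically. The following is a textbook federated-averaging (FedAvg) loop on a toy one-parameter model, not the paper's framework; all data and names are hypothetical.

```python
# Minimal FedAvg sketch: two "hospitals" fit y = w*x on private data.
# Only the weight w ever leaves a client -- raw patient records never move.

def local_update(weights, data, lr=0.1):
    # One gradient step of least squares on local data only.
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights, client_sizes):
    # Server side: size-weighted mean of the client models.
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two private datasets drawn from the same underlying rule y = 2x.
hospital_a = [(1.0, 2.0), (2.0, 4.0)]
hospital_b = [(3.0, 6.0)]

w_global = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(w_global, hospital_a)
    w_b = local_update(w_global, hospital_b)
    w_global = federated_average([w_a, w_b],
                                 [len(hospital_a), len(hospital_b)])
# w_global converges toward the shared slope 2.0
```

A real deployment would exchange full weight tensors (and typically add secure aggregation or differential privacy), but the data-stays-local structure is the same.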
Funding: Funded by the Technical Development (Entrusted) Project of the Science and Technology Department of SINOPEC (Grant No. P23240-4) and the National Natural Science Foundation of China (Grant Nos. 42172165, 42272143 and 2025ZD1403901-05).
Abstract: The Wufeng-Longmaxi Formation derives its name from the Upper Ordovician Wufeng Formation and the Lower Silurian Longmaxi Formation, found in sequence in the Sichuan Basin. This formation hosts rich shale gas reservoirs, and its shale gas enrichment patterns are examined in this study using data from 1197 shale samples collected from 14 wells. Five basic and three key parameters, eight in all, are assessed for each sample. The five basic parameters include burial depth and the contents of four mineral types: quartz, clay, carbonate, and other minerals; the three key parameters, representing shale gas enrichment, are total organic carbon (TOC) content, porosity, and gas content. The SHapley Additive exPlanations (SHAP) analysis, which originated in game theory, is used here in an interpretable machine learning framework to address issues of heterogeneous data structure, noisy relationships, and multi-objective optimization. An evaluation of the ranking, contribution values, and conditions of change for these parameters offers new quantitative insights into shale gas enrichment patterns. A quantitative analysis of the relationships between datasets identifies the primary factors controlling TOC, porosity, and gas content of shale gas reservoirs. The results show that TOC and porosity jointly influence gas content; mineral content has a significant impact on both TOC and porosity; and burial depth governs porosity which, in turn, affects the conditions under which shale gas is preserved. Input parameter thresholds are also determined and provide a basis for the establishment of quantitative criteria to evaluate shale gas enrichment. The predictive accuracy of the model used in this study is significantly improved by the step-wise addition of two input parameters, namely TOC and porosity, separately and together. Thus, the game theory method in big data-driven analysis uses a combination of TOC and porosity to evaluate the gas content with encouraging results, suggesting that these are the key parameters that indicate source rock and reservoir properties.
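The game-theoretic idea behind SHAP, averaging each feature's marginal contribution over all orderings, can be computed exactly for a tiny model. The two-feature "gas content" model below is invented for illustration and is not fitted to the paper's well data.

```python
# Exact Shapley attribution for a toy enrichment model with two features,
# TOC and porosity. The paper applies the SHAP library to real samples;
# the coefficients and values here are made-up assumptions.
from itertools import permutations

FEATURES = ("toc", "porosity")

def model(toc, porosity):
    # Hypothetical score: gas content rises with both parameters.
    return 3.0 * toc + 2.0 * porosity

def value(coalition, x, baseline):
    # Features outside the coalition are held at their baseline (mean) values.
    args = {f: (x[f] if f in coalition else baseline[f]) for f in FEATURES}
    return model(**args)

def shapley(x, baseline):
    contrib = {f: 0.0 for f in FEATURES}
    orders = list(permutations(FEATURES))
    for order in orders:
        seen = set()
        for f in order:
            before = value(seen, x, baseline)
            seen.add(f)
            # Marginal contribution of f in this ordering, averaged over orders.
            contrib[f] += (value(seen, x, baseline) - before) / len(orders)
    return contrib

x = {"toc": 4.0, "porosity": 5.0}          # sample being explained
baseline = {"toc": 2.0, "porosity": 3.0}   # dataset-average reference point
phi = shapley(x, baseline)
# For a linear model the attributions are exact: 3*(4-2)=6 for TOC, 2*(5-3)=4
# for porosity, and they sum to model(x) - model(baseline).
```

Libraries like SHAP approximate this same quantity efficiently for models where enumerating all feature orderings is infeasible.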
Funding: Partially supported by the Construction of Collaborative Innovation Center of Beijing Academy of Agricultural and Forestry Sciences (KJCX20240406), the Beijing Natural Science Foundation (JQ24037), the National Natural Science Foundation of China (32330075), and the Earmarked Fund for China Agriculture Research System (CARS-02 and CARS-54).
Abstract: The security of the seed industry is crucial for ensuring national food security. Currently, developed countries in Europe and America, along with international seed industry giants, have entered the Breeding 4.0 era, which integrates biotechnology, artificial intelligence (AI), and big data information technology. In contrast, China is still in a transition period between stages 2.0 and 3.0, relying primarily on conventional selection and molecular breeding. In the context of increasingly complex international situations, accurately identifying core issues in China's seed industry innovation and seizing the frontier of international seed technology are strategically important. These efforts are essential for ensuring food security and revitalizing the seed industry. This paper systematically analyzes the characteristics of crop breeding data from artificial selection to intelligent design breeding. It explores the applications and development trends of AI and big data in modern crop breeding from several key perspectives, including high-throughput phenotype acquisition and analysis, multi-omics big data database and management system construction, AI-based multi-omics integrated analysis, and the development of intelligent breeding software tools based on biological big data and AI technology. Based on an in-depth analysis of the current status and challenges of China's seed industry technology development, we propose strategic goals and key tasks for China's new generation of AI and big data-driven intelligent design breeding. These suggestions aim to accelerate the development of an intelligent-driven crop breeding engineering system that features large-scale gene mining, efficient gene manipulation, engineered variety design, and systematized bio-breeding. This study provides a theoretical basis and practical guidance for the development of China's seed industry technology.
Funding: Funded by the National Natural Science Foundation of China (No. 42220104008).
Abstract: Research into metamorphism plays a pivotal role in reconstructing the evolution of continents, particularly through the study of ancient rocks that are highly susceptible to metamorphic alteration due to multiple tectonic events. In the big data era, the establishment of new data platforms and the application of big data methods have become a focus in the study of metamorphic rocks. Significant progress has been made in creating specialized databases, compiling comprehensive datasets, and utilizing data analytics to address complex scientific questions. However, many existing databases are inadequate for the specific requirements of metamorphic research, as a substantial amount of valuable data remains uncollected. Therefore, constructing new databases that can cope with the development of the data era is necessary. This article provides an extensive review of existing databases related to metamorphic rocks and discusses data-driven studies in this field. Accordingly, several crucial factors that need to be considered in the establishment of specialized metamorphic databases are identified, aiming to leverage data-driven applications to achieve broader scientific objectives in metamorphic research.
Abstract: On October 18, 2017, the 19th National Congress Report called for the implementation of the Healthy China Strategy. The development of biomedical data plays a pivotal role in advancing this strategy. Since the 18th National Congress of the Communist Party of China, China has vigorously promoted the integration and implementation of the Healthy China and Digital China strategies. The National Health Commission has prioritized the development of health and medical big data, issuing policies to promote standardized applications and foster innovation in "Internet + Healthcare." Biomedical data has significantly contributed to precision medicine, personalized health management, drug development, disease diagnosis, public health monitoring, and epidemic prediction capabilities.
Funding: Strategic Priority Research Program of the Chinese Academy of Sciences, No. XDB0740000; National Key Research and Development Program of China, Nos. 2022YFB3904200 and 2022YFF0711601; Key Project of Innovation LREIS, No. PI009; National Natural Science Foundation of China, No. 42471503.
Abstract: Deep-time Earth research plays a pivotal role in deciphering the rates, patterns, and mechanisms of Earth's evolutionary processes throughout geological history, providing essential scientific foundations for climate prediction, natural resource exploration, and sustainable planetary stewardship. To advance Deep-time Earth research in the era of big data and artificial intelligence, the International Union of Geological Sciences initiated the "Deep-time Digital Earth" (DDE) international big science program in 2019. At the core of this ambitious program lies the development of geoscience knowledge graphs, serving as a transformative knowledge infrastructure that enables the integration, sharing, mining, and analysis of heterogeneous geoscience big data. The DDE knowledge graph initiative has made significant strides in three critical dimensions: (1) establishing a unified knowledge structure across geoscience disciplines that ensures consistent representation of geological entities and their interrelationships through standardized ontologies and semantic frameworks; (2) developing a robust and scalable software infrastructure capable of supporting both expert-driven and machine-assisted knowledge engineering for large-scale graph construction and management; (3) implementing a comprehensive three-tiered architecture encompassing basic, discipline-specific, and application-oriented knowledge graphs, spanning approximately 20 geoscience disciplines. Through its open knowledge framework and international collaborative network, this initiative has fostered multinational research collaborations, establishing a robust foundation for next-generation geoscience research while propelling the discipline toward FAIR (Findable, Accessible, Interoperable, Reusable) data practices in deep-time Earth systems research.
Funding: Sponsored by the U.S. Department of Housing and Urban Development (Grant No. NJLTS0027-22). The opinions expressed in this study are the authors' alone and do not represent the opinions of the U.S. Department of HUD.
Abstract: This paper addresses urban sustainability challenges amid global urbanization, emphasizing the need for innovative approaches aligned with the Sustainable Development Goals. While traditional tools and linear models offer insights, they fall short in presenting a holistic view of complex urban challenges. System dynamics (SD) models, often utilized to provide a holistic, systematic understanding of a research subject such as the urban system, emerge as valuable tools, but data scarcity and theoretical inadequacy pose challenges. The research reviews relevant papers on SD model applications in urban sustainability since 2018, categorizing them based on nine key indicators. Among the reviewed papers, data limitations and model assumptions were identified as major challenges in applying SD models to urban sustainability. This led to exploring the transformative potential of big data analytics, a rare approach in this field as identified by this study, to enhance SD models' empirical foundation. Integrating big data could provide data-driven calibration, potentially improving predictive accuracy and reducing reliance on simplified assumptions. The paper concludes by advocating for new approaches that reduce assumptions and promote real-time applicable models, contributing to a comprehensive understanding of urban sustainability through the synergy of big data and SD models.
Funding: Supported by Grant (PLN2022-14) of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
Abstract: Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, or mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when faced with new evaluation objects, and research on solutions that integrate distributed storage, processing, and learning functions in a logging big data private cloud has not yet been carried out. This study establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thereby resolving the geo-engineering evaluation problems of geothermal fields. Based on the research idea of "logging big data cloud platform, unified logging learning model, large function space, knowledge learning and discovery, application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software, and results sharing, accuracy, speed, and complexity.
Funding: Supported by the National Natural Science Foundation of China (32370703), the CAMS Innovation Fund for Medical Sciences (CIFMS) (2022-I2M-1-021, 2021-I2M-1-061), and the Major Project of Guangzhou National Laboratory (GZNL2024A01015).
Abstract: Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent necessity to optimize these analytical methods to handle and utilize the information more effectively. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access it, and how to utilize it most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time. Simultaneously, it evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Funding: 2024 Anqing Normal University University-Level Key Project (ZK2024062D).
Abstract: This study examines the Big Data Collection and Preprocessing course at Anhui Institute of Information Engineering, implementing a hybrid teaching reform using the Bosi Smart Learning Platform. The proposed hybrid model follows a "three-stage" and "two-subject" framework, incorporating a structured design for teaching content and assessment methods before, during, and after class. Practical results indicate that this approach significantly enhances teaching effectiveness and improves students' learning autonomy.
Abstract: This paper makes an in-depth study of the current status and development trends of the big health industry in the context of digitalization, aiming to explore ways to promote its transformation, upgrading, and sustainable development through technological innovation and policy guidance, and to provide valuable references for the government, enterprises, universities, and other stakeholders in promoting the high-quality development of the big health industry in the digital era.
Abstract: RETRACTION: P. Goyal and R. Malviya, "Challenges and Opportunities of Big Data Analytics in Healthcare," Health Care Science 2, no. 5 (2023): 328-338, https://doi.org/10.1002/hcs2.66. The above article, published online on 4 October 2023 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal Editor-in-Chief, Zongjiu Zhang; Tsinghua University Press; and John Wiley & Sons Ltd.
Funding: Funded by the Beijing Natural Science Foundation-Haidian Original Innovation Joint Fund (Grant No. L222103) and the National Natural Science Foundation of China (Grant No. 72174012).
Abstract: Objectives: This study aimed to develop and validate a stroke risk prediction model based on machine learning (ML) and regional healthcare big data, and to determine whether it may improve prediction performance compared with the conventional logistic regression (LR) model. Methods: This retrospective cohort study analyzed data from the CHinese Electronic health Records Research in Yinzhou (CHERRY) platform (2015-2021). We included adults aged 18-75 who had established records on the platform before 2015. Individuals with pre-existing stroke, missing key data, or excessive missingness (>30%) were excluded. Data on demographics, clinical measures, lifestyle factors, comorbidities, and family history of stroke were collected. Variable selection was performed in two stages: an initial screening via univariate analysis, followed by prioritization of variables based on clinical relevance and actionability, with a focus on those that are modifiable. Stroke prediction models were developed using LR and four ML algorithms: Decision Tree (DT), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Back Propagation Neural Network (BPNN). The dataset was split 7:3 into training and validation sets. Performance was assessed using receiver operating characteristic (ROC) curves, calibration, and confusion matrices, and the cutoff value was determined by Youden's index to classify risk groups. Results: The study cohort comprised 92,172 participants with 436 incident stroke cases (incidence rate: 474/100,000 person-years). Ultimately, 13 predictor variables were included. RF achieved the highest accuracy (0.935), precision (0.923), sensitivity (recall: 0.947), and F1 score (0.935). Model evaluation demonstrated superior predictive performance of the ML algorithms over conventional LR, with training/validation area under the curve (AUC) values of 0.777/0.779 (LR), 0.921/0.918 (BPNN), 0.988/0.980 (RF), 0.980/0.955 (DT), and 0.962/0.958 (XGBoost). Calibration analysis revealed a better fit for the DT, LR, and BPNN models compared to the RF and XGBoost models. Based on the optimal performance of the RF model, the factors in descending order of importance were: hypertension, age, diabetes, systolic blood pressure, waist circumference, high-density lipoprotein cholesterol, fasting blood glucose, physical activity, BMI, low-density lipoprotein cholesterol, total cholesterol, dietary habits, and family history of stroke. Using Youden's index as the optimal cutoff, the RF model stratified individuals into high-risk (>0.789) and low-risk (≤0.789) groups with robust discrimination. Conclusions: The ML-based prediction models demonstrated superior performance metrics compared to conventional LR, and RF was the optimal prediction model, providing an effective tool for risk stratification in primary stroke prevention in community settings.
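The Youden's-index cutoff selection mentioned in the methods reduces to maximizing J = sensitivity + specificity - 1 over candidate thresholds on the model's predicted risk. A minimal sketch with made-up risk scores (not the study's data or its fitted RF model):

```python
# Choose the risk cutoff that maximizes Youden's J over observed score values.
# scores: predicted stroke probabilities; labels: 1 = incident stroke.

def youden_cutoff(scores, labels):
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn)   # true positive rate
        spec = tn / (tn + fp)   # true negative rate
        j = sens + spec - 1     # Youden's index at this threshold
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Illustrative scores and outcomes only.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1]
cutoff, j = youden_cutoff(scores, labels)
```

Everyone scoring above the chosen cutoff is flagged high-risk, which is how the study's >0.789 / ≤0.789 stratification was derived from its RF predictions.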
Abstract: With the rapid development of intelligent manufacturing, industrial big data play an increasingly crucial role in the digital transformation of enterprises. However, current industrial big data platforms still face challenges in data acquisition, processing, and visualization, including data processing inefficiencies, suboptimal storage solutions, and insufficient visualization experiences, which are often exacerbated by inherent data quality issues such as noise and outliers. To address these problems, this study proposes an industrial big data processing framework based on Flink and builds a data presentation system combining Grafana and ECharts. The system collects data through enterprise sensors, utilizes Kafka message queues for data buffering, and uses Flink for efficient real-time data processing, incorporating foundational data cleansing techniques and strategies for mitigating common noise and anomalies. For data storage, MySQL is employed for static data and InfluxDB for real-time data to improve storage efficiency. For data visualization, Grafana displays real-time data, whereas ECharts is used for static data, offering users an intuitive and comprehensive display interface. This study aims to provide an efficient and customizable industrial big data solution, with an emphasis on improving data reliability for visualization, to help enterprises monitor equipment information in real time, obtain effective information, and accelerate their intelligent transformation.
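The noise-mitigation step this abstract describes can be illustrated by a single stand-alone operator: a sliding-window filter that drops sensor readings lying far from the recent median. In production such logic would sit inside a Flink map/filter over a Kafka topic; the window size, threshold, and readings below are illustrative assumptions, not the paper's configuration.

```python
# Sliding-window median filter for a stream of sensor readings.
from collections import deque
from statistics import median

def cleanse(stream, window=5, max_dev=3.0):
    """Yield readings within max_dev of the sliding-window median."""
    recent = deque(maxlen=window)
    for reading in stream:
        if len(recent) < 3 or abs(reading - median(recent)) <= max_dev:
            yield reading            # keep plausible values
            recent.append(reading)   # only clean values update the window
        # else: drop the spike/outlier; the window is left unchanged

# Temperature-like readings with two injected spikes (85.0 and -5.0).
raw = [20.1, 20.3, 19.9, 85.0, 20.2, 20.4, -5.0, 20.0]
clean = list(cleanse(raw))
# clean == [20.1, 20.3, 19.9, 20.2, 20.4, 20.0]
```

A median is preferred over a mean here because a single spike would drag a mean toward itself, making the filter miss the very outlier it should reject.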
Abstract: In the era of big data, data has gradually become an important asset of enterprises, and the application of big data technology has become key to optimizing enterprise marketing management models. Taking the initiative to meet the development trend of the times, relying on big data technology to effectively process and analyze data, innovating decision-making methods and operating models, and achieving efficient marketing and fine-grained management are important ways for enterprises to improve their market competitiveness. Therefore, the author first analyzes the empowering role of big data technology in enterprise marketing management, then discusses the difficulties faced by enterprise marketing management in the big data era, and finally puts forward targeted improvement strategies, aiming to provide a reference for enterprises to innovate and transform their marketing management models.