The increasing complexity of China’s electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools—including dataset generation, formula configuration, billing templates, and task scheduling—facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
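The attribute-graph data model described above can be sketched in a few lines: nodes and edges carry open-ended attribute dictionaries rather than a fixed relational schema, so a new market type or settlement field attaches without a schema migration. All entity names and fields below (`user_001`, `energy_mwh`, and so on) are hypothetical illustrations, not the paper's actual schema.

```python
from collections import defaultdict

class AttributeGraph:
    """Minimal attribute graph: nodes and edges carry open-ended attribute
    dicts, so new market entities or fields need no schema migration."""
    def __init__(self):
        self.nodes = {}                   # node_id -> attribute dict
        self.edges = defaultdict(list)    # node_id -> [(neighbor_id, attrs)]

    def add_node(self, node_id, **attrs):
        self.nodes.setdefault(node_id, {}).update(attrs)

    def add_edge(self, src, dst, **attrs):
        self.edges[src].append((dst, attrs))

    def neighbors(self, node_id, **filters):
        """Return neighbors whose edge attributes match all given filters."""
        return [dst for dst, attrs in self.edges[node_id]
                if all(attrs.get(k) == v for k, v in filters.items())]

# A retail participant settled against two markets, with per-edge metering data.
g = AttributeGraph()
g.add_node("user_001", kind="retail_user", province="Zhejiang")
g.add_node("spot", kind="market")
g.add_node("contract", kind="market")
g.add_edge("user_001", "spot", month="2024-07", energy_mwh=120.5)
g.add_edge("user_001", "contract", month="2024-07", energy_mwh=300.0)

print(g.neighbors("user_001", month="2024-07"))   # ['spot', 'contract']
```

Because attributes are plain dictionaries, a retroactive transaction or a newly introduced market simply becomes another node or edge with its own fields, which is the flexibility the framework relies on.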
1. Introduction. Data inference (DInf) is a data security threat in which critical information is inferred from low-sensitivity data. Once regarded as an advanced professional threat limited to intelligence analysts, DInf has become a widespread risk in the artificial intelligence (AI) era.
Dear Editor, Attackers constantly seek to intrude covertly into networked control systems (NCSs) by dynamically changing their false data injection attack (FDIA) strategies, while defenders try their best to resist attacks by designing defense strategies based on identifying the attack strategy, thereby maintaining stable operation of the NCSs. To solve this attack-defense game problem, this letter investigates optimal secure control of NCSs under FDIAs. First, to account for the alterations of energy caused by false data, a novel attack-defense game model is constructed, which considers the changes of energy caused by the actions of the defender and attacker in the forward and feedback channels.
The distillation process is an important chemical process, and the application of a data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to cluster the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables for removing redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, to construct the new integrated learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
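A minimal sketch of the variance-augmented clustering idea, assuming the paper's KMDI works roughly like this: describe each data window by its mean and (log) variance, cluster the windows, and keep the low-variance cluster as steady-state data. The windowing, initialization, and synthetic signal below are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def kmeans(X, init_centers, iters=50):
    """Plain Lloyd's algorithm with caller-supplied initial centers."""
    centers = init_centers.copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic process variable: two steady stretches around one perturbed interval.
rng = np.random.default_rng(1)
signal = np.concatenate([
    10 + 0.05 * rng.standard_normal(200),   # steady
    10 + 2.00 * rng.standard_normal(100),   # perturbed (high variance)
    10 + 0.05 * rng.standard_normal(200),   # steady again
])

# Describe each window by (mean, log variance); the variance feature is what
# separates steady windows from perturbed ones.
win = 20
segs = signal.reshape(-1, win)
feats = np.column_stack([segs.mean(axis=1), np.log(segs.var(axis=1))])

# Seed one center at the calmest window and one at the noisiest window.
init = feats[[np.argmin(feats[:, 1]), np.argmax(feats[:, 1])]]
labels, centers = kmeans(feats, init)
steady_cluster = np.argmin(centers[:, 1])
steady_windows = np.flatnonzero(labels == steady_cluster)
print(f"{len(steady_windows)} of {len(segs)} windows flagged steady")
```

Only the windows flagged steady would then be passed on to feature selection and model training, which is the "steady-state data extraction" step the abstract refers to.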
False Data Injection Attacks (FDIAs) pose a critical security threat to modern power grids, corrupting state estimation and enabling malicious control actions that can lead to severe consequences, including cascading failures, large-scale blackouts, and significant economic losses. While detecting attacks is important, accurately localizing compromised nodes or measurements is even more critical, as it enables timely mitigation, targeted response, and enhanced system resilience beyond what detection alone can offer. Existing research typically models topological features using fixed structures, which can introduce irrelevant information and affect the effectiveness of feature extraction. To address this limitation, this paper proposes an FDIA localization model with adaptive neighborhood selection, which dynamically captures spatial dependencies of the power grid by adjusting node relationships based on data-driven similarities. An improved Transformer is employed to pre-fuse global spatial features of the graph, enriching the feature representation. To improve spatio-temporal correlation extraction for FDIA localization, the proposed model employs dilated causal convolution with a gating mechanism, combined with graph convolution, to capture and fuse long-range temporal features and adaptive topological features. This fully exploits the temporal dynamics and spatial dependencies inherent in the power grid. Finally, multi-source information is integrated to generate highly robust node embeddings, enhancing FDIA detection and localization. Experiments are conducted on the IEEE 14-, 57-, and 118-bus systems, and the results demonstrate that the proposed model substantially improves the accuracy of FDIA localization. Additional experiments verify the effectiveness and robustness of the proposed model.
Although digital changes in power systems have added more ways to monitor and control them, these changes have also introduced new cyber-attack risks, mainly from False Data Injection (FDI) attacks. When such an attack occurs, sensors and operations are compromised, which can lead to major problems, disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explanatory methods from Artificial Intelligence (AI). Not only does the suggested architecture detect potential attacks with high accuracy, but it also makes its decisions transparent, enabling operators to take appropriate action. The method developed here utilizes model-free, interpretable tools to identify essential input elements, thereby making predictions more understandable and usable. Enhanced detection performance is made possible by correcting class imbalance using Synthetic Minority Over-sampling Technique (SMOTE)-based data balancing. Detailed experiments on benchmark power system data confirm that the model functions correctly. Experimental results showed that Bi-LSTM + Explainable AI (XAI) achieved an average accuracy of 94%, surpassing XGBoost (89%) and Bagging (84%), while ensuring explainability and a high level of robustness across various operating scenarios. An ablation study shows that bidirectional recursive modeling and ReLU activation help improve generalization and model predictability. Additionally, examining model decisions through LIME enables us to identify which features are crucial for making smart grid operational decisions in real time. The research offers a practical and flexible approach for detecting FDI attacks, improving the security of cyber-physical systems, and facilitating the deployment of AI in energy infrastructure.
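Class imbalance between normal and attack samples is corrected with SMOTE before training; the core interpolation step fits in a few lines of NumPy. This is a from-scratch sketch of the idea, not the imbalanced-learn implementation or the paper's exact pipeline, and the toy attack/normal data are hypothetical.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Generate synthetic minority samples by interpolating each sample
    toward one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    d = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                      # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]                # k nearest neighbours
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))                 # random minority sample
        j = nn[i, rng.integers(k)]                   # one of its neighbours
        lam = rng.random()                           # interpolation factor
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Imbalanced toy set: 100 normal samples vs 8 attack samples.
rng = np.random.default_rng(42)
X_normal = rng.standard_normal((100, 4))
X_attack = rng.standard_normal((8, 4)) + 3.0

X_syn = smote(X_attack, n_new=92)                    # balance the classes
print(X_syn.shape)                                   # (92, 4)
```

Because each synthetic point lies on a segment between two real attack samples, the oversampled class stays inside the minority region instead of duplicating points exactly, which is what lets the downstream classifier generalize better.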
In this paper, we address a cross-layer resilient control issue for a class of multi-spacecraft systems (MSSs) under attack. Malicious attackers use false data injection (FDI) attacks to prevent the MSS from reaching consensus. To ensure the effectiveness of control, the defender embedded in the MSS preliminarily allocates defense resources among the spacecraft. The attacker then selects its target spacecraft and mounts FDI attacks to achieve maximum damage. In the physical layer, a Nash equilibrium (NE) control strategy is proposed for the MSS to quantify system performance under the effect of attacks by solving a game problem. In the cyber layer, a fuzzy Stackelberg game framework is used to examine the rivalry between the attacker and defender. The strategies of both attacker and defender are given based on the analysis of the physical and cyber layers. Finally, a simulation example is used to test the viability of the proposed cross-layer fuzzy game algorithm.
A security issue with multi-sensor unmanned aerial vehicle (UAV) cyber-physical systems (CPSs), viewed from the perspective of a false data injection (FDI) attacker, is investigated in this paper. The FDI attacker can mount attacks on the feedback and feed-forward channels simultaneously with limited resources. The attacker aims to degrade the UAV CPS's estimation performance as much as possible while remaining stealthy, where stealthiness is characterized by the Kullback-Leibler (K-L) divergence. Because the attacker is resource-limited and can only attack a subset of sensors, both the sensors to attack and the specific forms of the attack signals at each instant must be chosen by the attacker. The sensor selection principle is investigated for time-invariant attack covariances. Additionally, the optimal switching attack strategies for time-variant attack covariances are modeled as a multi-agent Markov decision process (MDP) with a hybrid discrete-continuous action space. The multi-agent MDP is then solved using the deep multi-agent parameterized Q-networks (MAPQN) method. Finally, a quadrotor near-hover system is used to validate the effectiveness of the results in the simulation section.
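The K-L divergence stealthiness constraint mentioned above has a closed form when the innovation sequences are Gaussian, which is how such constraints are usually evaluated. The 2-state example and the budget below are illustrative stand-ins, not the paper's actual system or threshold.

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL divergence D( N(mu0,S0) || N(mu1,S1) )."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Nominal innovation distribution vs the distribution under an additive attack:
# the attacker tunes the attack covariance so the divergence stays under budget.
mu_nom, S_nom = np.zeros(2), np.eye(2)
S_attack = 0.2 * np.eye(2)              # attack signal covariance (illustrative)
mu_atk, S_atk = np.zeros(2), S_nom + S_attack

delta = 0.1                             # stealthiness budget (illustrative)
d = kl_gaussian(mu_atk, S_atk, mu_nom, S_nom)
print(f"KL = {d:.4f}, stealthy: {d <= delta}")
```

The attacker's problem is then to maximize estimation error subject to this divergence staying below the detector's budget, which is what makes the sensor selection and covariance design a constrained optimization.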
Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent necessity for the optimization of these analytical methods to more effectively handle and utilize the information. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access them, and how to utilize them most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time. Simultaneously, it evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Phenotypic prediction is a promising strategy for accelerating plant breeding. Data from multiple sources (called multi-view data) can provide complementary information to characterize a biological object from various aspects. By integrating multi-view information into phenotypic prediction, a multi-view best linear unbiased prediction (MVBLUP) method is proposed in this paper. To measure the importance of multiple data views, a differential evolution algorithm with an early stopping mechanism is used, by which we obtain a multi-view kinship matrix that is then incorporated into the BLUP model for phenotypic prediction. To further illustrate the characteristics of MVBLUP, we perform empirical experiments on four multi-view datasets in different crops. Compared to the single-view method, the prediction accuracy of the MVBLUP method improved by 0.038–0.201 on average. The results demonstrate that MVBLUP is an effective integrative prediction method for multi-view data.
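The core of the method above, combining per-view kinship matrices with learned weights and feeding the result to a BLUP-style predictor, can be sketched as follows. The view weights here are fixed constants standing in for the differential-evolution search, the data are synthetic, and the GBLUP solve is a textbook simplification; none of this is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2 = 60, 100, 30

# Two data views for the same individuals, e.g. SNP markers and metabolites.
M1 = rng.standard_normal((n, p1))
M2 = rng.standard_normal((n, p2))

def kinship(M):
    """VanRaden-style relationship matrix from a centred feature matrix."""
    Mc = M - M.mean(axis=0)
    return Mc @ Mc.T / M.shape[1]

# Weighted fusion of per-view kinship matrices; in the paper the weights are
# tuned by differential evolution, here they are fixed for illustration.
w = np.array([0.7, 0.3])
K = w[0] * kinship(M1) + w[1] * kinship(M2)

# GBLUP-style fit: y = mu + u + e, with u ~ N(0, sigma_u^2 K).
y = M1[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n)
lam = 1.0                                    # ratio sigma_e^2 / sigma_u^2
mu = y.mean()
alpha = np.linalg.solve(K + lam * np.eye(n), y - mu)
u_hat = K @ alpha                            # predicted breeding values

r = np.corrcoef(u_hat, y - mu)[0, 1]
print(f"fit correlation: {r:.3f}")
```

The weighted kinship matrix is the only place the views interact, so once the weights are chosen the rest of the pipeline is ordinary BLUP, which is what makes the approach easy to bolt onto existing genomic-selection code.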
Substantial advancements have been achieved in Tunnel Boring Machine (TBM) technology and monitoring systems, yet the presence of missing data impedes accurate analysis and interpretation of TBM monitoring results. This study investigates the issue of missing data in extensive TBM datasets. Through a comprehensive literature review, we analyze the mechanisms of missing TBM data and compare different imputation methods, including statistical analysis and machine learning algorithms. We also examine the impact of various missing patterns and missing rates on the efficacy of these methods. Finally, we propose a dynamic interpolation strategy tailored for TBM engineering sites. The results show that the K-Nearest Neighbors (KNN) and Random Forest (RF) algorithms achieve good interpolation results; that as the missing rate increases, the performance of all methods decreases; and that block missing is the hardest pattern to interpolate, followed by mixed missing, while sporadic missing is interpolated best. On-site application results validate the proposed interpolation strategy's capability to achieve robust missing-value interpolation, applicable in machine learning scenarios such as parameter optimization, attitude warning, and pressure prediction. These findings contribute to enhancing the efficiency of TBM missing-data processing, offering more effective support for large-scale TBM monitoring datasets.
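Of the imputation methods compared above, KNN is the most straightforward to sketch: fill each incomplete row from the average of its nearest complete rows, measuring distance only over the columns that row actually observed. The toy thrust/torque data and the choice k=3 below are illustrative, not the study's real TBM logs or settings.

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaNs in each row with the mean of the k nearest complete rows,
    using Euclidean distance over the columns that row has observed."""
    X = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i in np.flatnonzero(np.isnan(X).any(axis=1)):
        obs = ~np.isnan(X[i])
        d = ((complete[:, obs] - X[i, obs]) ** 2).sum(axis=1)
        nn = complete[np.argsort(d)[:k]]         # k closest complete rows
        X[i, ~obs] = nn[:, ~obs].mean(axis=0)    # fill the missing columns
    return X

# TBM-like toy log: thrust and torque move together; a few torque values missing.
rng = np.random.default_rng(0)
thrust = np.linspace(10, 20, 50) + 0.1 * rng.standard_normal(50)
torque = 2 * thrust + 0.1 * rng.standard_normal(50)
X = np.column_stack([thrust, torque])
X[[5, 20, 40], 1] = np.nan                       # sporadic missing pattern

X_filled = knn_impute(X, k=3)
print(np.abs(X_filled[[5, 20, 40], 1] - 2 * thrust[[5, 20, 40]]).max())
```

The example also hints at why sporadic missing is the easiest pattern: the neighbours of an isolated gap are genuinely similar operating points, whereas in block missing whole stretches of the machine's state have no close complete rows to borrow from.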
To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (Simple Protocol and RDF Query Language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units. Minimal connectable units are joined according to semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed. In the worst case, the query decomposing algorithm finishes in O(n²) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments; experimental results show that when the length of a query is less than 8, the query processing algorithms provide satisfactory performance.
Precipitation types primarily include rainfall, snowfall, and sleet, and the transformation of precipitation types has significant impacts on regional climate, ecosystems, and the land-atmosphere system. This study employs the Ding method to separate precipitation types from three datasets (CMFD, ERA5_Land, and CN05.1). Using data from 26 meteorological observation stations in the Chinese Tianshan Mountains Region (CTMR) of China as the validation dataset, the precipitation type separation accuracy of the three datasets was evaluated. Additionally, the impacts of relative humidity, precipitation amount, and air temperature on the accuracy of precipitation type separation were analyzed. The results indicate that the CMFD dataset provides the highest separation accuracy, followed by CN05.1, with ERA5_Land showing the poorest performance. Spatial correlation analysis reveals that CMFD outperforms the other two datasets at both annual and monthly scales. Root Mean Square Error (RMSE) and Mean Deviation (MD) values suggest that CMFD is more consistent with the station observational data. The analysis further demonstrates that relative humidity and precipitation amount significantly affect separation accuracy. After bias correction, the correlation coefficients between CMFD, ERA5_Land, and the station observational data improved to 0.85–0.94, while the RMSE was controlled within 2 mm. The study also revealed that the overestimation of precipitation was positively correlated with the overestimation of rainfall days and negatively correlated with the overestimation of snowfall days, and that underestimated air temperatures led to an increase in the misclassification of snowfall days. This research provides a basis for selecting climate change datasets and managing water resources in alpine regions.
As the number of distributed power supplies increases on the user side, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids, highlighting its effectiveness in bypassing conventional security measures. Unlike traditional methods that focus on data manipulation within communication networks, this approach directly injects false data at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. This method requires no detailed knowledge of the target system, making it practical for real-world attacks. The attack’s impact on power system stability is demonstrated through experiments, highlighting the significant cybersecurity risks introduced by data-driven algorithms in smart grids.
Understanding climate-growth relationships is essential for adaptive forest management. By using a more detailed approach (daily climatic data), we sought to uncover finer-scale climatic effects on European larch (Larix decidua) growth in the Tatra Mountains (the Western Carpathians), providing a more nuanced understanding of the climate-growth response in this mountain ecosystem. We analyzed tree-ring width index (TRWI) chronology against daily mean temperature, insolation duration, and precipitation records from 1950 to 2019, and in two subperiods (1950-1984 and 1985-2019). Larch growth is strongly affected by temperature, insolation duration, and precipitation, but with different positive or negative influences and varied intensity across the subperiods. The climate-growth analysis indicates that larch benefited from warm, sunny, and dry late winters and springs, as well as warm summers, during the entire analyzed period. However, in recent decades, the previously strong and significant influence of March-July temperature has mostly disappeared, becoming limited to only a few days (in June). Notably, the formerly strong negative influence of summer and early-autumn temperatures and insolation duration in the previous year has disappeared. In the earlier subperiod, larch growth showed strong positive responses to late-summer/early-autumn precipitation of the previous year and negative effects from spring to late-summer rainfall. In recent decades, these patterns have weakened but still limit growth. Our results reveal significant changes in the larch growth response, highlighting its adaptability to fluctuating environmental conditions. In recent decades, the influence of temperature, insolation duration, and precipitation on radial growth has weakened, which suggests that climate change has had a positive impact on tree growth in the Tatra Mountains. These findings suggest that rising temperatures in European mountain regions may alter the climatic sensitivity of tree species. Understanding these changes is crucial to improving resilience-based management strategies in the face of climate change.
Developing an accurate and efficient comprehensive water quality prediction model, together with an assessment method for it, is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water body health, which is essential for water resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GANs). It integrates optimization algorithms (OAs) with Convolutional Neural Networks (CNNs) to propose a comprehensive model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), a genetic algorithm (GA), and simulated annealing (SA) combined with CNNs to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment, offering new insights and methods for water pollution prevention and control.
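Of the three optimizers combined with CNNs above, PSO is the easiest to sketch. The snippet below runs a minimal swarm on a stand-in quadratic objective; in the paper the objective would instead be the CNN's validation error as a function of its hyperparameters. All constants here (inertia 0.7, acceleration 1.5, swarm size, bounds) are common defaults, not the study's settings.

```python
import numpy as np

def pso(f, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimiser over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Stand-in objective: a bowl whose minimum plays the role of the best
# (learning-rate, filter-count) pair; the real objective would train a CNN.
f = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, best_val = pso(f, (np.array([0.0, 0.0]), np.array([1.0, 1.0])))
print(best.round(3), round(best_val, 6))
```

Because PSO only needs objective evaluations, swapping the bowl for "train this CNN configuration and return its validation loss" changes nothing structurally, which is why PSO, GA, and SA can all wrap the same CNN training loop.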
As Internet of Things (IoT) technologies continue to evolve at an unprecedented pace, intelligent big data control and information systems have become critical enablers for organizational digital transformation, facilitating data-driven decision making, fostering innovation ecosystems, and maintaining operational stability. In this study, we propose an advanced deployment algorithm for Service Function Chaining (SFC) that leverages an enhanced Practical Byzantine Fault Tolerance (PBFT) mechanism. The main goal is to tackle the issues of security and resource efficiency in SFC implementation across diverse network settings. By integrating blockchain technology and Deep Reinforcement Learning (DRL), our algorithm not only optimizes resource utilization and quality of service but also ensures robust security during SFC deployment. Specifically, the enhanced PBFT consensus mechanism (VRPBFT) significantly reduces consensus latency and improves Byzantine node detection through the introduction of a Verifiable Random Function (VRF) and a node reputation grading model. Experimental results demonstrate that compared to traditional PBFT, the proposed VRPBFT algorithm reduces consensus latency by approximately 30% and decreases the proportion of Byzantine nodes by 40% after 100 rounds of consensus. Furthermore, the DRL-based SFC deployment algorithm (SDRL) exhibits rapid convergence during training, with improvements in long-term average revenue, request acceptance rate, and revenue/cost ratio of 17%, 14.49%, and 20.35%, respectively, over existing algorithms. Additionally, the CPU resource utilization of the SDRL algorithm reaches up to 42%, which is 27.96% higher than that of other algorithms. These findings indicate that the proposed algorithm substantially enhances resource utilization efficiency, service quality, and security in SFC deployment.
Snow cover plays a critical role in global climate regulation and hydrological processes. Accurate monitoring is essential for understanding snow distribution patterns, managing water resources, and assessing the impacts of climate change. Remote sensing has become a vital tool for snow monitoring, with the Moderate-resolution Imaging Spectroradiometer (MODIS) snow products from the Terra and Aqua satellites among the most widely used. However, cloud cover often interferes with snow detection, making cloud removal techniques crucial for reliable snow product generation. This study evaluated the accuracy of four MODIS snow cover datasets generated through different cloud removal algorithms. Using real-time field camera observations from four stations in the Tianshan Mountains, China, this study assessed the performance of these datasets during three distinct snow periods: the snow accumulation period (September-November), the snowmelt period (March-June), and the stable snow period (December-February of the following year). The findings showed that cloud-free snow products generated using the Hidden Markov Random Field (HMRF) algorithm consistently outperformed the others, particularly under cloud cover, while cloud-free snow products using near-day synthesis and the spatiotemporal adaptive fusion method with error correction (STAR) demonstrated varying performance depending on terrain complexity and cloud conditions. This study highlighted the importance of considering terrain features, land cover types, and snow dynamics when selecting cloud removal methods, particularly in areas with rapid snow accumulation and melting. The results suggested that future research should focus on improving cloud removal algorithms through the integration of machine learning, multi-source data fusion, and advanced remote sensing technologies. By expanding validation efforts and refining cloud removal strategies, more accurate and reliable snow products can be developed, contributing to enhanced snow monitoring and better management of water resources in alpine and arid areas.
In weather forecasting, generating atmospheric variables for regions with complex topography, such as the Andean regions with peaks reaching 6500 m above sea level, poses significant challenges. Traditional regional climate models often struggle to accurately represent the atmospheric behavior in such areas. Furthermore, the capability to produce high spatio-temporal resolution data (finer than 27 km and hourly) is limited to a few institutions globally due to the substantial computational resources required. This study presents the results of atmospheric data generated using a new type of artificial intelligence (AI) model, aimed at reducing the computational cost of generating downscaled climate data with regional climate models like the Weather Research and Forecasting (WRF) model over the Andes. The WRF model was selected for this comparison due to its frequent use in simulating atmospheric variables in the Andes. Our results demonstrate high downscaling performance for the four target weather variables studied (temperature, relative humidity, and zonal and meridional wind) over coastal, mountain, and jungle regions. Moreover, this AI model offers several advantages, including lower computational costs compared to dynamical models like WRF and continuous improvement potential with additional training data.
INTRODUCTION. The crustal velocity model is crucial for describing subsurface composition and structure, and has significant implications for offshore oil and gas exploration and marine geophysical engineering (Xie et al., 2024). Currently, travel-time tomography is the most commonly used method for velocity modeling based on ocean bottom seismometer (OBS) data (Zhang et al., 2023; Sambolian et al., 2021). This method usually assumes that the sub-seafloor structure is layered, and therefore faces challenges in high-precision modeling of structures with strong lateral discontinuities.
Funded by the Science and Technology Project of State Grid Corporation of China (5108-202355437A-3-2-ZN).
Abstract: The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
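As a toy illustration of the attribute-graph idea described in this abstract, the sketch below stores settlement data as nodes with free-form attributes and computes a bill with a configurable operator. The schema, node names, and the `settle` operator are invented for illustration; they are not the paper's actual model.

```python
# Minimal attribute-graph sketch (hypothetical schema, not the paper's):
# nodes hold arbitrary typed attributes, edges link a market participant
# to its contracts, and a configurable "operator" walks the graph.

class AttributeGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict
        self.edges = {}   # node_id -> list of neighbor node ids

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs
        self.edges.setdefault(node_id, [])

    def add_edge(self, src, dst):
        self.edges.setdefault(src, []).append(dst)

def settle(graph, participant, price_per_mwh):
    """One example settlement operator: price * total contracted energy."""
    total_mwh = sum(graph.nodes[c]["energy_mwh"]
                    for c in graph.edges[participant])
    return total_mwh * price_per_mwh

g = AttributeGraph()
g.add_node("gen1", kind="generator")
g.add_node("c1", energy_mwh=120.0)
g.add_node("c2", energy_mwh=80.0)
g.add_edge("gen1", "c1")
g.add_edge("gen1", "c2")

bill = settle(g, "gen1", price_per_mwh=350.0)  # (120 + 80) * 350 = 70000.0
```

New settlement rules then become new operators over the same graph, which is the flexibility the framework claims.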
Supported by the National Key Research and Development Program of China (2022YFB2703503), the National Natural Science Foundation of China (62293501, 62525210, and 62293502), and the China Scholarship Council (202306280318).
Abstract: 1. Introduction. Data inference (DInf) is a data security threat in which critical information is inferred from low-sensitivity data. Once regarded as an advanced professional threat limited to intelligence analysts, DInf has become a widespread risk in the artificial intelligence (AI) era.
Supported in part by the National Science Foundation of China (62373240, 62273224, U24A20259).
Abstract: Dear Editor, Attackers covertly intrude into networked control systems (NCSs) by dynamically changing their false data injection attack (FDIA) strategy, while defenders strive to resist attacks by designing a defense strategy based on identifying the attack strategy, thereby maintaining stable operation of the NCSs. To solve this attack-defense game problem, this letter investigates optimal secure control of NCSs under FDIAs. First, to account for the alterations of energy caused by false data, a novel attack-defense game model is constructed, which considers the changes of energy caused by the actions of the defender and attacker in the forward and feedback channels.
Supported by the National Key Research and Development Program of China (2023YFB3307801), the National Natural Science Foundation of China (62394343, 62373155, 62073142), the Major Science and Technology Project of Xinjiang (No. 2022A01006-4), the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017, the Fundamental Research Funds for the Central Universities, Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011), and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Abstract: The distillation process is an important chemical process, and the application of data-driven modelling approaches has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to cluster the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables for removing redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, to construct a new integrated learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial propylene distillation process.
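The steady-state extraction step can be sketched as follows: segment the series, use each segment's variance as the clustering feature, and label the low-variance cluster as steady state. The exact KMDI formulation is not given in the abstract, so this is a plain 1-D two-center k-means stand-in with invented data.

```python
# Hedged sketch of variance-based steady-state extraction (a stand-in for
# the paper's KMDI clustering, not its actual algorithm).
import statistics

def segment_variances(series, seg_len):
    segs = [series[i:i + seg_len]
            for i in range(0, len(series) - seg_len + 1, seg_len)]
    return [statistics.pvariance(s) for s in segs]

def kmeans_1d(values, iters=20):
    """Two-center 1-D k-means; centers initialized at the extremes."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(a) / len(a) if a else lo
        hi = sum(b) / len(b) if b else hi
    return lo, hi   # (low-variance center, high-variance center)

# Toy series: a flat (steady) stretch followed by an oscillating one.
series = [10.0] * 20 + [10.0 + (-1) ** i * 5.0 for i in range(20)]
variances = segment_variances(series, seg_len=5)
steady_c, perturbed_c = kmeans_1d(variances)
steady_segs = [i for i, v in enumerate(variances)
               if abs(v - steady_c) <= abs(v - perturbed_c)]
```

Only the segments labeled steady would then be passed on to feature selection and model training.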
Supported by the National Key Research and Development Plan of China (No. 2022YFB3103304).
Abstract: False Data Injection Attacks (FDIAs) pose a critical security threat to modern power grids, corrupting state estimation and enabling malicious control actions that can lead to severe consequences, including cascading failures, large-scale blackouts, and significant economic losses. While detecting attacks is important, accurately localizing compromised nodes or measurements is even more critical, as it enables timely mitigation, targeted response, and enhanced system resilience beyond what detection alone can offer. Existing research typically models topological features using fixed structures, which can introduce irrelevant information and reduce the effectiveness of feature extraction. To address this limitation, this paper proposes an FDIA localization model with adaptive neighborhood selection, which dynamically captures spatial dependencies of the power grid by adjusting node relationships based on data-driven similarities. An improved Transformer is employed to pre-fuse global spatial features of the graph, enriching the feature representation. To improve spatio-temporal correlation extraction for FDIA localization, the proposed model employs dilated causal convolution with a gating mechanism, combined with graph convolution, to capture and fuse long-range temporal features and adaptive topological features. This fully exploits the temporal dynamics and spatial dependencies inherent in the power grid. Finally, multi-source information is integrated to generate highly robust node embeddings, enhancing FDIA detection and localization. Experiments are conducted on the IEEE 14-, 57-, and 118-bus systems, and the results demonstrate that the proposed model substantially improves the accuracy of FDIA localization. Additional experiments verify the effectiveness and robustness of the proposed model.
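One plausible reading of "adaptive neighborhood selection" is sketched below: instead of using the fixed physical topology, keep for each bus the k measurement series with the highest absolute correlation. The exact similarity measure used in the paper is not stated in the abstract, so treat this as an assumed form.

```python
# Assumed form of adaptive neighborhood selection: a data-driven adjacency
# built from absolute correlations between bus measurement series.
import numpy as np

def adaptive_adjacency(X, k=2):
    """X: (n_buses, n_timesteps). Returns a 0/1 adjacency, no self-loops."""
    corr = np.abs(np.corrcoef(X))
    np.fill_diagonal(corr, -np.inf)       # exclude self-similarity
    adj = np.zeros_like(corr)
    for i in range(corr.shape[0]):
        nbrs = np.argsort(corr[i])[-k:]   # k most similar buses
        adj[i, nbrs] = 1.0
    return adj

rng = np.random.default_rng(0)
base = rng.normal(size=50)
X = np.stack([base,
              base + 0.01 * rng.normal(size=50),   # bus 1 closely tracks bus 0
              rng.normal(size=50),
              rng.normal(size=50)])
A = adaptive_adjacency(X, k=1)
```

A graph convolution layer would then operate on `A` (recomputed as the data evolve) rather than on a static grid topology.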
Funded by the Deanship of Scientific Research and Libraries at Princess Nourah bint Abdulrahman University through the Research Group Project, Grant No. RG-1445-0064.
Abstract: Although digital changes in power systems have added more ways to monitor and control them, these changes have also introduced new cyber-attack risks, mainly from False Data Injection (FDI) attacks. If such an attack succeeds, sensors and operations are compromised, which can lead to major problems, disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explanatory methods from Artificial Intelligence (AI). Not only does the suggested architecture detect potential fraud with high accuracy, but it also makes its decisions transparent, enabling operators to take appropriate action. The method developed here utilizes model-free, interpretable tools to identify essential input elements, thereby making predictions more understandable and usable. Enhanced detection performance is made possible by correcting class imbalance using Synthetic Minority Over-sampling Technique (SMOTE)-based data balancing. Detailed experiments on benchmark power system data confirm that the model functions correctly. Experimental results showed that Bi-LSTM + Explainable AI (XAI) achieved an average accuracy of 94%, surpassing XGBoost (89%) and Bagging (84%), while ensuring explainability and a high level of robustness across various operating scenarios. An ablation study finds that bidirectional recursive modeling and ReLU activation help improve generalization and model predictability. Additionally, examining model decisions through LIME enables identification of which features are crucial for making smart-grid operational decisions in real time. The research offers a practical and flexible approach for detecting FDI attacks, improving the security of cyber-physical systems, and facilitating the deployment of AI in energy infrastructure.
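The core interpolation idea behind SMOTE can be shown in a few lines: each synthetic minority sample is drawn on the line segment between a minority point and one of its nearest minority neighbors. The paper presumably uses a standard SMOTE implementation; this hand-rolled sketch only illustrates the mechanics.

```python
# SMOTE-style oversampling sketch (illustrative, not the library version).
import numpy as np

def smote_like(X_min, n_new, k=2, seed=0):
    """X_min: minority-class samples. Returns n_new synthetic samples."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation fraction in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like(X_minority, n_new=5)
```

Balancing the attack class this way before training the Bi-LSTM is what the abstract credits for the improved detection performance.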
Supported by the Natural Science Foundation of China (62073268, 62122063, 62203360) and the Young Star of Science and Technology in Shaanxi Province (2020KJXX-078).
Abstract: In this paper, we address a cross-layer resilient control issue for a kind of multi-spacecraft system (MSS) under attack. Attackers with bad intentions use the false data injection (FDI) attack to prevent the MSS from reaching the goal of consensus. In order to ensure the effectiveness of the control, the defender embedded in the MSS preliminarily allocates the defense resources among the spacecraft. Then, the attacker selects its target spacecraft to mount the FDI attack so as to achieve maximum damage. In the physical layer, a Nash equilibrium (NE) control strategy is proposed for the MSS to quantify system performance under the effect of attacks by solving a game problem. In the cyber layer, a fuzzy Stackelberg game framework is used to examine the rivalry between the attacker and defender. The strategies of both attacker and defender are given based on the analysis of the physical and cyber layers. Finally, a simulation example is used to test the viability of the proposed cross-layer fuzzy game algorithm.
Abstract: A security issue with multi-sensor unmanned aerial vehicle (UAV) cyber-physical systems (CPS) is investigated in this paper from the viewpoint of a false data injection (FDI) attacker. The FDI attacker can mount attacks on the feedback and feed-forward channels simultaneously with limited resources. The attacker aims at degrading the UAV CPS's estimation performance as much as possible while maintaining stealthiness, characterized by the Kullback-Leibler (K-L) divergence. Because the attacker is resource-limited and can only attack some of the sensors, the attacked sensors as well as the specific forms of the attack signals at each instant must be chosen by the attacker. The sensor selection principle is investigated for time-invariant attack covariances. Additionally, the optimal switching attack strategies for time-variant attack covariances are modeled as a multi-agent Markov decision process (MDP) with a hybrid discrete-continuous action space. The multi-agent MDP is then solved using the deep multi-agent parameterized Q-networks (MAPQN) method. Finally, a quadrotor near-hover system is used to validate the effectiveness of the results in the simulation section.
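The K-L stealthiness constraint has a convenient closed form in the Gaussian case. The snippet below evaluates it for scalar Gaussians as a univariate stand-in for the paper's (presumably multivariate) setting; the detector budget value is invented.

```python
# KL divergence between two scalar Gaussians:
# KL( N(mu1, var1) || N(mu0, var0) )
#   = ln(sqrt(var0 / var1)) + (var1 + (mu1 - mu0)^2) / (2 * var0) - 1/2
import math

def kl_gauss(mu1, var1, mu0, var0):
    return (math.log(math.sqrt(var0 / var1))
            + (var1 + (mu1 - mu0) ** 2) / (2.0 * var0) - 0.5)

# An attack that only inflates the residual variance slightly can stay
# under a (hypothetical) detector budget and hence remain stealthy.
budget = 0.1
stealthy = kl_gauss(0.0, 1.2, 0.0, 1.0) <= budget
```

The attacker's problem is then to maximize estimation-error degradation subject to this divergence staying below the budget.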
Supported by the National Natural Science Foundation of China (32370703), the CAMS Innovation Fund for Medical Sciences (CIFMS) (2022-I2M-1-021, 2021-I2M-1-061), and the Major Project of Guangzhou National Laboratory (GZNL2024A01015).
Abstract: Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent need to optimize these analytical methods to more effectively handle and utilize the information. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access them, and how to utilize them most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time. Simultaneously, it evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data-integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Supported by the National Natural Science Foundation of China (32122066, 32201855) and STI2030-Major Projects (2023ZD04076).
Abstract: Phenotypic prediction is a promising strategy for accelerating plant breeding. Data from multiple sources (called multi-view data) can provide complementary information to characterize a biological object from various aspects. By integrating multi-view information into phenotypic prediction, a multi-view best linear unbiased prediction (MVBLUP) method is proposed in this paper. To measure the importance of multiple data views, a differential evolution algorithm with an early-stopping mechanism is used, by which we obtain a multi-view kinship matrix that is then incorporated into the BLUP model for phenotypic prediction. To further illustrate the characteristics of MVBLUP, we perform empirical experiments on four multi-view datasets from different crops. Compared to the single-view method, the prediction accuracy of the MVBLUP method improved by 0.038-0.201 on average. The results demonstrate that MVBLUP is an effective integrative prediction method for multi-view data.
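The multi-view kinship idea can be sketched as a weighted blend of one kinship matrix per view, fed into a GBLUP-style ridge predictor. In the paper the weights are tuned by differential evolution; here they are fixed by hand, and the data, view names, and sizes are invented.

```python
# Sketch of a multi-view kinship blend used in a GBLUP-style predictor
# (weights hand-picked here; the paper searches them with differential
# evolution).
import numpy as np

def kinship(X):
    """Simple realized-relationship matrix from a (lines x features) view."""
    Xc = X - X.mean(axis=0)
    return Xc @ Xc.T / X.shape[1]

def mv_kinship(views, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize view weights
    return sum(wi * kinship(V) for wi, V in zip(w, views))

def gblup_fit(K, y, lam=1.0):
    """Fitted genetic values: K (K + lam*I)^{-1} y (simplified GBLUP)."""
    return K @ np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(1)
geno = rng.normal(size=(30, 100))    # view 1: e.g. marker genotypes
metab = rng.normal(size=(30, 40))    # view 2: e.g. metabolite levels
y = geno[:, 0] + 0.1 * rng.normal(size=30)
K = mv_kinship([geno, metab], weights=[0.8, 0.2])
y_hat = gblup_fit(K, y)
```

Searching the `weights` vector against cross-validated accuracy is what turns this blend into the MVBLUP procedure the abstract describes.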
Supported by the National Natural Science Foundation of China (Grant No. 52409151), the Programme of the Shenzhen Key Laboratory of Green, Efficient and Intelligent Construction of Underground Metro Stations (Programme No. ZDSYS20200923105200001), and the Science and Technology Major Project of the Xizang Autonomous Region of China (XZ202201ZD0003G).
Abstract: Substantial advancements have been achieved in Tunnel Boring Machine (TBM) technology and monitoring systems, yet the presence of missing data impedes accurate analysis and interpretation of TBM monitoring results. This study investigates the issue of missing data in extensive TBM datasets. Through a comprehensive literature review, we analyze the mechanisms behind missing TBM data and compare different imputation methods, including statistical analyses and machine learning algorithms. We also examine the impact of various missing patterns and rates on the efficacy of these methods. Finally, we propose a dynamic interpolation strategy tailored for TBM engineering sites. The results show that the K-Nearest Neighbors (KNN) and Random Forest (RF) algorithms achieve good interpolation results; that the interpolation performance of all methods decreases as the missing rate increases; and that block missing is the hardest pattern to impute, followed by mixed missing, with sporadic missing being the easiest. On-site application results validate the proposed interpolation strategy's capability to achieve robust missing-value interpolation, applicable in machine learning scenarios such as parameter optimization, attitude warning, and pressure prediction. These findings contribute to enhancing the efficiency of TBM missing-data processing, offering more effective support for large-scale TBM monitoring datasets.
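The KNN imputation mechanic the study benchmarks can be shown in a minimal hand-rolled form: a missing entry is filled with the mean of that column over the k rows nearest in the jointly observed columns. Production work would use a library implementation; the toy data below are invented.

```python
# Minimal single-pass KNN imputation sketch (illustrative only).
import numpy as np

def knn_impute(X, k=2):
    X = X.astype(float).copy()
    miss = np.isnan(X)
    for i, j in zip(*np.where(miss)):
        obs_cols = ~miss[i]               # columns observed in row i
        # donor rows: value present at column j and in row i's observed cols
        cand = [r for r in range(len(X))
                if not miss[r, j] and not miss[r][obs_cols].any()]
        d = [np.linalg.norm(X[r, obs_cols] - X[i, obs_cols]) for r in cand]
        donors = [cand[t] for t in np.argsort(d)[:k]]
        X[i, j] = np.mean([X[r, j] for r in donors])
    return X

# Toy monitoring table (e.g. advance rate vs. torque) with one sporadic gap.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, np.nan],
              [4.0, 40.0]])
filled = knn_impute(X, k=2)
```

Under block missing, whole stretches of donor rows disappear at once, which is exactly why the study finds that pattern hardest to impute.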
Supported by the Weaponry Equipment Pre-Research Foundation of the PLA Equipment Ministry (No. 9140A06050409JB8102) and the Pre-Research Foundation of the PLA University of Science and Technology (No. 2009JSJ11).
Abstract: To solve the query-processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (Simple Protocol and RDF Query Language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units. Minimal connectable units are joined according to semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed. In the worst case, the query-decomposing algorithm finishes in O(n²) time and the query-rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments, and the results show that when the length of a query is less than 8, the query-processing algorithms provide satisfactory performance.
Financially supported by the National Natural Science Foundation of China (42261026 and 42161025) and the Open Foundation of the Xinjiang Key Laboratory of Water Cycle and Utilization in Arid Zone (XJYS0907-2023-01).
Abstract: Precipitation types primarily include rainfall, snowfall, and sleet, and transformations among precipitation types have significant impacts on regional climate, ecosystems, and the land-atmosphere system. This study employs the Ding method to separate precipitation types in three datasets (CMFD, ERA5_Land, and CN05.1). Using data from 26 meteorological observation stations in the Chinese Tianshan Mountains Region (CTMR) of China as the validation dataset, the precipitation-type separation accuracy of the three datasets was evaluated. Additionally, the impacts of relative humidity, precipitation amount, and air temperature on the accuracy of precipitation-type separation were analyzed. The results indicate that the CMFD dataset provides the highest separation accuracy, followed by CN05.1, with ERA5_Land showing the poorest performance. Spatial correlation analysis reveals that CMFD outperforms the other two datasets at both annual and monthly scales. Root Mean Square Error (RMSE) and Mean Deviation (MD) values suggest that CMFD is more consistent with the station observational data. The analysis further demonstrates that relative humidity and precipitation amount significantly affect separation accuracy. After bias correction, the correlation coefficients between CMFD, ERA5_Land, and the station observational data improved to 0.85-0.94, while the RMSE was kept within 2 mm. The study also revealed that overestimation of precipitation was positively correlated with overestimation of rainfall days and negatively correlated with overestimation of snowfall days, and that underestimated air temperatures led to an increase in the misclassification of snowfall days. This research provides a basis for selecting climate-change datasets and managing water resources in alpine regions.
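Precipitation-type separation of this kind is threshold-based. The actual Ding-scheme thresholds (which involve wet-bulb temperature, humidity, and elevation) are not reproduced here; the stand-in below only shows the two-cutoff structure, with invented cutoff values.

```python
# Threshold-structure sketch only; t_snow / t_rain are made-up cutoffs,
# not the Ding method's calibrated, humidity- and elevation-dependent ones.
def precip_type(temp_c, t_snow=0.0, t_rain=3.0):
    """Classify one precipitation event from near-surface air temperature."""
    if temp_c <= t_snow:
        return "snow"
    if temp_c >= t_rain:
        return "rain"
    return "sleet"   # mixed phase between the two cutoffs

events = [-2.0, 1.5, 6.0]            # event temperatures in deg C
results = [precip_type(t) for t in events]
```

This structure also makes the study's temperature-bias finding intuitive: an underestimated temperature pushes events below `t_snow` and inflates the count of misclassified snowfall days.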
Supported by the National Natural Science Foundation of China (62302234).
Abstract: As the number of distributed power supplies on the user side increases, smart grids are becoming larger and more complex. These changes bring new security challenges, especially with the widespread adoption of data-driven control methods. This paper introduces a novel black-box false data injection attack (FDIA) method that exploits the measurement modules of distributed power supplies within smart grids, highlighting its effectiveness in bypassing conventional security measures. Unlike traditional methods that focus on data manipulation within communication networks, this approach directly injects false data at the point of measurement, using a generative adversarial network (GAN) to generate stealthy attack vectors. The method requires no detailed knowledge of the target system, making it practical for real-world attacks. The attack's impact on power-system stability is demonstrated through experiments, highlighting the significant cybersecurity risks introduced by data-driven algorithms in smart grids.
Funded by the National Science Centre, Poland (Grant No. N N309 71124) (Tomasz Zielonka), the statutory funds of the W. Szafer Institute of Botany, Polish Academy of Sciences (Katarzyna Izworska), the Institute of Biology and Earth Sciences, University of the National Education Commission (Tomasz Zielonka), and the Department of Forest Biodiversity, University of Agriculture in Krakow (Elżbieta Muter).
Abstract: Understanding climate-growth relationships is essential for adaptive forest management. By using a more detailed approach (daily climatic data), we sought to uncover finer-scale climatic effects on European larch (Larix decidua) growth in the Tatra Mountains (the Western Carpathians), providing a more nuanced understanding of the climate-growth response in this mountain ecosystem. We analyzed the tree-ring width index (TRWI) chronology against daily mean temperature, insolation duration, and precipitation records from 1950 to 2019, and in two subperiods (1950-1984 and 1985-2019). Larch growth is strongly affected by temperature, insolation duration, and precipitation, but with positive or negative influences of varying intensity across the subperiods. The climate-growth analysis indicates that larch benefited from warm, sunny, and dry late winters and springs, as well as warm summers, during the entire analyzed period. However, in recent decades, the previously strong and significant influence of March-July temperature has mostly disappeared, becoming limited to only a few days (June). Notably, the formerly strong negative influence of summer and early-autumn temperatures and insolation duration in the previous year disappeared. In the earlier subperiod, larch growth showed strong positive responses to late-summer/early-autumn precipitation of the previous year and negative effects from spring to late-summer rainfall. In recent decades, these patterns have weakened but still limit growth. Our results revealed significant changes in the larch growth response, highlighting its adaptability to fluctuating environmental conditions. In recent decades, the influence of temperature, insolation duration, and precipitation on radial growth has weakened, which suggests that climate change has had a positive impact on tree growth in the Tatra Mountains. These findings suggest that rising temperatures in European mountain regions may alter the climatic sensitivity of tree species. Understanding these changes is crucial to improving resilience-based management strategies in the face of climate change.
Supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2022JM-396), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA23040101), the Shaanxi Province Key Research and Development Projects (Program No. 2023-YBSF-437), the Xi'an Shiyou University Graduate Student Innovation Fund Program (Program No. YCX2412041), the State Key Laboratory of Air Traffic Management System and Technology (SKLATM202001), the Tianjin Education Commission Research Program Project (2020KJ028), and the Fundamental Research Funds for the Central Universities (3122019132).
Abstract: Developing an accurate and efficient comprehensive water-quality prediction model and its assessment method is crucial for the prevention and control of water pollution. Deep learning (DL), as one of the most promising technologies today, plays a crucial role in the effective assessment of water-body health, which is essential for water-resource management. This study builds models using both the original dataset and a dataset augmented with Generative Adversarial Networks (GAN). It integrates optimization algorithms (OA) with Convolutional Neural Networks (CNN) to propose a comprehensive water-quality model evaluation method aimed at identifying the optimal models for different pollutants. Specifically, after preprocessing the spectral dataset, data augmentation was conducted to obtain two datasets. Then, six new models were developed on these datasets using particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA) combined with CNN to simulate and forecast the concentrations of three water pollutants: Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Finally, seven model evaluation methods, including uncertainty analysis, were used to evaluate the constructed models and select the optimal models for the three pollutants. The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations, while the GGACNN model excelled in TN concentration prediction. Compared to existing technologies, the proposed models and evaluation methods provide a more comprehensive and rapid approach to water-body prediction and assessment, offering new insights and methods for water-pollution prevention and control.
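The PSO half of the PSO-CNN coupling is a generic optimization loop. In the paper the fitness function would be a CNN's validation error over its hyperparameters; the sketch below substitutes a simple 1-D quadratic so it stays self-contained, and the coefficient values are conventional defaults rather than the paper's settings.

```python
# Bare-bones particle swarm optimization (stand-in fitness function).
import random

def pso(fitness, lo, hi, n_particles=10, iters=50, seed=42):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                          # each particle's best position
    gbest = min(pos, key=fitness)           # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.5 * vel[i]                      # inertia
                      + 1.5 * r1 * (pbest[i] - pos[i])  # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))    # social pull
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
            if fitness(pos[i]) < fitness(gbest):
                gbest = pos[i]
    return gbest

# Here the "hyperparameter" minimizing the loss is x = 3.
best = pso(lambda x: (x - 3.0) ** 2, lo=-10.0, hi=10.0)
```

Swapping the quadratic for "train CNN with hyperparameter x, return validation error" gives the GPSCNN-style search the abstract describes; GA and SA play the same role with different search dynamics.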
Supported by the National Natural Science Foundation of China under Grants 62471493 and 62402257, the Natural Science Foundation of Shandong Province under Grants ZR2023LZH017, ZR2024MF066, and 2023QF025, the Open Research Subject of the State Key Laboratory of Intelligent Game (No. ZBKF-24-12), the Foundation of the Key Laboratory of Education Informatization for Nationalities (Yunnan Normal University), Ministry of Education (No. EIN2024C006), and the Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE (No. 202306).
Abstract: As Internet of Things (IoT) technologies continue to evolve at an unprecedented pace, intelligent big-data control and information systems have become critical enablers of organizational digital transformation, facilitating data-driven decision making, fostering innovation ecosystems, and maintaining operational stability. In this study, we propose an advanced deployment algorithm for Service Function Chaining (SFC) that leverages an enhanced Practical Byzantine Fault Tolerance (PBFT) mechanism. The main goal is to tackle the issues of security and resource efficiency in SFC implementation across diverse network settings. By integrating blockchain technology and Deep Reinforcement Learning (DRL), our algorithm not only optimizes resource utilization and quality of service but also ensures robust security during SFC deployment. Specifically, the enhanced PBFT consensus mechanism (VRPBFT) significantly reduces consensus latency and improves Byzantine-node detection through the introduction of a Verifiable Random Function (VRF) and a node reputation-grading model. Experimental results demonstrate that, compared to traditional PBFT, the proposed VRPBFT algorithm reduces consensus latency by approximately 30% and decreases the proportion of Byzantine nodes by 40% after 100 rounds of consensus. Furthermore, the DRL-based SFC deployment algorithm (SDRL) exhibits rapid convergence during training, with improvements in long-term average revenue, request acceptance rate, and revenue/cost ratio of 17%, 14.49%, and 20.35%, respectively, over existing algorithms. Additionally, the CPU resource utilization of the SDRL algorithm reaches up to 42%, which is 27.96% higher than that of other algorithms. These findings indicate that the proposed algorithm substantially enhances resource-utilization efficiency, service quality, and security in SFC deployment.
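The reputation-grading half of VRPBFT can be sketched independently of the consensus protocol: nodes gain reputation for honest rounds, lose it sharply when flagged as Byzantine, and leadership is drawn from the top of the ranking. The scoring rule and constants below are invented for illustration; the paper's actual grading model and VRF-based selection are not reproduced here.

```python
# Reputation-grading sketch (hypothetical scoring rule, not VRPBFT's).
def update_reputation(rep, node, honest, gain=1.0, penalty=5.0, floor=0.0):
    """Reward an honest round; penalize a flagged Byzantine round."""
    rep[node] = max(floor, rep[node] + (gain if honest else -penalty))
    return rep

rep = {"n0": 10.0, "n1": 10.0, "n2": 10.0}
rep = update_reputation(rep, "n2", honest=False)  # n2 flagged as Byzantine
rep = update_reputation(rep, "n0", honest=True)   # n0 behaved honestly
leader = max(rep, key=rep.get)                    # next-round primary
```

In the full design, a VRF adds unpredictable but verifiable randomness to this selection so a high-reputation attacker cannot predict or monopolize the primary role.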
Funded by the Third Xinjiang Scientific Expedition Program (2021xjkk1400), the National Natural Science Foundation of China (42071049), the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2019D01C022), the Xinjiang Uygur Autonomous Region Innovation Environment Construction Special Project & Science and Technology Innovation Base Construction Project (PT2107), and the Tianshan Talent Science and Technology Innovation Team (2022TSYCTD0006).
Abstract: Snow cover plays a critical role in global climate regulation and hydrological processes. Accurate monitoring is essential for understanding snow distribution patterns, managing water resources, and assessing the impacts of climate change. Remote sensing has become a vital tool for snow monitoring, with the Moderate-resolution Imaging Spectroradiometer (MODIS) snow products from the Terra and Aqua satellites being widely used. However, cloud cover often interferes with snow detection, making cloud-removal techniques crucial for reliable snow-product generation. This study evaluated the accuracy of four MODIS snow-cover datasets generated through different cloud-removal algorithms. Using real-time field camera observations from four stations in the Tianshan Mountains, China, this study assessed the performance of these datasets during three distinct snow periods: the snow accumulation period (September-November), the snowmelt period (March-June), and the stable snow period (December-February of the following year). The findings showed that cloud-free snow products generated using the Hidden Markov Random Field (HMRF) algorithm consistently outperformed the others, particularly under cloud cover, while cloud-free snow products using near-day synthesis and the spatiotemporal adaptive fusion method with error correction (STAR) demonstrated varying performance depending on terrain complexity and cloud conditions. This study highlighted the importance of considering terrain features, land-cover types, and snow dynamics when selecting cloud-removal methods, particularly in areas with rapid snow accumulation and melting. The results suggested that future research should focus on improving cloud-removal algorithms through the integration of machine learning, multi-source data fusion, and advanced remote-sensing technologies. By expanding validation efforts and refining cloud-removal strategies, more accurate and reliable snow products can be developed, contributing to enhanced snow monitoring and better management of water resources in alpine and arid areas.
Abstract: In weather forecasting, generating atmospheric variables for regions with complex topography, such as the Andean regions with peaks reaching 6500 m above sea level, poses significant challenges. Traditional regional climate models often struggle to accurately represent atmospheric behavior in such areas. Furthermore, the capability to produce high spatio-temporal resolution data (less than 27 km and hourly) is limited to a few institutions globally due to the substantial computational resources required. This study presents the results of atmospheric data generated using a new type of artificial intelligence (AI) model, aimed at reducing the computational cost of generating downscaled climate data with regional climate models like the Weather Research and Forecasting (WRF) model over the Andes. The WRF model was selected for this comparison due to its frequent use in simulating atmospheric variables in the Andes. Our results demonstrate higher downscaling performance for the four target weather variables studied (temperature, relative humidity, zonal and meridional wind) over coastal, mountain, and jungle regions. Moreover, this AI model offers several advantages, including lower computational costs compared to dynamic models like WRF and continuous improvement potential with additional training data.
Financially supported by the National Key R&D Program of China (No. 2023YFF0803404), the Zhejiang Provincial Natural Science Foundation (No. LY23D040001), the Open Research Fund of the Key Laboratory of Engineering Geophysical Prospecting and Detection of the Chinese Geophysical Society (No. CJ2021GB01), the Open Research Fund of the Changjiang River Scientific Research Institute (No. CKWV20221011/KY), the ZhouShan Science and Technology Project (No. 2023C81010), and the National Natural Science Foundation of China (No. 41904100); also supported by a Chinese Natural Science Foundation Open Research Cruise (Cruise No. NORC2019-08).
Abstract: INTRODUCTION. The crustal velocity model is crucial for describing subsurface composition and structure, and has significant implications for offshore oil and gas exploration and marine geophysical engineering (Xie et al., 2024). Currently, travel-time tomography is the most commonly used method for velocity modeling based on ocean bottom seismometer (OBS) data (Zhang et al., 2023; Sambolian et al., 2021). This method usually assumes that the sub-seafloor structure is layered, and therefore faces challenges in high-precision modeling where strong lateral discontinuities are present.