Funding: The National Natural Science Foundation of China under contract No. 41676120.
Abstract: The majority of fishery stocks in the world are data limited, which limits formal stock assessments. Identifying the impacts of input data on stock assessment is critical for improving stock assessments and developing precautionary management strategies. We compared catch advice obtained from applications of various data-limited methods (DLMs) with forecasted catch advice from existing data-rich stock assessment models for the Indian Ocean bigeye tuna (Thunnus obesus). Our goal was to evaluate the consistency of catch advice derived from data-rich methods and data-limited approaches when only a subset of data is available. The Stock Synthesis (SS) results were treated as benchmarks for comparison because they reflect the most comprehensive and best available scientific information about the stock. This study indicated that although the DLMs examined appeared robust for the Indian Ocean bigeye tuna, the implied catch advice differed between the data-limited approaches and the current assessment, owing to different data inputs and model assumptions. Most DLMs tended to provide more optimistic catch advice than the SS; this advice was mostly influenced by historical catches, current abundance and depletion estimates, and natural mortality, but was less sensitive to life-history parameters (particularly those related to growth). This study highlights the utility of DLMs and their implications for catch advice in the management of tuna stocks.
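At its core, the comparison described above amounts to expressing each DLM's catch recommendation relative to the SS forecast benchmark. The sketch below is illustrative only and is not the authors' code; the method names and TAC values are hypothetical placeholders.

```python
# Minimal sketch: compare hypothetical DLM catch advice against a
# Stock Synthesis (SS) forecast benchmark. All numbers are illustrative.

ss_forecast_tac = 80.0  # hypothetical SS-forecast catch advice (kt)

dlm_tac = {             # hypothetical TACs from data-limited methods (kt)
    "method_A": 95.0,
    "method_B": 88.0,
    "method_C": 72.0,
}

for method, tac in dlm_tac.items():
    rel_diff = (tac - ss_forecast_tac) / ss_forecast_tac * 100.0
    direction = "more optimistic" if rel_diff > 0 else "more conservative"
    print(f"{method}: {tac:.1f} kt ({rel_diff:+.1f}% vs SS, {direction})")
```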
Funding: Supported by the National Basic Research Program of China (the 973 Program, Grant No. 2010CB951102), the National Supporting Plan Program of China (Grants No. 2007BAB28B01 and 2008BAB42B03), the National Natural Science Foundation of China (Grant No. 50709042), and the Regional Water Theme in the Water for a Healthy Country Flagship.
Abstract: To assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmentary), we used non-continuous calibration periods to obtain more independent streamflow data for calibrating the SIMHYD (simple hydrology) model. The Nash-Sutcliffe efficiency and the percentage water balance error were used as performance measures, and the particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff model. Randomly sampled data series ranging in length from one to ten years were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located across Australia, with daily precipitation, potential evapotranspiration, and streamflow data, were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain stable estimates of model performance and parameters for the SIMHYD model. Most humid catchments require less calibration data to achieve good performance and stable parameter values, and the model performs better in humid and semi-humid catchments than in arid catchments. These results have useful implications for the efficient use of limited observation data in hydrological model calibration across different climates.
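The two performance measures named above have standard definitions: the Nash-Sutcliffe efficiency compares squared simulation errors with the variance of the observations, and the percentage water balance error is the relative bias in total simulated runoff. The short sketch below computes both for illustrative observed and simulated streamflow series; the variable names and example values are placeholders, not data from the study.

```python
import numpy as np

def nash_sutcliffe(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    simulation is no better than the mean of the observations."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def water_balance_error(q_obs, q_sim):
    """Percentage water balance error: relative bias in total simulated runoff."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 100.0 * (np.sum(q_sim) - np.sum(q_obs)) / np.sum(q_obs)

# Illustrative daily streamflow series (mm/day)
q_obs = [1.2, 0.9, 3.4, 2.1, 0.7]
q_sim = [1.0, 1.1, 3.0, 2.4, 0.6]
print(nash_sutcliffe(q_obs, q_sim), water_balance_error(q_obs, q_sim))
```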
Funding: Project (2301DH09002) supported by the Bureau of Planning and Natural Resources, Chongqing, China; Project (2022T3051) supported by the Science and Technology Service Network Initiative, China; Project (2018-ZL-01) supported by the Sichuan Transportation Science and Technology, China.
Abstract: Machine-learning methodologies have increasingly been embraced in landslide susceptibility assessment. However, the considerable time and financial burdens of compiling landslide inventories often result in persistent data scarcity, which impedes the generation of accurate and informative landslide susceptibility maps. To address this challenge, this study compiled a nationwide dataset and developed a transfer learning-based model to evaluate landslide susceptibility specifically in the Chongqing region. The proposed model, calibrated with a warmup-cosine annealing (WCA) learning rate strategy, demonstrated strong predictive capability, particularly under data-limited conditions and when the training data were normalized using parameters from the source region. Compared with a deep learning model, the area under the receiver operating characteristic curve (AUC) improved by 51.00%, 24.40%, and 2.15% when only 1%, 5%, and 10% of the data from the target region, respectively, were used for retraining, while the loss decreased by 16.12%, 27.61%, and 15.44% in the same cases.
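The warmup-cosine annealing (WCA) schedule referred to above is a standard learning-rate strategy: the rate ramps up over a short warmup phase and then decays along a cosine curve. A minimal sketch follows, assuming a linear warmup and illustrative hyperparameter values; the exact schedule shape and settings used in the study are not specified here.

```python
import math

def wca_lr(step, total_steps, warmup_steps, lr_max, lr_min=0.0):
    """Warmup-cosine annealing: linear ramp to lr_max over warmup_steps,
    then cosine decay from lr_max down to lr_min."""
    if step < warmup_steps:
        return lr_max * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))

# Illustrative settings: 1000 training steps, 100-step warmup, peak LR 1e-3
for step in (0, 50, 100, 500, 999):
    print(step, round(wca_lr(step, total_steps=1000, warmup_steps=100, lr_max=1e-3), 6))
```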