Abstract: In order to understand fundamental questions of the biology of life and to reproduce the pathogenesis of human diseases, animal models using different experimental animals, such as rodents, Drosophila, Caenorhabditis elegans, and zebrafish, have been established and used widely for many decades. The controllability of environmental conditions, the high reproducibility, the ease of scaling and the comparability of results, as well as the ability to apply different standards for ethical protocols, all make an animal model an ideal tool for carrying out studies on human diseases and for developing novel pharmaceuticals and new therapies (Xue et al., 2014). An ideal animal model should reflect the complete spectrum of a specific human disease, with similar features in the following key areas: (1) genetic basis; (2) anatomy and physiology; (3) pathological response(s) and underlying mechanism(s); (4) phenotypic endpoints comparable to those of clinical studies; (5) responsiveness to known drugs with clinical efficacy; and (6) prediction of clinical efficacy (McGonigle and Ruggeri, 2014).
Funding: Supported by the National Key Research and Development Program of China (2023YFB3307801); the National Natural Science Foundation of China (62394343, 62373155, 62073142); the Major Science and Technology Project of Xinjiang (No. 2022A01006-4); the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017; the Fundamental Research Funds for the Central Universities, Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011); and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Abstract: The distillation process is an important chemical process, and data-driven modelling has the potential to reduce model complexity compared with mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertain perturbation intervals, which makes accurate data-driven modelling of distillation processes challenging. This paper proposes a systematic data-driven modelling framework to solve these problems. First, data-segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which separates the data into perturbed and steady-state intervals for steady-state data extraction. Second, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) introduced to improve the weight-update strategy, yielding a new ensemble learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
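The steady-state extraction step can be sketched with a small numerical example: describe each fixed-length data segment by its mean and variance, then cluster segments into two groups so that the low-variance cluster marks steady operation. This is a minimal illustration of the variance-clustering idea, not the paper's KMDI algorithm; the segment length, the seeding strategy, and the synthetic data are all assumptions.

```python
import numpy as np

def segment_features(x, seg_len):
    # Split a 1-D series into fixed-length segments and describe each
    # segment by (mean, variance); high variance marks perturbed intervals.
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    return np.column_stack([segs.mean(axis=1), segs.var(axis=1)])

def two_means(feats, n_iter=50):
    # Plain 2-cluster K-means, seeded with the lowest- and highest-variance
    # segments so cluster 0 tracks the steady state.
    centers = feats[[feats[:, 1].argmin(), feats[:, 1].argmax()]].copy()
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(5.0, 0.05, 400),   # steady operation
                    rng.normal(5.0, 1.0, 200),    # disturbance interval
                    rng.normal(5.0, 0.05, 400)])  # steady again
labels = two_means(segment_features(x, 20))
print("steady segments:", int((labels == 0).sum()), "of", len(labels))
```

The steady-state rows (cluster 0) would then be kept for model training, while perturbed segments are set aside.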
Abstract: For convenient and safe use of historical data, the B/S (browser/server) mode is adopted in place of the C/S (client/server) database management structure; it not only combines the database with the network but also enables safe online use of historical data. A high-level programming language is used to develop an online management and application system for historical meteorological data based on the B/S mode. The system's data import function can load the ground report file sequence of Xuzhou (including five county-level stations) into the database, construct an Oracle database for Xuzhou and the five stations covering records since 1953, and establish data tables of hourly, daily, ten-day, monthly, quarterly and annual historical data, weather information and so forth. Database server management software realizes instruction-level management and scheduling of the database and a balanced distribution of resources among users. A Web-based management application interface is also provided so that users can retrieve the various repositories; it supports statistical queries of hourly, daily, ten-day, monthly, quarterly and annual historical data and climate data for each meteorological element, thereby meeting the needs of meteorological research and all sectors of society for statistical queries of meteorological data.
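The ten-day ("dekad") statistics mentioned above follow a common binning rule in meteorology: days 1–10, 11–20, and 21 to month end. A minimal sketch of that aggregation, with hypothetical function names and made-up observations:

```python
from datetime import date
from collections import defaultdict
from statistics import mean

def dekad_key(d):
    # Map a date to its ten-day bin: days 1-10 -> 0, 11-20 -> 1,
    # 21 to month end -> 2 (the grouping behind ten-day statistics tables).
    return (d.year, d.month, min((d.day - 1) // 10, 2))

def dekad_means(daily):
    # daily: iterable of (date, value) observations for one element.
    bins = defaultdict(list)
    for d, v in daily:
        bins[dekad_key(d)].append(v)
    return {k: mean(vs) for k, vs in bins.items()}

# Made-up daily temperatures for July 1953.
obs = [(date(1953, 7, day), 20.0 + day * 0.1) for day in range(1, 32)]
stats = dekad_means(obs)
print(round(stats[(1953, 7, 0)], 2))  # mean of the first ten-day bin
```

The same grouping generalizes to the other table granularities (daily, monthly, quarterly, annual) by changing the key function.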
Abstract: In the era of big data, the ways people work, live and think have changed dramatically, and the social governance system is also being restructured. Achieving intelligent social governance has now become a national strategy, and the application of big data technology to counterterrorism efforts has become a powerful weapon for all countries. However, owing to the uncertainty, the difficulty of interpretation and the potential risk of discrimination in big data technology and algorithmic models, basic human rights, freedom and even ethics are likely to be impacted and challenged. As a result, there is an urgent need to prioritize basic human rights and to regulate the application of big data for counterterrorism purposes. Legislation and law enforcement regarding the use of big data to counter terrorism must be subject to constitutional and other legal review, so as to strike a balance between safeguarding national security and protecting basic human rights.
Funding: Supported by the Fundamental Research Funds for the Central Universities under Grant No. ZYGX2015J009, and the Sichuan Province Scientific and Technological Support Project under Grants No. 2014GZ0017 and No. 2016GZ0093.
Abstract: In data centers, transmission control protocol (TCP) incast causes catastrophic goodput degradation for applications with a many-to-one traffic pattern. In this paper, we aim to tame incast at the receiver-side application. Towards this goal, we first develop an analytical model that formulates the incast probability as a function of connection variables and network environment settings. We combine the model with optimization theory and derive insights into minimizing the incast probability by tuning connection variables related to applications. Then, guided by the analytical results, we propose an adaptive application-layer solution to TCP incast. The solution equally allocates advertised windows to concurrent connections, and dynamically adapts the number of concurrent connections to varying conditions. Simulation results show that our solution consistently eludes incast and achieves high goodput in various scenarios, including ones with multiple bottleneck links and background TCP traffic.
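The two receiver-side controls described above, equal window allocation and capping the number of concurrent connections, can be sketched in a few lines. The buffer size, MSS, and minimum per-connection share below are illustrative assumptions, not values from the paper:

```python
def allocate_advertised_windows(total_window, n_connections, mss):
    # Split the receive buffer equally across concurrent connections,
    # rounding each share down to a whole number of MSS-sized segments
    # so the sum never exceeds the buffer.
    share = (total_window // n_connections) // mss * mss
    return [share] * n_connections

def adapt_connection_count(total_window, mss, n_wanted, min_share_segments=2):
    # Cap concurrency so every connection still gets a usable window;
    # the remaining requests would be served in later rounds.
    max_conns = total_window // (min_share_segments * mss)
    return max(1, min(n_wanted, max_conns))

total, mss = 65536, 1460          # hypothetical receive buffer and MSS
n = adapt_connection_count(total, mss, n_wanted=40)
windows = allocate_advertised_windows(total, n, mss)
print(n, windows[0], sum(windows) <= total)
```

The real solution adapts `n` dynamically to measured conditions; here the cap is static for clarity.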
Funding: The Department of Science and Technology, Government of India, provided financial support under the FIST scheme (No. SR/FST/ETI-388/2015).
Abstract: Fixture design and planning is one of the most important manufacturing activities, playing a pivotal role in deciding the lead time for product development. Fixture design, which affects part quality in terms of geometric accuracy and surface finish, can be enhanced by using the product manufacturing information (PMI) stored in the neutral Standard for the Exchange of Product model data (STEP) file, thereby integrating design and manufacturing. The present paper proposes a unique fixture design approach that extracts geometry information from STEP application protocol (AP) 242 files of computer aided design (CAD) models to provide automatic suggestions of locator positions and clamping surfaces. Automatic feature extraction software, "FiXplan", developed in the programming language C#, is used to extract part feature, dimension and geometry information. The information from the STEP AP 242 file is deduced using geometric reasoning techniques, which in turn are utilized for fixture planning. The developed software is found to be adept at identifying the primary, secondary, and tertiary locating faces and locator position configurations of prismatic components. Structural analysis of the prismatic part under different locator positions was performed using the commercial finite element method software ABAQUS, and the optimized locator position was identified on the basis of minimum deformation of the workpiece. The area ratio (base locator enclosed area (%) / workpiece base area (%)) for the ideal locator configuration was observed to be 33%. Experiments were conducted on a prismatic workpiece using a specially designed fixture, for different locator configurations. The surface roughness and waviness of the machined surfaces were analysed using an Alicona non-contact optical profilometer. The best surface characteristics were obtained for the surface machined under the ideal locator positions with an area ratio of 33%, validating the predicted numerical results. The efficiency, capability and applicability of the developed software are demonstrated for the finishing operation of a sensor cover, a typical prismatic component with applications in the naval industry, under different locator configurations. The best results were obtained under the proposed ideal locator configuration with an area ratio of 33%.
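The area-ratio metric above is simply the area enclosed by the base locators divided by the workpiece base area. A minimal sketch with a hypothetical workpiece and locator layout (the geometry below is made up and yields 25%, not the study's 33% optimum):

```python
def polygon_area(pts):
    # Shoelace formula for the area enclosed by ordered 2-D points.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def locator_area_ratio(locators, base_w, base_h):
    # Ratio of the area enclosed by the three base locators (3-2-1 scheme)
    # to the rectangular workpiece base area.
    return polygon_area(locators) / (base_w * base_h)

# Hypothetical 120 mm x 80 mm base with three locators forming a triangle.
locs = [(20.0, 10.0), (100.0, 10.0), (60.0, 70.0)]
ratio = locator_area_ratio(locs, 120.0, 80.0)
print(round(ratio * 100, 1))  # enclosed area as % of base area
```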
Abstract: In this paper, we study structured data mining algorithms and their applications in machine learning. With the advancement of informatization and digitization, many fields now store large volumes of multi-source, heterogeneous data in distributed systems; to share such data, a series of mechanisms, methods and implementation technologies must be solved, from storage management through to interoperability. Unstructured data has no strict schema and is therefore harder to standardize and manage than structured information. Accordingly, large unstructured objects are either stored in separate files or kept in the database through pointer-like indexes. Against this background, we propose a new and meaningful approach to the structured data mining algorithm.
Funding: Funded by the National Natural Science Foundation of China (Grant No. 62272236) and the Natural Science Foundation of Jiangsu Province (Grant No. BK20201136).
Abstract: The rapid advancement of artificial intelligence technology is driving transformative changes in medical diagnosis, treatment, and management systems through large-scale deep learning models, a process that brings both groundbreaking opportunities and multifaceted challenges. This study focuses on the medical and healthcare applications of large-scale deep learning architectures, conducting a comprehensive survey to categorize and analyze their diverse uses. The survey results reveal that current applications of large models in healthcare encompass medical data management, healthcare services, medical devices, and preventive medicine, among others. Concurrently, large models demonstrate significant advantages in the medical domain, especially in high-precision diagnosis and prediction, data analysis and knowledge discovery, and enhancing operational efficiency. Nevertheless, we identify several challenges that need urgent attention, including improving the interpretability of large models, strengthening privacy protection, and addressing issues related to handling incomplete data. This research is dedicated to systematically elucidating the deep collaborative mechanisms between artificial intelligence and the healthcare field, providing theoretical references and practical guidance for both academia and industry.
Abstract: Objective: To use data mining methods to summarize the characteristics of animal models of pulmonary fibrosis and to establish an evaluation system of pharmacodynamic indicators. Methods: Literature related to pharmacodynamic studies in pulmonary fibrosis animals was retrieved from CNKI, Wanfang, VIP, PubMed, Web of Science and Embase. The modeling methods and intervention drugs of pulmonary fibrosis animal models were collated and analyzed, and the types and methods of the detection indicators were counted, to construct a pharmacodynamic indicator system for pulmonary fibrosis animal studies. Results: A total of 1174 articles were included. Common inducing factors in animal modeling included antitumor drugs, environmental/occupational exposure particles and physical factors; C57BL/6 mice and SD rats were the main subjects, and bleomycin delivered by non-invasive intratracheal instillation was the most common way of inducing pulmonary fibrosis. Common intervention drugs included chemical drugs, inhibitors/agonists, natural products and traditional Chinese medicine compounds. The detection indicators fell into seven categories: general condition, lung function, lung histopathology, extracellular matrix, epithelial-mesenchymal transition, cytokines and oxidative stress. General condition was assessed mainly by body weight, lung coefficient and survival analysis; lung function mainly by forced vital capacity, dynamic lung compliance and tidal volume; common histopathological stains were HE, Masson and Sirius red; extracellular matrix indicators were mainly type I collagen, hydroxyproline and fibronectin; epithelial-mesenchymal transition indicators included α-smooth muscle actin, E-cadherin and vimentin; cytokine indicators mainly comprised transforming growth factor-β1, tumor necrosis factor-α and interleukin-6; and oxidative stress indicators mainly included malondialdehyde, superoxide dismutase and glutathione. Taking a reporting frequency of ≥200 as the threshold, general condition (body weight, lung coefficient), lung pathology (HE staining, Masson staining, etc.), extracellular matrix (hydroxyproline, type I collagen, type III collagen, fibronectin), epithelial-mesenchymal transition (α-smooth muscle actin) and cytokines (tumor necrosis factor-α, interleukin-6/1β, transforming growth factor-β1) were designated the strongly recommended detection indicators for pharmacodynamic studies in pulmonary fibrosis animals. Conclusion: This study provides further reference for constructing pulmonary fibrosis animal models and for establishing a pharmacodynamic indicator evaluation system.
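The selection rule above (an indicator is "strongly recommended" when it is reported at least 200 times across the included studies) amounts to a simple frequency count. A sketch with made-up mention counts, not the survey's real frequencies:

```python
from collections import Counter

def strongly_recommended(indicator_mentions, threshold=200):
    # Count how often each indicator is reported across studies and keep
    # those at or above the frequency threshold used in the survey.
    counts = Counter(indicator_mentions)
    return sorted(name for name, n in counts.items() if n >= threshold)

# Hypothetical mention counts for four indicators.
mentions = (["hydroxyproline"] * 260 + ["type I collagen"] * 215
            + ["vimentin"] * 90 + ["E-cadherin"] * 150)
print(strongly_recommended(mentions))
```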
Funding: This work is supported by a New Zealand Ministry of Foreign Affairs and Trade PhD Scholarship and the University of Auckland's Postgraduate Research Student Support.
Abstract: Reliable estimation of region-wide rice yield is vital for food security and agricultural management. Field-scale models have increased our understanding of rice yield and its estimation under theoretical environmental conditions. However, they offer little information on the effects of spatial variability on farm-scale yield. Remote sensing (RS) is a useful tool for upscaling yield estimates from farm scales to regional levels, and much research has used RS with rice models for reliable yield estimation. As several countries begin to operationalize rice monitoring systems, the current literature needs to be synthesized to identify knowledge gaps, improve estimation accuracy and optimize processing. This paper critically reviews significant developments in using geospatial methods, imagery and quantitative models to estimate rice yield. First, essential characteristics of rice as detected by optical and radar sensors are discussed, along with band selection, sensor configuration, spatial resolution, mapping methods and the biophysical variables of rice derivable from RS data. Second, various empirical, process-based and semi-empirical models that use RS data for spatial estimation of yield are critically assessed, discussing how the major types of models, RS platforms, data assimilation algorithms, canopy state variables and RS variables can be integrated for yield estimation. Lastly, to overcome current constraints and improve accuracy, several possibilities are suggested: adding new modeling modules, using alternative canopy variables and adopting novel modeling approaches. As rice yields are expected to decrease due to global warming, geospatial rice yield estimation techniques are indispensable tools for climate change assessments. Future studies should focus on resolving the current limitations of estimation by precise delineation of rice cultivars, by incorporating dynamic harvesting indices based on climatic drivers, and by using innovative modeling approaches with machine learning.
Funding: International Partnership Program of the Chinese Academy of Sciences, No. 131551KYSB20160002; National Natural Science Foundation of China, No. 41401602; Natural Science Basic Research Plan in Shaanxi Province of China, No. 2014JQ2-4021; Key Scientific and Technological Innovation Team Plan of Shaanxi Province, No. 2014KCT-27; Graduate Student Innovation Project of Northwest University, No. YZZ15011.
Abstract: In recent years, global reanalysis weather data has been widely used in hydrological modeling around the world, but simulation results vary greatly. To assess the applicability of Climate Forecast System Reanalysis (CFSR) data in the hydrologic simulation of watersheds, the Bahe River Basin was used as a case study. Two types of weather data (conventional weather data and CFSR weather data) were used to establish a Soil and Water Assessment Tool (SWAT) model, which simulated runoff from 2001 to 2012 in the basin at annual and monthly scales. The effect of both datasets on the simulation was assessed using regression analysis, Nash-Sutcliffe Efficiency (NSE) and Percent Bias (PBIAS), and a CFSR weather data correction method was proposed. The main results were as follows. (1) The CFSR climate data was applicable for hydrologic simulation in the Bahe River Basin (R^2 of the simulated results above 0.50, NSE above 0.33, and |PBIAS| below 14.8). Although the quality of the CFSR weather data is not perfect, it achieved a satisfactory hydrological simulation after rainfall data correction. (2) The streamflow simulated using the CFSR data was higher than the observed streamflow, likely because the CFSR estimate of daily rainfall produced more rainy days and stronger rainfall intensity than were actually observed. Therefore, the data simulated a higher base flow and flood peak discharge in terms of the water balance, except in some individual years. (3) The relation between the CFSR rainfall data (x) and the observed rainfall data (y) could be represented by a power-exponent equation: y = 1.4789x^0.8875 (R^2 = 0.98, P < 0.001), with slight variation between the fitted equations for each station. The equation provides a theoretical basis for the correction of CFSR rainfall data.
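The evaluation metrics and the fitted correction above are straightforward to implement. A minimal sketch; the rainfall values are hypothetical, and only the power-law coefficients come from the abstract:

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations
    # about their mean (1.0 is a perfect fit).
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    # Percent bias, PBIAS = 100 * sum(obs - sim) / sum(obs); positive
    # values indicate the simulation underestimates on average.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def correct_cfsr_rainfall(x, a=1.4789, b=0.8875):
    # Power-law correction fitted in the study: y = 1.4789 * x**0.8875,
    # mapping CFSR rainfall x to an estimate of observed rainfall y.
    return a * np.power(x, b)

cfsr = np.array([2.0, 10.0, 25.0, 60.0])  # hypothetical daily rainfall, mm
print(np.round(correct_cfsr_rainfall(cfsr), 2))
```

Because the exponent is below 1, the correction damps large CFSR rainfall totals more strongly than small ones, consistent with CFSR's tendency to overestimate rainfall intensity.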
Funding: Supported by the Fundamental Fujian Nature Science fund (A0440006) and the Xiamen Science & Technology Project (3502Z20055028).
Abstract: A version of a product consists of the product structure tree and the versions of all its components. The model includes two sets of data: attributes and documents describing each component. The paper discusses the version change relations between a sub-node component and an up-node component in a product structure tree, analyzes version control for static references and for dynamic references, and proposes a product structure model supporting dynamic references, which is easy to use and contains a complete set of information, providing an essential way of organizing data for a PDM system.
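The static versus dynamic reference distinction can be illustrated with a toy structure tree: a static reference pins a sub-node to a fixed version, while a dynamic reference always resolves to the sub-node's latest version. Everything here (the class layout, the pinning dictionary) is a hypothetical sketch, not the paper's model:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    # One node of the product structure tree; versions holds the revision
    # history in order, children the sub-node components.
    name: str
    versions: list = field(default_factory=list)   # e.g. ["A", "B"]
    children: list = field(default_factory=list)

def resolve(component, static_pins=None):
    # Resolve the whole subtree to concrete versions: a name in
    # static_pins is a static reference (fixed version); anything else
    # is a dynamic reference and follows the latest version.
    static_pins = static_pins or {}
    version = static_pins.get(component.name, component.versions[-1])
    return {component.name: version,
            **{k: v for c in component.children
               for k, v in resolve(c, static_pins).items()}}

bolt = Component("bolt", ["A", "B"])
frame = Component("frame", ["A"], [bolt])
print(resolve(frame))                 # dynamic: bolt follows its latest version
print(resolve(frame, {"bolt": "A"}))  # static pin keeps bolt at version A
```

Under dynamic referencing, releasing a new version of `bolt` changes what the product version resolves to without touching `frame`, which is the ease-of-use property the abstract highlights.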