Wired drill pipe (WDP) technology is one of the most promising data acquisition technologies in today's oil and gas industry. For the first time, it allows sensors to be positioned along the drill string, which enables collecting and transmitting valuable data not only from the bottom hole assembly (BHA) but also along the entire length of the wellbore to the drill floor. The technology has gained industry acceptance as a viable alternative to the typical logging while drilling (LWD) method. Recently, more and more WDP applications can be found in challenging drilling environments around the world, bringing many innovations to the industry. Nevertheless, most of the data acquired from WDP can be noisy and, in some circumstances, of very poor quality. Diverse factors contribute to the poor data quality; the most common sources include mis-calibrated sensors, sensor drift, errors during data transmission, and abnormal conditions in the well. The challenge of improving data quality has attracted growing attention from researchers over the past decade. This paper proposes a promising solution to this challenge by correcting the raw WDP data and estimating unmeasurable parameters to reveal downhole behaviors. An advanced data processing method, data validation and reconciliation (DVR), is employed, which makes use of the redundant data from multiple WDP sensors to filter and remove noise from the measurements and to ensure the coherence of all sensors and models. Moreover, it has the ability to distinguish accurate measurements from inaccurate ones. In addition, the data with improved quality can be used to estimate crucial parameters of the drilling process that are unmeasurable in the first place, hence providing better model calibrations for integrated well planning and real-time operations. Funding: supported by the University of Stavanger, Norway; SINTEF; the Center for Integrated Operations in the Petroleum Industry; and the management of National Oilwell Varco IntelliServ.
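As a concrete picture of what DVR does with redundant sensors, the sketch below poses reconciliation as a weighted least-squares adjustment subject to a linear balance constraint. The constraint, readings, and variances are illustrative assumptions, not values or models from the paper.

```python
import numpy as np

def reconcile(y, variances, A):
    """Weighted least-squares reconciliation of readings y subject to A @ x = 0."""
    V = np.diag(variances)                          # measurement covariance (diagonal)
    correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
    return y - correction

# Illustrative redundancy: one inflow sensor and two outflow sensors that
# should balance, i.e. x0 - x1 - x2 = 0.
y = np.array([100.3, 49.1, 52.0])                   # raw, noisy readings
A = np.array([[1.0, -1.0, -1.0]])                   # linear balance constraint
x_hat = reconcile(y, np.array([1.0, 0.5, 0.5]), A)
print(x_hat, A @ x_hat)                             # balance now holds exactly
```

Sensors with larger variances absorb more of the adjustment, which is how such a scheme can downweight, and thereby expose, the less accurate measurements.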
Evolution of the Arctic sea ice and its snow cover during the SHEBA year was simulated by applying a high-resolution thermodynamic snow/ice model (HIGHTSI). Attention was paid to the impact of albedo on the snow and sea ice mass balance, the effect of snow on the total ice mass balance, and the model's vertical resolution. The SHEBA annual simulation was made applying the best possible external forcing data set created by the Sea Ice Model Intercomparison Project. The HIGHTSI control run reasonably reproduced the observed snow and ice thickness. A number of albedo schemes were incorporated into HIGHTSI to study the feedback processes between the albedo and the snow and ice thickness. The snow thickness turned out to be an essential variable in the albedo parameterization. Albedo schemes dependent on the surface temperature were liable to excessive positive feedback effects generated by errors in the modelled surface temperature. Superimposed ice formation should be taken into account for the annual Arctic sea ice mass balance. Funding: supported by the EC-funded project DAMOCLES (grant 18509), which is part of the Sixth Framework Program; by DFG (grant LU 818/1-1); and by the Natural Science Foundation of China (grants No. 40233032 and 40376006).
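To illustrate why snow thickness is such a useful predictor in this context, here is a minimal snow-depth-dependent albedo scheme of the general kind the study compares; the functional form, e-folding depth, and albedo limits are assumptions for illustration, not the schemes actually tested.

```python
import math

def surface_albedo(h_snow, alpha_ice=0.55, alpha_snow=0.85, h_scale=0.05):
    """Albedo rises from the bare-ice value toward the dry-snow value as
    snow depth h_snow (metres) increases; h_scale sets how fast."""
    weight = 1.0 - math.exp(-h_snow / h_scale)      # 0 on bare ice, -> 1 under deep snow
    return alpha_ice + (alpha_snow - alpha_ice) * weight

print(surface_albedo(0.0))    # bare ice: 0.55
print(surface_albedo(0.30))   # thick snow: close to 0.85
```

Because a scheme like this depends on a prognostic mass variable rather than on surface temperature, an error in the modelled temperature cannot feed back directly into the albedo.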
In this article, we study an estimation method for the nonparametric regression measurement error model based on validation data. The estimation procedures are based on orthogonal series estimation and truncated series approximation methods, without specifying any structural equation or distributional assumption. The convergence rates of the proposed estimator are derived. An example and simulations show that the method is robust against the misspecification of the measurement error model.
In this article, we develop estimation approaches for nonparametric multiple regression measurement error models when both independent validation data on covariables and primary data on the response variable and surrogate covariables are available. An estimator which integrates Fourier series estimation and truncated series approximation methods is derived without any error model structure assumption between the true covariables and the surrogate variables. Most importantly, our proposed methodology can be readily extended to the case where only some of the covariates are measured with errors, with the assistance of validation data. Under mild conditions, we derive the convergence rates of the proposed estimators. The finite-sample properties of the estimators are investigated through simulation studies.
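Both of the preceding abstracts build on truncated orthogonal series regression. A minimal sketch of that core step, leaving out the measurement error correction supplied by the validation data, might look as follows; the cosine basis and the truncation level K are chosen purely for illustration.

```python
import numpy as np

def fourier_design(x, K):
    """Design matrix of the first K cosine basis functions on [0, 1]."""
    cols = [np.ones_like(x)]
    cols += [np.sqrt(2.0) * np.cos(np.pi * k * x) for k in range(1, K + 1)]
    return np.column_stack(cols)

def series_fit(x, y, K=8):
    """Least-squares fit of a truncated cosine series; returns a predictor."""
    coef, *_ = np.linalg.lstsq(fourier_design(x, K), y, rcond=None)
    return lambda x_new: fourier_design(x_new, K) @ coef

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, 200)
m_hat = series_fit(x, y)
print(m_hat(np.array([0.25, 0.75])))   # approx [1, -1]
```

The truncation level K controls the bias-variance trade-off; in series estimation it is typically allowed to grow slowly with the sample size.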
The basis of accurate mineral resource estimates is a geological model which replicates the nature and style of the orebody. Key inputs into the generation of a good geological model are the sample data and mapping information. The Obuasi Mine sample data, burdened with many legacy issues, were subjected to a robust validation process and integrated with mapping information to generate an accurate geological orebody model for mineral resource estimation in Block 8 Lower. Validation of the sample data focused on replacing missing collar coordinates and missing assays, correcting the magnetic declination used to convert the downhole surveys from magnetic to true north, fixing missing lithology, and finally assigning confidence numbers to all the sample data. The replaced coordinates ensured that the sample data plotted at their correct location in space, as intended from the planning stage. The magnetic declination, which had been kept constant throughout the years even though it changes every year, was also corrected in the validation project. The corrected magnetic declination ensured that the drillholes plotted on their accurate trajectory as per the planned azimuth and reflected the true position of the intercepted mineralized fissure(s), which had previously not been the case and had been a major blot on the modelling of the Obuasi orebody. The incorporation of mapped data with the validated sample data in the wireframes resulted in a better interpretation of the orebody. The updated mineral resource, generated by domaining quartz separately from the sulphides and compared with the old resource, showed that the sulphide tonnes in the old resource estimates were overestimated by 1% and the grade by 8.5%.
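The declination correction itself reduces to a date-dependent angular shift applied to each recorded azimuth. A minimal sketch, with the function name, sign convention, and values as illustrative assumptions:

```python
def magnetic_to_true_azimuth(azimuth_mag_deg, declination_deg):
    """Convert a magnetic-north azimuth to true north. Positive declination
    means magnetic north lies east of true north (sign convention assumed)."""
    return (azimuth_mag_deg + declination_deg) % 360.0

# Hypothetical hole logged at 118.0 deg magnetic where declination is -7.5 deg:
print(magnetic_to_true_azimuth(118.0, -7.5))   # 110.5 deg true
```

Using the declination valid for each survey's date, rather than a single constant value, is what restores the drillhole trajectories described above.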
In this paper, the latest progress, major achievements and future plans of Chinese meteorological satellites and the core data processing techniques are discussed. First, the latest three FengYun (FY) meteorological satellites (FY-2H, FY-3D, and FY-4A) and their primary objectives are introduced. Second, the core image navigation techniques and accuracies of the FY meteorological satellites are elaborated, including the latest geostationary (FY-2/4) and polar-orbit (FY-3) satellites. Third, the radiometric calibration techniques and accuracies of the reflective solar bands, thermal infrared bands, and passive microwave bands of the FY meteorological satellites are discussed, together with the latest progress in real-time calibration with the onboard calibration system and validation with different methods, including vicarious China radiance calibration site calibration, pseudo-invariant calibration site calibration, deep convective cloud calibration, and lunar calibration. Fourth, recent progress in meteorological satellite data assimilation applications and quantitative science products is summarized at length; the main progress is in assimilating microwave and hyperspectral infrared sensors in global and regional numerical weather prediction models. Lastly, the latest progress in radiative transfer, absorption and scattering calculations for satellite remote sensing is summarized, and some important research using a new radiative transfer model is illustrated. Funding: funded by the National Key R&D Program of China (Grant Nos. 2018YFB0504900 and 2015AA123700).
The recently published Medical Physics Practice Guideline 5.a (MPPG 5.a) by the American Association of Physicists in Medicine (AAPM) sets the minimum requirements for treatment planning system (TPS) dose algorithm commissioning and quality assurance (QA). The guideline recommends some validation tests and tolerances based primarily on published AAPM task group reports and the criteria used by IROC Houston. We performed the commissioning and validation of the dose algorithms for both megavoltage photon and electron beams on three linacs following MPPG 5.a. We designed the validation experiments to highlight the evaluation methods and tolerance criteria recommended by the guideline. Comparison of dose profiles from in-water scans proved an effective technique for basic photon and electron validation. IMRT/VMAT dose calculation is recommended to be tested with some TG-119 and clinical cases, but no consensus on the tolerance exists. The extensive validation tests have provided a better understanding of the accuracy and limitations of a specific dose calculation algorithm. We believe that some tests and evaluation criteria given in the guideline can be further refined.
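To make the profile comparison step concrete, the sketch below checks a calculated profile against a measured in-water scan point by point. The percent-of-maximum criterion, the 2% tolerance, and the data are illustrative assumptions, not values prescribed by MPPG 5.a.

```python
import numpy as np

def profile_pass_rate(measured, calculated, tol_pct=2.0):
    """Fraction of profile points where |calc - meas| is within tol_pct
    percent of the maximum measured dose."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    tol = tol_pct / 100.0 * measured.max()
    return float(np.mean(np.abs(calculated - measured) <= tol))

meas = [10.2, 55.0, 98.7, 100.0, 97.9, 54.1, 10.5]   # made-up scan points
calc = [10.0, 54.0, 99.5, 100.2, 98.8, 55.9, 10.1]
print(profile_pass_rate(meas, calc))                  # 1.0: all points within 2%
```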
For various reasons, many of the security programming rules applicable to specific software have not been recorded in official documents, and hence can hardly be employed by static analysis tools for detection. In this paper, we propose a new approach, named SVR-Miner (Security Validation Rules Miner), which uses frequent sequence mining techniques [1-4] to automatically infer implicit security validation rules from large software written in the C programming language. Different from past works in this area, SVR-Miner introduces three techniques, namely sensitive threads, program slicing [5-7], and equivalent statement computation, to improve the accuracy of the rules. Experiments with the Linux kernel demonstrate the effectiveness of our approach. With the ten given sensitive threads, SVR-Miner automatically generated 17 security validation rules and detected 8 violations, 5 of which were published by the Linux Kernel Organization before we detected them. We have recently reported the other three to the Linux Kernel Organization. Funding: National Natural Science Foundation of China under Grant Nos. 60873213, 91018008 and 61070192; Beijing Science Foundation under Grant No. 4082018; Shanghai Key Laboratory of Intelligent Information Processing of China under Grant No. IIPL-09-006.
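A toy version of the mining step conveys the idea: count call subsequences across many (sliced) traces and keep those with high support as candidate validation rules. SVR-Miner itself mines over sliced, normalized statements; this contiguous-subsequence version and the trace data are simplifications for illustration.

```python
from collections import Counter

def frequent_subsequences(traces, min_support=2, max_len=3):
    """Count contiguous call subsequences (up to max_len) once per trace and
    keep those reaching the support threshold."""
    counts = Counter()
    for trace in traces:
        seen = set()
        for i in range(len(trace)):
            for j in range(i + 1, min(i + max_len, len(trace)) + 1):
                seen.add(tuple(trace[i:j]))
        counts.update(seen)                 # count each pattern once per trace
    return {s: c for s, c in counts.items() if c >= min_support}

traces = [
    ["alloc", "check_null", "use"],
    ["alloc", "check_null", "use", "free"],
    ["alloc", "use"],
]
print(frequent_subsequences(traces))
# ('alloc', 'check_null', 'use') reaches support 2: a candidate
# "validate before use" rule. The third trace then looks like a violation.
```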
Starting from the quality inspection of Tianditu fused data, and based on the data standards combined with specific quality-control rules, this study develops a validation method for Tianditu fused data built on the ArcGIS Data Reviewer module. The method is automated and batch-oriented, and allows the data to be checked while still in the processing stage. This flexible quality-control mechanism greatly reduces the repeated manual processing during data fusion, improves the production units' working efficiency and the quality of their deliverables, and can also serve as a reference for developing quality-control systems for other projects.
This paper considers local linear regression estimators for the partially linear model with censored data, which have some nice large-sample behaviors and are easy to implement. Through many simulation runs, the author also found that the estimators perform remarkably well in the small-sample case.
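For reference, the uncensored building block here is the standard local linear smoother: at each evaluation point, fit a kernel-weighted straight line and keep its intercept. The sketch below shows that step only; the censoring adjustment studied in the paper is omitted, and the Gaussian kernel is an illustrative choice.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0) with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design: intercept, slope
    XtW = X.T * w                                    # weighted normal equations
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]                                   # intercept = fitted value at x0

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 300)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, 300)
print(local_linear(x, y, 0.25, h=0.08))              # approx sin(pi/2) = 1
```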
Background: Over the last decades, interest has grown in how climate change impacts forest resources. However, one of the main constraints is that meteorological stations are riddled with missing climatic data. This study compared five approaches for estimating monthly precipitation records: inverse distance weighting (IDW), a modification of IDW that includes elevation differences between target and neighboring stations (IDW_m), correlation coefficient weighting (CCW), multiple linear regression (MLR) and artificial neural networks (ANN). Methods: A complete series of monthly precipitation records (1995-2012) from twenty meteorological stations located in central Chile was used. Two target stations were selected and their neighboring stations, located within a radius of 25 km (3 stations) and 50 km (9 stations), were identified. Cross-validation was used for evaluating the accuracy of the estimation approaches. The performance and predictive capability of the approaches were evaluated using the ratio of the root mean square error to the standard deviation of the measured data (RSR), the percent bias (PBIAS), and the Nash-Sutcliffe efficiency (NSE). For testing the main and interactive effects of the radius of influence and the estimation approaches, a two-level factorial design considering the target station as the blocking factor was used. Results: ANN and MLR showed the best statistics for all the stations and radii of influence. However, these approaches were not significantly different from IDW_m, and the inclusion of elevation differences into IDW significantly improved the IDW_m estimates. In terms of precision, similar estimates were obtained when applying ANN, MLR or IDW_m, and the radius of influence had a significant effect on their estimates. Conclusions: Approaches based on ANN, MLR and IDW_m had the best performance in two sectors located in south-central Chile with a complex topography. A radius of influence of 50 km (9 neighboring stations) is recommended for completing missing monthly precipitation data in regions with complex topography. Funding: supported by the National Fund for Scientific and Technological Development (FONDECYT) [Project 1151050]; the first author gratefully acknowledges funding from Chile's Education Ministry through the program MECESUP2 [Project UC00702].
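The three skill scores used for evaluation have standard closed forms (the Moriasi et al. conventions are assumed here); a direct implementation for a pair of observed and estimated series:

```python
import numpy as np

def skill_scores(obs, est):
    """RSR, PBIAS (%), and NSE for observed vs. estimated series."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    rmse = np.sqrt(np.mean((obs - est) ** 2))
    rsr = rmse / np.std(obs)                                 # RMSE / sd of observations
    pbias = 100.0 * np.sum(obs - est) / np.sum(obs)          # positive = underestimation
    nse = 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"RSR": rsr, "PBIAS": pbias, "NSE": nse}

obs = [120.0, 35.5, 80.2, 10.0, 150.3]     # made-up monthly precipitation (mm)
est = [115.0, 40.0, 78.0, 12.5, 160.0]
print(skill_scores(obs, est))
```

Lower RSR and |PBIAS| and an NSE close to 1 indicate a better gap-filling approach, which is how the five methods were ranked.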
The goal was to perform the filling, consistency checking and processing of rainfall time series data from 1943 to 2013 in five regions of the state of Rio de Janeiro (SRJ). Data were obtained from several sources (ANA, CPRM, INMET, SERLA and LIGHT), totaling 23 stations. The raw time series showed gaps, which were filled with data from the TRMM satellite via the 3B43 product and with the climatological normals from INMET. The 3B43 product was used from 1998 to 2013 and the climatological normals over the 1947-1997 period. Data from the 23 stations were submitted to descriptive and exploratory analysis, parametric tests (Shapiro-Wilk and Bartlett), cluster analysis (CA), and data transformation (Box-Cox). Descriptive analysis of the raw data consistency showed a probability of occurrence above 75% (high temporal variability). Through the CA, two homogeneous rainfall groups (G1 and G2) were defined; G1 and G2 represent 77.01% and 22.99% of the rainfall occurring in the SRJ, respectively. The Box-Cox transformation was effective in stabilizing the normality of the residuals and the homogeneity of variance of the monthly rainfall time series of the five regions of the state. Data from the 3B43 product and the climatological normals can be used as an alternative source of quality data for gap filling. Funding: the authors acknowledge the Agência Nacional de Águas (ANA), the Companhia de Pesquisa de Recursos Minerais (CPRM), the Instituto Nacional de Meteorologia (INMET), SERLA (Fundação Superintendência Estadual de Rios e Lagoas) and Light Serviços de Eletricidade S/A for kindly providing the data composing the rainfall time series, and CAPES for granting a doctorate scholarship. The second author thanks the Brazilian National Council for Scientific and Technological Development (CNPq) for granting a Research Productivity Fellowship level 2 (309681/2019-7).
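The transformation step is standard and available off the shelf: a Box-Cox fit that estimates the power parameter by maximum likelihood. The sketch below is illustrative; the small positive shift for zero-rainfall months is an assumption (Box-Cox requires strictly positive data), and the numbers are made up.

```python
import numpy as np
from scipy import stats

# Made-up monthly rainfall (mm); one dry month is zero.
monthly_rain = np.array([120.5, 80.2, 0.0, 15.3, 200.1, 95.0, 60.7, 110.9])

shifted = monthly_rain + 0.1          # Box-Cox needs strictly positive data
transformed, lam = stats.boxcox(shifted)
print(f"estimated lambda = {lam:.3f}")
```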
Caching frequently accessed data items on the mobile client is an effective technique to improve system performance in a mobile environment. Proper choice of a cache replacement technique to find a suitable subset of items for eviction from the cache is very important because of the limited cache size. Available policies do not take into account the movement patterns of the client. In this paper, we propose a new cache replacement policy for location-dependent data in a mobile environment. The proposed policy uses a predicted-region-based cost function to select an item for eviction from the cache. The policy selects the predicted region based on the client's movement and uses it to calculate the data distance of an item. This makes the policy adaptive to the client's movement pattern, unlike earlier policies that consider the directional/non-directional data distance only. We call our policy the Prioritized Predicted Region based Cache Replacement Policy (PPRRP). Simulation results show that the proposed policy significantly improves system performance in comparison to previous schemes in terms of cache hit ratio.
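A hedged sketch of a predicted-region cost function in the spirit of PPRRP follows; the weighting, field names, and numbers below are illustrative inventions, not the paper's actual cost function.

```python
import math

def eviction_value(dist_to_region, access_prob, fetch_cost):
    """Lower value => better eviction candidate: far from the predicted
    region, rarely accessed, and cheap to refetch."""
    return access_prob * fetch_cost / (1.0 + dist_to_region)

def choose_victim(cache, region_xy):
    """cache maps item_id -> (x, y, access_prob, fetch_cost)."""
    rx, ry = region_xy
    def value(item_id):
        x, y, p, c = cache[item_id]
        return eviction_value(math.hypot(x - rx, y - ry), p, c)
    return min(cache, key=value)

cache = {"a": (0.0, 0.0, 0.9, 5.0), "b": (40.0, 35.0, 0.9, 5.0)}
print(choose_victim(cache, (1.0, 1.0)))    # "b": same profile, but far from the region
```

The point of anchoring the distance to a predicted region rather than to the current position is that items the client is moving toward are retained even if they are momentarily distant.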
Machine learning advancements in healthcare have made data collected through smartphones and wearable devices a vital source of public health and medical insights. While wearable device data help to monitor, detect, and predict diseases and health conditions, some data owners hesitate to share such sensitive data with companies or researchers due to privacy concerns. Moreover, wearable devices have only recently become available as commercial products; thus, large, diverse, and representative datasets are not available to most researchers. In this article, the authors propose an open marketplace where wearable device users securely monetize their wearable device records by sharing data with consumers (e.g., researchers), making wearable device data more available to healthcare researchers. To secure the data transactions in a privacy-preserving manner, the authors use a decentralized approach based on Blockchain and Non-Fungible Tokens (NFTs). To ensure data originality and integrity with secure validation, the marketplace uses Trusted Execution Environments (TEEs) in wearable devices to verify the correctness of health data. The marketplace also allows researchers to train models using Federated Learning with a TEE-backed secure aggregation of data that users may not be willing to share. To encourage user participation, the authors model incentive mechanisms for the Federated Learning-based and anonymized data-sharing approaches using NFTs. The authors also propose using payment channels and batching to reduce smart contract gas fees and optimize user profits. If widely adopted, it is believed that TEE- and Blockchain-based incentives will promote the ethical use of machine learning with validated wearable device data in healthcare and improve user participation due to the incentives.
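At the core of the federated training service is an aggregation step. A minimal sketch of plain federated averaging is shown below; the TEE-backed secure aggregation and the NFT incentive layers are deliberately omitted, and the update vectors are made up.

```python
import numpy as np

def fed_avg(client_updates, client_sizes):
    """Average model parameter vectors weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_updates, client_sizes))

updates = [np.array([0.2, -0.1]), np.array([0.4, 0.0])]
print(fed_avg(updates, [100, 300]))        # pulled toward the larger client
```

Because only parameter updates leave the device, the raw health records never need to be shared, which is the privacy argument the marketplace relies on.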
The pharmaceutical industry’s increasing adoption of cloud-based technologies has introduced new challenges in computerized systems validation (CSV). This paper explores the evolving landscape of cloud validation in pharmaceutical manufacturing, focusing on ensuring data integrity and regulatory compliance in the digital era. We examine the unique characteristics of cloud-based systems and their implications for traditional validation approaches. A comprehensive review of current regulatory frameworks, including FDA and EMA guidelines, provides context for discussing cloud-specific validation challenges. The paper introduces a risk-based approach to cloud CSV, detailing methodologies for assessing and mitigating risks associated with cloud adoption in pharmaceutical environments. Key considerations for maintaining data integrity in cloud systems are analyzed, particularly when applying ALCOA+ principles in distributed computing environments. The article presents strategies for adapting traditional Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ) models to cloud-based systems, highlighting the importance of continuous validation in dynamic cloud environments. The paper also explores emerging trends, including integrating artificial intelligence and edge computing in pharmaceutical manufacturing and their implications for future validation strategies. This research contributes to the evolving body of knowledge on cloud validation in pharmaceuticals by proposing a framework that balances regulatory compliance with the agility offered by cloud technologies. The findings suggest that while cloud adoption presents unique challenges, a well-structured, risk-based approach to validation can ensure the integrity and compliance of cloud-based systems in pharmaceutical manufacturing.