Abstract: Knowing the influence of dataset size on regression models can help improve the accuracy of solar power forecasts and make the most of renewable energy systems. This research explores the influence of dataset size on the accuracy and reliability of regression models for solar power prediction, contributing to better forecasting methods. The study analyzes data from two solar panels, aSiMicro03036 and aSiTandem72-46, over 7, 14, 17, 21, 28, and 38 days, with each dataset comprising five independent parameters and one dependent parameter, split 80-20 for training and testing. Results indicate that Random Forest consistently outperforms other models, achieving the highest correlation coefficient of 0.9822 and the lowest mean absolute error (MAE) of 2.0544 on the aSiTandem72-46 panel with 21 days of data. For the aSiMicro03036 panel, the best MAE of 4.2978 was reached using the k-nearest neighbor (k-NN) algorithm, configured as instance-based k-nearest neighbors (IBk) in Weka and trained on 17 days of data. Regression performance for most models (excluding IBk) stabilizes at 14 days or more. Compared to the 7-day dataset, increasing to 21 days reduced the MAE by around 20% and improved correlation coefficients by around 2.1%, highlighting the value of moderate dataset expansion. These findings suggest that datasets spanning 17 to 21 days, with 80% used for training, can significantly enhance the predictive accuracy of solar power generation models.
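The evaluation pipeline this abstract describes (an 80-20 chronological split, then MAE and the correlation coefficient on the held-out portion) can be sketched in a few lines. The data values below are invented for illustration; they are not the study's measurements.

```python
# Sketch of an 80-20 split followed by MAE and Pearson correlation.
# All numbers here are hypothetical, not from the paper.
import math

def train_test_split_80_20(data):
    cut = int(len(data) * 0.8)
    return data[:cut], data[cut:]

def mae(actual, predicted):
    # mean absolute error over paired observations
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical measured vs. predicted solar power values (test portion).
actual = [10.0, 12.5, 9.8, 14.2, 11.1]
predicted = [9.5, 13.0, 10.1, 13.8, 11.6]
print(round(mae(actual, predicted), 3))
print(round(pearson_r(actual, predicted), 3))
```

A chronological (rather than shuffled) split is the natural choice for day-spanning sensor data like this, since shuffling would leak future conditions into training.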
Funding: funded by the National Natural Science Foundation of China (Nos. 42174011 and 41874001), the Jiangxi Province Graduate Student Innovation Fund (No. YC2021-S614), the Jiangxi Provincial Natural Science Foundation (No. 20202BABL212015), the East China University of Technology Ph.D. Project (No. DNBK2019181), and the Key Laboratory for Digital Land and Resources of Jiangxi Province, East China University of Technology (No. DLLJ202109)
Abstract: After the first Earth Orientation Parameters Prediction Comparison Campaign (1st EOP PCC), the traditional method using least-squares extrapolation and autoregressive (LS+AR) models was considered one of the more accurate polar motion prediction methods. The traditional method predicts each polar motion series separately, which limits the input data and the achievable gain in prediction accuracy. To address this problem, this paper proposes a new method for predicting polar motion by combining the differences between polar motion series. The X, Y, and Y-X series were predicted separately using LS+AR models. Then, the new forecast value of the X series is obtained by combining the forecast value of the Y series with that of the Y-X series; the new forecast value of the Y series is obtained by combining the forecast value of the X series with that of the Y-X series. Hindcast experiments from January 1, 2011 to April 4, 2021 show that the new method achieves maximum improvements of 12.95% and 14.96% over the traditional method in the X and Y directions, respectively. The new method also has clear advantages over the differential method. This study tests the stability and superiority of the new method and provides a new idea for polar motion prediction research.
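The combination step described above reduces to two identities, X = Y - (Y-X) and Y = X + (Y-X), applied element-wise to the forecast series. The sketch below shows only that step; the LS+AR forecasts themselves are replaced by placeholder numbers, so the values are purely illustrative.

```python
# Sketch of the series-combination step. x_fc, y_fc, and ymx_fc stand in
# for LS+AR forecasts of the X, Y, and Y-X series (values are made up).

def combine_forecasts(x_fc, y_fc, ymx_fc):
    """Derive new X and Y forecasts from direct forecasts of X, Y, and Y-X."""
    new_x = [y - d for y, d in zip(y_fc, ymx_fc)]   # X = Y - (Y - X)
    new_y = [x + d for x, d in zip(x_fc, ymx_fc)]   # Y = X + (Y - X)
    return new_x, new_y

x_fc   = [100.0, 101.0, 102.5]   # hypothetical X forecasts (mas)
y_fc   = [250.0, 251.5, 253.0]   # hypothetical Y forecasts (mas)
ymx_fc = [150.5, 150.2, 150.8]   # hypothetical Y - X forecasts (mas)

new_x, new_y = combine_forecasts(x_fc, y_fc, ymx_fc)
print([round(v, 1) for v in new_x])
print([round(v, 1) for v in new_y])
```

The point of the construction is that each new forecast draws on two independently predicted series, so errors that are uncorrelated between them can partially cancel.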
Abstract: Research into brain disorders using MRI, PET, and CT neuroimaging requires a correct understanding of brain function. This was previously attempted with traditional algorithms, and deep learning has also been widely applied to such genomics data processing systems. In this research, brain disorders including Alzheimer's disease, schizophrenia, and Parkinson's disease are analyzed, motivated by the misdetection of disorders when neuroimaging data are examined by traditional methods. A deep learning approach is incorporated here to classify brain disorders with the aid of Deep Belief Networks (DBN). Images are stored securely using a DNA-sequence-based JPEG Zig-Zag Encryption algorithm (the combined DBNJZZ approach). The suggested approach is executed and tested using performance metrics such as accuracy, root mean square error, mean absolute error, and mean absolute percentage error. The proposed DBNJZZ gives better performance than previously available methods.
Funding: supported by the Rice Research Institute, Rasht, Iran
Abstract: An accurate mathematical representation of soil particle-size distribution (PSD) is required to estimate soil hydraulic properties or to compare texture measurements made with different classification systems. However, many databases do not contain full PSD data, but instead contain only the clay, silt, and sand mass fractions. The objective of this study was to evaluate the abilities of four PSD models (the Skaggs model, the Fooladmand model, the modified Gray model GM(1,1), and the Fredlund model) to predict detailed PSD from limited soil textural data and to determine the effects of soil texture on the performance of each PSD model. The mean absolute error (MAE) and root mean square error (RMSE) were used to measure the goodness-of-fit of the models, and Akaike's information criterion (AIC) was used to compare the quality of the model fits. The performance of all PSD models except GM(1,1) improved with increasing clay content, showing that GM(1,1) is less dependent on soil texture. The Fredlund model was best at describing the PSDs of all soil textures except the sand textural class, where GM(1,1) performed better as the sand content increased. Overall, the Fredlund model showed the best performance and the lowest values on all evaluation criteria, and it can be used with limited soil textural data to obtain detailed PSD.
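The AIC-based model comparison mentioned above can be sketched directly. The least-squares form used here, AIC = n·ln(SSE/n) + 2k, is one common variant (the paper's exact formula is not given in the abstract), and the residuals and parameter counts below are hypothetical.

```python
# Sketch of comparing two model fits with Akaike's information criterion.
# Residual values and parameter counts are invented for illustration.
import math

def aic(residuals, n_params):
    """AIC in its common least-squares form: n * ln(SSE / n) + 2k."""
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    return n * math.log(sse / n) + 2 * n_params

# Hypothetical fitting residuals of two PSD models on the same sample:
# a 4-parameter model that fits tightly, and a 2-parameter model that
# fits more loosely.
resid_model_a = [0.010, -0.012, 0.008, -0.005, 0.011, -0.009]
resid_model_b = [0.020, -0.025, 0.018, -0.015, 0.022, -0.019]

aic_a = aic(resid_model_a, 4)
aic_b = aic(resid_model_b, 2)
print(aic_a < aic_b)  # lower AIC wins
```

AIC penalizes parameter count, so a model with smaller residuals can still lose the comparison if the improvement does not justify its extra parameters; here the tighter fit wins despite the penalty.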
Abstract: Local markets in East Africa have been destroyed by raging fires, leading to the loss of life and property in nearby communities. Electrical circuits, arson, and neglected charcoal stoves are the major causes of these fires. Previous methods, i.e., satellites, are expensive to maintain and cause unnecessary delays, and single-unit smoke detectors are highly prone to false alerts. In this paper, an interval type-2 TSK fuzzy model for an intelligent lightweight fire intensity detection algorithm with decision-making in low-power devices is proposed using a sparse inference rules approach. The free, open-source MATLAB/Simulink fuzzy toolbox integrated into MATLAB 2018a is used to investigate the performance of the interval type-2 fuzzy model. Two crisp input parameters, FIT and FIG, are used. Results show that the interval type-2 model achieved an accuracy value of FIO = 98.2%, MAE = 1.3010, MSE = 1.6938, and RMSE = 1.3015 using regression analysis. The study should assist firefighting personnel in fully understanding and mitigating the current level of fire danger. As a result, the proposed solution can be fully implemented in low-cost, low-power fire detection systems to monitor the state of fire with improved accuracy and reduced false alerts. Through informed decision-making in low-cost fire detection devices, early warning notifications can be provided to aid the rapid evacuation of people, thereby improving fire safety surveillance, management, and protection for the market community.
Abstract: In this paper, we tested our methodology on the stocks of four representative companies: Apple, Comcast Corporation (CMCST), Google, and Qualcomm. We compared their performance using a hidden Markov model (HMM) and evaluated the forecasts using the mean absolute percentage error (MAPE). For simplicity, we considered four main features of these stocks: the open, close, high, and low prices. When used for forecasting, the HMM gives its best predictions for the daily low stock price of Apple and the daily high stock price of CMCST. Calculating the MAPE for the four Google data sets shows that the close price has the largest prediction error, while the open price has the smallest. For Qualcomm, the HMM has the largest prediction error for the daily low stock price and the smallest for the daily high stock price.
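The MAPE metric used throughout this abstract is a one-liner. The sketch below applies it to invented close prices and forecasts, not the paper's data, just to show the calculation.

```python
# Sketch of the MAPE calculation used to compare forecasts.
# Price values here are hypothetical.

def mape(actual, predicted):
    """Mean absolute percentage error, expressed in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

actual_close = [150.0, 152.0, 151.0, 153.5]    # hypothetical daily closes
forecast_close = [149.0, 153.0, 150.0, 152.0]  # hypothetical HMM forecasts
print(round(mape(actual_close, forecast_close), 3))
```

Because each error is scaled by the actual value, MAPE lets forecasts for differently priced stocks (Apple vs. Qualcomm, say) be compared on the same percentage footing, which is why it suits a cross-company study like this one.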
Abstract: In this paper, Holt's exponential smoothing and autoregressive integrated moving average (ARIMA) models were used to forecast the inflation rate of Zambia using monthly consumer price index (CPI) data from May 2010 to May 2014. Results show that the ARIMA ((12), 1, 0) is an adequate model that best fits the CPI time series data and is therefore suitable for forecasting the CPI and, subsequently, the inflation rate. However, Holt's exponential smoothing is as good a choice as the ARIMA model, given the small differences in mean absolute percentage error and mean square error. Moreover, Holt's exponential smoothing model is less complicated, since it does not require specialised software to implement, as is the case for ARIMA models. The forecasted inflation rates for April and May 2015 are 7.0 and 6.6, respectively.
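The abstract's point that Holt's method needs no specialised software is easy to demonstrate: the standard level/trend recurrences fit in a few lines. The smoothing constants and CPI-like values below are made up, not the paper's fitted parameters.

```python
# Sketch of Holt's (double) exponential smoothing with the standard
# recurrences: level_t = a*y_t + (1-a)*(level + trend),
#              trend_t = b*(level_t - level_{t-1}) + (1-b)*trend.
# Series values and smoothing constants are hypothetical.

def holt_forecast(series, alpha, beta, horizon):
    """Forecast `horizon` steps ahead from a series of observations."""
    level, trend = series[0], series[1] - series[0]  # simple initialisation
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# On a perfectly linear CPI-like series, the forecasts continue the line.
print(holt_forecast([100.0, 101.0, 102.0, 103.0], 0.5, 0.5, 2))
```

In practice alpha and beta would be chosen by minimising in-sample error (or a spreadsheet solver), which is still far lighter than fitting and diagnosing an ARIMA model.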
Abstract: For fuzzy systems to be implemented effectively, the fuzzy membership function (MF) is essential. A fuzzy system (FS) that implements precise input and output MFs is presented to enhance the performance and accuracy of single-input single-output (SISO) FSs and to introduce the most applicable input and output MF protocol for linearizing the fuzzy system's output. A SISO FS is simulated using a variety of non-linear techniques, and the results of FS experiments conducted under comparable conditions are then compared. The simulated results agree fairly well with those of the experimental setup. The findings of the suggested model demonstrate that the relative error is kept within a sufficient range (≤±10%) and that the mean absolute percentage error (MAPE) is reduced by around 66.2%. The proposed strategy of reducing MAPE using an FS improves the system's performance and control accuracy. By using the best input and output MF protocol, the energy and financial efficiency of any SISO FS can be improved with very little tuning of the MFs. The proposed fuzzy system performed far better than other modern approaches available in the literature.
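A membership function is simply a mapping from a crisp input to a degree of membership in [0, 1]. The triangular MF below is a generic textbook example for readers unfamiliar with the term; it is not the specific MF protocol this paper proposes.

```python
# Sketch of a triangular membership function, the simplest common MF shape.
# The set boundaries below are arbitrary illustrative values.

def tri_mf(x, a, b, c):
    """Triangular MF: feet at a and c, peak of 1.0 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Degree to which an input of 6.0 belongs to a "medium" set over [4, 10]
# that peaks at 7.
print(tri_mf(6.0, 4.0, 7.0, 10.0))
```

Choosing the shapes and placement of such functions for the inputs and output is exactly the "MF protocol" tuning the abstract says governs the system's linearity and accuracy.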
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 61871196, 61673186, and 61602190), the Natural Science Foundation of Fujian Province of China (2019J01082 and 2017J01110), and the Promotion Program for Young and Middle-aged Teachers in Science and Technology Research of Huaqiao University (ZQN-YX601 and ZQN-710)
Abstract: Photomosaic images are composite images composed of many small images called tiles. In its overall visual effect, a photomosaic image is similar to the target image; photomosaics are also called "montage art". Noisy blocks and the loss of local information are the major obstacles in most methods or programs that create photomosaic images. To solve these problems, we propose a tile selection method based on error minimization. A photomosaic image is generated by partitioning the target image in a rectangular pattern, selecting appropriate tile images, and then superimposing them with a weight coefficient. Based on the principles of montage art, the quality of the generated photomosaic image can be evaluated by both global and local error. Under the proposed framework, an error function analysis shows that selecting a tile image by global minimum distance minimizes the global error and the local error simultaneously. Moreover, the weight coefficient of the image superposition can be used to adjust the ratio of the global and local errors. Finally, to verify the proposed method, we built a new photomosaic creation dataset during this study. The experimental results show that the proposed method achieves a low mean absolute error and that the generated photomosaic images have a more artistic effect than those of existing approaches.
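The selection and superposition steps can be sketched on flattened grayscale blocks. The L1 pixel distance and the pixel values below are illustrative choices for the sketch, not necessarily the paper's exact error function or data.

```python
# Sketch of minimum-distance tile selection followed by weighted
# superposition. Pixel values are invented; blocks are flattened 2x2
# grayscale patches.

def select_tile(block, tiles):
    """Pick the tile minimizing the global distance to the target block
    (here, the sum of absolute pixel differences)."""
    return min(tiles, key=lambda t: sum(abs(a - b) for a, b in zip(t, block)))

def superimpose(block, tile, w):
    """Weighted superposition of the chosen tile over the target block;
    w trades off global versus local error."""
    return [w * t + (1 - w) * b for t, b in zip(tile, block)]

target_block = [120, 130, 125, 140]
tiles = [[60, 70, 65, 80], [118, 133, 120, 138], [200, 210, 205, 220]]

best = select_tile(target_block, tiles)
blended = superimpose(target_block, best, 0.7)
print(best)
print([round(v, 1) for v in blended])
```

With w = 1 the mosaic shows pure tiles (maximal "montage" character but larger local error); pushing w toward 0 pulls each block back toward the target image, which is the trade-off the abstract describes.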
Funding: supported by the National Natural Science Foundation of China (No. 62162008), Guizhou Provincial Science and Technology Projects (CXTD[2023]027), and the Guiyang Guian Science and Technology Talent Training Project ([2024]2-15)
Abstract: Wheat is the most widely grown crop in the world, and its yield is closely related to global food security. The number of ears is important for wheat breeding and yield estimation, so automated wheat ear counting techniques are essential for breeding high-yield varieties and increasing grain yield. However, all existing methods require position-level annotation for training, implying that a large amount of labor is required for annotation and limiting the application and development of deep learning technology in the agricultural field. To address this problem, we propose a count-supervised multiscale perceptive wheat counting network (CSNet), which aims to count wheat ears accurately using quantity information alone. In particular, in the absence of location information, CSNet adopts MLP-Mixer to construct a multiscale perception module with a global receptive field that learns small-target attention maps from wheat ear features. Comparative experiments on a publicly available global wheat head detection dataset show that the proposed count-supervised strategy outperforms existing position-supervised methods in terms of mean absolute error (MAE) and root mean square error (RMSE). This superior performance indicates that the proposed approach improves ear counts while reducing labeling costs, demonstrating its great potential for agricultural counting tasks. The code is available at .
Abstract: Precipitation is the most discontinuous atmospheric parameter because of its temporal and spatial variability. Precipitation observations at automatic weather stations (AWSs) show different patterns over different time periods. This paper aims to reconstruct missing data by finding the time periods in which precipitation patterns are similar, using the intermittent sliding window period (ISWP) technique, a novel approach to reconstructing the majority of non-continuous missing real-time precipitation data. The ISWP technique is applied to a one-year precipitation dataset (January 2015 to January 2016) with a temporal resolution of 1 h, collected at 11 AWSs run by the Indian Meteorological Department in the capital region of Delhi. The acquired dataset is missing 13.66% of its precipitation data, of which 90.6% are reconstructed successfully. Furthermore, some traditional estimation algorithms are applied to the reconstructed dataset to estimate the remaining missing values on an hourly basis. The results show that interpolation of the dataset reconstructed with the ISWP technique is of high quality compared with interpolation of the raw dataset. By adopting the ISWP technique, the root-mean-square errors (RMSEs) in the estimation of missing rainfall data based on the arithmetic mean, multiple linear regression, linear regression, and moving average methods are reduced by 4.2%, 55.47%, 19.44%, and 9.64%, respectively. However, adopting the ISWP technique with the inverse distance weighted method increases the RMSE by 0.07%, because the reconstructed data add a more diverse relation to neighboring AWSs.
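Of the neighbor-based estimators compared above, inverse distance weighting is the easiest to sketch. The implementation below is the generic textbook form, not the paper's code; the station distances, rainfall values, and the power of 2 are hypothetical illustrative choices.

```python
# Sketch of inverse distance weighted (IDW) estimation of a missing
# rainfall value from neighboring stations. All numbers are hypothetical.

def idw_estimate(neighbors, power=2):
    """Estimate a missing value from (distance, value) pairs at nearby
    stations, weighting each observation by 1 / distance**power."""
    weights = [1.0 / d ** power for d, _ in neighbors]
    values = [v for _, v in neighbors]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hourly rainfall (mm) at three neighboring AWSs, with distances (km)
# to the station whose observation is missing.
neighbors = [(2.0, 4.0), (4.0, 6.0), (8.0, 10.0)]
print(round(idw_estimate(neighbors), 3))
```

The nearest station dominates the estimate, which also hints at the abstract's closing observation: if reconstruction changes the relationship between a station and its close neighbors, IDW is the method most sensitive to it.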