To obtain more stable spectral data for accurate multi-element quantitative analysis, especially for large-area in-situ detection of elements in soils, we propose a method for multi-element quantitative analysis of soils using calibration-free laser-induced breakdown spectroscopy (CF-LIBS) based on data filtering. In this study, we analyze a standard soil sample doped with two heavy metal elements, Cu and Cd, with a specific focus on the Cu I 324.75 nm line for filtering the experimental data of multiple sample sets. After data filtering, the relative standard deviation for Cu decreased from 30% to 10%, and the limits of detection (LOD) for Cu and Cd decreased by 5% and 4%, respectively. Through CF-LIBS, a quantitative analysis was conducted to determine the relative content of elements in soils. Using Cu as a reference, the concentration of Cd was accurately calculated. The results show that, after data filtering, the average relative error for Cd decreases from 11% to 5%, indicating the effectiveness of data filtering in improving the accuracy of quantitative analysis. Moreover, the content of Si, Fe and other elements can be accurately calculated using this method. To further correct the calculation, the results for Cd were used to provide a more precise calculation. This approach is of great importance for large-area in-situ detection of heavy metals and trace elements in soil, as well as for rapid and accurate quantitative analysis.
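As a rough illustration of the data-filtering step described above (not the authors' exact procedure), the sketch below keeps only the laser shots whose Cu I 324.75 nm line intensity lies within an assumed ±20% band around the median, and compares the relative standard deviation before and after filtering; the array names and acceptance band are illustrative assumptions.

```python
import numpy as np

def filter_spectra_by_line(intensities, tol=0.20):
    """Keep shots whose reference-line intensity is within +/- tol of the median.

    intensities : 1-D array, one Cu I 324.75 nm line intensity per laser shot.
    Returns a boolean mask of accepted shots.
    """
    med = np.median(intensities)
    return np.abs(intensities - med) <= tol * med

def rsd(x):
    """Relative standard deviation in percent."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

# Synthetic example: 50 shots, a few of them unstable.
rng = np.random.default_rng(0)
cu_line = rng.normal(1000.0, 80.0, size=50)
cu_line[[3, 17, 41]] *= 2.0            # simulate unstable shots

mask = filter_spectra_by_line(cu_line)
print(f"RSD before filtering: {rsd(cu_line):.1f}%")
print(f"RSD after  filtering: {rsd(cu_line[mask]):.1f}%")
```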
This paper presents a simple complete K-level tree (CKT) architecture for text database organization and rapid data filtering. A database is constructed as a CKT forest, and each CKT contains data of the same length. The maximum depth and the minimum depth of an individual CKT are equal and identical to the data's length. Insertion and deletion operations are defined; a storage method and a filtering algorithm are also designed for a good compromise between efficiency and complexity. Applications to computer-aided teaching of Chinese and protein selection show that a reduction of about 30% in storage consumption and over 60% in computation can easily be obtained.
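A minimal sketch of the CKT idea, read as a fixed-depth trie (one tree per data length, so maximum and minimum depth both equal that length); the class and method names are assumptions, and the paper's actual storage layout is certainly more elaborate.

```python
class CKT:
    """Fixed-depth trie: every stored item has the same length, so the
    maximum and minimum depths of the tree both equal that length."""

    def __init__(self, length):
        self.length = length
        self.root = {}

    def insert(self, item):
        assert len(item) == self.length
        node = self.root
        for ch in item:
            node = node.setdefault(ch, {})

    def contains(self, item):
        if len(item) != self.length:
            return False
        node = self.root
        for ch in item:
            if ch not in node:
                return False          # filtered out without scanning the database
            node = node[ch]
        return True

class CKTForest:
    """The database: one CKT per item length."""

    def __init__(self, items=()):
        self.trees = {}
        for it in items:
            self.insert(it)

    def insert(self, item):
        self.trees.setdefault(len(item), CKT(len(item))).insert(item)

    def contains(self, item):
        tree = self.trees.get(len(item))
        return tree is not None and tree.contains(item)

forest = CKTForest(["MKVL", "MKTA", "学习中文"])
print(forest.contains("MKTA"), forest.contains("MKXA"))   # True False
```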
Model reconstruction from points scanned on existing physical objects is important in a variety of situations, such as reverse engineering for mechanical products, computer vision, and recovery of biological shapes from two-dimensional contours. With the development of measuring equipment, point clouds that contain more details of the object can be obtained conveniently. On the other hand, the large quantity of sampled points brings difficulties to model reconstruction methods. This paper first presents an algorithm to automatically reduce the number of cloud points under a given tolerance. A triangle mesh surface is then reconstructed from the simplified data set by the marching cubes algorithm. For various reasons, the reconstructed mesh usually contains unwanted holes. An approach is proposed to create new triangles with optimized shapes to cover the unexpected holes in triangle meshes. After hole filling, a watertight triangle mesh can be output directly in STL format, which is widely used in rapid prototype manufacturing. Practical examples are included to demonstrate the method.
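The abstract does not specify the reduction criterion, so the following is only a generic sketch of tolerance-driven point-cloud decimation (one representative point per cube of edge length equal to the tolerance); the voxel-grid rule and array names are assumptions, not the paper's algorithm.

```python
import numpy as np

def reduce_cloud(points, tol):
    """Voxel-grid decimation: keep one representative point per cube of edge `tol`.

    points : (N, 3) array of scanned coordinates.
    Returns the reduced (M, 3) array, M <= N.
    """
    keys = np.floor(points / tol).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)   # first point seen in each voxel
    return points[np.sort(first)]

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 10.0, size=(100_000, 3))
print(reduce_cloud(cloud, tol=0.5).shape)    # far fewer than 100 000 points
```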
The joint opening degree is a critical index for assessing the stability of jointed rock masses, which directly impacts the rock mass quality. It is also a key factor influencing the design of tunnel support structures. Hammer and rotary drilling rigs, commonly employed as rock-breaking equipment in tunneling, inevitably encounter joints with varying opening degrees during construction. This research aims to enhance the sampling frequency of hammer and rotary drilling rigs and optimize the joint detection algorithm, thereby equipping these rigs with the capability to detect joint opening degrees. This paper develops high-frequency acquisition equipment for drilling parameters to realize millimeter-level data acquisition. Drilling experiments on jointed rock mass are conducted under conditions corresponding to joint opening degrees of 1 mm, 3 mm, and 5 mm. The relationships among joint opening degree, drilling parameters, and the width of the rock failure region are investigated. A joint opening degree detection algorithm is proposed based on the drilling parameters and a moving average filter. The results indicate that the curves of penetration velocity and rotary pressure along the drilling direction exhibit a three-segment distribution, i.e., "stable segment-adjustment segment-stable segment". The variation curves of the drilling parameters display a "velocity mountain" and a "pressure valley" in the failure region. The relative errors in joint opening degree estimation based on penetration velocity and rotary pressure range from 3.4% to 32% and from 6% to 35%, with average relative errors of 12.95% and 16.24%, respectively.
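A hedged sketch of the detection idea: smooth a drilling-parameter curve with a moving-average filter and measure the width of the region where it departs from its stable-segment level (the "velocity mountain"); the threshold factor, window length, and the use of this width as a proxy for the opening degree are assumptions, not the paper's calibrated algorithm.

```python
import numpy as np

def moving_average(x, window=5):
    """Simple moving-average filter used to smooth the drilling-parameter curve."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def elevated_region_width(depth_mm, velocity, k=1.5):
    """Width of the region where the smoothed penetration velocity rises above
    k times its stable-segment level (the 'velocity mountain').
    Mapping this width to the joint opening degree is where the paper's
    experimental calibration would enter."""
    smooth = moving_average(velocity)
    baseline = np.median(smooth)          # stable-segment level
    idx = np.flatnonzero(smooth > k * baseline)
    return 0.0 if idx.size == 0 else depth_mm[idx[-1]] - depth_mm[idx[0]]

# Synthetic drilling log: 1 mm sampling, a joint around 150 mm depth.
depth = np.arange(0.0, 300.0, 1.0)
vel = np.full_like(depth, 2.0)
vel[148:153] = 6.0                        # velocity mountain at the joint
vel += np.random.default_rng(2).normal(0.0, 0.1, depth.size)
print(f"width of the elevated-velocity region: {elevated_region_width(depth, vel):.1f} mm")
```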
Efficient and effective data acquisition is of theoretical and practical importance in WSN applications, because data measured and collected by WSNs are often unreliable, being accompanied by noise and errors, missing values, or inconsistent data. Motivated by fog computing, which focuses on how to effectively offload computation-intensive tasks from resource-constrained devices, this paper proposes a simple yet effective data acquisition approach capable of filtering abnormal data while meeting real-time requirements. Our method uses a cooperation mechanism that leverages both an architectural and an algorithmic approach. First, a sensor node with limited computing resources only detects and marks suspicious data using a lightweight algorithm. Second, the cluster head evaluates the suspicious data by referring to the data from the other sensor nodes in the same cluster and discards abnormal data directly. Third, the sink node fills in the discarded data with approximate values using a nearest-neighbor data supplement method. With this architecture, each node consumes only a small amount of computational resources, and the heavy computing load is distributed across several nodes. Simulation results show that our data acquisition method is effective in terms of real-time outlier filtering and computing overhead.
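The three-stage cooperation can be sketched roughly as follows; the z-score rule, tolerance values, and data layout are illustrative assumptions rather than the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
# readings[n, t]: value reported by node n at time t (5 nodes, 100 samples each)
readings = 25.0 + rng.normal(0.0, 0.3, size=(5, 100))
readings[2, 40] = 90.0                      # an obviously faulty sample

# Stage 1 -- sensor node: lightweight z-score check marks suspicious samples.
mu = readings.mean(axis=1, keepdims=True)
sd = readings.std(axis=1, keepdims=True) + 1e-9
suspicious = np.abs(readings - mu) > 3.0 * sd

# Stage 2 -- cluster head: a suspicious sample is discarded only if it also
# disagrees with the median of the nodes in the cluster at the same time step.
cluster_median = np.median(readings, axis=0, keepdims=True)
discarded = suspicious & (np.abs(readings - cluster_median) > 2.0)

# Stage 3 -- sink node: fill each discarded sample with its nearest accepted
# neighbour in time on the same node.
filled = readings.copy()
for n, t in zip(*np.nonzero(discarded)):
    good = np.flatnonzero(~discarded[n])
    filled[n, t] = readings[n, good[np.argmin(np.abs(good - t))]]

print("discarded samples:", int(discarded.sum()), "filled value:", round(filled[2, 40], 2))
```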
The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results show that this tool is very effective in extracting the required data from web pages.
The stability problem of power grids has become increasingly serious in recent years as the size of novel power systems increases. In order to improve and ensure the stable operation of the novel power system, this study proposes an artificial emotional lazy Q-learning method, which combines artificial emotion, lazy learning, and reinforcement learning for static security and stability analysis of power systems. Moreover, this study compares the analysis results of the proposed method with those of the small-disturbance method for a stand-alone power system, and verifies that the proposed lazy Q-learning method is able to effectively screen useful data for learning and improves the static security and stability of the novel power system more effectively than traditional proportional-integral-derivative (PID) control and Q-learning methods.
The target of this paper is the performance-based diagnostics of a gas turbine for the automated early detection of component malfunctions. The paper proposes a new combination of multiple methodologies for the performance-based diagnostics of single and multiple failures on a two-spool engine. The aim of this technique is to combine the strength of each methodology and provide a high success rate for single and multiple failures in the presence of measurement malfunctions. A combination of KF (Kalman Filter), ANN (Artificial Neural Network) and FL (Fuzzy Logic) is used in this research in order to improve the success rate, to increase the flexibility and the number of failures detected, and to combine the strengths of multiple methods into a more robust solution. The strength of the Kalman filter lies in the treatment of measurement noise, that of the artificial neural network in the simulation and prediction of reference and deteriorated performance profiles, and that of fuzzy logic in its categorization flexibility, which is used to quantify and classify the failures. In the area of GT (Gas Turbine) diagnostics, multiple failures in combination with measurement issues and the use of multiple methods for a two-spool industrial gas turbine engine have not been investigated extensively. This paper reports the key contribution of each component of the methodology and summarizes the results in terms of quantification and classification success rates. The methodology is tested for constant deterioration with increasing noise and for random deterioration. For random deterioration and a nominal noise of 0.4%, in particular, the quantification success rate is above 92.0%, while the classification success rate is above 95.1%. Moreover, the speed of the data processing (1.7 s/sample) proves the suitability of this methodology for online diagnostics.
This paper discusses a strategy for estimating Hammerstein nonlinear systems in the presence of measurement noise for industrial control by applying filtering and recursive approaches. The proposed Hammerstein nonlinear systems are made up of a neural fuzzy network (NFN) and a linear state-space model. The parameters of Hammerstein systems can be estimated by employing hybrid signals, which consist of step signals and random signals. First, based on the characteristic that step signals do not excite static nonlinear systems, that is, the intermediate variable of the Hammerstein system is a step signal with a different amplitude from the input, the unknown intermediate variables can be replaced by inputs, solving the problem of unmeasurable intermediate variable information. In the presence of step signals, the parameters of the state-space model are estimated using the recursive extended least squares (RELS) algorithm. Moreover, to effectively deal with the interference of measurement noise, a data filtering technique is introduced, and a filtering-based RELS is formulated for estimating the NFN by employing random signals. Finally, according to the structure of the Hammerstein system, the control system is designed by eliminating the nonlinear block, so that the resulting system is approximately equivalent to a linear system and can then be easily controlled by applying a linear controller. The effectiveness and feasibility of the developed identification and control strategy are demonstrated using two industrial simulation cases.
This paper introduces the reader to our Kalman filter developed for geodetic VLBI (very long baseline interferometry) data analysis. The focus lies on EOP (Earth Orientation Parameter) determination based on the Continuous VLBI Campaign 2014 (CONT14) data, but earlier CONT campaigns are analyzed as well. For validation and comparison purposes we use EOP determined with the classical LSM (least squares method), estimated from the same VLBI data set as the Kalman solution with daily resolution. To obtain higher-resolution EOP from LSM, we run solutions that yield hourly estimates for polar motion and dUT1 = Universal Time (UT1) minus Coordinated Universal Time (UTC). As an external validation data set we use a GPS (Global Positioning System) solution providing hourly polar motion results. Further, we describe our approach for determining the noise driving the Kalman filter. It has to be chosen carefully, since it can lead to a significant degradation of the results. We illustrate this issue in the context of the de-correlation of polar motion and nutation. Finally, we find that the agreement with respect to GPS can be improved by up to 50% using our filter compared to the LSM approach, reaching a precision similar to that of the GPS solution. In particular, the power of erroneous high-frequency signals can be reduced dramatically, opening up new possibilities for high-frequency EOP studies and investigations of the models involved in VLBI data analysis. We show that the Kalman filter is more than on par with the classical least squares method and that it is a valuable alternative, especially with the advent of the VLBI2010 Global Observing System and within the GGOS framework.
Bayesian estimation theory provides a general approach to state estimation for linear or nonlinear and Gaussian or non-Gaussian systems. In this study, we first explore two Bayesian-based methods, the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using a well-known nonlinear and non-Gaussian model (the Lorenz '63 model). The EAKF, which is a deterministic scheme of the ensemble Kalman filter (EnKF), performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter if the ensemble size is large enough to avoid filter degeneracy, and vice versa. The impacts of the probability density functions and effective ensemble sizes on assimilation performance are also explored. On the basis of the comparisons between the SIR-PF and the EAKF, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and the SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature, and performs better than the EAKF given a limited ensemble size.
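For orientation only, here is a generic SIR-PF analysis step (likelihood weighting followed by resampling) of the kind discussed above; the EAKPF itself would blend this with an EAKF update via a tuning parameter, which is not reproduced here, and the scalar observation model is an assumption.

```python
import numpy as np

def sir_update(particles, obs, obs_std, rng):
    """One analysis step of a sequential importance resampling particle filter:
    weight each particle by the Gaussian observation likelihood, then resample."""
    w = np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)          # effective ensemble size (degeneracy indicator)
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], n_eff

rng = np.random.default_rng(4)
particles = rng.normal(0.0, 2.0, size=500)   # prior ensemble (e.g. one Lorenz-63 coordinate)
posterior, n_eff = sir_update(particles, obs=1.3, obs_std=0.5, rng=rng)
print(f"posterior mean {posterior.mean():.2f}, effective size {n_eff:.0f} of 500")
```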
For name-based routing/switching in NDN, the key challenges are to manage large-scale forwarding tables, to look up long names of variable lengths, and to deal with frequent updates. Hashing associated with proper length detection is a straightforward yet efficient solution. A binary search strategy can reduce the number of hash detections required in the worst case. However, to ensure a correct search path in such a scheme, either backtrack searching or redundantly storing some prefixes is required, leading to performance or memory issues. In this paper, we study the binary search in depth and propose a novel mechanism to ensure a correct search path without either additional backtracking costs or redundant memory consumption. Along any binary search path, a Bloom filter is employed at each branching point to verify whether a given prefix is present, instead of storing that prefix there. In this way, we gain a significant optimization in memory efficiency at the cost of a Bloom filter check before each hash detection. Our evaluation experiments on both real-world and randomly synthesized data sets clearly demonstrate our advantages.
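A simplified sketch of the idea, assuming names are split into '/'-separated components: a per-length Bloom filter answers "does any stored prefix of this length exist?", steering a binary search over component counts, while a small auxiliary map (standing in for the FIB's own best-match bookkeeping) supplies the longest matching entry; the parameters, helper names, and exact division of work are assumptions, not the paper's data structure.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: k hashed probes into an m-bit array (false positives possible)."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)
    def _probes(self, s):
        for i in range(self.k):
            h = int.from_bytes(hashlib.sha256(f"{i}:{s}".encode()).digest()[:8], "big")
            yield h % self.m
    def add(self, s):
        for p in self._probes(s):
            self.bits[p >> 3] |= 1 << (p & 7)
    def __contains__(self, s):
        return all(self.bits[p >> 3] & (1 << (p & 7)) for p in self._probes(s))

def build(fib):
    """Per-level Bloom filters over every prefix of every FIB name, plus an
    auxiliary map from each such prefix to its longest FIB-entry ancestor."""
    blooms, bmp = {}, {}
    for name in fib:
        parts = name.strip("/").split("/")
        for i in range(1, len(parts) + 1):
            p = "/" + "/".join(parts[:i])
            blooms.setdefault(i, Bloom()).add(p)
            bmp[p] = p if p in fib else bmp.get("/" + "/".join(parts[:i - 1]))
    return blooms, bmp

def lpm(name, blooms, bmp):
    """Binary search on the number of components; the Bloom filter at each
    branching point decides whether to search longer or shorter prefixes."""
    parts, best = name.strip("/").split("/"), None
    lo, hi = 1, len(parts)
    while lo <= hi:
        mid = (lo + hi) // 2
        p = "/" + "/".join(parts[:mid])
        if mid in blooms and p in blooms[mid] and p in bmp:   # bmp lookup guards false positives
            best = bmp[p] or best
            lo = mid + 1
        else:
            hi = mid - 1
    return best

blooms, bmp = build({"/cn/edu", "/cn/edu/tsinghua/cs", "/com/example"})
print(lpm("/cn/edu/tsinghua/cs/www/index", blooms, bmp))   # /cn/edu/tsinghua/cs
```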
A brain-computer interface is a communication system that connects the brain with a computer (or other devices) but does not depend on the normal output pathways of the brain (i.e., peripheral nerves and muscles). The electro-oculogram is a dominant artifact which has a significant negative influence on further analysis of real electroencephalography data. This paper presents a data-adaptive technique for artifact suppression and brain wave extraction from electroencephalography signals to detect regional brain activities. An empirical mode decomposition based adaptive thresholding approach was employed here to suppress the electro-oculogram artifact. Fractional Gaussian noise was used to determine the threshold level, derived from the analyzed data without any training. The purified electroencephalography signal is composed of the brain waves, also called rhythmic components, which represent the brain activities. The rhythmic components were extracted from each electroencephalography channel using an adaptive Wiener filter with the original scale. The regional brain activities were mapped on the basis of the spatial distribution of the rhythmic components, and the results showed that different regions of the brain are activated in response to different stimuli. This research analyzed the activities of a single rhythmic component, alpha, with respect to different motor imaginations. The experimental results showed that the proposed method is very efficient in artifact suppression and in identifying individual motor imagery based on the activities of the alpha component.
In order to improve the accuracy of free-flight conflict detection and reduce the false alarm rate, an improved flight conflict detection algorithm is proposed based on the Gauss-Hermite particle filter (GHPF). The algorithm improves the traditional flight conflict detection method in two aspects: (i) new observation data are integrated into the system state transition probability, and the Gauss-Hermite filter (GHF) is used for generating the importance density function; (ii) the GHPF is used for flight trajectory prediction and flight conflict probability calculation. The experimental results show that the accuracy of conflict detection and tracking with the GHPF is better than that with the standard particle filter. The detected conflict probability is more precise with the GHPF, and the GHPF is suitable for early free-flight conflict detection.
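The conflict-probability part can be illustrated with a plain Monte Carlo estimate over sampled straight-line trajectories; this is not the GHPF itself, and the separation minimum, covariance, and state layout below are assumptions for illustration.

```python
import numpy as np

def conflict_probability(state_a, state_b, cov, horizon_s, sep_km=9.26, n=10_000, rng=None):
    """Monte Carlo estimate of the probability that two aircraft come closer than
    the separation minimum within the look-ahead horizon, assuming straight-line
    motion with Gaussian state uncertainty (position in km, velocity in km/s)."""
    rng = rng or np.random.default_rng(0)
    t = np.linspace(0.0, horizon_s, 50)

    def sample_tracks(state):
        s = rng.multivariate_normal(state, cov, size=n)          # (n, 4): x, y, vx, vy
        return s[:, None, :2] + s[:, None, 2:] * t[None, :, None]

    dist = np.linalg.norm(sample_tracks(state_a) - sample_tracks(state_b), axis=2)
    return np.mean(dist.min(axis=1) < sep_km)

cov = np.diag([1.0, 1.0, 1e-4, 1e-4])
p = conflict_probability([0.0, 0.0, 0.25, 0.0], [150.0, 5.0, -0.25, 0.0], cov, horizon_s=400.0)
print(f"conflict probability ~ {p:.2f}")
```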
Generally, predicting whether an item will be liked or disliked by active users, and how much an item will be liked, is a main task of collaborative filtering systems or recommender systems. Recently, predicting the items most likely to be bought by a target user, which is a subproblem of the ranking problem in collaborative filtering, has become an important task. Traditionally, the prediction uses user-item co-occurrence data based on users' buying behaviors. However, it is challenging to achieve good prediction performance using traditional methods based on single-domain information, due to the extreme sparsity of the buying matrix. In this paper, we propose a novel method called the preference transfer model for effective cross-domain collaborative filtering. Based on the preference transfer model, a common basis item-factor matrix and different user-factor matrices are factorized. Each user-factor matrix can be viewed as the user preference in terms of browsing behavior or buying behavior. Then, the two user-factor matrices can be used to construct a so-called 'preference dictionary' that can discover in advance the consistent preferences of users, from their browsing behaviors to their buying behaviors. Experimental results demonstrate that the proposed preference transfer model outperforms the other methods on the Alibaba Tmall data set provided by the Alibaba Group.
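A minimal sketch of the shared-factor idea, assuming a gradient-descent factorization in which both behaviour matrices share one item-factor matrix V while each keeps its own user-factor matrix; the learning rate, regularization, and synthetic data are assumptions, and the paper's 'preference dictionary' construction is not reproduced here.

```python
import numpy as np

def preference_transfer(R_browse, R_buy, k=8, lr=0.01, reg=0.1, iters=300, seed=0):
    """Joint factorization: both domains share one item-factor matrix V, while
    browsing and buying each get their own user-factor matrix."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R_browse.shape
    U_browse = rng.normal(0, 0.1, (n_users, k))
    U_buy = rng.normal(0, 0.1, (n_users, k))
    V = rng.normal(0, 0.1, (n_items, k))
    for _ in range(iters):
        for R, U in ((R_browse, U_browse), (R_buy, U_buy)):
            E = R - U @ V.T                     # residual of this domain
            U += lr * (E @ V - reg * U)
            V += lr * (E.T @ U - reg * V)       # the shared matrix is updated by both domains
    return U_browse, U_buy, V

rng = np.random.default_rng(1)
R_browse = (rng.random((50, 40)) < 0.3).astype(float)   # dense-ish browsing matrix
R_buy = (rng.random((50, 40)) < 0.05).astype(float)     # sparse buying matrix
U_b, U_y, V = preference_transfer(R_browse, R_buy)
scores = U_y @ V.T                                       # predicted buying propensity
print("top items for user 0:", np.argsort(-scores[0])[:5])
```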
With the vigorous expansion of nonlinear adaptive filtering with real-valued kernel functions, complex kernel adaptive filtering algorithms have also been proposed to solve the complex-valued nonlinear problems arising in almost all real-world applications. This paper first presents two schemes of complex Gaussian kernel-based adaptive filtering algorithms to illustrate their respective characteristics. Then the theoretical convergence behavior of the complex Gaussian kernel least mean square (LMS) algorithm is studied using the fixed dictionary strategy. The simulation results demonstrate that the theoretical curves predicted by the derived analytical models consistently coincide with the Monte Carlo simulation results, in both the transient and steady-state stages, for the two introduced complex Gaussian kernel LMS algorithms using non-circular complex data. The analytical models can be regarded as a theoretical tool for evaluating and comparing the mean square error (MSE) performance of complex kernel LMS (KLMS) methods according to the specified kernel bandwidth and dictionary length.
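A rough sketch of a kernel LMS filter on complex data, here using a Gaussian kernel evaluated on the squared norm of the complex difference (one of several complexification routes, not necessarily the paper's pure complex Gaussian kernel); the step size, bandwidth, growing dictionary, and toy channel are assumptions.

```python
import numpy as np

def gauss_kernel(x, y, sigma=2.0):
    """Gaussian kernel on complex vectors via the squared Euclidean norm of the difference."""
    d = x - y
    return np.exp(-np.real(np.vdot(d, d)) / (2 * sigma ** 2))

def klms(inputs, desired, eta=0.5, sigma=2.0):
    """Kernel LMS on complex-valued data: the filter output is a kernel expansion
    over past inputs, and each step appends the scaled error as a new coefficient."""
    centers, coeffs, errors = [], [], []
    for u, d in zip(inputs, desired):
        y = sum(a * gauss_kernel(c, u, sigma) for a, c in zip(coeffs, centers))
        e = d - y
        errors.append(e)
        centers.append(u)
        coeffs.append(eta * e)
    return np.array(errors)

# Identify a mildly nonlinear channel from noisy, non-circular complex data.
rng = np.random.default_rng(5)
x = rng.normal(size=300) + 1j * 0.3 * rng.normal(size=300)   # unequal real/imag power: non-circular
u = np.stack([x[:-1], x[1:]], axis=1)                         # length-2 input vectors
d = 0.8 * u[:, 0] + 0.2 * u[:, 1] ** 2 \
    + 0.05 * (rng.normal(size=299) + 1j * rng.normal(size=299))
mse = np.abs(klms(u, d)) ** 2
print(f"MSE first 50 samples: {mse[:50].mean():.3f}   last 50 samples: {mse[-50:].mean():.3f}")
```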
Advances in mobile technology mean that most people have their own mobile devices, such as smartphones, which contain various sensors. People produce their own personal data or collect data on the surrounding environment with their mobile devices at every moment. Recently, a broad spectrum of studies on Participatory Sensing, the concept of extracting new knowledge from a mass of data sent by participants, have been conducted. Data collection is one of the base technologies for Participatory Sensing, so networking and data filtering techniques for collecting large amounts of data are an area of strong research interest. In this paper, we propose a data collection model in a hybrid network for participatory sensing. The proposed model classifies data into two types and decides the networking form and data filtering method based on the data type, to decrease the load on the data center and improve transmission speed.
The problem of forming validation regions, or gates, for new sensor measurements obtained when tracking targets in clutter is considered. Since the gate size is an integral part of the data association filter, this paper describes a way of estimating the gate size via the performance of the data association filter. That is, the gate size can be estimated by looking for the optimal performance of the data association filter. Simulations show that this estimation method of the gate size offers advantages over the common and classical estimation methods, especially in a heavy clutter and/or false alarm environment.
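For background, the common classical gate that the abstract compares against is the elliptical (chi-square) validation region based on the Mahalanobis distance of the innovation; a minimal sketch, assuming scipy is available, with illustrative numbers:

```python
import numpy as np
from scipy.stats import chi2

def in_gate(z, z_pred, S, gate_prob=0.99):
    """Elliptical validation gate: accept measurement z if its Mahalanobis distance
    from the predicted measurement z_pred (innovation covariance S) is below the
    chi-square quantile for the chosen gate probability."""
    nu = z - z_pred
    d2 = nu @ np.linalg.solve(S, nu)
    return d2 <= chi2.ppf(gate_prob, df=len(z))

S = np.array([[4.0, 0.5], [0.5, 2.0]])
print(in_gate(np.array([3.0, 1.0]), np.array([0.0, 0.0]), S))   # True: inside the gate
print(in_gate(np.array([9.0, 6.0]), np.array([0.0, 0.0]), S))   # False: outside the gate
```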
Using the Ensemble Adjustment Kalman Filter (EAKF), two types of ocean satellite datasets were assimilated into the First Institute of Oceanography Earth System Model (FIO-ESM), v1.0. One control experiment without data assimilation and four assimilation experiments were conducted. All the experiments were ensemble runs over a 1-year period, and each ensemble member started from different initial conditions. One assimilation experiment was designed to assimilate sea level anomaly (SLA); another, to assimilate sea surface temperature (SST); and the other two assimilation experiments were designed to assimilate both SLA and SST but in different orders. To examine the effects of data assimilation, all the results were compared with the EN3 objective analysis dataset. Different from an ocean model without coupling, the momentum and heat fluxes in FIO-ESM are calculated via air-sea coupling, which makes the relations among variables closer to reality. The outputs after the assimilation of satellite data were improved on the whole, especially at depths shallower than 1000 m. The effects of assimilating different kinds of satellite datasets were somewhat different: the improvement due to SST assimilation was greater near the surface, while the improvement due to SLA assimilation was relatively large in the subsurface. The results after assimilating both SLA and SST were much better than those assimilating only one kind of dataset, but the difference due to the assimilation order of the two kinds of datasets was not significant.
In this paper, we put forward a new method to reduce the amount of calculation for the gain matrix of the Kalman filter in data assimilation. We rewrite the vector describing the total state variables with two vectors whose dimensions are small, and thus obtain the main parts and the trivial parts of the state variables. On the basis of the rewritten formula, we not only develop a reduced Kalman filter scheme, but also obtain the transition equations for the truncation errors, with which the validity of the main parts acting for the total state variables can be evaluated quantitatively. The error transition equations thus offer indirect testimony to the rationality of the main parts.
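A hedged sketch of the reduction idea: if the state is represented by a small number of retained directions (the "main parts"), the Kalman gain can be computed with low-dimensional algebra and lifted back to the full space; the choice of basis and the toy dimensions below are assumptions, not the paper's decomposition or its truncation-error equations.

```python
import numpy as np

def reduced_gain(P, H, R, basis):
    """Kalman gain computed in a reduced space: the state is approximated by its
    'main part' x ~ basis @ a (columns of `basis` span the retained directions),
    so the expensive full-dimension gain is replaced by a low-dimensional one."""
    Pa = basis.T @ P @ basis                  # reduced forecast covariance
    Ha = H @ basis                            # observation operator on the main part
    Ka = Pa @ Ha.T @ np.linalg.inv(Ha @ Pa @ Ha.T + R)
    return basis @ Ka                         # lift the gain back to the full state space

# Toy setup: 500-dimensional state, 20 retained directions, 30 observations.
rng = np.random.default_rng(6)
L = rng.normal(size=(500, 20))
P = L @ L.T + 0.01 * np.eye(500)              # low-rank-dominated covariance
H = rng.normal(size=(30, 500))
R = 0.5 * np.eye(30)
basis, _ = np.linalg.qr(L)                    # orthonormal basis spanning the main parts
K = reduced_gain(P, H, R, basis)
print(K.shape)                                # (500, 30), obtained via reduced-dimension algebra
```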