Cluster analysis is a crucial technique in unsupervised machine learning, pattern recognition, and data analysis. However, current clustering algorithms suffer from the need for manual parameter determination, low accuracy, and inconsistent performance with respect to data size and structure. To address these challenges, a novel clustering algorithm called the fully automated density-based clustering method (FADBC) is proposed. The FADBC method consists of two stages: parameter selection and cluster extraction. In the first stage, a proposed method extracts optimal parameters for the dataset, including the epsilon size and the minimum-number-of-points threshold. These parameters are then used in a density-based technique that scans each point in the dataset and evaluates neighborhood densities to find clusters. The proposed method was evaluated on different benchmark datasets and metrics, and the experimental results demonstrate its competitive performance without requiring manual inputs. The results show that the FADBC method outperforms well-known clustering methods such as the agglomerative hierarchical method, k-means, spectral clustering, DBSCAN, FCDCSD, Gaussian mixtures, and density-based spatial clustering methods, and it handles datasets of varied size and structure well.
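The two-stage idea described in this abstract (derive epsilon automatically from the data, then run a density-based scan) can be sketched roughly as follows. This is an illustrative reconstruction, not the FADBC authors' implementation; in particular, taking the mean k-distance as epsilon is an assumption:

```python
import numpy as np

def auto_eps(X, min_pts):
    """Stage 1 (assumed): epsilon = mean distance to the min_pts-th nearest neighbour."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    kth = np.sort(d, axis=1)[:, min_pts]  # column 0 is the point itself
    return kth.mean()

def density_scan(X, eps, min_pts):
    """Stage 2: minimal DBSCAN-style scan; grow clusters from core points, noise = -1."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    labels = np.full(len(X), -1)
    cid = 0
    for i in range(len(X)):
        if labels[i] != -1 or (d[i] <= eps).sum() < min_pts:
            continue
        labels[i] = cid
        stack = [i]
        while stack:
            j = stack.pop()
            if (d[j] <= eps).sum() >= min_pts:  # expand only through core points
                for k in np.where(d[j] <= eps)[0]:
                    if labels[k] == -1:
                        labels[k] = cid
                        stack.append(k)
        cid += 1
    return labels
```

On two well-separated blobs, the automatically chosen epsilon is small enough to keep them in separate clusters without any user-supplied radius.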
In recent years, there has been a concerted effort to improve anomaly detection techniques, particularly in the context of high-dimensional, distributed clinical data. Analysing patient data within clinical settings reveals a pronounced focus on refining diagnostic accuracy, personalising treatment plans, and optimising resource allocation to enhance clinical outcomes. Nonetheless, this domain faces unique challenges, such as irregular data collection, inconsistent data quality, and patient-specific structural variations. This paper proposes a novel hybrid approach that integrates heuristic and stochastic methods for anomaly detection in patient clinical data to address these challenges. The strategy combines HPO-based optimal Density-Based Spatial Clustering of Applications with Noise for clustering patient exercise data, facilitating efficient anomaly identification. Subsequently, a stochastic method based on the Interquartile Range filters unreliable data points, ensuring that medical tools and professionals receive only the most pertinent and accurate information. The primary objective of this study is to equip healthcare professionals and researchers with a robust tool for managing extensive, high-dimensional clinical datasets, enabling effective isolation and removal of aberrant data points. Furthermore, a sophisticated regression model has been developed using Automated Machine Learning (AutoML) to assess the impact of the ensemble abnormal pattern detection approach. Various statistical error estimation techniques validate the efficacy of the hybrid approach alongside AutoML. Experimental results show that implementing this innovative hybrid model on patient rehabilitation data leads to a notable enhancement in AutoML performance, with an average improvement of 0.041 in the R² score, surpassing the effectiveness of traditional regression models.
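The Interquartile Range filtering step mentioned above follows the standard Tukey-fence rule. A minimal sketch, assuming the conventional k = 1.5 multiplier since the paper's exact thresholds are not stated here:

```python
import numpy as np

def iqr_filter(values, k=1.5):
    """Keep points inside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    mask = (values >= lo) & (values <= hi)
    return values[mask], values[~mask]
```

Applied to a batch of sensor readings, the second return value is the set of points flagged as unreliable and removed before downstream regression.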
Ball milling is widely used in industry to mill particulate material. The primary purpose of this process is to attain an appropriate product size with the least possible energy consumption. The process is also extensively utilised in pharmaceuticals for the comminution of excipients or drugs. Surprisingly, little is known about the mechanism of size reduction in ball mills. Traditional prediction approaches are not deemed useful for providing significant insights into the operation or facilitating radical step changes in performance. Therefore, the discrete element method (DEM) has been used in this paper as a computational modelling approach. In previous research, DEM has been applied to simulate breakage behaviour through the impact energy of all ball collisions as the driving force for fracturing. However, the nature of pharmaceutical material fragmentation during ball milling is more complex. Suitable functional equations linking broken media and applied energy do not consider the collision of particulate media of different shapes, or collisions of particulate media (such as granules) with balls and the rotating mill drum. This could have a significant impact on fragmentation. Therefore, this paper aimed to investigate the fragmentation of bounded particles into DEM granules of different shape/size during the ball milling process. A systematic study was undertaken to explore the effect of milling speed on breakage behaviour. In addition, a combination of a density-based clustering method and the discrete element method was employed to numerically investigate the number and size of the fragments generated during the ball milling process over time. It was discovered that ball collisions increased proportionally with rotation speed until reaching the critical rotation speed. Consequently, the results illustrate that mill power increased correspondingly with rotation speed. The cataracting motion of mill material together with the balls was identified as the most effective regime for fragmentation, and fewer breakage events occurred for centrifugal motion. Higher quantities of fines were produced in each batch as milling speed increased, with fewer grain fragments. Moreover, the relationship between the number of produced fragments and milling speed at the end of the process exhibited a linear tendency.
Detection of Autism Spectrum Disorder (ASD) is a crucial area of research, representing a foundational aspect of psychological studies. The advancement of technology and the widespread adoption of machine learning methodologies have brought significant attention to this field in recent years. Interdisciplinary efforts have further propelled research into detection methods. Consequently, this study aims to contribute to both the fields of psychology and computer science. Specifically, the goal is to apply machine learning techniques to limited data for the detection of Autism Spectrum Disorder. This study is structured into two distinct phases: data preprocessing and classification. In the data preprocessing phase, four datasets (Toddler, Children, Adolescent, and Adult) were converted into numerical form, adjusted as necessary, and subsequently clustered. Clustering was performed using six different methods: K-means, agglomerative, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), mean shift, spectral, and Birch. In the second phase, the clustered ASD data were classified. The model's accuracy was assessed using 5-fold cross-validation to ensure robust evaluation. In total, ten distinct machine learning algorithms were employed. The findings indicate that all clustering methods demonstrated success with various classifiers. Notably, the K-means algorithm emerged as particularly effective, achieving consistent and significant results across all datasets. This study is expected to serve as a guide for improving ASD detection performance, even with minimal data availability.
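The 5-fold cross-validation protocol used in the second phase can be illustrated with a deliberately simple nearest-centroid classifier standing in for the ten algorithms actually benchmarked; the data and classifier here are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit: one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    """Predict the class of the closest centroid."""
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def cv5_accuracy(X, y, seed=0):
    """Shuffle once, split into 5 folds, average held-out accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    accs = []
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        model = nearest_centroid_fit(X[train], y[train])
        accs.append((nearest_centroid_predict(model, X[test]) == y[test]).mean())
    return float(np.mean(accs))
```

Every sample is held out exactly once, so the reported score reflects generalisation rather than training fit.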
We employed random distributions and gradient descent methods for the Generator Coordinate Method (GCM) to identify effective basis wave functions, taking the halo nuclei ^(6)He and ^(6)Li as examples. By comparing the ground state (0^(+)) energy of ^(6)He and the excited state (0^(+)) energy of ^(6)Li calculated with various random distributions and manually selected generator coordinates, we found that the heavy-tail characteristic of the logistic distribution better describes the features of the halo nuclei. Subsequently, the Adam algorithm from machine learning was applied to optimize the basis wave functions, indicating that a limited number of basis wave functions can approximate the converged values. These results offer empirical insights for selecting basis wave functions and contribute to the broader application of machine learning methods in predicting effective basis wave functions.
The use of metal oxides has been extensively documented in the literature and applied in a variety of contexts, including but not limited to energy storage, chemical sensors, and biomedical applications. One of the most significant applications of metal oxides is heterogeneous catalysis, which represents a pivotal technology in industrial production on a global scale. Catalysts serve as the primary enabling agents for chemical reactions, and among the plethora of catalysts, metal oxides including magnesium oxide (MgO), ceria (CeO_(2)), and titania (TiO_(2)) have been identified as particularly effective in catalyzing a variety of reactions [1]. Theoretical calculations based on density functional theory (DFT) and a multitude of other quantum chemistry methods have proven invaluable in elucidating the mechanisms of metal-oxide-catalyzed reactions, thereby facilitating the design of high-performance catalysts [2].
Finding clusters based on density represents a significant class of clustering algorithms. These methods can discover clusters of various shapes and sizes. The most studied algorithm in this class is the Density-Based Spatial Clustering of Applications with Noise (DBSCAN). It identifies clusters by grouping densely connected objects into one group and discarding noise objects. It requires two input parameters: epsilon (a fixed neighborhood radius) and MinPts (the lowest number of objects within epsilon). However, it cannot handle clusters of varying densities, since it uses a global value for epsilon. This article proposes an adaptation of the DBSCAN method so that it can discover clusters of varied densities while reducing the required number of input parameters to one. The only user input in the proposed method is MinPts; epsilon is computed automatically from statistical information of the dataset. The proposed method finds the core distance for each object in the dataset, takes the average of these distances as the first value of epsilon, and finds the clusters satisfying this density level. The remaining unclustered objects are then clustered using a new value of epsilon that equals the average core distance of the unclustered objects. This process continues until all objects have been clustered or the remaining unclustered objects number less than 0.006 of the dataset's size. Benchmark datasets were used to evaluate the effectiveness of the proposed method, which produced promising results. Practical experiments demonstrate the outstanding ability of the proposed method to detect clusters of different densities even when there is no separation between them. The accuracy of the method ranges from 92% to 100% on the experimented datasets.
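The iterative epsilon-update loop described in this abstract (cluster at the mean core distance, then recompute epsilon over whatever remains unclustered) can be sketched as follows; the single-link region growing below is a simplification of the full DBSCAN expansion and should not be read as the paper's exact procedure:

```python
import numpy as np

def multi_density_cluster(X, min_pts, stop_frac=0.006):
    n = len(X)
    labels = np.full(n, -1)
    next_id = 0
    remaining = np.arange(n)
    while len(remaining) > max(stop_frac * n, min_pts):
        P = X[remaining]
        d = np.sqrt(((P[:, None] - P[None, :]) ** 2).sum(-1))
        core = np.sort(d, axis=1)[:, min_pts]   # core distance of each point
        eps = core.mean()                       # this round's density level
        seen = np.zeros(len(P), bool)
        clustered_any = False
        for s in range(len(P)):
            if seen[s] or core[s] > eps:        # seed only from dense points
                continue
            comp, stack = [], [s]
            seen[s] = True
            while stack:                        # simplified single-link growth
                j = stack.pop()
                comp.append(j)
                for k in np.where((d[j] <= eps) & ~seen)[0]:
                    seen[k] = True
                    stack.append(k)
            if len(comp) >= min_pts:
                labels[remaining[comp]] = next_id
                next_id += 1
                clustered_any = True
        if not clustered_any:
            break                               # no denser structure left
        remaining = np.where(labels == -1)[0]
    return labels
```

Each pass consumes the clusters at the current average density level; the next pass relaxes epsilon to the sparser leftover points, which is how clusters of different densities are recovered with MinPts as the only input.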
We propose a new clustering algorithm that helps researchers to quickly and accurately analyze data. We call this algorithm the Combined Density-based and Constraint-based Algorithm (CDC). CDC consists of two phases. In the first phase, CDC employs the idea of density-based clustering to split the original data into a number of fragmented clusters, while cutting off noise and outliers. In the second phase, CDC employs the concept of the K-means clustering algorithm to select a larger cluster as the center; the larger cluster then merges smaller clusters that satisfy certain constraint rules. Because clusters are merged around the center cluster, the clustering results show high accuracy. Moreover, CDC reduces the calculations and speeds up the clustering process. In this paper, the accuracy of CDC is evaluated and compared with those of K-means, hierarchical clustering, and the genetic clustering algorithm (GCA) proposed in 2004. Experimental results show that CDC has better performance.
Clustering evolving data streams must be performed in limited time and with reasonable quality. Existing micro-cluster-based methods do not consider the distribution of data points inside the micro cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on two-phase clustering. The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro clusters. Then, the leader centers are sent to the offline phase to form the final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease the time complexity of the clustering while maintaining cluster quality. A pruning strategy is also used to separate real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method.
This paper presents an evaluation method for the entropy-weighting of wind power clusters that comprehensively evaluates the allocation problems of wind power clusters by considering the correlation between indicators and the dynamic performance of weight changes. A dynamic layered sorting allocation method is also proposed. The proposed evaluation method considers the power-limiting degree of the last cycle, the adjustment margin, and volatility. It uses the theory of weight variation to update the entropy weight coefficients of each indicator in real time, and then performs a fuzzy evaluation based on the membership function to obtain intuitive comprehensive evaluation results. A case study of a large-scale wind power base in Northwest China was conducted. The proposed evaluation method is compared with fixed-weight entropy and principal component analysis methods. The results show that the three scoring trends are the same, and that the proposed evaluation method is closer to the average level of the latter two, demonstrating higher accuracy. The proposed allocation method can reduce the number of adjustments made to wind farms, which is significant for the allocation and evaluation of wind power clusters.
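The entropy-weighting step underlying this evaluation can be sketched in its textbook form; positive, benefit-type indicators are assumed, and the paper's real-time dynamic weight updating is omitted:

```python
import numpy as np

def entropy_weights(X):
    """X: n samples x m indicators, all positive, larger = better."""
    P = X / X.sum(axis=0)                       # share of each sample per indicator
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)          # entropy per indicator, in [0, 1]
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()                          # normalized weights
```

An indicator that is identical across all samples has maximum entropy and therefore receives (essentially) zero weight, which is exactly why entropy weighting rewards discriminative indicators.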
High-fidelity analysis models, which are beneficial to improving design quality, have been more and more widely utilized in modern engineering design optimization problems. However, high-fidelity analysis models are so computationally expensive that the time required for design optimization is usually unacceptable. To improve the efficiency of optimization involving high-fidelity analysis models, surrogates can be applied to approximate the computationally expensive models, greatly reducing the computation time. An efficient heuristic global optimization method using adaptive radial basis functions (RBF) based on fuzzy clustering (ARFC) is proposed. In this method, a novel algorithm of maximin Latin hypercube design using successive local enumeration (SLE) is employed to obtain sample points with good space-filling and projective uniformity properties, which greatly benefits metamodel accuracy. The RBF method is adopted for constructing the metamodels, and as the number of sample points increases the approximation accuracy of the RBF is gradually enhanced. The fuzzy c-means clustering method is applied to identify reduced attractive regions in the original design space. Numerical benchmark examples are used to validate the performance of ARFC. The results demonstrate that for most application examples the global optima are effectively obtained, and comparison with the adaptive response surface method (ARSM) proves that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum. This method improves the efficiency and global convergence of optimization problems, and gives a new optimization strategy for engineering design optimization problems involving computationally expensive models.
The Tarq geochemical 1:100,000 sheet is located in Isfahan province and was investigated by Iran's Geological and Explorations Organization using stream sediment analyses. The area has stratigraphy of Precambrian to Quaternary rocks and is located in the Central Iran zone. Given the signs of gold mineralization in this area, it is necessary to identify its important mineral zones. Information is therefore needed about the relationships among gold, arsenic, and antimony, and about monitoring these elements relative to each other, in order to determine the extent of geochemical halos and to estimate the grade. Therefore, the well-known and useful K-means method is used in the present study for monitoring the elements; this is a clustering method based on minimizing the total Euclidean distances of each sample from the center of the class to which it is assigned. In this research, the clustering quality function and the utility rate of the sample in the desired cluster (S(i)) have been used to determine the optimum number of clusters. Finally, with regard to the cluster centers and the results, equations were used to predict the amount of gold based on four parameters: arsenic grade, antimony grade, and the length and width of the sampling points.
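Selecting the number of clusters with a per-sample quality function such as S(i) can be illustrated with k-means plus the mean silhouette score; the deterministic farthest-point seeding is an assumption added to keep the sketch reproducible, not part of the study's method:

```python
import numpy as np

def far_init(X, k):
    """Deterministic seeding: start at point 0, then repeatedly take the farthest point."""
    idx = [0]
    for _ in range(k - 1):
        d = ((X[:, None] - X[idx][None]) ** 2).sum(-1).min(axis=1)
        idx.append(int(d.argmax()))
    return X[idx].copy()

def kmeans(X, k, iters=100):
    """Lloyd's iterations from farthest-point seeds."""
    C = far_init(X, k)
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        newC = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return lab

def mean_silhouette(X, lab):
    """Mean silhouette: s(i) = (b - a) / max(a, b), singletons score 0."""
    d = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    n, s = len(X), []
    for i in range(n):
        same = lab == lab[i]
        if same.sum() == 1:
            s.append(0.0)
            continue
        a = d[i, same & (np.arange(n) != i)].mean()
        b = min(d[i, lab == c].mean() for c in set(lab.tolist()) if c != lab[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))
```

Running k-means for a range of k and keeping the k with the highest mean silhouette mirrors the S(i)-based selection described in the abstract.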
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on data of diverse structures, such as non-hyperspherical data, data with noise, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. Results of experiments with synthetic and real data show that the FKCM clustering algorithm is more universal and can effectively perform unsupervised analysis of datasets with varied structures, in contrast to the FCM algorithm. Kernel-based clustering can thus be expected to be an important research direction in fuzzy clustering analysis.
Open clusters (OCs) serve as invaluable tracers for investigating the properties and evolution of stars and galaxies. Despite recent advancements in machine learning clustering algorithms, accurately discerning such clusters remains challenging. We revisited the 3013 samples generated with a hybrid clustering algorithm of FoF and pyUPMASK. A multi-view clustering (MvC) ensemble method was applied, which analyzes each member star of the OC from three perspectives (proper motion, spatial position, and a composite view) before integrating the clustering outcomes to deduce more reliable cluster memberships. Based on the MvC results, we further excluded cluster candidates with fewer than ten member stars and obtained 1256 OC candidates. After isochrone fitting and visual inspection, we identified 506 candidate OCs in the Milky Way. In addition to the 493 previously reported candidates, we finally discovered 13 high-confidence new candidate clusters.
The selection of refracturing candidates is one of the most important jobs faced by oilfield engineers. However, due to complicated multi-parameter relationships and their combined influence, the selection of refracturing candidates is often very difficult. In this paper, a novel approach combining data analysis techniques and fuzzy clustering is proposed to select refracturing candidates. First, the analysis techniques were used to quantitatively calculate the weight coefficients and determine the key factors. Then, an idealized refracturing well was established by considering the main factors, and fuzzy clustering was applied to evaluate refracturing potential. Finally, reservoir numerical simulation was used to further evaluate the reservoir energy and material basis of the optimum refracturing candidates. The hybrid method has been successfully applied to a tight oil reservoir in China. The average steady production was 15.8 t/d after refracturing treatment, a significant increase compared with the previous status. The research results can effectively guide the development of tight oil and gas reservoirs.
In order to improve the accuracy and efficiency of 3D model retrieval, a method based on the affinity propagation clustering algorithm is proposed. Firstly, a projection ray-based method is proposed to improve the feature extraction efficiency of 3D models. Based on the relationship between a model and its projection, the intersection in 3D space is transformed into an intersection in 2D space, which reduces the number of intersections and improves the efficiency of the extraction algorithm. In feature extraction, the multi-layer spheres method is analyzed; the two-layer spheres method makes the feature vector more accurate and improves retrieval precision. Secondly, Semi-supervised Affinity Propagation (S-AP) clustering is utilized because it can be applied to different cluster structures. The S-AP algorithm is adopted to find the center models, from which the center model collection is built. During retrieval, the collection is used to classify the query model into the corresponding model base, and the most similar model is then retrieved from that model base. Finally, 75 sample models from the Princeton library were selected for the experiment, and 36 models were used for the retrieval test. The results validate that the proposed method outperforms the original method, and the retrieval precision and recall ratios are improved effectively.
Knowledge of bubble profiles in gas-liquid two-phase flows is crucial for analyzing kinetic processes such as heat and mass transfer, and this knowledge is contained in field data obtained by surface-resolved computational fluid dynamics (CFD) simulations. To obtain this information, an efficient bubble profile reconstruction method based on an improved agglomerative hierarchical clustering (AHC) algorithm is proposed in this paper. The reconstruction method features a binary space division preprocessing step, which aims to reduce the computational complexity; an adaptive linkage criterion, which guarantees the applicability of the AHC algorithm when dealing with datasets involving either non-uniform or distorted grids; and a stepwise execution strategy, which enables the separation of attached bubbles. To illustrate and verify this method, it was applied to three datasets, two of them with pre-specified spherical bubbles and the other obtained by a surface-resolved CFD simulation. Application results indicate that the proposed method is effective even when the data include non-uniformity and distortion.
To make the quantitative results of nuclear magnetic resonance (NMR) transverse relaxation (T_2) spectra reflect the type and pore structure of a reservoir more directly, an unsupervised clustering method was developed to obtain quantitative pore structure information from the NMR T_2 spectra based on the Gaussian mixture model (GMM). First, we conducted principal component analysis on the T_2 spectra to reduce the data dimensionality and the dependence among the original variables. Second, the dimension-reduced data were fitted using the GMM probability density function, and the model parameters and optimal clustering numbers were obtained according to the expectation-maximization algorithm and the change of the Akaike information criterion. Finally, the T_2 spectrum features and pore structure types of the different clustering groups were analyzed and compared with the T_2 geometric mean and T_2 arithmetic mean. The effectiveness of the algorithm has been verified by numerical simulation and field NMR logging data. The research shows that the clustering results based on the GMM method correlate well with the shape and distribution of the T_2 spectrum, pore structure, and petroleum productivity, providing a new means for quantitative identification of pore structure, reservoir grading, and oil and gas productivity evaluation.
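The GMM-plus-information-criterion workflow can be reproduced with scikit-learn; the synthetic two-group data below merely stand in for the dimension-reduced T_2 spectrum features, and the PCA step is omitted for brevity:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two well-separated synthetic "pore structure" groups (illustrative stand-in
# for the dimension-reduced T_2 spectrum features)
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
               rng.normal(5.0, 0.1, (30, 2))])

# fit GMMs with 1..3 components; the Akaike information criterion trades
# likelihood against model complexity, so the drop in AIC locates the
# appropriate number of clusters
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in (1, 2, 3)}
aic = {k: m.aic(X) for k, m in models.items()}
labels = models[2].predict(X)
```

The two-component fit should separate the groups cleanly and undercut the single-component AIC by a wide margin.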
The idea of modified water masses is introduced, and a cluster analysis is used to determine the boundaries of the modified water masses and their variation in the shallow water area of the Huanghai Sea (Yellow Sea) and the East China Sea. Clustering according to the specified standards determined the number and boundaries of the water masses and the mixed zones. The results obtained by the cluster method show that there are eight modified water masses in this area. According to the relative indices of temperature and salinity, the modified water masses are divided into nine different characteristic parts. The water masses may also be divided into three salinity types. On the T-S diagram, the temperature and salinity points of the different modified water masses are distributed around a curve, which embodies the characteristics of gradual modification. The variation ranges of the different modified water masses are all large, reflecting the intensive modification of water masses in this area.
A novel model of fuzzy clustering using kernel methods is proposed. This model is called the kernel modified possibilistic c-means (KMPCM) model. The proposed model is an extension of the modified possibilistic c-means (MPCM) algorithm using kernel methods. Unlike MPCM and the fuzzy c-means (FCM) model, which are based on Euclidean distance, the proposed model is based on a kernel-induced distance. Furthermore, with kernel methods the input data can be mapped implicitly into a high-dimensional feature space in which a nonlinear pattern appears linear. It is unnecessary to compute in the high-dimensional feature space explicitly, because the kernel function does it. Numerical experiments show that KMPCM outperforms FCM and MPCM.
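The kernel-induced distance at the heart of KMPCM expands entirely in terms of kernel evaluations, so the feature map never has to be computed explicitly. A minimal sketch with a Gaussian kernel (the kernel choice and sigma are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))

def kernel_dist_sq(x, v, kernel=gaussian_kernel):
    # ||phi(x) - phi(v)||^2 = K(x,x) - 2 K(x,v) + K(v,v)
    return kernel(x, x) - 2 * kernel(x, v) + kernel(v, v)
```

For a Gaussian kernel K(x, x) = 1, so the squared distance reduces to 2(1 - K(x, v)) and saturates at 2 for far-apart points; this bounded, kernel-induced metric is what lets a nonlinear pattern in input space behave linearly in feature space.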
Funding: the Deanship of Scientific Research at Umm Al-Qura University, Grant Code: (23UQU4361009DSR001).
Funding: Supported by the Career-FIT Fellowships, funded through the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No. 713654, and by ACCORD (ITMS project code: 313021X329), funded through the European Regional Development Fund.
Abstract: Ball milling is widely used in industry to mill particulate material. The primary purpose of this process is to attain an appropriate product size with the least possible energy consumption. The process is also extensively utilised in pharmaceuticals for the comminution of excipients or drugs. Surprisingly, for ball mills, little is known concerning the mechanism of size reduction. Traditional prediction approaches are not deemed useful for providing significant insights into the operation or facilitating radical step changes in performance. Therefore, the discrete element method (DEM) has been used as a computational modelling approach in this paper. In previous research, DEM has been applied to simulate breakage behaviour through the impact energy of all ball collisions as the driving force for fracturing. However, the nature of pharmaceutical material fragmentation during ball milling is more complex. Suitable functional equations which link broken media and applied energy do not consider the collision of particulate media of different shapes, or collisions of particulate media (such as granules) with balls and the rotating mill drum. This could have a significant impact on fragmentation. Therefore, this paper aimed to investigate the fragmentation of bounded particles into DEM granules of different shape and size during the ball milling process. A systematic study was undertaken to explore the effect of milling speed on breakage behaviour. In addition, a combination of a density-based clustering method and the discrete element method was employed to numerically investigate the number and size of the fragments generated during the ball milling process over time. It was discovered that ball collisions increased proportionally with rotation speed until the critical rotation speed was reached. Consequently, the results illustrate that mill power increased correspondingly with rotation speed. The cataracting motion of mill material together with balls was identified as the most effective regime regarding fragmentation, and fewer breakage events occurred for centrifugal motion. Higher quantities of fines were produced in each batch with increased milling speed, together with smaller quantities of grain fragments. Moreover, the relationship between the number of produced fragments and milling speed at the end of the process exhibited a linear tendency.
Abstract: Detection of Autism Spectrum Disorder (ASD) is a crucial area of research, representing a foundational aspect of psychological studies. The advancement of technology and the widespread adoption of machine learning methodologies have brought significant attention to this field in recent years. Interdisciplinary efforts have further propelled research into detection methods. Consequently, this study aims to contribute to both the fields of psychology and computer science. Specifically, the goal is to apply machine learning techniques to limited data for the detection of Autism Spectrum Disorder. This study is structured into two distinct phases: data preprocessing and classification. In the data preprocessing phase, four datasets (Toddler, Children, Adolescent, and Adult) were converted into numerical form, adjusted as necessary, and subsequently clustered. Clustering was performed using six different methods: K-means, agglomerative, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), mean shift, spectral, and BIRCH. In the second phase, the clustered ASD data were classified. The model's accuracy was assessed using 5-fold cross-validation to ensure robust evaluation. In total, ten distinct machine learning algorithms were employed. The findings indicate that all clustering methods demonstrated success with various classifiers. Notably, the K-means algorithm emerged as particularly effective, achieving consistent and significant results across all datasets. This study is expected to serve as a guide for improving ASD detection performance, even with minimal data availability.
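The two-phase design above (cluster first, then classify with 5-fold cross-validation) can be sketched in a few lines. This is a toy illustration on synthetic data, not the paper's pipeline: the dataset, the choice of K-means with two clusters, and the random-forest classifier are all our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a converted, numerical ASD dataset
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Phase 1: cluster the preprocessed data and keep the cluster id as a feature
cluster_id = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, cluster_id])

# Phase 2: classify the clustered data, scored with 5-fold cross-validation
scores = cross_val_score(RandomForestClassifier(random_state=0), X_aug, y, cv=5)
```

Each of the ten classifiers in the study could be dropped into the `cross_val_score` call in the same way.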
Funding: Supported by the National Key R&D Program of China (No. 2023YFA1606701), the National Natural Science Foundation of China (Nos. 12175042, 11890710, 11890714, 12047514, 12147101, and 12347106), the Guangdong Major Project of Basic and Applied Basic Research (No. 2020B0301030008), and the China National Key R&D Program (No. 2022YFA1602402).
Abstract: We employed random distributions and gradient descent methods for the Generator Coordinate Method (GCM) to identify effective basis wave functions, taking the halo nuclei ⁶He and ⁶Li as examples. By comparing the ground-state (0⁺) energy of ⁶He and the excited-state (0⁺) energy of ⁶Li calculated with various random distributions and manually selected generator coordinates, we found that the heavy-tail characteristic of the logistic distribution better describes the features of the halo nuclei. Subsequently, the Adam algorithm from machine learning was applied to optimize the basis wave functions, indicating that a limited number of basis wave functions can approximate the converged values. These results offer empirical insights for selecting basis wave functions and contribute to the broader application of machine learning methods in predicting effective basis wave functions.
Funding: Financial support from the National Key R&D Program of China (2021YFB3500700) and the National Natural Science Foundation of China (22473042, 22003016, and 92145302).
Abstract: The use of metal oxides has been extensively documented in the literature and applied in a variety of contexts, including but not limited to energy storage, chemical sensors, and biomedical applications. One of the most significant applications of metal oxides is heterogeneous catalysis, which represents a pivotal technology in industrial production on a global scale. Catalysts serve as the primary enabling agents for chemical reactions, and among the plethora of catalysts, metal oxides including magnesium oxide (MgO), ceria (CeO₂), and titania (TiO₂) have been identified as particularly effective in catalyzing a variety of reactions [1]. Theoretical calculations based on density functional theory (DFT) and a multitude of other quantum chemistry methods have proven invaluable in elucidating the mechanisms of metal-oxide-catalyzed reactions, thereby facilitating the design of high-performance catalysts [2].
Funding: The author extends his appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number IFPSAU-2021/01/17758.
Abstract: Finding clusters based on density represents a significant class of clustering algorithms. These methods can discover clusters of various shapes and sizes. The most studied algorithm in this class is the Density-Based Spatial Clustering of Applications with Noise (DBSCAN). It identifies clusters by grouping densely connected objects into one group and discarding noise objects. It requires two input parameters: epsilon (a fixed neighborhood radius) and MinPts (the lowest number of objects within epsilon). However, it cannot handle clusters of various densities, since it uses a global value for epsilon. This article proposes an adaptation of the DBSCAN method so that it can discover clusters of varied densities while reducing the required number of input parameters to only one. The only user input in the proposed method is MinPts; epsilon, on the other hand, is computed automatically from statistical information of the dataset. The proposed method finds the core distance for each object in the dataset, takes the average of these distances as the first value of epsilon, and finds the clusters satisfying this density level. The remaining unclustered objects are then clustered using a new value of epsilon that equals the average core distance of the unclustered objects. This process continues until all objects have been clustered or the remaining unclustered objects number less than 0.006 of the dataset's size. Benchmark datasets were used to evaluate the effectiveness of the proposed method, which produced promising results. Practical experiments demonstrate the outstanding ability of the proposed method to detect clusters of different densities, even if there is no separation between them. The accuracy of the method ranges from 92% to 100% on the experimented datasets.
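The iterative scheme above (average core distance as epsilon, re-run on the leftovers at a lower density level, stop when fewer than 0.006 of the points remain) can be sketched as follows. Function and variable names are ours, and DBSCAN itself stands in for the paper's scanning step; treat this as an illustrative reading of the abstract, not the authors' code.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_dbscan(X, min_pts=4, stop_fraction=0.006):
    """Cluster at successively lower density levels until (almost) all
    points are assigned; -1 marks points never clustered."""
    labels = np.full(len(X), -1)
    next_label = 0
    unclustered = np.arange(len(X))
    while len(unclustered) > stop_fraction * len(X):
        sub = X[unclustered]
        if len(sub) <= min_pts:
            break
        # core distance = distance to the MinPts-th nearest neighbour
        nn = NearestNeighbors(n_neighbors=min_pts + 1).fit(sub)
        dists, _ = nn.kneighbors(sub)
        eps = dists[:, -1].mean()          # average core distance -> epsilon
        res = DBSCAN(eps=eps, min_samples=min_pts).fit(sub)
        found = res.labels_ != -1
        if not found.any():
            break                          # no denser structure left
        labels[unclustered[found]] = res.labels_[found] + next_label
        next_label = labels.max() + 1
        unclustered = unclustered[~found]
    return labels
```

Because each pass recomputes epsilon from only the still-unclustered objects, dense clusters are extracted first and sparser ones in later passes, which is how varied densities are handled with MinPts as the only input.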
Abstract: We propose a new clustering algorithm that helps researchers analyze data quickly and accurately. We call this algorithm the Combined Density-based and Constraint-based Algorithm (CDC). CDC consists of two phases. In the first phase, CDC employs the idea of density-based clustering to split the original data into a number of fragmented clusters; at the same time, CDC cuts off noises and outliers. In the second phase, CDC employs the concept of the K-means clustering algorithm to select a greater cluster as the center. The greater cluster then merges smaller clusters that satisfy certain constraint rules. Because the smaller clusters are merged around the center cluster, the clustering results show high accuracy. Moreover, CDC reduces the calculations and speeds up the clustering process. In this paper, the accuracy of CDC is evaluated and compared with those of K-means, hierarchical clustering, and the genetic clustering algorithm (GCA) proposed in 2004. Experimental results show that CDC has better performance.
Abstract: Clustering evolving data streams must be performed in a limited time with reasonable quality. Existing micro-cluster-based methods do not consider the distribution of data points inside the micro-cluster. We propose LeaDen-Stream (Leader Density-based clustering algorithm over evolving data Stream), a density-based clustering algorithm using leader clustering. The algorithm is based on two-phase clustering. The online phase selects the proper mini-micro or micro-cluster leaders based on the distribution of data points in the micro-clusters. The leader centers are then sent to the offline phase to form the final clusters. In LeaDen-Stream, by carefully choosing between two kinds of micro leaders, we decrease the time complexity of the clustering while maintaining cluster quality. A pruning strategy is also used to filter real data from noise by introducing dense and sparse mini-micro and micro-cluster leaders. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52076038, U22B20112, and 52106238) and the Fundamental Research Funds for the Central Universities (Nos. 423162, B230201051).
Abstract: This paper presents an entropy-weighting evaluation method for wind power clusters that comprehensively evaluates the allocation problems of wind power clusters by considering the correlation between indicators and the dynamic performance of weight changes. A dynamic layered sorting allocation method is also proposed. The proposed evaluation method considers the power-limiting degree of the last cycle, the adjustment margin, and volatility. It uses the theory of weight variation to update the entropy weight coefficients of each indicator in real time, and then performs a fuzzy evaluation based on the membership function to obtain intuitive, comprehensive evaluation results. A case study of a large-scale wind power base in Northwest China was conducted. The proposed evaluation method is compared with the fixed-weight entropy and principal component analysis methods. The results show that the three scoring trends are the same, and that the proposed evaluation method is closer to the average level of the latter two, demonstrating higher accuracy. The proposed allocation method can reduce the number of adjustments made to wind farms, which is significant for the allocation and evaluation of wind power clusters.
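The entropy-weighting step referred to above follows the standard entropy weight method: each indicator's weight grows with how much its values diverge across samples. A hedged sketch (our function name; columns assumed positive and larger-is-better, without the paper's real-time updating) is:

```python
import numpy as np

def entropy_weights(M):
    """Entropy weight method for an (samples x indicators) matrix M.

    Illustrative static version: the paper updates these weights in real
    time via weight-variation theory, which is not reproduced here.
    """
    P = M / M.sum(axis=0)                    # normalise each indicator column
    n = M.shape[0]
    logP = np.where(P > 0, np.log(P), 0.0)   # define 0*log(0) = 0
    E = -(P * logP).sum(axis=0) / np.log(n)  # entropy of each indicator
    d = 1 - E                                # degree of divergence
    return d / d.sum()                       # weights sum to 1
```

An indicator that is identical for every wind farm carries zero entropy weight, so only indicators that actually discriminate between farms influence the fuzzy evaluation.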
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 50875024, 51105040), the Excellent Young Scholars Research Fund of the Beijing Institute of Technology, China (Grant No. 2010Y0102), and the Defense Creative Research Group Foundation of China (Grant No. GFTD0803).
Abstract: High-fidelity analysis models, which are beneficial to improving design quality, have been more and more widely utilized in modern engineering design optimization problems. However, high-fidelity analysis models are so computationally expensive that the time required for design optimization is usually unacceptable. To improve the efficiency of optimization involving high-fidelity analysis models, surrogates can be applied to approximate the computationally expensive models, which can greatly reduce computation time. An efficient heuristic global optimization method using adaptive radial basis functions (RBF) based on fuzzy clustering (ARFC) is proposed. In this method, a novel algorithm of maximin Latin hypercube design using successive local enumeration (SLE) is employed to obtain sample points with good performance in both space-filling and projective uniformity properties, which greatly benefits metamodel accuracy. The RBF method is adopted for constructing the metamodels, and as the number of sample points increases, the approximation accuracy of the RBF is gradually enhanced. The fuzzy c-means clustering method is applied to identify reduced attractive regions in the original design space. Numerical benchmark examples are used to validate the performance of ARFC. The results demonstrate that for most application examples the global optima are effectively obtained, and comparison with the adaptive response surface method (ARSM) proves that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum. This method improves the efficiency and global convergence of optimization problems and offers a new optimization strategy for engineering design optimization problems involving computationally expensive models.
Abstract: The Tarq 1:100,000 geochemical sheet is located in Isfahan Province and was investigated by Iran's Geological and Explorations Organization using stream sediment analyses. The area has stratigraphy ranging from Precambrian to Quaternary rocks and lies in the Central Iran zone. Given the signs of gold mineralization in this area, it is necessary to identify its important mineral zones. It is therefore necessary to obtain information about the relationships among gold, arsenic, and antimony and to monitor these elements relative to each other in order to determine the extent of geochemical halos and to estimate the grade. Accordingly, the well-known K-means method is used for monitoring the elements in the present study; this is a clustering method based on minimizing the total Euclidean distances of each sample from the center of the class to which it is assigned. In this research, the clustering quality function and the utility rate of the sample in the desired cluster (S(i)) have been used to determine the optimum number of clusters. Finally, with regard to the cluster centers and the results, equations were used to predict the amount of gold based on four parameters: the arsenic and antimony grades and the length and width coordinates of the sampling points.
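Choosing the number of K-means clusters with the per-sample utility S(i) corresponds to the standard silhouette criterion: fit K-means for a range of k and keep the k with the highest mean silhouette. A sketch on synthetic data (the dataset, the range of k, and all names are illustrative, not the study's geochemical data):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Toy stand-in for the stream-sediment samples
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

best_k, best_s = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    s = silhouette_score(X, labels)   # mean S(i) over all samples
    if s > best_s:
        best_k, best_s = k, s
```

The cluster centers of the winning model then serve as the reference points from which grade-prediction equations can be fitted.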
Abstract: The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm united with the kernel method. The results of experiments with synthetic and real data show that the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with variform structures, in contrast to the FCM algorithm. Kernel-based clustering can thus be regarded as an important research direction of fuzzy clustering analysis.
Funding: Supported by the National Key Research and Development Program of China (No. 2022YFF0711500), the National Natural Science Foundation of China (NSFC, Grant No. 12373097), the Basic and Applied Basic Research Foundation Project of Guangdong Province (No. 2024A1515011503), and the Guangzhou Science and Technology Funds (2023A03J0016).
Abstract: Open clusters (OCs) serve as invaluable tracers for investigating the properties and evolution of stars and galaxies. Despite recent advancements in machine learning clustering algorithms, accurately discerning such clusters remains challenging. We revisited the 3013 samples generated with a hybrid clustering algorithm combining FoF and pyUPMASK. A multi-view clustering (MvC) ensemble method was applied, which analyzes each member star of an OC from three perspectives (proper motion, spatial position, and a composite view) before integrating the clustering outcomes to deduce more reliable cluster memberships. Based on the MvC results, we further excluded cluster candidates with fewer than ten member stars and obtained 1256 OC candidates. After isochrone fitting and visual inspection, we identified 506 candidate OCs in the Milky Way. Beyond the 493 previously reported candidates, we finally discovered 13 high-confidence new candidate clusters.
Funding: Projects 51204054 and 51504203 supported by the National Natural Science Foundation of China; Project 2016ZX05023-001 supported by the National Science and Technology Major Project of China.
Abstract: The selection of refracturing candidates is one of the most important jobs faced by oilfield engineers. However, due to the complicated multi-parameter relationships and their combined influence, the selection of refracturing candidates is often very difficult. In this paper, a novel approach combining data analysis techniques and fuzzy clustering was proposed to select refracturing candidates. First, the analysis techniques were used to quantitatively calculate the weight coefficients and determine the key factors. Then, an idealized refracturing well was established by considering the main factors. Fuzzy clustering was applied to evaluate refracturing potential. Finally, reservoir numerical simulation was used to further evaluate the reservoir energy and material basis of the optimum refracturing candidates. The hybrid method has been successfully applied to a tight oil reservoir in China. The average steady production was 15.8 t/d after the refracturing treatment, a significant increase over the previous status. The research results can effectively guide the development of tight oil and gas reservoirs.
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 51075083).
Abstract: To improve the accuracy and efficiency of 3D model retrieval, a method based on the affinity propagation clustering algorithm is proposed. First, a projection ray-based method is proposed to improve the feature extraction efficiency of 3D models. Based on the relationship between a model and its projection, intersection in 3D space is transformed into intersection in 2D space, which reduces the number of intersections and improves the efficiency of the extraction algorithm. For feature extraction, the multi-layer spheres method is analyzed; the two-layer spheres method makes the feature vector more accurate and improves retrieval precision. Second, semi-supervised affinity propagation (S-AP) clustering is utilized because it can be applied to different cluster structures. The S-AP algorithm is adopted to find the center models, from which the center model collection is built. During retrieval, the collection is used to classify the query model into the corresponding model base, and the most similar model is then retrieved in that model base. Finally, 75 sample models from the Princeton library are selected for the experiment, and 36 models are used for the retrieval test. The results validate that the proposed method outperforms the original method, and that the retrieval precision and recall ratios are improved effectively.
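A useful property of affinity propagation for the pipeline above is that its exemplars are actual data points, so the "center models" come for free. A minimal sketch with plain (non-semi-supervised) affinity propagation on toy 2D feature vectors; the data and the preference value are illustrative, not the paper's 3D descriptors:

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Toy stand-in for the extracted 3D-model feature vectors
centers_true = [[1, 1], [-1, -1], [1, -1]]
X, _ = make_blobs(n_samples=300, centers=centers_true,
                  cluster_std=0.5, random_state=0)

ap = AffinityPropagation(preference=-50, random_state=0).fit(X)
exemplars = X[ap.cluster_centers_indices_]   # one "center model" per cluster
```

At query time, a new model would be compared against `exemplars` first, and a full similarity search performed only within the matching cluster's model base.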
Funding: Projects 51634010 and 51676211 supported by the National Natural Science Foundation of China; Project 2017SK2253 supported by the Key Research and Development Program of Hunan Province, China.
Abstract: Knowledge of bubble profiles in gas-liquid two-phase flows is crucial for analyzing kinetic processes such as heat and mass transfer, and this knowledge is contained in field data obtained by surface-resolved computational fluid dynamics (CFD) simulations. To obtain this information, an efficient bubble profile reconstruction method based on an improved agglomerative hierarchical clustering (AHC) algorithm is proposed in this paper. The reconstruction method features a binary space division preprocessing step, which aims to reduce the computational complexity; an adaptive linkage criterion, which guarantees the applicability of the AHC algorithm when dealing with datasets involving either non-uniform or distorted grids; and a stepwise execution strategy, which enables the separation of attached bubbles. To illustrate and verify this method, it was applied to three datasets, two of them with pre-specified spherical bubbles and the third obtained by a surface-resolved CFD simulation. The application results indicate that the proposed method is effective even when the data include non-uniform grids and distortion.
Funding: Supported by the National Natural Science Foundation of China (42174142), the National Science and Technology Major Project (2017ZX05039-002), the Operation Fund of the China National Petroleum Corporation Logging Key Laboratory (2021DQ20210107-11), the Fundamental Research Funds for the Central Universities (19CX02006A), and the Major Science and Technology Project of the China National Petroleum Corporation (ZD2019-183-006).
Abstract: To make the quantitative results of nuclear magnetic resonance (NMR) transverse relaxation (T2) spectra reflect the type and pore structure of a reservoir more directly, an unsupervised clustering method was developed to obtain quantitative pore structure information from the NMR T2 spectra based on the Gaussian mixture model (GMM). First, we conducted principal component analysis on the T2 spectra in order to reduce the data dimensionality and the dependence among the original variables. Second, the dimension-reduced data were fitted using the GMM probability density function, and the model parameters and optimal clustering numbers were obtained according to the expectation-maximization algorithm and the change of the Akaike information criterion. Finally, the T2 spectrum features and pore structure types of the different clustering groups were analyzed and compared with the T2 geometric mean and the T2 arithmetic mean. The effectiveness of the algorithm has been verified by numerical simulation and field NMR logging data. The research shows that the clustering results based on the GMM method correlate well with the shape and distribution of the T2 spectrum, the pore structure, and petroleum productivity, providing a new means for quantitative identification of pore structure, reservoir grading, and oil and gas productivity evaluation.
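The PCA-then-GMM pipeline above, with the component count chosen by the information criterion, can be sketched in a few lines. This is a toy illustration on synthetic blobs standing in for dimension-reduced T2 spectra; the data, the search range, and the names are our assumptions:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for the T2 spectrum samples (5 "amplitude" features)
X, _ = make_blobs(n_samples=400, n_features=5, centers=3, random_state=0)

Z = PCA(n_components=2).fit_transform(X)      # dimension reduction

best_n, best_aic = None, np.inf
for n in range(1, 7):
    gmm = GaussianMixture(n_components=n, random_state=0).fit(Z)
    aic = gmm.aic(Z)                          # lower AIC is better
    if aic < best_aic:
        best_n, best_aic = n, aic

labels = GaussianMixture(n_components=best_n, random_state=0).fit_predict(Z)
```

The EM fitting is performed inside `GaussianMixture.fit`, and tracking where the AIC stops improving plays the role of the "change of the Akaike information criterion" described in the abstract.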
Abstract: The idea of modified water masses is introduced, and a cluster analysis is used to determine the boundaries of the modified water masses and their variation in the shallow water area of the Huanghai Sea (Yellow Sea) and the East China Sea. Clustering according to the specified standards determined the number and boundaries of the water masses and the mixed zones. The results obtained by the cluster method show that there are eight modified water masses in this area. According to the relative indices of temperature and salinity, the modified water masses are divided into nine different characteristic parts. The water masses may also be divided into three salinity types. On the T-S diagram, the points representing the temperature and salinity of the different modified water masses are distributed around a curve, from which the characteristics of gradual modification may be discerned. The variation ranges of the different modified water masses are all large, reflecting the intensive modification of water masses in this area.
Funding: Project supported by the 15th Plan for National Defence Preventive Research Project (Grant No. 413030201).
Abstract: A novel model of fuzzy clustering using kernel methods is proposed. This model is called the kernel modified possibilistic c-means (KMPCM) model. The proposed model is an extension of the modified possibilistic c-means (MPCM) algorithm obtained by applying kernel methods. Unlike MPCM and the fuzzy c-means (FCM) model, which are based on Euclidean distance, the proposed model is based on a kernel-induced distance. Furthermore, with kernel methods the input data can be mapped implicitly into a high-dimensional feature space where a nonlinear pattern appears linear. It is unnecessary to perform calculations in the high-dimensional feature space explicitly, because the kernel function handles them. Numerical experiments show that KMPCM outperforms FCM and MPCM.
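The kernel-induced distance that replaces the Euclidean distance in such models is computed entirely through the kernel function, using ||φ(x) − φ(v)||² = K(x,x) − 2K(x,v) + K(v,v). A minimal sketch, assuming an RBF kernel (our choice for illustration; the abstract does not fix a particular kernel):

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian (RBF) kernel K(x, y)."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def kernel_distance_sq(x, v, kernel=rbf):
    """Squared distance between x and a prototype v in feature space,
    computed without ever forming the high-dimensional mapping phi."""
    return kernel(x, x) - 2 * kernel(x, v) + kernel(v, v)
```

Substituting this distance for the squared Euclidean distance in the MPCM objective is what makes nonlinear cluster boundaries in the input space appear linear in the feature space.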