Fund: Supported by the Key Project of the Chinese Ministry of Education (No. 105150).
Abstract: In practice, retraining a trained classifier is necessary when novel data become available. This paper adopts an incremental learning procedure to adaptively train a Kernel-based Nonlinear Representor (KNR), a recently presented nonlinear classifier for optimal pattern representation, so that its generalization ability can be evaluated in time-variant situations and a sparser representation is obtained for computationally intensive tasks. The addressed techniques are applied to handwritten digit classification to illustrate their feasibility for pattern recognition.
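The KNR update rule itself is not spelled out in this abstract. As a rough, purely illustrative sketch of the incremental idea, the snippet below grows a kernel ridge-style representor one labelled sample at a time, updating the inverse kernel matrix with a block-matrix formula instead of retraining from scratch; the class name, the RBF kernel, and all hyperparameters are assumptions rather than details from the paper.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row-sample sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class IncrementalKernelRegressor:
    """Kernel ridge-style model with an O(n^2) per-sample update."""
    def __init__(self, lam=1e-2, gamma=0.5):
        self.lam, self.gamma = lam, gamma
        self.X = None      # stored training samples
        self.y = None      # stored targets
        self.Minv = None   # inverse of (K + lam * I)

    def partial_fit(self, x, y):
        x = np.atleast_2d(x)
        if self.X is None:
            k0 = rbf(x, x, self.gamma)[0, 0] + self.lam
            self.X, self.y, self.Minv = x, np.array([y], float), np.array([[1.0 / k0]])
            return self
        k = rbf(self.X, x, self.gamma)                  # (n, 1)
        knn = rbf(x, x, self.gamma)[0, 0] + self.lam
        Mk = self.Minv @ k
        s = knn - float(k.T @ Mk)                       # Schur complement
        self.Minv = np.block([[self.Minv + (Mk @ Mk.T) / s, -Mk / s],
                              [-Mk.T / s, np.array([[1.0 / s]])]])
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)
        return self

    def predict(self, Xq):
        alpha = self.Minv @ self.y
        return rbf(np.atleast_2d(Xq), self.X, self.gamma) @ alpha

# usage: stream samples one at a time, as when novel digit data arrive
model = IncrementalKernelRegressor()
for xi, yi in zip(np.random.rand(20, 2), np.random.rand(20)):
    model.partial_fit(xi, yi)
print(model.predict(np.random.rand(3, 2)))
```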
Fund: Supported by the Key Project of the Chinese Ministry of Education (No. 1051150).
Abstract: Previously, a novel classifier called the Kernel-based Nonlinear Discriminator (KND) was proposed to discriminate a pattern class from other classes by minimizing the mean effect of the latter. To also take the effect of the target class into account, this paper introduces an oblique projection algorithm to determine the coefficients of a KND, extending it to a new version called the extended KND (eKND). In eKND construction, the desired output vector of the target class is obliquely projected onto the relevant subspace along the subspace related to the other classes. In addition, a simple technique is proposed to calculate the associated oblique projection operator. Experimental results on handwritten digit recognition show that the algorithm performs better than a KND classifier and some other commonly used classifiers.
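The abstract does not give the projector formula. One standard way to realize an oblique projection onto the target-class subspace col(A) along the other-class subspace col(B) is to solve the joint least-squares problem [A B]c = d and keep only the A-part of the solution, as in the numpy sketch below; matrix names and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def oblique_project(A, B, d):
    """Project d onto col(A) along col(B): split d = A @ ca + B @ cb (+ residual)
    and return the A-component. Assumes [A B] has full column rank."""
    M = np.hstack([A, B])
    coef, *_ = np.linalg.lstsq(M, d, rcond=None)
    return A @ coef[:A.shape[1]]

# toy example in R^3: the desired output vector d is split obliquely
A = np.array([[1.0], [0.0], [0.0]])   # subspace of the target class
B = np.array([[1.0], [1.0], [0.0]])   # subspace of the other classes
d = np.array([2.0, 3.0, 0.0])
print(oblique_project(A, B, d))       # -> [-1.  0.  0.]
```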
Abstract: This paper presents a new kernel-based algorithm for video object tracking called rebound of region of interest (RROI). The algorithm uses a rectangle-shaped section as the region of interest (ROI) to represent and track specific objects in videos. The proposed algorithm consists of two stages. The first stage determines the direction of the object's motion by analyzing the changing regions around the tracked object between two consecutive frames. Once the direction of the object's motion has been predicted, an iterative process is initialized that minimizes a dissimilarity function in order to find the location of the tracked object in the next frame. The main advantage of the proposed algorithm is that, unlike existing kernel-based methods, it is immune to highly cluttered conditions. The results obtained by the proposed algorithm show that the tracking process was successfully carried out for a set of color videos with different challenging conditions such as occlusion, illumination changes, cluttered conditions, and object scale changes.
Abstract: Workload balancing in cloud computing is not yet resolved, particularly for Infrastructure as a Service (IaaS) in the cloud network. Servers or hosts accessing the cloud should become neither underloaded nor overloaded, which may otherwise lead to system crashes. To resolve these problems, an efficient task scheduling algorithm is required for distributing tasks over all feasible resources, which is termed load balancing. The load balancing approach ensures that all Virtual Machines (VMs) are utilized appropriately. It is therefore highly desirable to develop a load-balancing model in a cloud environment based on machine learning and optimization strategies. Here, computing and networking data are analyzed to observe traffic and performance patterns. The acquired data are fed to a machine-learning decision module that selects the right server by predicting performance with an Optimal Kernel-based Extreme Learning Machine (OK-ELM), whose parameters are tuned by the developed hybrid Population Size-based Mud Ring Tunicate Swarm Algorithm (PS-MRTSA). Further, effective scheduling is performed to resolve the load balancing issues by employing the developed MR-TSA model. The developed approach effectively handles multi-objective constraints such as response time, resource cost, and energy consumption. Thus, the recommended load balancing model secures an enhanced performance rate over traditional approaches across several experimental analyses.
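PS-MRTSA and the scheduling pipeline are specific to this work, but the kernel-based extreme learning machine at the core of the prediction step has a well-known closed form: with kernel matrix K over the training data, targets T and regularization C, the output weights are beta = (K + I/C)^(-1) T, and a query x is scored as k(x)^T beta. A minimal sketch under that standard formulation is given below; the optimizer-driven tuning of C and the kernel width is not reproduced, and the data are synthetic stand-ins for server load features.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel extreme learning machine: beta = (K + I/C)^-1 T."""
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# stand-in: predict a server performance score from load/traffic features
X = np.random.rand(100, 4)
y = X @ np.array([0.4, 0.3, 0.2, 0.1])
print(KernelELM().fit(X, y).predict(X[:3]))
```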
Fund: Supported by the National Natural Science Foundation of China under Grant Nos. 70601029 and 70221001; the Knowledge Innovation Program of the Chinese Academy of Sciences under Grant Nos. 3547600, 3046540, and 3047540; and the Strategy Research Grant of City University of Hong Kong under Grant No. 7001806.
Abstract: Due to the complexity of the economic system and the interactive effects between all kinds of economic variables and foreign trade, it is not easy to predict foreign trade volume, and the difficulty is usually attributed to the limitations of many conventional forecasting models. To improve prediction performance, this study proposes a novel kernel-based ensemble learning approach hybridizing econometric models and artificial intelligence (AI) models to predict China's foreign trade volume. In the proposed approach, an important econometric model, the co-integration-based error correction vector auto-regression (EC-VAR) model, is first used to capture the impacts of all kinds of economic variables on Chinese foreign trade from a multivariate linear analysis perspective. Then an artificial neural network (ANN) based EC-VAR model is used to capture the nonlinear effects of economic variables on foreign trade. Subsequently, to incorporate the effects of irregular events on foreign trade, text mining and experts' judgmental adjustments are integrated into the nonlinear ANN-based EC-VAR model. Finally, all kinds of economic variables, the outputs of the linear and nonlinear EC-VAR models, and the judgmental adjustment model are used as input variables of a typical kernel-based support vector regression (SVR) for ensemble prediction. For illustration, the proposed kernel-based ensemble learning methodology hybridizing econometric techniques and AI methods is applied to China's foreign trade volume prediction problem. Experimental results reveal that the hybrid econometric-AI ensemble learning approach can significantly improve prediction performance over the other linear and nonlinear models listed in this study.
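The EC-VAR stages cannot be reconstructed from the abstract alone, but the final step, feeding the base-model outputs together with the raw economic variables into a kernel SVR that acts as the ensemble learner, can be sketched with scikit-learn (assuming it is available). The base forecasts below are synthetic stand-ins for the linear EC-VAR, ANN-based EC-VAR, and judgmental-adjustment outputs.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
econ_vars = rng.normal(size=(120, 5))                      # macro indicators (stand-in)
linear_ecvar_out = econ_vars @ rng.normal(size=5)          # linear EC-VAR forecast (stand-in)
ann_ecvar_out = np.tanh(econ_vars @ rng.normal(size=5))    # ANN-based EC-VAR forecast (stand-in)
judgmental_adj = rng.normal(scale=0.1, size=120)           # expert adjustment (stand-in)
trade_volume = linear_ecvar_out + 0.5 * ann_ecvar_out + judgmental_adj

# kernel-based SVR as the ensemble learner over all inputs
X = np.column_stack([econ_vars, linear_ecvar_out, ann_ecvar_out, judgmental_adj])
ensemble = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:100], trade_volume[:100])
print(ensemble.predict(X[100:105]))
```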
Abstract: Purpose - The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver. Design/methodology/approach - After collecting the ultrasound images, the contrast-limited adaptive histogram equalization (CLAHE) approach is applied as preprocessing in order to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, along with a local ternary pattern combined with selective level set approaches. Findings - The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at lower computational cost. It is compared with existing algorithms and ground truth values in terms of Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms. Practical implications - From the experimental analysis, the proposed ARKFCM with the enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm, showing a 3.11% improvement in Dice coefficient. Originality/value - The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing the selective level set model and Local Ternary Pattern in the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value, which effectively reduces the computational cost.
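Of the pipeline above, the CLAHE preprocessing stage is the most directly reproducible. Assuming OpenCV (opencv-python) is available, it can be applied to a grayscale ultrasound slice as follows, with the clip limit and tile size left as tunable parameters; the image here is a synthetic stand-in.

```python
import cv2
import numpy as np

# synthetic stand-in for a grayscale ultrasound slice
img = (np.random.rand(256, 256) * 255).astype(np.uint8)

# contrast-limited adaptive histogram equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)   # this enhanced image feeds the ARKFCM segmentation stage
```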
Fund: Supported by the director projects of the Centre for Earth Observation and Digital Earth (CEODE) (Nos. Y2ZZ06101B and Y2ZZ18101B); the State Key Laboratory of Resources and Environmental Information System project; the National Natural Science Foundation of China (project No. 41371385); and the National High Technology Research and Development Program of China (project No. 2012AA12A403-5).
Abstract: Within the context of global change, marine sensitive factors or Marine Essential Climate Variables have been defined by many projects, and their sensitive spatial regions and time phases play significant roles in regional sea-air interactions and in better understanding their dynamic processes. In this paper, we propose a cluster-based method for marine sensitive region extraction and representation. The method includes a kernel expansion algorithm for extracting marine sensitive regions and a field-object triple form, an integration of the object-oriented and field-based models, for representing marine sensitive objects. Firstly, the method recognizes ENSO-related spatial patterns using empirical orthogonal decomposition of long-term marine sensitive factors and correlation analysis with multiple ENSO indices. The cluster kernel, defined by statistics of the spatial patterns, is initialized to carry out spatial expansion and cluster mergence with spatial neighborhoods recursively, and all related lattices with similar behavior are merged into marine sensitive regions. After this, the field-object triple form <O, A, F> is used to represent the marine sensitive objects, both as a discrete object with a precise extent and boundary and as a continuous field with variations dependent on spatial location. Finally, marine sensitive objects for sea surface temperature are extracted, represented, and analyzed as a case study, which demonstrates the effectiveness and efficiency of the proposed method.
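Of the steps above, the empirical orthogonal decomposition of a long-term field and its correlation with an ENSO index can be sketched with plain numpy, as below; the cluster-kernel expansion and the <O, A, F> representation are specific to the paper and are not reproduced, and all data are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
sst = rng.normal(size=(240, 500))      # 240 months x 500 grid cells (stand-in SST field)
enso_index = rng.normal(size=240)      # stand-in ENSO index time series

# empirical orthogonal decomposition of the anomaly field via SVD
anom = sst - sst.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U * S                            # principal component time series
eofs = Vt                              # spatial patterns (EOFs)

# correlate each leading PC with the ENSO index to flag ENSO-related patterns
for m in range(3):
    r = np.corrcoef(pcs[:, m], enso_index)[0, 1]
    print(f"EOF {m + 1}: correlation with ENSO index = {r:.2f}")
```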
Fund: Supported by the National Basic Research Program of China (973 Program) (2014CB340600); the National High Technology Research and Development Program of China (863 Program) (2015AA016002); and the National Natural Science Foundation of China (61173138, 61272452, 61332018).
Abstract: The virtual trusted platform module (vTPM) is an important part of building a trusted cloud environment. To remedy the lack of effective security assurances for vTPM instances in the existing virtual TPM architecture, this paper presents a security-improved scheme for the virtual TPM based on the kernel-based virtual machine (KVM). By realizing the TPM 2.0 specification in hardware and software, we add protection for the vTPM's secrets using the asymmetric encryption algorithm of the TPM. The scheme supports the safe migration of a TPM key during VM-vTPM migration and the security association of different virtual machines (VMs) with vTPM instances. We implement a virtual trusted platform with higher security based on the KVM virtual infrastructure. The experiments show that the proposed scheme can enhance the security of the virtual trusted platform and incurs little additional performance loss for VM migration with vTPM.
基金National "863" project (2001AA114140) the National Natural Science Foundation of China (60135020).
Abstract: A non-parametric Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition, which can be regarded as a weighted Nearest Neighbor (NN) classifier in formulation. The class-conditional density is estimated by KDE, and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are respectively used to extract features, and the proposed method is compared with Probabilistic Reasoning Models (PRM), Nearest Center (NC), and NN classifiers, which are widely used in face recognition systems. The experiments are performed on two benchmarks, and the results show that the KDE classifier outperforms the PRM, NC and NN classifiers.
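A minimal version of this non-parametric Bayes rule, with class-conditional densities estimated by Gaussian KDE and combined with class priors, might look like the sketch below. A fixed bandwidth is used for brevity (the paper estimates it by EM), and the inputs are assumed to be PCA or KPCA feature vectors.

```python
import numpy as np

def kde_log_density(Xtrain, Xq, h):
    """Log of a Gaussian KDE with bandwidth h, evaluated at the query points."""
    d = Xtrain.shape[1]
    d2 = ((Xq[:, None, :] - Xtrain[None, :, :]) ** 2).sum(-1)
    log_k = -d2 / (2 * h * h) - 0.5 * d * np.log(2 * np.pi * h * h)
    m = log_k.max(axis=1, keepdims=True)                  # log-mean-exp over samples
    return (m + np.log(np.exp(log_k - m).mean(axis=1, keepdims=True))).ravel()

def kde_bayes_predict(Xtrain, ytrain, Xq, h=1.0):
    classes = np.unique(ytrain)
    scores = []
    for c in classes:
        Xc = Xtrain[ytrain == c]
        log_prior = np.log(len(Xc) / len(Xtrain))
        scores.append(log_prior + kde_log_density(Xc, Xq, h))
    return classes[np.argmax(np.column_stack(scores), axis=1)]

# toy usage with PCA-style feature vectors
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
print(kde_bayes_predict(X, y, X[:5], h=1.0))
```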
Fund: Supported by the National Natural Science Foundation of China (Nos. 51275120 and 61275096) and the Fundamental Research Funds for the Central Universities (No. HIT.NSRIF.2013012).
Abstract: For laser-induced damage (LID) in large-aperture final optics, we present a novel approach to online damage inspection and its experimental system, which solves two problems: classification of true and false LID, and size measurement of the LID. We first analyze the imaging principle of the experimental system for true and false damage sites, then use a kernel-based extreme learning machine (K-ELM) to distinguish them, and finally propose a hierarchical kernel extreme learning machine (HK-ELM) to predict the damage size. The experimental results show that the classification accuracy is higher than 95% and the mean relative error of the predicted LID size is within 10%, so the proposed method meets the technical requirements for online damage inspection.
Fund: Supported by the National Natural Science Foundation of China (No. 60872065).
Abstract: To extract regions of interest (ROIs) in brain magnetic resonance imaging (MRI) with more than two objects and improve segmentation accuracy, a hybrid model of a kernel-based fuzzy c-means (KFCM) clustering algorithm and the Chan-Vese (CV) model for brain MRI segmentation is proposed. The approach consists of two successive stages. Firstly, KFCM is used to make a coarse segmentation, which achieves automatic selection of the initial contour. Then an improved CV model is utilized to subdivide the image. Fuzzy membership degrees from the KFCM clustering are incorporated into the fidelity term of the 2-phase piecewise constant CV model to obtain accurate multi-object segmentation. Experimental results show that the proposed model has advantages both in accuracy and in robustness to noise in comparison with fuzzy c-means (FCM) clustering, KFCM, and the hybrid model of FCM and CV on brain MRI segmentation.
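The KFCM stage that produces the coarse segmentation and the initial contour follows the standard kernelized fuzzy c-means updates with a Gaussian kernel; a compact sketch on a flattened intensity image is shown below (the improved CV level-set refinement is not included, and the parameter values are illustrative).

```python
import numpy as np

def kfcm(x, c=3, m=2.0, sigma=150.0, n_iter=50, seed=0):
    """Kernelized fuzzy c-means on a 1-D intensity vector with a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    v = rng.choice(x, size=c, replace=False).astype(float)          # cluster centers
    for _ in range(n_iter):
        K = np.exp(-((x[:, None] - v[None, :]) ** 2) / sigma ** 2)  # (n, c)
        d = np.clip(1.0 - K, 1e-12, None)                  # kernel-induced distance
        u = d ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # fuzzy memberships
        w = (u ** m) * K
        v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)   # center update
    return u, v

# toy usage: cluster pixel intensities of a synthetic three-tissue image
pixels = np.concatenate([np.random.normal(mu, 10, 2000) for mu in (60, 120, 200)])
u, centers = kfcm(pixels, c=3)
coarse_labels = u.argmax(axis=1)     # starting point for the level-set refinement
print(np.sort(centers))
```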
Abstract: In this study, the potential of the Least Squares Support Vector Regression (LS-SVR) approach is utilized to model the daily variation of river flow. Inherent complexity, unavailability of reasonably long data sets, and heterogeneous catchment response are among the issues that hinder the generalization of the relationship between previous and forthcoming river flow magnitudes, and the problem may be further complicated by upstream dam releases. These issues are investigated by exploiting the capability of LS-SVR, an approach that relies on Structural Risk Minimization (SRM) rather than the Empirical Risk Minimization (ERM) used by other learning approaches such as the Artificial Neural Network (ANN). The study is conducted in the upper Narmada river basin in India, whose catchment contains the Bargi dam, constructed in 1989; the river gauging station, Sandia, is located a few hundred kilometers downstream of the dam. The model is developed with the pre-construction flow regime and its performance is checked for both pre- and post-construction periods for any perceivable difference. The performances are found to be similar for both flow regimes, which indicates that the releases from the dam at the daily scale may be ignored for this gauging site. To investigate the temporal horizon over which the predictions may be relied upon, multistep-ahead prediction is carried out, and the model performance is found to be reasonably good up to 5-day-ahead predictions, though it decreases with increasing lead time. The skills of both LS-SVR and ANN are reported, and the former is found to perform better than the latter for all lead times in general, and for shorter lead times in particular.
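LS-SVR replaces the usual SVR quadratic program with a single linear system: with kernel matrix K, regularization gamma and targets y, the bias b and dual weights alpha solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]. A small numpy sketch for one-step-ahead flow prediction from lagged flows is given below; the data and parameter values are synthetic and illustrative.

```python
import numpy as np

def rbf(X, Y, s=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def lssvr_fit(X, y, gamma=50.0, s=2.0):
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvr_predict(Xq, X, b, alpha, s=2.0):
    return rbf(Xq, X, s) @ alpha + b

# toy data: predict today's flow from the previous three days
flow = np.sin(np.linspace(0, 20, 400)) * 50 + 100
lags = 3
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
y = flow[lags:]
b, alpha = lssvr_fit(X[:300], y[:300])
print(lssvr_predict(X[300:305], X[:300], b, alpha), y[300:305])
```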
Fund: Supported by the "Taishan Scholar" Program of the Shandong Provincial Government.
Abstract: In view of weak defect signals and large volumes of acoustic emission (AE) data in low-speed bearing condition monitoring, we propose a bearing fault diagnosis technique based on a combination of empirical mode decomposition (EMD), clear iterative interval threshold (CIIT), and kernel-based fuzzy c-means (KFCM) eigenvalue extraction. In this technique, we use EMD-CIIT and EMD to remove noise and to extract the intrinsic mode functions (IMFs). We then select the first three IMFs and calculate their histogram entropies as the main fault features. These features are used for bearing fault classification using the KFCM technique. The results show that the combined EMD-CIIT and KFCM algorithm can accurately identify various bearing faults based on AE signals acquired from a low-speed bearing test rig.
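The EMD step itself needs an EMD implementation (for example the PyEMD package, if available); the histogram-entropy feature computed from each IMF is simple enough to sketch directly, treating the first three IMFs as given. The arrays below are stand-ins for the denoised AE signal's IMFs.

```python
import numpy as np

def histogram_entropy(signal, bins=64):
    """Shannon entropy of a signal's amplitude histogram, used as a fault feature."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# stand-ins for the first three IMFs of a denoised low-speed-bearing AE signal
imfs = [np.random.randn(4096) * a for a in (1.0, 0.5, 0.2)]
features = [histogram_entropy(imf) for imf in imfs]   # fed to the KFCM classifier
print(features)
```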
Fund: This project has been funded by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through project number BA-2105.
Abstract: The Internet of Things (IoT) field has emerged due to the rapid growth of artificial intelligence and communication technologies. The use of IoT technology in modern healthcare environments is convenient for doctors and patients, as it can be used for real-time monitoring of patients, proper administration of patient information, and healthcare management. However, the use of IoT in the healthcare domain becomes a nightmare if patient information is not securely maintained while being transferred over an insecure network or stored at the administrator end. In this manuscript, the authors develop a secure IoT healthcare monitoring system using a Blockchain-based XOR Elliptic Curve Cryptography (BC-XORECC) technique to avoid various attacks. Initially, the work establishes an authentication process for patient details by generating tokens, keys, and tags using a Length Caesar Cipher-based Pearson Hashing Algorithm (LCC-PHA), Elliptic Curve Cryptography (ECC), and a Fisher-Yates Shuffle-based Adelson-Velskii and Landis (FYS-AVL) tree. The authentication prevents unauthorized users from accessing or misusing the data. After that, secure data transfer is performed using BC-XORECC, which acts faster by maintaining high data privacy and blocking the path of attackers. Finally, a Linear Spline Kernel-based Recurrent Neural Network (LSK-RNN) classifier monitors the patient's health status. The whole framework provides secure data transfer without data loss or data breaches and remains efficient for healthcare monitoring via IoT. Experimental analysis shows that the proposed framework achieves fast encryption and decryption times, classifies the patient's health status with an accuracy of 89%, and remains robust compared with existing state-of-the-art methods.
Fund: Supported by the National Natural Science Foundation of China (Grant No. 62072060).
Abstract: With the boom of mobile devices, Android mobile apps play an irreplaceable role in people's daily life and are characterized by frequent updates involving many code commits to meet new requirements. Just-in-Time (JIT) defect prediction aims to identify whether commit instances will bring defects into the new release of an app and provides immediate feedback to developers, which makes it well suited to mobile apps. Since within-app defect prediction needs sufficient historical data to label the commit instances, which is often inadequate in practice, one alternative is to use a cross-project model. In this work, we propose a novel method, called KAL, for the cross-project JIT defect prediction task in the context of Android mobile apps. More specifically, KAL first transforms the commit instances into a high-dimensional feature space using a kernel-based principal component analysis technique to obtain representative features. Then, an adversarial learning technique is used to extract the common feature embedding for model building. We conduct experiments on 14 Android mobile apps and employ four effort-aware indicators for performance evaluation. The results on 182 cross-project pairs demonstrate that our proposed KAL method obtains better performance than 20 comparative methods.
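The first step of KAL, mapping commit-level metrics into a kernel-induced feature space with kernel principal component analysis, can be illustrated with scikit-learn as below; the adversarial alignment of source and target apps is not reproduced, and the metric matrices are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
source_commits = rng.normal(size=(300, 14))   # commit metrics of a source app (stand-in)
target_commits = rng.normal(size=(120, 14))   # commit metrics of the target app (stand-in)

# kernel PCA: representative features in the (here RBF-induced) high-dimensional space
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
src_feat = kpca.fit_transform(source_commits)
tgt_feat = kpca.transform(target_commits)
# src_feat / tgt_feat would then feed the adversarial feature alignment and the classifier
print(src_feat.shape, tgt_feat.shape)
```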
Abstract: Technical advancements in information systems have contributed to the massive availability of documents stored in electronic databases such as e-mails, the internet, and web pages. Therefore, arranging and browsing the required documents becomes a complex task. This paper proposes an approach for incremental clustering using the Bat-Grey Wolf Optimizer (BAGWO). The input documents are initially subjected to a pre-processing module to obtain useful keywords, and then feature extraction is performed based on WordNet features. After feature extraction, feature selection is carried out using an entropy function. Subsequently, clustering is performed using the proposed BAGWO algorithm, which is designed by integrating the Bat Algorithm (BA) and Grey Wolf Optimizer (GWO) to generate the different clusters of text documents. On the other side, upon the arrival of a new document, the same pre-processing and feature extraction steps are performed. Based on the features of the test document, a mapping is made between those features and the clusters obtained by the proposed BAGWO approach. The mapping is performed using the kernel-based deep point distance, and once the mapping terminates, the representatives are updated based on the fuzzy-based representative update. The developed BAGWO outperformed existing techniques in terms of clustering accuracy, Jaccard coefficient, and Rand coefficient, with maximal values of 0.948, 0.968, and 0.969, respectively.
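The kernel-based deep point distance used for the mapping is not specified in the abstract. As a generic stand-in, the sketch below assigns a new document to its closest cluster using the ordinary kernel-induced squared distance d^2(x, c) = k(x, x) - 2 k(x, c) + k(c, c) between the document's feature vector and each cluster representative; all names and values are illustrative.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, rep, gamma=0.5):
    """Kernel-induced squared distance d^2 = k(x,x) - 2 k(x,rep) + k(rep,rep)."""
    return rbf(x, x, gamma) - 2.0 * rbf(x, rep, gamma) + rbf(rep, rep, gamma)

# map a new document's feature vector to the closest cluster representative
reps = [np.array([0.1, 0.8, 0.2]), np.array([0.7, 0.1, 0.9])]   # cluster representatives
new_doc = np.array([0.6, 0.2, 0.8])                              # features of the new document
cluster = int(np.argmin([kernel_distance(new_doc, r) for r in reps]))
print("mapped to cluster", cluster)
```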
Fund: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 60872145, 60902063); the National High Technology Research and Development Program of China (Grant No. 2009AA01Z315); the Cultivation Fund of the Key Scientific and Technical Innovation Project, Ministry of Education of China (No. 708085); and the Henan Research Program of Foundation and Advanced Technology (No. 082300410090).
Abstract: Kernel-based clustering is supposed to provide a better analysis tool for pattern classification, since it implicitly maps input samples to a high-dimensional space to improve pattern separability. For this implicit space mapping, the kernel trick is believed to elegantly tackle the "curse of dimensionality", which has in fact remained challenging for kernel-based clustering in terms of computational complexity and classification accuracy and which traditional kernelized algorithms cannot effectively deal with. In this paper, we propose a novel kernel clustering algorithm, called KFCM-III, for this problem by replacing the traditional isotropic Gaussian kernel with an anisotropic kernel formulated by the Mahalanobis distance. Moreover, a reduced-set represented kernelized center is employed to reduce the computational complexity of the KFCM-I algorithm and to circumvent the model deficiency of the KFCM-II algorithm. The proposed KFCM-III has been evaluated for segmenting magnetic resonance imaging (MRI) images. For this task, an image intensity inhomogeneity correction is employed during the image segmentation process. With a scheme called preclassification, the proposed intensity correction scheme can further speed up image segmentation. The experimental results on public image data show the superiority of KFCM-III.
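The key change in KFCM-III is the kernel itself: the isotropic Gaussian kernel exp(-||x - y||^2 / sigma^2) is replaced by an anisotropic one built on the Mahalanobis distance. A minimal sketch of such a kernel is shown below, with the covariance estimated from the data; plugging it into kernelized fuzzy c-means updates follows the usual pattern and is omitted, and the data are synthetic stand-ins.

```python
import numpy as np

def mahalanobis_kernel(X, Y, cov):
    """Anisotropic Gaussian kernel k(x, y) = exp(-0.5 * (x-y)^T cov^-1 (x-y))."""
    P = np.linalg.inv(cov)
    diff = X[:, None, :] - Y[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, P, diff)
    return np.exp(-0.5 * d2)

# toy usage on feature vectors with very different spreads per dimension
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3)) @ np.diag([1.0, 5.0, 0.2])   # anisotropic data
cov = np.cov(X, rowvar=False)
K = mahalanobis_kernel(X[:5], X[:5], cov)
print(np.round(K, 3))
```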