Heart disease includes a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Numerous researchers have made progress in detecting and predicting heart disease early, but more remains to be accomplished. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease using traditional approaches. By fusing data from several regions of the country, we intend to increase the accuracy of heart disease prediction. We propose a statistical approach that derives insights from feature interactions to reveal intricate patterns in the data that cannot be adequately captured by any single feature. We processed the data using techniques including feature scaling, outlier detection and replacement, and null and missing value imputation to improve data quality. Furthermore, the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to construct feature interactions. To reduce dimensionality, we subsequently applied PCA retaining 95% of the variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are utilized, along with ensemble models. These models' overall prediction performance ranges from 88% to 92%. To attain state-of-the-art results, we then used a 1D CNN model, which significantly enhanced the prediction, with an accuracy of 96.36%, precision of 96.45%, recall of 96.36%, specificity of 99.51%, and F1 score of 96.34%. Without feature interaction, the RF model produces the best results among all the classifiers on the evaluation metrics, with accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. With the proposed feature engineering, our 1D CNN model is about 7% better than the same model without it. This illustrates how interaction-focused feature analysis can produce precise and useful insights for heart disease diagnosis.
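The PCA step described above — keeping just enough components to retain 95% of the variance — can be sketched in plain NumPy. This is a minimal illustration, not the paper's pipeline; the synthetic data and the function name `pca_95` are assumptions for the example.

```python
import numpy as np

def pca_95(X, var_ratio=0.95):
    """Project X onto the fewest principal components that retain
    `var_ratio` of the total variance (sketch of the paper's PCA step)."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)              # variance ratio per component
    k = int(np.searchsorted(np.cumsum(explained), var_ratio) + 1)
    return Xc @ Vt[:k].T, Vt[:k]                 # scores and loadings

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[:, 5:] = X[:, :5] * 0.01                       # redundant, low-variance features
Z, components = pca_95(X)
```

Because five of the ten columns are near-copies, the 95% criterion discards them and the projected data keeps only the informative directions.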
In this paper, we study the problem of employing ensemble learning for computer forensics. We propose a Lazy Local Learning based bagging (L3B) approach, where base learners are trained from a small instance subset surrounding each test instance. More specifically, given a test instance x, L3B first discovers x's k nearest neighbours, and then applies progressive sampling to the selected neighbours to train a set of base classifiers using a given very weak (VW) learner. At the last stage, x is labelled with the most frequently voted class of all the base classifiers. Finally, we apply the proposed L3B to computer forensics.
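The L3B procedure — find k neighbours, progressively sample them, train weak learners, vote — can be sketched as below. The paper does not specify the weak learner here, so a nearest-centroid rule stands in for it; the sample-size schedule and all names are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def l3b_predict(X_train, y_train, x, k=20, n_rounds=5, rng=None):
    """Lazy Local Learning bagging sketch: train weak learners on
    progressively larger samples of x's k nearest neighbours, then vote."""
    rng = rng or np.random.default_rng(0)
    dist = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dist)[:k]                    # x's k nearest neighbours
    votes = []
    for r in range(1, n_rounds + 1):
        m = max(2, int(k * r / n_rounds))        # progressive sample size
        idx = rng.choice(nn, size=m, replace=True)
        # stand-in weak learner: class whose sampled centroid is closest to x
        centroids = {c: X_train[idx][y_train[idx] == c].mean(axis=0)
                     for c in np.unique(y_train[idx])}
        votes.append(min(centroids, key=lambda c: np.linalg.norm(centroids[c] - x)))
    return Counter(votes).most_common(1)[0][0]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = l3b_predict(X, y, np.array([0.1, -0.2]))
```

Because every base learner sees only neighbours of the test point, the ensemble is "lazy": nothing is trained until a query arrives.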
In this research paper, we evaluate an assortment of reverse engineering tools and investigate the many characteristics of the Imagix-4D tool. On the basis of this investigation we identify its inadequacies: it illustrates only an abstract class diagram, and it has no support for ER diagrams or sequence diagrams. We then propose a Reverse Engineering Tool based on Unified Mapping Method (RETUM) for improved class diagram visualization, which overcomes the limitation of Imagix-4D, namely class diagrams that are intricate to visualize.
One application of software engineering is the vast and widely popular video game entertainment industry. The success of a video game product depends on how well the player base receives it. Among the factors behind a successful video game release, we are interested in studying one known as replayability. Working towards a software engineering oriented game design methodology, we collect player opinions on replayability via surveys and provide methods to analyze the data. We believe these results can help game designers produce entertaining games with longer-lasting appeal by utilizing our software engineering techniques.
An embedded cryptosystem needs high reconfiguration capability and security. After analyzing the newly emerging side-channel attacks on the elliptic curve cryptosystem (ECC), an efficient fractional width-w NAF (FWNAF) algorithm is proposed to secure ECC scalar multiplication against these attacks. The algorithm adopts the fractional window method and a probabilistic SPA scheme to reconfigure the precomputed table, allowing designers to configure the precomputed table dynamically. It is then enhanced to resist SPA, DPA, RPA and ZPA attacks by using the random masking method. Compared with the WBRIP and EBRIP methods, our proposal has the lowest total computation cost and reduces the sharp fluctuations in computation performance.
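The FWNAF scheme itself involves fractional windows and randomized masking, but it builds on standard width-w NAF recoding of the scalar, which can be sketched briefly. This is the textbook recoding, not the paper's full algorithm.

```python
def width_w_naf(k, w):
    """Standard width-w NAF recoding of a positive scalar k: every nonzero
    digit is odd with |digit| < 2^(w-1), and nonzero digits are separated
    by at least w-1 zeros. (FWNAF generalises the window width.)"""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):
                d -= (1 << w)                    # map into (-2^(w-1), 2^(w-1))
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits                                # least-significant digit first

digits = width_w_naf(1234567, 4)
value = sum(d << i for i, d in enumerate(digits))   # reconstruct the scalar
```

Scalar multiplication then processes these digits with a precomputed table of odd multiples, which is exactly the table FWNAF reconfigures.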
It is essential to supply the consumer with reliable and sufficient power, and power quality is measured by the consistency of frequency and power flow between control areas. In power system operation and control, automatic generation control (AGC) therefore plays a crucial role. In this paper, multi-area (five areas: area 1 through area 5) reheat thermal power systems are considered with a proportional-integral-derivative (PID) controller as a supplementary controller. Each area in the investigated power system is equipped with an appropriate governor unit, a turbine with reheater unit, a generator and a speed regulator unit. The PID controller parameters are optimized with the nature-inspired firefly algorithm (FFA). The experimental results compare the proposed system (FFA-PID) with PID controllers optimized by a genetic algorithm (GA-PID) and by particle swarm optimization (PSO-PID) on the same investigated power system. The results prove the efficiency of employing the integral time absolute error (ITAE) cost function with a one percent step load perturbation (1% SLP) in area 1. The FFA-based system achieved the shortest settling time compared to the GA and PSO algorithms, while attaining good results with respect to peak overshoot/undershoot. In addition, the FFA performance improves as the number of iterations increases, outperforming the controllers based on the other optimization algorithms.
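The ITAE cost function used for tuning weights late-time error heavily, which is why minimising it favours short settling times. A minimal discrete sketch (the error signals below are synthetic stand-ins, not the paper's simulation):

```python
import numpy as np

def itae(t, error):
    """Integral Time Absolute Error, the cost the firefly algorithm
    minimises: ITAE = integral of t * |e(t)| dt (trapezoidal rule)."""
    f = t * np.abs(error)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2)

t = np.linspace(0, 10, 1001)
fast_settle = 0.01 * np.exp(-2.0 * t)                       # well-damped error
slow_oscillation = 0.01 * np.exp(-0.2 * t) * np.cos(5 * t)  # lingering error
```

A well-damped response scores far lower than a lingering oscillation of the same initial size, so the optimiser is pushed toward gains that settle quickly.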
VisuShrink, ModineighShrink and NeighShrink are efficient image denoising algorithms based on the discrete wavelet transform (DWT). These methods have the disadvantage of using a suboptimal universal threshold and an identical neighbouring window size in all wavelet subbands. In this paper, an improved method is proposed that determines a threshold as well as a neighbouring window size for every subband based on its length. Our experimental results illustrate that the proposed approach outperforms the existing ones, i.e., NeighShrink, ModineighShrink and VisuShrink, in terms of peak signal-to-noise ratio (PSNR), i.e., the visual quality of the image.
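The neighbourhood shrinkage these methods share can be sketched in one dimension: each coefficient is scaled by a factor computed from the energy of its window. This is an illustrative 1-D sketch of the subband operation, with assumed names and data.

```python
import numpy as np

def neigh_shrink(coeffs, threshold, half_window=1):
    """NeighShrink-style shrinkage: scale each wavelet coefficient by
    beta = max(0, 1 - T^2 / S^2), where S^2 sums the squared
    coefficients in the surrounding window."""
    n = len(coeffs)
    out = np.zeros_like(coeffs, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        s2 = np.sum(coeffs[lo:hi] ** 2)
        beta = max(0.0, 1.0 - threshold**2 / s2) if s2 > 0 else 0.0
        out[i] = coeffs[i] * beta
    return out

noisy = np.array([0.1, -0.2, 5.0, 4.8, 0.05, -0.1])
den = neigh_shrink(noisy, threshold=1.0)
```

Small isolated coefficients (likely noise) are zeroed, while large coefficients and their neighbours (likely signal) are only mildly attenuated; choosing the threshold and window per subband is exactly the knob the improved method tunes.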
This paper proposes a hybrid technique for color image segmentation. First, an input image is converted to the CIE L*a*b* color space. The color features "a" and "b" of CIE L*a*b* are then fed into fuzzy C-means (FCM) clustering, an unsupervised method. The labels obtained from FCM are used as targets for a supervised feed-forward neural network. The network is trained by the Levenberg-Marquardt back-propagation algorithm, and its performance is evaluated using mean square error and regression analysis. The main issues in clustering methods are determining the number of clusters and choosing cluster validity measures. This paper presents a co-occurrence matrix based algorithm for finding the number of clusters, and silhouette index values are used for cluster validation. The proposed method is tested on various color images obtained from the Berkeley database. The segmentation results from the proposed method are validated, and the classification accuracy is evaluated with sensitivity, specificity, and accuracy.
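The unsupervised FCM step alternates two updates: recompute cluster centres from fuzzified memberships, then recompute memberships from distances to the centres. A minimal NumPy sketch, assuming a simple deterministic initialisation (the paper's color-feature inputs are replaced by synthetic 2-D points):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Minimal fuzzy C-means: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)),
    centres = (U^m @ X) / row-sums of U^m."""
    # initialise centres on distinct data points to avoid degenerate starts
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (15, 2)), rng.normal(8, 0.5, (15, 2))])
centers, U = fuzzy_cmeans(X)
labels = U.argmax(axis=0)
```

Taking the arg-max membership per point yields the hard labels that the paper then feeds to the supervised network as training targets.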
Wireless local area network (WLAN) localization based on received signal strength is becoming an important enabler of location based services. Limited efficiency and accuracy are the disadvantages of deterministic location estimation techniques, while probabilistic techniques achieve good accuracy at a higher computational cost. A Gaussian mixture model based on a clustering technique is presented to improve location determination efficiency. The proposed clustering algorithm reduces the number of candidate locations from the whole area to a single cluster. Within a cluster, an improved nearest neighbour algorithm estimates the user location using signal strength from more access points. Experiments show that the location estimation time is greatly decreased while high accuracy is still achieved.
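The two-stage idea — first narrow the search to one cluster of reference fingerprints, then run nearest-neighbour matching inside it — can be sketched as below. The cluster-selection rule (distance to the cluster's mean fingerprint) and all data are simplifying assumptions, not the paper's Gaussian mixture formulation.

```python
import numpy as np

def locate(fingerprints, locations, cluster_ids, rss, k=3):
    """Two-stage sketch: pick the cluster whose mean fingerprint best matches
    the observed RSS, then average the k nearest reference locations in it."""
    best, best_d = None, np.inf
    for cid in np.unique(cluster_ids):
        mean_fp = fingerprints[cluster_ids == cid].mean(axis=0)
        d = np.linalg.norm(mean_fp - rss)
        if d < best_d:
            best, best_d = cid, d
    mask = cluster_ids == best
    fps, locs = fingerprints[mask], locations[mask]
    nn = np.argsort(np.linalg.norm(fps - rss, axis=1))[:k]
    return locs[nn].mean(axis=0)                 # centroid of k nearest points

rng = np.random.default_rng(0)
# reference fingerprints: RSS (dBm) from 3 APs, sampled in two rooms
fp_a = rng.normal([-40, -70, -80], 2, (10, 3)); loc_a = rng.uniform(0, 5, (10, 2))
fp_b = rng.normal([-80, -45, -60], 2, (10, 3)); loc_b = rng.uniform(20, 25, (10, 2))
fingerprints = np.vstack([fp_a, fp_b])
locations = np.vstack([loc_a, loc_b])
cluster_ids = np.array([0] * 10 + [1] * 10)
est = locate(fingerprints, locations, cluster_ids, np.array([-41, -69, -79]))
```

Restricting the nearest-neighbour search to one cluster is what cuts the estimation time: the expensive comparison runs over a fraction of the radio map.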
A new real-time algorithm is proposed in this paper for detecting moving objects in color image sequences taken from stationary cameras. The algorithm combines a temporal difference with an adaptive background subtraction, and the combination is novel. When changes occur, the background is automatically adapted to suit the new conditions. For the background model, a new model is proposed in which each frame is decomposed into regions, and the model is based not only on single pixels but also on the characteristics of a region. The hybrid representation includes a model for single-pixel information and a model for the pixel's neighbouring-area information. This new background model both improves the accuracy of segmentation, because spatial information is taken into account, and markedly speeds up processing, because only a portion of the neighbouring pixels need be selected into the model. The algorithm was successfully used in a video surveillance system, and the experimental results show it can obtain a clearer foreground than the single-frame difference or background subtraction methods alone.
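The core combination — temporal difference OR'd with subtraction against an adapting background — can be sketched per pixel. The paper's region-level model is richer; the running-average background and the thresholds below are illustrative assumptions.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=25):
    """Combine a frame-to-frame temporal difference with an adaptive
    running-average background subtraction (per-pixel sketch)."""
    bg = frames[0].astype(float)
    masks = []
    for prev, cur in zip(frames, frames[1:]):
        diff_t = np.abs(cur.astype(float) - prev) > thresh   # temporal change
        diff_b = np.abs(cur.astype(float) - bg) > thresh     # change vs. background
        masks.append(diff_t | diff_b)
        bg = (1 - alpha) * bg + alpha * cur                  # adapt the background
    return masks

frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(4)]
for i, f in enumerate(frames):
    f[0, i] = 255                                # one bright pixel moving right
masks = detect_moving(frames)
```

The temporal difference reacts instantly but leaves holes in slow objects, while the background term fills the interior; the slow update of `bg` is what lets the model absorb lighting changes.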
The eigenface method, which uses principal component analysis (PCA), has been the standard and popular method in face recognition. This paper presents a PCA-memetic algorithm (PCA-MA) approach for feature selection. PCA has been extended by MAs: the former is used for feature extraction/dimensionality reduction and the latter is exploited for feature selection. Simulations were performed over the ORL and YaleB face databases using the Euclidean norm as the classifier. It was found that, as far as the recognition rate is concerned, PCA-MA completely outperforms the eigenface method. We compared the performance of PCA extended with a genetic algorithm (PCA-GA) with our proposed PCA-MA method, and the results also clearly established the supremacy of the PCA-MA method over the PCA-GA method. We further extended the linear discriminant analysis (LDA) and kernel principal component analysis (KPCA) approaches with the MA and observed significant improvement in recognition rate with fewer features. This paper also compares the performance of the PCA-MA, LDA-MA and KPCA-MA approaches.
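The eigenface baseline that PCA-MA improves on — PCA projection followed by a nearest-neighbour rule under the Euclidean norm — can be sketched as below. The synthetic "faces" stand in for the ORL/YaleB images; the MA's search over component subsets is not shown.

```python
import numpy as np

def eigenface_classify(train, labels, test, n_comp=5):
    """Eigenface baseline: project onto the top principal components and
    classify each test vector by its nearest training neighbour."""
    mean = train.mean(axis=0)
    U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
    W = Vt[:n_comp]                              # top principal components
    proj_train = (train - mean) @ W.T
    proj_test = (test - mean) @ W.T
    preds = []
    for p in proj_test:
        preds.append(labels[np.argmin(np.linalg.norm(proj_train - p, axis=1))])
    return preds

rng = np.random.default_rng(0)
base0, base1 = rng.normal(size=64), rng.normal(size=64)   # two "identities"
train = np.vstack([base0 + 0.1 * rng.normal(size=(10, 64)),
                   base1 + 0.1 * rng.normal(size=(10, 64))])
labels = [0] * 10 + [1] * 10
test_faces = np.vstack([base0 + 0.1 * rng.normal(size=64),
                        base1 + 0.1 * rng.normal(size=64)])
preds = eigenface_classify(train, labels, test_faces)
```

The memetic algorithm's contribution is then to search for the subset of these components that maximises recognition rate, rather than keeping the top `n_comp` wholesale.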
In this paper, a new partial transmit sequence (PTS) scheme with low computational complexity is proposed to address the high computational complexity of the conventional PTS method. By analyzing the relationship of candidate sequences in the PTS method under the interleaved partition, it is discovered that some candidate sequences generated by phase factor sequences have the same peak-to-average power ratio (PAPR). Hence, the phase factor sequences can be pruned to reduce the search time. Then, the computational process of generating candidate sequences is simplified by improving data reuse and minimizing the number of complex multiplications. The performance analysis shows that, compared with the conventional PTS scheme, the proposed approach significantly decreases the computational complexity with no loss of PAPR performance.
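The conventional PTS baseline being accelerated works as follows: partition the symbols (interleaved here), IFFT each sub-block once, then search phase-factor combinations for the lowest-PAPR sum. A hedged NumPy sketch with an exhaustive ±1 search (the paper's contribution is precisely pruning this search):

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts(symbols, n_blocks=4, phases=(1, -1)):
    """Conventional PTS: interleaved partition, one IFFT per sub-block,
    exhaustive phase search, keep the lowest-PAPR combination."""
    N = len(symbols)
    blocks = []
    for b in range(n_blocks):
        sub = np.zeros(N, complex)
        sub[b::n_blocks] = symbols[b::n_blocks]  # interleaved partition
        blocks.append(np.fft.ifft(sub))
    best_x, best = None, np.inf
    for ph in product(phases, repeat=n_blocks):
        x = sum(p * blk for p, blk in zip(ph, blocks))
        if papr_db(x) < best:
            best_x, best = x, papr_db(x)
    return best_x, best

rng = np.random.default_rng(0)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
x_best, papr_best = pts(qpsk)
```

By IFFT linearity the all-ones phase vector reproduces the unmodified signal, so the selected candidate can never be worse than no PTS at all; the cost the paper attacks is the `|phases|^n_blocks` search combined with the candidate-generation multiplications.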
Multimedia is one of the important communication channels for mankind. Due to advances in technology and enormous growth, a vast array of multimedia data is available today, resulting in an obvious need for techniques to retrieve these data. This paper gives an overview of an ontology-based image retrieval system for the Asteroideae flower family domain. To reduce the semantic gap between the low-level visual features of an image and high-level domain knowledge, we have incorporated the concept of a multi-modal image ontology, so the created Asteroideae flower domain-specific ontology holds knowledge about both the domain and the visual features. The visual features used to define the ontology are prevalent color, basic intrinsic pattern and contour gradient. In prevalent color extraction, the most dominant color in an image is identified and indexed. Basic intrinsic patterns are used to determine the texture pattern of a particular flower, while contour gradients provide information on the image edges with respect to the image base. These feature values are embedded in the ontology at appropriate slots with respect to the domain knowledge. This paper also defines some of the query axioms used to retrieve appropriate information from the created ontology. This ontology can be used for image retrieval in the semantic web.
Research on human emotions started by addressing psychological aspects of human nature and has advanced to designing various models that represent them quantitatively and systematically. Based on these findings, a method is suggested for emotional space formation and emotional inference that enhances the quality and maximizes the realism of emotion-based personalized services. In consideration of the subjective tendencies of individuals, AHP was adopted for the quantitative evaluation of human emotions, on which basis an emotional space remodeling method is suggested with reference to the emotional models of Thayer and Plutchik, taking personal emotions into account. In addition, Sugeno fuzzy inference, fuzzy measures and the Choquet integral were adopted for emotional inference in the remodeled personalized emotional space model, and its performance was evaluated through an experiment. To evaluate emotional similarity, fourteen cases with an inferred-emotion evaluation value of 4.0 or higher were analyzed across case studies of 17 kinds of emotional inference methods. The inference results matched in ten cases, accounting for 71%, and the remaining cases were inferred as adjoining emotions in the same section. In this manner, the similarity of the inference results is verified.
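The Choquet integral used for inference aggregates criteria under a fuzzy measure instead of fixed weights, which lets interacting criteria reinforce or discount each other. A minimal sketch of the discrete Choquet integral (the measure values below are illustrative, not the paper's):

```python
import numpy as np

def choquet(values, mu):
    """Discrete Choquet integral: sort criterion values ascending and weight
    each increment by the fuzzy measure of the set of remaining criteria.
    `mu` maps frozensets of criterion indices to measures, mu(all) = 1."""
    idx = np.argsort(values)
    total, prev = 0.0, 0.0
    for j, i in enumerate(idx):
        remaining = frozenset(int(k) for k in idx[j:])
        total += (values[i] - prev) * mu[remaining]
        prev = values[i]
    return total

# With an additive measure the Choquet integral reduces to a weighted mean:
mu = {frozenset({0}): 0.3, frozenset({1}): 0.7, frozenset({0, 1}): 1.0}
score = choquet([2.0, 5.0], mu)
```

Choosing a non-additive `mu` is where the personalization enters: the measure can encode that two emotional cues together matter more (or less) than the sum of their individual weights.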
Sarcasm is a type of sentiment where people express negative feelings using positive or intensified positive words in text. While speaking, people often use heavy tonal stress and certain gestural clues, like rolling of the eyes or hand movements, to signal sarcasm. In textual data these tonal and gestural clues are missing, making sarcasm detection very difficult even for an average human. Due to these challenges, researchers are interested in sarcasm detection in social media text, especially in tweets. The rapid growth of tweets in volume, and their analysis, poses major challenges. In this paper, we propose a Hadoop based framework that captures real-time tweets and processes them with a set of algorithms that identify sarcastic sentiment effectively. We observe that the elapsed time for analysis and processing under the Hadoop based framework significantly outperforms conventional methods and is better suited to real-time streaming tweets.
Shannon observed the relation between information entropy and the Maxwell's demon experiment to arrive at his information entropy formula, which has since been widely used to measure information leakage in imperative programs. In the present work, our aim is to go in the reverse direction and find possible Maxwell's demon experimental setups for contemporary practical imperative programs to which variations of Shannon's entropy formula have been applied to measure information leakage. To establish the relation between the second law of thermodynamics and the quantitative analysis of information leakage, this work models contemporary variations of imperative programs in terms of Maxwell's demon experimental setups. Five contemporary variations of imperative programs related to information quantification are identified: (i) information leakage in an imperative program, (ii) an imperative multi-threaded program, (iii) point-to-point leakage in an imperative program, (iv) an imperative program with infinite observation, and (v) an imperative program in an SOA-based environment. For these variations, the minimal work required by an attacker to gain the secret is also calculated using the historical Maxwell's demon experiment. To model the experimental setup, the non-interference security policy is used, and imperative programs with one-bit secrets are considered to avoid complexity. These findings from the history of physics can be utilized in many areas related to information flow in physical computing, nano-computing, quantum computing, biological computing, energy dissipation in computing and computing power analysis.
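For the one-bit secrets the paper restricts itself to, the entropy-based leakage computation is short enough to show directly. The example program below (an output that copies the secret) is an illustrative worst case, not taken from the paper.

```python
import math

def shannon_entropy(probs):
    """H = -sum p * log2(p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One-bit secret observed through `out = secret`: the output reveals the
# secret completely, so the leakage equals the full bit of prior uncertainty.
h_before = shannon_entropy([0.5, 0.5])   # attacker's uncertainty beforehand
h_after = shannon_entropy([1.0])         # uncertainty once the output is seen
leakage = h_before - h_after             # bits leaked
```

In the Maxwell's demon reading, each leaked bit corresponds to information the demon must eventually erase, tying the leakage measure back to thermodynamic work in the spirit of the paper's argument.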
In heterogeneous wireless sensor networks, mobility of the sensor nodes is essential in various applications, but during node mobility a malicious node may become a cluster head or cluster member, allowing the cluster or the whole network to be controlled by malicious nodes. To offer a high level of security, mobile sensor nodes need to be authenticated; furthermore, clustering of nodes improves scalability, energy efficient routing and data delivery. In this paper, we propose a cluster based secure dynamic keying technique to authenticate the nodes during mobility. Nodes with high configuration are chosen as cluster heads based on a weight value estimated from parameters such as node degree, average distance, the node's average speed and virtual battery power. Keys are dynamically generated and used for providing security, so that even if keys are compromised, attackers cannot use previous keys to cheat or misuse the authenticated nodes. In addition, a bidirectional malicious node detection technique eliminates malicious nodes from the network. Simulation shows that the proposed technique provides efficient security with reduced energy consumption during node mobility.
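The cluster-head election combines the four listed parameters into one weight. The exact combination and the equal weights below are assumptions for illustration; the paper only names the parameters.

```python
def cluster_head_weight(node, w=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative election weight from the four factors the paper lists:
    higher degree and battery help; higher distance and speed hurt.
    (The actual combination rule is an assumption, not the paper's formula.)"""
    w1, w2, w3, w4 = w
    return (w1 * node["degree"]
            + w4 * node["battery"]
            - w2 * node["avg_distance"]
            - w3 * node["avg_speed"])

nodes = [
    {"id": 1, "degree": 8, "avg_distance": 2.0, "avg_speed": 0.5, "battery": 0.9},
    {"id": 2, "degree": 3, "avg_distance": 5.0, "avg_speed": 2.0, "battery": 0.4},
]
head = max(nodes, key=cluster_head_weight)
```

A well-connected, slow-moving node with plenty of battery wins the election, which matches the intuition that cluster heads should be stable and long-lived.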
Funding (heart disease prediction study): supported by the Competitive Research Fund of the University of Aizu, Japan (Grant No. P-13).
Funding (L3B ensemble learning study): supported by the National High Technology Research and Development Program (863) of China (No. 2007AA01Z456) and the National Natural Science Foundation of China (No. 60703030).
Funding (FWNAF ECC study): supported by the National Natural Science Foundation of China (60373109), the Ministry of Science and Technology of China, and the National Commercial Cryptography Application Technology Architecture and Application Demonstration Project (2008BAA22B02).
Funding (WLAN localization study): supported by the Shanghai Commission of Science and Technology (Grant No. 05SN07114).
Funding (moving object detection study): supported by the National Natural Science Foundation of China (Grant No. 60072029).
文摘A new real-time algorithm is proposed in this paperfor detecting moving object in color image sequencestaken from stationary cameras.This algorithm combines a temporal difference with an adaptive background subtraction where the combination is novel.Ⅷ1en changes OCCUr.the background is automatically adapted to suit the new conditions.Forthe background model,a new model is proposed with each frame decomposed into regions and the model is based not only upon single pixel but also on the characteristic of a region.The hybrid presentationincludes a model for single pixel information and a model for the pixel’s neighboring area information.This new model of background can both improve the accuracy of segmentation due to that spatialinformation is taken into account and salientl5r speed up the processing procedure because porlion of neighboring pixel call be selected into modeling.The algorithm was successfully used in a video surveillance systern and the experiment result showsit call obtain a clearer foreground than the singleframe difference or background subtraction method.
Abstract: The eigenface method that uses principal component analysis (PCA) has been the standard and popular method used in face recognition. This paper presents a PCA-memetic algorithm (PCA-MA) approach for feature selection. PCA has been extended by MAs, where the former was used for feature extraction/dimensionality reduction and the latter was exploited for feature selection. Simulations were performed over the ORL and YaleB face databases using the Euclidean norm as the classifier. It was found that, as far as the recognition rate is concerned, PCA-MA completely outperforms the eigenface method. We compared the performance of PCA extended with a genetic algorithm (PCA-GA) with our proposed PCA-MA method. The results also clearly established the supremacy of the PCA-MA method over the PCA-GA method. We further extended the linear discriminant analysis (LDA) and kernel principal component analysis (KPCA) approaches with the MA and observed significant improvement in recognition rate with fewer features. This paper also compares the performance of the PCA-MA, LDA-MA and KPCA-MA approaches.
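The PCA feature-extraction/dimensionality-reduction stage can be sketched as below; the SVD-based projection with a variance-retention threshold and the toy rank-2 data are illustrative, and the memetic feature-selection step is omitted:

```python
import numpy as np

def pca_reduce(X, var_keep=0.95):
    """Project X onto the fewest principal components that retain at
    least var_keep of the total variance (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative variance ratio
    k = int(np.searchsorted(ratio, var_keep)) + 1
    return Xc @ Vt[:k].T, k

# Toy data lying exactly in a 2-D subspace of a 3-D feature space.
X = np.array([[1.0, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]])
Z, k = pca_reduce(X, var_keep=0.95)
print(k)  # → 2
```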
基金supported by the National Natural Science Foundation of China(6167309361370152)the Science and Technology Project of Shenyang(F16-205-1-01)
Abstract: In this paper, a new partial transmit sequence (PTS) scheme with low computational complexity is proposed to address the high computational complexity of the conventional PTS method. By analyzing the relationship of candidate sequences in the PTS method under the interleaved partition method, it is discovered that some candidate sequences generated by phase factor sequences have the same peak-to-average power ratio (PAPR). Hence, the phase factor sequences can be optimized to reduce their search times. Then, the computational process of generating candidate sequences can be simplified by improving the utilization of data and minimizing the number of complex multiplications. The performance analysis shows that, compared with the conventional PTS scheme, the proposed approach significantly decreases the computational complexity with no loss of PAPR performance.
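The PAPR metric that PTS schemes seek to minimize can be computed as follows; the all-ones frequency input is only a worst-case illustration (all subcarrier energy collapses into one time sample), not part of the proposed scheme:

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR of one OFDM symbol: peak instantaneous power over mean
    power of the time-domain signal, expressed in dB."""
    x = np.fft.ifft(freq_symbols)      # time-domain OFDM signal
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Identical phases on all 64 subcarriers give the worst-case PAPR,
# 10*log10(N) dB; PTS searches phase factor sequences to avoid this.
n = 64
worst = papr_db(np.ones(n))
print(round(worst, 2))  # → 18.06
```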
Abstract: Multimedia is one of the important communication channels for mankind. Due to the advancement of technology, a vast array of multimedia data is available today. This has resulted in an obvious need for techniques to retrieve these data. This paper gives an overview of an ontology-based image retrieval system for the Asteroideae flower family domain. In order to reduce the semantic gap between the low-level visual features of an image and the high-level domain knowledge, we have incorporated the concept of a multi-modal image ontology. The created Asteroideae flower domain-specific ontology thus holds knowledge about both the domain and the visual features. The visual features used to define the ontology are prevalent color, basic intrinsic pattern, and contour gradient. In prevalent color extraction, the most dominant color in an image is identified and indexed. Basic intrinsic patterns are used to determine the texture pattern of a particular flower. The contour gradients provide information on the image edges with respect to the image base. These feature values are embedded in the ontology at appropriate slots with respect to the domain knowledge. This paper also defines some of the query axioms used to retrieve appropriate information from the created ontology. This ontology can be used for an image retrieval system in the semantic web.
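The prevalent-color indexing step might look like the following sketch; the quantization granularity and the bin-center representative color are assumptions, not the paper's exact extractor:

```python
from collections import Counter

def prevalent_color(pixels, bins=4):
    """Most dominant color after coarse quantization of RGB triples.

    Quantizing each channel to a few bins makes near-identical shades
    count as the same color before the majority vote."""
    step = 256 // bins
    quantized = [(r // step, g // step, b // step) for r, g, b in pixels]
    (qr, qg, qb), _ = Counter(quantized).most_common(1)[0]
    # Return the center of the winning bin as a representative color.
    return (qr * step + step // 2, qg * step + step // 2, qb * step + step // 2)

# Three reddish pixels and one blue pixel: the red bin wins.
print(prevalent_color([(250, 10, 10), (240, 20, 5), (245, 0, 0), (0, 0, 255)]))
# → (224, 32, 32)
```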
Fund: Project (2012R1A1A2042625) supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education
Abstract: Research on human emotions has started to address the psychological aspects of human nature and has advanced to the point of designing various models that represent them quantitatively and systematically. Based on these findings, a method is suggested for emotional space formation and emotional inference that enhances the quality and maximizes the reality of emotion-based personalized services. In consideration of the subjective tendencies of individuals, AHP was adopted for the quantitative evaluation of human emotions, based on which an emotional space remodeling method is suggested with reference to the emotional models of Thayer and Plutchik, which take into account personal emotions. In addition, Sugeno fuzzy inference, fuzzy measures, and the Choquet integral were adopted for emotional inference in the remodeled personalized emotional space model. Its performance was evaluated through an experiment. For the evaluation of emotional similarity, fourteen cases with inferred-emotion evaluation values of 4.0 or higher were analyzed through case studies of 17 kinds of emotional inference methods. Matching results per inference method are confirmed in ten cases, accounting for 71%. It is also found that the remaining two cases are inferred as adjoining emotions in the same section. In this manner, the similarity of the inference results is verified.
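The discrete Choquet integral used in the inference stage can be sketched as follows; the two criterion names and the fuzzy measure values are hypothetical, chosen only to show the mechanics:

```python
def choquet(values, mu):
    """Discrete Choquet integral of values {criterion: score} with
    respect to a fuzzy measure mu mapping frozensets of criteria to
    [0, 1]: sum the score increments weighted by the measure of the
    set of criteria still at or above that level."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(values)
    for name, v in items:
        total += (v - prev) * mu[frozenset(remaining)]
        prev = v
        remaining.discard(name)
    return total

# Hypothetical two-criteria example; mu need not be additive.
mu = {
    frozenset({"valence", "arousal"}): 1.0,
    frozenset({"valence"}): 0.6,
    frozenset({"arousal"}): 0.3,
    frozenset(): 0.0,
}
print(choquet({"valence": 0.8, "arousal": 0.5}, mu))
```

Because `mu` can reward or penalize criteria jointly rather than additively, the Choquet integral captures interactions between criteria that a plain weighted average cannot.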
Abstract: Sarcasm is a type of sentiment where people express their negative feelings using positive or intensified positive words in text. While speaking, people often use heavy tonal stress and certain gestural clues, such as rolling of the eyes or hand movement, to signal sarcasm. In textual data, these tonal and gestural clues are missing, making sarcasm detection very difficult for an average human. Due to these challenges, researchers have shown interest in sarcasm detection in social media text, especially in tweets. The rapid growth in tweet volume and its analysis pose major challenges. In this paper, we propose a Hadoop-based framework that captures real-time tweets and processes them with a set of algorithms that identify sarcastic sentiment effectively. We observe that the elapsed time for analyzing and processing under the Hadoop-based framework significantly outperforms the conventional methods and is better suited for real-time streaming tweets.
Abstract: Shannon observed the relation between information entropy and the Maxwell's demon experiment to arrive at his information entropy formula. Since then, Shannon's entropy formula has been widely used to measure information leakage in imperative programs. In the present work, however, our aim is to go in the reverse direction and try to find possible Maxwell's demon experimental setups for contemporary practical imperative programs in which variations of Shannon's entropy formula have been applied to measure information leakage. To establish the relation between the second law of thermodynamics and the quantitative analysis of information leakage, the present work models contemporary variations of imperative programs in terms of the Maxwell's demon experimental setup. Five contemporary variations of imperative programs related to information quantification are identified: (i) information leakage in an imperative program, (ii) an imperative multi-threaded program, (iii) point-to-point leakage in an imperative program, (iv) an imperative program with infinite observation, and (v) an imperative program in an SOA-based environment. For these variations, the minimal work required by an attacker to gain the secret is also calculated using the historical Maxwell's demon experiment. To model the experimental setup of Maxwell's demon, the non-interference security policy is used. In the present work, imperative programs with one-bit secret information have been considered to avoid complexity. The findings of the present work, drawn from the history of physics, can be utilized in many areas related to the information flow of physical computing, nano-computing, quantum computing, biological computing, energy dissipation in computing, and computing power analysis.
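The one-bit leakage underlying the Maxwell's-demon analogy reduces to a Shannon entropy difference; this sketch only illustrates the entropy arithmetic, not the paper's program models:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A one-bit uniform secret carries exactly one bit of entropy. If the
# attacker's observation fully determines the secret, the leakage is
# H_before - H_after = 1 bit; by Landauer's principle, acquiring that
# bit costs the demon at least kT ln 2 joules of work.
h_before = shannon_entropy([0.5, 0.5])
h_after = shannon_entropy([1.0])   # secret fully determined
leakage = h_before - h_after
print(leakage)  # → 1.0
```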
Abstract: In heterogeneous wireless sensor networks, the mobility of sensor nodes becomes essential in various applications. During node mobility, there are possibilities for a malicious node to become the cluster head or a cluster member. This can cause the cluster, or the whole network, to be controlled by malicious nodes. To offer a high level of security, the mobile sensor nodes need to be authenticated. Further, clustering of nodes improves scalability, energy-efficient routing, and data delivery. In this paper, we propose a cluster-based secure dynamic keying technique to authenticate the nodes during mobility. The nodes with high configuration are chosen as cluster heads based on a weight value estimated from parameters such as the node degree, average distance, node's average speed, and virtual battery power. The keys are dynamically generated and used for providing security. Even if the keys are compromised by attackers, they are not able to use previous keys to cheat or misuse the authenticated nodes. In addition, a bidirectional malicious node detection technique is employed, which eliminates malicious nodes from the network. Simulations show that the proposed technique provides efficient security with reduced energy consumption during node mobility.
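A weighted-sum cluster-head score of the kind described could be sketched as below; the weight values and sign conventions are assumptions for illustration, not the paper's formula:

```python
def cluster_head_weight(degree, avg_distance, avg_speed, battery,
                        w=(0.3, 0.2, 0.2, 0.3)):
    """Illustrative cluster-head election score: reward high node
    degree and virtual battery power, penalize large average distance
    and high average speed (mobile nodes make poor cluster heads)."""
    w1, w2, w3, w4 = w
    return w1 * degree + w4 * battery - w2 * avg_distance - w3 * avg_speed

# A well-connected, well-powered, slow node outscores a fast low-power one.
stable = cluster_head_weight(degree=8, avg_distance=2.0, avg_speed=0.5, battery=9)
mobile = cluster_head_weight(degree=3, avg_distance=6.0, avg_speed=5.0, battery=2)
print(stable > mobile)  # → True
```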