Funding: The authors would like to thank the Biometrics Security Laboratory of the University of Toronto for providing the Transient Evoked Otoacoustic Emissions (TEOAE) dataset.
Abstract: Biometrics, which has become integrated with our daily lives, could fall prey to falsification attacks, leading to security concerns. In our paper, we use Transient Evoked Otoacoustic Emissions (TEOAE), which are generated by the human cochlea in response to an external sound stimulus, as a biometric modality. TEOAE are robust to falsification attacks, as the uniqueness of an individual's inner ear cannot be impersonated. In this study, we use both the raw 1D TEOAE signals and their 2D time-frequency representations obtained with the Continuous Wavelet Transform (CWT). We use 1D and 2D Convolutional Neural Networks (CNN) for the former and the latter, respectively, to derive feature maps. Lower-dimensional feature maps are then obtained using Principal Component Analysis (PCA) and serve as features for building classifiers with machine learning techniques for the task of person identification. t-SNE plots of these feature maps show that they discriminate well among the subjects. Among the various architectures explored, we achieve best-performing accuracies of 98.95% and 100% using the feature maps of the 1D-CNN and 2D-CNN, respectively, with the latter improving upon all earlier works. This performance makes TEOAE-based person identification systems deployable in real-world situations, with the added advantage of robustness to falsification attacks.
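The abstract describes a pipeline of CWT scalogram, CNN feature maps, PCA reduction, and a classifier. The following is a minimal sketch of that pipeline under stated assumptions: the TEOAE recordings are replaced by synthetic signals, the wavelet ("morl"), scale range, PCA dimensionality, and SVM classifier are illustrative choices, and the CNN feature-extraction stage is omitted (scalograms are flattened directly), so this is not the authors' exact configuration.

```python
# Hypothetical sketch of the 2D branch: CWT scalogram -> PCA -> classifier.
# Dataset shape, scales, wavelet, and classifier are assumptions for illustration.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, recs_per_subject, n_samples = 5, 20, 512   # assumed dataset shape

# Synthetic stand-ins for TEOAE recordings: one noisy template per subject.
templates = rng.standard_normal((n_subjects, n_samples))
X_raw = np.vstack([t + 0.3 * rng.standard_normal((recs_per_subject, n_samples))
                   for t in templates])
y = np.repeat(np.arange(n_subjects), recs_per_subject)

# 2D time-frequency representation via the Continuous Wavelet Transform.
scales = np.arange(1, 65)                               # assumed scale range
scalograms = np.stack([np.abs(pywt.cwt(sig, scales, "morl")[0]) for sig in X_raw])

# In the paper a 2D CNN produces feature maps from these scalograms; here the
# scalograms are flattened directly before PCA to keep the sketch short.
X_flat = scalograms.reshape(len(X_raw), -1)
X_feat = PCA(n_components=20).fit_transform(X_flat)

# Classifier for person identification on the reduced features.
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, test_size=0.25,
                                           stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```

The same reduced features could be passed to sklearn.manifold.TSNE for the kind of subject-separability visualization mentioned in the abstract.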