Biometric authentication provides a reliable, user-specific approach for identity verification, significantly enhancing access control and security against unauthorized intrusions in cybersecurity. Unimodal biometric systems that rely on either face or voice recognition encounter several challenges, including inconsistent data quality, environmental noise, and susceptibility to spoofing attacks. To address these limitations, this research introduces a robust multi-modal biometric recognition framework, namely the Quantum-Enhanced Biometric Fusion Network. The proposed model strengthens security and boosts recognition accuracy through the fusion of facial and voice features. Furthermore, the model employs advanced pre-processing techniques to generate high-quality facial images and voice recordings, enabling more efficient face and voice recognition. Augmentation techniques are deployed to enhance model performance by enriching the training dataset with diverse and representative samples. Local facial features are extracted using advanced neural methods, while voice features are extracted using a Pyramid-1D Wavelet Convolutional Bidirectional Network, which effectively captures speech dynamics. The Quantum Residual Network encodes facial features into quantum states, enabling powerful quantum-enhanced representations. These normalized feature sets are fused using an early fusion strategy that preserves complementary spatial-temporal characteristics. The experimental validation is conducted using a biometric audio and video dataset, with comprehensive evaluations including ablation and statistical analyses. The experimental analyses confirm that the proposed model attains superior performance, outperforming existing biometric methods with an average accuracy of 98.99%. The proposed model improves recognition robustness, making it an efficient multimodal solution for cybersecurity applications.
Personalized health services are of paramount importance for the treatment and prevention of cardiorespiratory diseases, such as hypertension. The assessment of cardiorespiratory function and biometric identification (ID) is crucial for the effectiveness of such personalized health services. To effectively and accurately monitor pulse wave signals, thus achieving the assessment of cardiorespiratory function, a wearable photonic smart wristband based on an all-polymer sensing unit (All-PSU) is proposed. The smart wristband enables the assessment of cardiorespiratory function by continuously monitoring respiratory rate (RR), heart rate (HR), and blood pressure (BP). Furthermore, it can be utilized for biometric ID purposes. Through the analysis of pulse wave signals using power spectral density (PSD), accurate monitoring of RR and HR is achieved. Additionally, utilizing peak detection algorithms for feature extraction from pulse signals and subsequently employing a variety of machine learning methods, accurate BP monitoring and biometric ID have been realized. For biometric ID, the accuracy rate is 98.55%. Aiming to monitor RR, HR, BP, and ID, our solution demonstrates advantages in integration, functionality, and monitoring precision. These enhancements may contribute to the development of personalized health services aimed at the treatment and prevention of cardiorespiratory diseases.
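As a rough illustration of the PSD-based step described above, the sketch below estimates RR and HR by locating spectral peaks of a sampled pulse waveform with SciPy's Welch estimator; the band limits, function names, and the synthetic signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def estimate_rates(pulse, fs):
    # Welch PSD of the pulse wave; peaks in the respiratory band (0.1-0.5 Hz)
    # and the cardiac band (0.7-3.0 Hz) give RR and HR in cycles per minute.
    freqs, psd = welch(pulse, fs=fs, nperseg=min(len(pulse), 8 * fs))

    def peak_in_band(lo, hi):
        band = (freqs >= lo) & (freqs <= hi)
        return freqs[band][np.argmax(psd[band])]

    return peak_in_band(0.1, 0.5) * 60.0, peak_in_band(0.7, 3.0) * 60.0

# Synthetic pulse: 1.25 Hz cardiac component plus 0.25 Hz respiratory baseline wander
fs = 100
t = np.arange(0, 60, 1 / fs)
pulse = np.sin(2 * np.pi * 1.25 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)
rr, hr = estimate_rates(pulse, fs)
print(rr, hr)   # about 15 breaths/min and 75 beats/min
```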
This study proposes a system for biometric access control utilising the improved Cultural Chicken Swarm Optimization (CCSO) technique. This approach mitigates the limitations of conventional Chicken Swarm Optimization (CSO), especially in dealing with larger dimensions due to diversity loss during solution space exploration. Our experimentation involved 600 sample images encompassing facial, iris, and fingerprint data, collected from 200 students at Ladoke Akintola University of Technology (LAUTECH), Ogbomoso. The results demonstrate the remarkable effectiveness of CCSO, yielding accuracy rates of 90.42%, 91.67%, and 91.25% within 54.77, 27.35, and 113.92 s for facial, fingerprint, and iris biometrics, respectively. These outcomes significantly outperform those achieved by the conventional CSO technique, which produced accuracy rates of 82.92%, 86.25%, and 84.58% at 92.57, 63.96, and 163.94 s for the same biometric modalities. The study's findings reveal that CCSO, through its integration of Cultural Algorithm (CA) operators into CSO, not only enhances algorithm performance, exhibiting computational efficiency and superior accuracy, but also carries broader implications beyond biometric systems. This innovation offers practical benefits in terms of security enhancement, operational efficiency, and adaptability across diverse user populations, shaping more effective and resource-efficient access control systems with real-world applicability.
AIM: To assess the corneal biometric parameters and endothelial cell characteristics in microcornea patients, and to explore their correlations. METHODS: This cross-sectional study included 28 patients with microcornea and uveal coloboma (MCUC), 13 patients with microcornea without coloboma (MCNC), and 30 age-matched healthy individuals (the control group). Corneal biometric parameters such as axial length (AL), anterior chamber depth (ACD), and white-to-white corneal diameter (WTW) were measured using the IOL Master. The corneal endothelial cell density (ECD), percentage of hexagonal cells (6A), average cell area (AVE), maximum cell area (MAX), minimum cell area (MIN), cell area standard deviation (SD), and coefficient of variation (CV) were collected by specular microscopy. RESULTS: This study included MCUC and MCNC patients with age- and sex-matched controls. All patients exhibited significantly reduced WTW (MCUC: 8.51±0.71 mm; MCNC: 9.08±0.42 mm) and worse logMAR BCVA (MCUC 0.62±0.43; MCNC 0.46±0.28) compared to controls (both P<0.001). The ECD was 3106.32±336.80 cells/mm² in the MCUC group and 2906.92±323.53 cells/mm² in the MCNC group, both significantly higher than in the control group (2647.43±203.06 cells/mm², P<0.05). In contrast, the CV, AVE, SD, and ACD in the MCUC and MCNC groups were significantly lower compared to controls (P<0.01). In patients with microcornea, the WTW was negatively correlated with the ECD and 6A, but positively correlated with the CV, MAX, AVE, and SD. The ACD was negatively correlated with the ECD, but positively correlated with the AVE. CONCLUSION: The corneal ECD and 6A are increased, while the CV is decreased in patients with microcornea, particularly in those with concomitant uveal coloboma. The ECD and morphology demonstrate close correlations with the WTW and ACD.
Biometric template protection is essential for finger-based authentication systems, as template tampering and adversarial attacks threaten the security. This paper proposes a DCT-based fragile watermarking scheme incorporating AI-based tamper detection to improve the integrity and robustness of finger authentication. The system was tested against NIST SD4 and Anguli fingerprint datasets, wherein 10,000 watermarked fingerprints were employed for training. The designed approach recorded a tamper detection rate of 98.3%, performing 3–6% better than current DCT, SVD, and DWT-based watermarking approaches. The false positive rate (≤1.2%) and false negative rate (≤1.5%) were much lower compared to previous research, which maintained high reliability for template change detection. The system showed real-time performance, averaging 12–18 ms processing time per template, and is thus suitable for real-world biometric authentication scenarios. Quality analysis of fingerprints indicated that NFIQ scores were enhanced from 2.07 to 1.81, reflecting improved minutiae clarity and ridge structure preservation. The approach also exhibited strong resistance to compression and noise distortions, with the improvements in PSNR being 2 dB (JPEG compression Q=80) and the SSIM values rising by 3%–5% under noise attacks. Comparative assessment demonstrated that training with NIST SD4 data greatly improved the ridge continuity and quality of fingerprints, resulting in better match scores (260–295) when tested against Bozorth3. Smaller batch sizes (batch=2) also resulted in improved ridge clarity, whereas larger batch sizes (batch=8) resulted in distortions. The DCNN-based tamper detection model supported real-time classification, which greatly minimized template exposure to adversarial attacks and synthetic fingerprint forgeries. Results demonstrate that fragile watermarking with AI indeed greatly enhances fingerprint security, providing privacy-preserving biometric authentication with high robustness, accuracy, and computational efficiency.
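To make the fragile-watermarking idea concrete, here is a minimal sketch of embedding and extracting one watermark bit per 8×8 block by quantization index modulation of a mid-frequency DCT coefficient; the coefficient position, quantization step, and the random block are assumptions made for illustration, and the paper's AI-based tamper detector is not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

STEP, POS = 8.0, (3, 2)   # illustrative quantization step and coefficient position

def embed_bit(block, bit):
    # Snap one mid-frequency DCT coefficient to an even (bit 0) or odd (bit 1)
    # multiple of STEP/2, then invert the transform.
    coeffs = dctn(block, norm="ortho")
    coeffs[POS] = np.round(coeffs[POS] / STEP) * STEP + (STEP / 2 if bit else 0.0)
    return idctn(coeffs, norm="ortho")

def extract_bit(block):
    return int(np.round(dctn(block, norm="ortho")[POS] / (STEP / 2))) % 2

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
watermarked = embed_bit(block, 1)
print(extract_bit(watermarked))   # 1; a mismatch with the expected bit flags tampering
```

In a fragile scheme of this kind, edits to a block tend to perturb the embedded coefficient so that the extracted bit no longer matches the expected watermark pattern, which is the signal a tamper detector looks for.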
Deep learning-based systems for finger vein recognition have gained rising attention in recent years due to improved efficiency and enhanced security. The performance of existing CNN-based methods is limited by the poor generalization of learned features and the deficiency of finger vein image training data. Considering the concerns of existing methods, in this work, a simplified deep transfer learning-based framework for finger-vein recognition is developed using an EfficientNet model of deep learning with a self-attention mechanism. Data augmentation using various geometrical methods is employed to address the shortage of training data required for a deep learning model. The proposed model is tested using K-fold cross-validation on three publicly available datasets: HKPU, FVUSM, and SDUMLA. Also, the developed network is compared with other modern deep nets to check its effectiveness. In addition, a comparison of the proposed method with other existing finger vein recognition (FVR) methods is also done. The experimental results exhibited superior recognition accuracy of the proposed method compared to other existing methods. In addition, the developed method proves to be more effective and less sophisticated at extracting robust features. The proposed EffAttenNet achieves an accuracy of 98.14% on HKPU, 99.03% on FVUSM, and 99.50% on SDUMLA databases.
The human ear has been substantiated as a viable nonintrusive biometric modality for identification or verification. Among many feasible techniques for ear biometric recognition, convolutional neural network (CNN) models have recently offered high-performance and reliable systems. However, their performance can still be further improved using the capabilities of soft biometrics, a research question yet to be investigated. This research aims to augment the traditional CNN-based ear recognition performance by adding increased discriminatory ear soft biometric traits. It proposes a novel framework of augmented ear identification/verification using a group of discriminative categorical soft biometrics and deriving new, more perceptive, comparative soft biometrics for feature-level fusion with hard biometric deep features. It conducts several identification and verification experiments for performance evaluation, analysis, and comparison while varying ear image datasets, hard biometric deep-feature extractors, soft biometric augmentation methods, and classifiers used. The experimental work yields promising results, reaching up to 99.94% accuracy and up to 14% improvement using the AMI and AMIC datasets, along with their corresponding soft biometric label data. The results confirm the proposed augmented approaches' superiority over their standard counterparts and emphasize the robustness of the new ear comparative soft biometrics over their categorical peers.
The rapid growth of smart technologies and services has intensified the challenges surrounding identity authentication techniques. Biometric credentials are increasingly being used for verification due to their advantages over traditional methods, making it crucial to safeguard the privacy of people's biometric data in various scenarios. This paper offers an in-depth exploration of privacy-preserving techniques and potential threats to biometric systems. It proposes a novel and thorough taxonomy survey for privacy-preserving techniques, as well as a systematic framework for categorizing the field's existing literature. We review the state-of-the-art methods and address their advantages and limitations in the context of various biometric modalities, such as face, fingerprint, and eye detection. The survey encompasses various categories of privacy-preserving mechanisms and examines the trade-offs between security, privacy, and recognition performance, as well as the open issues and future research directions. It aims to provide researchers, professionals, and decision-makers with a thorough understanding of the existing privacy-preserving solutions in biometric recognition systems and serves as the foundation for the development of more secure and privacy-preserving biometric technologies.
With the rapid spread of the coronavirus epidemic all over the world, educational and other institutions are heading towards digitization. In the era of digitization, identifying educational e-platform users using ear- and iris-based multi-modal biometric systems constitutes an urgent and interesting research topic to preserve enterprise security, particularly with wearing a face mask as a precaution against the new coronavirus epidemic. This study proposes a multimodal system based on ear and iris biometrics at the feature fusion level to identify students in electronic examinations (E-exams) during the COVID-19 pandemic. The proposed system comprises four steps. The first step is image preprocessing, which includes enhancing, segmenting, and extracting the regions of interest. The second step is feature extraction, where the Haralick texture and shape methods are used to extract the features of ear images, whereas Tamura texture and color histogram methods are used to extract the features of iris images. The third step is feature fusion, where the extracted features of the ear and iris images are combined into one sequential fused vector. The fourth step is the matching, which is executed using the City Block Distance (CTB) for student identification. The findings of the study indicate that the system's recognition accuracy is 97%, with a 2% False Acceptance Rate (FAR), a 4% False Rejection Rate (FRR), a 94% Correct Recognition Rate (CRR), and a 96% Genuine Acceptance Rate (GAR). In addition, the proposed recognition system achieved higher accuracy than other related systems.
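A minimal sketch of the fusion (step three) and City Block Distance matching (step four) stages, assuming the ear and iris feature vectors have already been extracted; the normalization choice, gallery layout, and synthetic vectors are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def fuse(ear_feats, iris_feats):
    # Min-max normalize each modality, then concatenate into one fused vector.
    def minmax(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (np.ptp(v) + 1e-12)
    return np.concatenate([minmax(ear_feats), minmax(iris_feats)])

def identify(probe, gallery):
    # gallery: dict mapping student id -> enrolled fused vector.
    # City Block (L1) distance picks the closest enrolled template.
    distances = {sid: np.sum(np.abs(probe - tpl)) for sid, tpl in gallery.items()}
    return min(distances, key=distances.get)

rng = np.random.default_rng(1)
gallery = {f"student_{i}": fuse(rng.random(32), rng.random(64)) for i in range(5)}
probe = gallery["student_3"] + rng.normal(0, 0.01, 96)   # noisy re-capture of student_3
print(identify(probe, gallery))   # student_3
```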
The rise of the Internet and identity authentication systems has brought convenience to people's lives but has also introduced the potential risk of privacy leaks. Existing biometric authentication systems based on explicit and static features bear the risk of being attacked by mimicked data. This work proposes a highly efficient biometric authentication system based on transient eye blink signals that are precisely captured by a neuromorphic vision sensor with microsecond-level temporal resolution. The neuromorphic vision sensor only transmits the local pixel-level changes induced by the eye blinks when they occur, which leads to advantageous characteristics such as an ultra-low latency response. We first propose a set of effective biometric features describing the motion, speed, energy and frequency signal of eye blinks based on the microsecond temporal resolution of event densities. We then train the ensemble model and non-ensemble model with our Neuro Biometric dataset for biometrics authentication. The experiments show that our system is able to identify and verify the subjects with the ensemble model at an accuracy of 0.948 and with the non-ensemble model at an accuracy of 0.925. The low false positive rates (about 0.002) and the highly dynamic features are not only hard to reproduce but also avoid recording visible characteristics of a user's appearance. The proposed system sheds light on a new path towards safer authentication using neuromorphic vision sensors.
Biometric recognition refers to the process of recognizing a person's identity using physiological or behavioral modalities, such as face, voice, fingerprint, gait, etc. Such biometric modalities are mostly used in recognition tasks separately as in unimodal systems, or jointly with two or more as in multimodal systems. However, multimodal systems can usually enhance the recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels. Despite this enhancement, in real-life applications some factors degrade multimodal systems' performance, such as occlusion, face poses, and noise in voice data. In this paper, we propose two algorithms that effectively apply dynamic fusion at feature level based on the data quality of multimodal biometrics. The proposed algorithms attempt to minimize the negative influence of confusing and low-quality features by either exclusion or weight reduction to achieve better recognition performance. The proposed dynamic fusion was achieved using face and voice biometrics, where face features were extracted using principal component analysis (PCA) and Gabor filters separately, whilst voice features were extracted using Mel-Frequency Cepstral Coefficients (MFCCs). Here, the facial data quality assessment of face images is mainly based on the existence of occlusion, whereas the assessment of voice data quality is substantially based on the calculation of signal to noise ratio (SNR) as per the existence of noise. To evaluate the performance of the proposed algorithms, several experiments were conducted using two combinations of three different databases: the AR database and the extended Yale Face Database B for face images, in addition to the VOiCES database for voice data. The obtained results show that both proposed dynamic fusion algorithms attain improved performance and offer more advantages in identification and verification over not only the standard unimodal algorithms but also the multimodal algorithms using standard fusion methods.
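The weight-reduction branch of the dynamic fusion idea can be sketched as follows: given an SNR estimate for the voice sample, the voice features are scaled down or excluded before concatenation with the face features. The SNR thresholds and the linear weighting below are assumptions made for illustration only, not the paper's exact rule.

```python
import numpy as np

def dynamic_fuse(face_feats, voice_feats, snr_db, snr_min=5.0, snr_max=30.0):
    # Map the voice SNR (dB) to a weight in [0, 1]; below snr_min the voice
    # modality is excluded entirely, otherwise its features are down-weighted.
    w = np.clip((snr_db - snr_min) / (snr_max - snr_min), 0.0, 1.0)
    face_feats = np.asarray(face_feats, dtype=float)
    if w == 0.0:
        return face_feats                                    # exclusion
    return np.concatenate([face_feats, w * np.asarray(voice_feats, dtype=float)])

rng = np.random.default_rng(0)
face, voice = rng.random(128), rng.random(40)
print(dynamic_fuse(face, voice, snr_db=28.0).shape)   # (168,) -> voice kept, lightly weighted
print(dynamic_fuse(face, voice, snr_db=3.0).shape)    # (128,) -> noisy voice excluded
```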
As multimedia data sharing increases, data security in mobile devices and its mechanisms can be seen as critical. Biometrics combines the physiological and behavioral qualities of an individual to validate their identity in real-time. Humans incorporate physiological attributes like a fingerprint, face, iris, palm print, finger knuckle print, and Deoxyribonucleic Acid (DNA), and behavioral qualities like walk, voice, mark, or keystroke. The main goal of this paper is to design a robust framework for automatic face recognition. Scale Invariant Feature Transform (SIFT) and Speeded-up Robust Features (SURF) are employed for face recognition. Also, we propose a modified Gabor Wavelet Transform for SIFT/SURF (GWT-SIFT/GWT-SURF) to increase the recognition accuracy of human faces. The proposed scheme is composed of three steps. First, the entropy of the image is removed using the Discrete Wavelet Transform (DWT). Second, the computational complexity of the SIFT/SURF is reduced. Third, the authentication accuracy is increased by the proposed GWT-SIFT/GWT-SURF algorithm. A comparative analysis of the proposed scheme is done on the real-time Olivetti Research Laboratory (ORL) and Poznan University of Technology (PUT) databases. Compared to the traditional SIFT/SURF methods, the GWT-SIFT achieves the better accuracy of 99.32%, while the GWT-SURF is the faster approach, with a run time of 3.4 seconds for 100 images versus 4.9 seconds for the GWT-SIFT.
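A rough sketch of the first and third steps, assuming grayscale input and standard libraries: a DWT soft-threshold denoising pass with PyWavelets followed by plain SIFT description in OpenCV. The Gabor-modified descriptors (GWT-SIFT/GWT-SURF) themselves are not reproduced, and the wavelet, level, threshold, and file name are illustrative assumptions.

```python
import cv2
import numpy as np
import pywt

def dwt_denoise(gray, wavelet="db2", level=2, thresh=10.0):
    # Soft-threshold the detail sub-bands of a 2-D wavelet decomposition and reconstruct.
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in lvl) for lvl in details]
    rec = pywt.waverec2([approx] + details, wavelet)
    return np.clip(rec, 0, 255).astype(np.uint8)

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
sift = cv2.SIFT_create()                              # requires OpenCV >= 4.4
keypoints, descriptors = sift.detectAndCompute(dwt_denoise(img), None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```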
Human recognition technology based on biometrics has become a fundamental requirement in all aspects of life due to increased concerns about security and privacy issues. Therefore, biometric systems have emerged as a technology with the capability to identify or authenticate individuals based on their physiological and behavioral characteristics. Among different viable biometric modalities, the human ear structure can offer unique and valuable discriminative characteristics for human recognition systems. In recent years, most existing traditional ear recognition systems have been designed based on computer vision models and have achieved successful results. Nevertheless, such traditional models can be sensitive to several unconstrained environmental factors. As such, some traits may be difficult to extract automatically but can still be semantically perceived as soft biometrics. This research proposes a new group of semantic features to be used as soft ear biometrics, mainly inspired by conventional descriptive traits used naturally by humans when identifying or describing each other. Hence, the research study is focused on the fusion of the soft ear biometric traits with traditional (hard) ear biometric features to investigate their validity and efficacy in augmenting human identification performance. The proposed framework has two subsystems: first, a computer vision-based subsystem, extracting traditional (hard) ear biometric traits using principal component analysis (PCA) and local binary patterns (LBP), and second, a crowdsourcing-based subsystem, deriving semantic (soft) ear biometric traits. Several feature-level fusion experiments were conducted using the AMI database to evaluate the proposed algorithm's performance. The obtained results for both identification and verification showed that the proposed soft ear biometric information significantly improved the recognition performance of traditional ear biometrics, reaching up to 12% for LBP and 5% for PCA descriptors when fusing all three feature sets (PCA, LBP, and soft traits) using a k-nearest neighbors (KNN) classifier.
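As an outline of the hard/soft feature-level fusion, the sketch below concatenates PCA coefficients, an LBP histogram, and a (here synthetic) soft-trait vector per image and classifies with KNN; all data are random stand-ins for the AMI images and the crowdsourced soft labels, so the resulting score only demonstrates the pipeline, not the reported performance.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(60, 64, 64)).astype(np.uint8)   # fake ear images
labels = np.repeat(np.arange(10), 6)                                 # 10 subjects x 6 images
soft = rng.integers(0, 5, size=(60, 8)).astype(float)                # 8 categorical soft traits

def lbp_hist(img, P=8, R=1.0):
    # Uniform LBP codes summarized as a normalized histogram.
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

pca_feats = PCA(n_components=20, random_state=0).fit_transform(images.reshape(60, -1).astype(float))
lbp_feats = np.array([lbp_hist(im) for im in images])
fused = np.hstack([pca_feats, lbp_feats, soft])                      # feature-level fusion

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25, stratify=labels, random_state=0)
print(KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr).score(X_te, y_te))
```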
Biometrics technologies have been around for quite some time and many have been deployed for different applications all around the world, ranging from small companies' time and attendance systems to access control systems for nuclear facilities. Biometrics offer a reliable solution for the establishment of the distinctiveness of identity based on 'who an individual is', rather than what he or she knows or carries. Biometric systems automatically verify a person's identity based on his/her anatomical and behavioral characteristics. Biometric traits represent a strong and undeviating link between a person and his/her identity; these traits cannot be easily lost, forgotten, or faked, since biometric systems require the user to be present at the time of authentication. Some biometric systems are more reliable than others, yet none is entirely secure or accurate; all biometrics have their strengths and weaknesses. Although some of these systems have shown reliability and solidarity, work still has to be done to improve the quality of service they provide. Presented here are the available standing biometric systems, showing their strengths and weaknesses, and also emerging technologies which may have great benefits for security applications in the near future.
Cancelable biometrics are required in most remote access applications that need an authentication stage, such as the cloud and Internet of Things (IoT) networks. The objective of using cancelable biometrics is to save the original ones from hacking attempts. A generalized algorithm to generate cancelable templates that is applicable to both single and multiple biometrics is proposed in this paper to be considered for cloud and IoT applications. The original biometric is blurred with two co-prime operators. Hence, it can be recovered as the Greatest Common Divisor (GCD) between its two blurred versions. Minimal changes induced in the biometric image prior to processing with the co-prime operators prevent the recovery of the original biometric image through a GCD operation. Hence, the ability to change cancelable templates is guaranteed, since the owner of the biometric can pre-determine and manage the minimal change induced in the biometric image. Furthermore, we test the utility of the proposed algorithm in the single- and multi-biometric scenarios. The multi-biometric scenario depends on compressing face, fingerprint, iris, and palm print images, simultaneously, to generate the cancelable templates. Evaluation metrics such as the Equal Error Rate (EER) and Area under the Receiver Operating Characteristic curve (AROC) are considered. Simulation results on single- and multi-biometric scenarios show high AROC values up to 99.59%, and low EER values down to 0.04%.
Most user authentication mechanisms of cloud systems depend on the credentials approach in which a user submits his/her identity through a username and password. Unfortunately, this approach has many security problems because personal data can be stolen or recognized by hackers. This paper aims to present a cloud-based biometric authentication model (CBioAM) for improving and securing cloud services. The research study presents the verification and identification processes of the proposed cloud-based biometric authentication system (CBioAS), where the biometric samples of users are saved in database servers and the authentication process is implemented without loss of the users' information. The paper presents the performance evaluation of the proposed model in terms of three main characteristics including accuracy, sensitivity, and specificity. The research study introduces a novel algorithm called "Bio_Authen_as_a_Service" for implementing and evaluating the proposed model. The proposed system performs the biometric authentication process securely and preserves the privacy of user information. The experimental result was highly promising for securing cloud services using the proposed model. The experiments showed encouraging results with a performance average of 93.94%, an accuracy average of 96.15%, a sensitivity average of 87.69%, and a specificity average of 97.99%.
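For reference, the three evaluation characteristics quoted above can be computed from a binary confusion matrix (genuine = 1, impostor = 0) as in the short sketch below; the labels are made up purely to illustrate the formulas, not taken from the experiments.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])   # fabricated ground truth
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1, 1, 0])   # fabricated decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)          # true positive rate (genuine users accepted)
specificity = tn / (tn + fp)          # true negative rate (impostors rejected)
print(accuracy, sensitivity, specificity)   # 0.8 0.8 0.8
```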
In recent years, biometric sensors have become applicable for identifying important individual information and controlling access using various identifiers, including characteristics like a fingerprint, palm print, iris recognition, and so on. However, the precise identification of human features remains physically challenging, as a person's appearance and features vary over their lifetime. In response to these challenges, a novel Multimodal Biometric Feature Extraction (MBFE) model is proposed to extract the features from noisy sensor data using a modified Ranking-based Deep Convolution Neural Network (RDCNN). The proposed MBFE model enables feature extraction from different biometric images, including iris, palm print, and lip, where the images are preprocessed initially for further processing. The extracted features are validated after optimal extraction by the RDCNN by splitting the datasets to train the feature extraction model and then testing the model with different sets of input images. The simulation is performed in MATLAB to test the efficacy of the model over multimodal datasets, and the simulation results show that the proposed method achieves increased accuracy, precision, recall, and F1 score compared with existing deep learning feature extraction methods. The MBFE algorithm improves accuracy, precision, recall, and F1 score by 0.126%, 0.152%, 0.184%, and 0.38%, respectively, over the existing Back Propagation Neural Network (BPNN), Human Identification Using Wavelet Transform (HIUWT), Segmentation Methodology for Non-cooperative Recognition (SMNR), and Daugman Iris Localization Algorithm (DILA) feature extraction techniques.
AIM: To compare the differences and agreement of ocular biometric parameters in highly myopic eyes obtained by optical biometric measurement instruments, the OA-2000 and IOLMaster 500. METHODS: Totally, 90 patients(90...AIM: To compare the differences and agreement of ocular biometric parameters in highly myopic eyes obtained by optical biometric measurement instruments, the OA-2000 and IOLMaster 500. METHODS: Totally, 90 patients(90 eyes) were included. They were divided into high myopia group and control group. Ocular parameters, including axial length(AL), mean keratometry(Km), anterior chamber depth(ACD), and white to white(WTW), were obtained from the OA-2000 and IOLMaster 500. RESULTS: For the control group, we applied BlandAltman graphs to assess the 95% limits of agreement(LoA) for most parameters including AL, ACD, Km, and WTW(-0.24 to 0.29 mm,-0.22 to 0.45 mm,-0.39 to 0.31 D, and-0.90 to 0.86 mm, respectively). In high myopia patients, AL, ACD, Km values had wider 95% LoA(-0.34 to 0.32 mm,-0.36 to 0.34 mm,-0.57 to 0.47 D, respectively), except WTW(-0.80 to 0.68 mm). Differences were not statistically significant between these two instruments(P>0.05). CONCLUSION: Most parameters obtained by the OA-2000 and IOLMaster 500 are comparable, including the AL, ACD, and K values. Among them, the agreement of the high myopia patients is poor compared to the patients without high myopia.展开更多
Multiple ocular region segmentation plays an important role in different applications such as biometrics, liveness detection, healthcare, and gaze estimation. Typically, segmentation techniques focus on a single region of the eye at a time. Despite the number of obvious advantages, very limited research has focused on multiple regions of the eye. Similarly, accurate segmentation of multiple eye regions is necessary in challenging scenarios involving blur, ghost effects, low resolution, off-angles, and unusual glints. Currently, the available segmentation methods cannot address these constraints. In this paper, to address the accurate segmentation of multiple eye regions in unconstrained scenarios, a lightweight outer residual encoder-decoder network suitable for various sensor images is proposed. The proposed method can determine the true boundaries of the eye regions from inferior-quality images using the high-frequency information flow from the outer residual encoder-decoder deep convolutional neural network (called ORED-Net). Moreover, the proposed ORED-Net model does not achieve its performance by increasing complexity, the number of parameters, or network depth. The proposed network is considerably lighter than previous state-of-the-art models. Comprehensive experiments were performed, and optimal performance was achieved using the SBVPI and UBIRIS.v2 datasets containing images of the eye region. The proposed ORED-Net achieved mean intersection over union (mIoU) scores of 89.25 and 85.12 on the challenging SBVPI and UBIRIS.v2 datasets, respectively.
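The mIoU figures reported above are the per-class intersection-over-union averaged over classes; a minimal NumPy version for integer label maps is sketched below, with a synthetic prediction standing in for ORED-Net output.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # Intersection over union per class, averaged over classes present in either map.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

rng = np.random.default_rng(0)
target = rng.integers(0, 4, size=(128, 128))    # 4 hypothetical eye-region classes
pred = target.copy()
pred[rng.random(target.shape) < 0.1] = 0        # corrupt 10% of pixels to mimic errors
print(round(mean_iou(pred, target, num_classes=4), 3))
```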
文摘Biometric authentication provides a reliable,user-specific approach for identity verification,significantly enhancing access control and security against unauthorized intrusions in cybersecurity.Unimodal biometric systems that rely on either face or voice recognition encounter several challenges,including inconsistent data quality,environmental noise,and susceptibility to spoofing attacks.To address these limitations,this research introduces a robust multi-modal biometric recognition framework,namely Quantum-Enhanced Biometric Fusion Network.The proposed model strengthens security and boosts recognition accuracy through the fusion of facial and voice features.Furthermore,the model employs advanced pre-processing techniques to generate high-quality facial images and voice recordings,enabling more efficient face and voice recognition.Augmentation techniques are deployed to enhance model performance by enriching the training dataset with diverse and representative samples.The local features are extracted using advanced neural methods,while the voice features are extracted using a Pyramid-1D Wavelet Convolutional Bidirectional Network,which effectively captures speech dynamics.The Quantum Residual Network encodes facial features into quantum states,enabling powerful quantum-enhanced representations.These normalized feature sets are fused using an early fusion strategy that preserves complementary spatial-temporal characteristics.The experimental validation is conducted using a biometric audio and video dataset,with comprehensive evaluations including ablation and statistical analyses.The experimental analyses ensure that the proposed model attains superior performance,outperforming existing biometric methods with an average accuracy of 98.99%.The proposed model improves recognition robustness,making it an efficient multimodal solution for cybersecurity applications.
基金funded by the National Key R&D Program of China(2022YFE0140400)the National Natural Science Foundation of China(62405027, 62111530238, 62003046)+3 种基金Supporting project of major scientific research projects of Beijing Normal University at Zhuhai (ZHPT2023007)supported by the Tang Scholar of Beijing Normal Universityco-funded by the financial support of the European Union under the REFRESH-Research Excellence For REgion Sustainability and High-tech Industries project number CZ.10.03.01/00/22003/0000048 via the Operational Programme Just Transitionthe scope of the projects CICECO-Aveiro Institute of Materials, UIDB/50011/2020 (DOI 10.54499/UIDB/50011/2020), UIDP/50011/2020 (DOI 10.54499/UIDP/50011/2020) & LA/P/0006/2020 (DOI 10.54499/LA/P/0006/2020) financed by national funds through the FCT/MCTES (PIDDAC)
文摘Personalized health services are of paramount importance for the treatment and prevention of cardiorespiratory diseases,such as hypertension.The assessment of cardiorespiratory function and biometric identification(ID)is crucial for the effectiveness of such personalized health services.To effectively and accurately monitor pulse wave signals,thus achieving the assessment of cardiorespiratory function,a wearable photonic smart wristband based on an all-polymer sensing unit(All-PSU)is proposed.The smart wristband enables the assessment of cardiorespiratory function by continuously monitoring respiratory rate(RR),heart rate(HR),and blood pressure(BP).Furthermore,it can be utilized for biometric ID purposes.Through the analysis of pulse wave signals using power spectral density(PSD),accurate monitoring of RR and HR is achieved.Additionally,utilizing peak detection algorithms for feature extraction from pulse signals and subsequently employing a variety of machine learning methods,accurate BP monitoring and biometric ID have been realized.For biometric ID,the accuracy rate is 98.55%.Aiming to monitor RR,HR,BP,and ID,our solution demonstrates advantages in integration,functionality,and monitoring precision.These enhancements may contribute to the development of personalized health services aimed at the treatment and prevention of cardiorespiratory diseases.
基金supported by Ladoke Akintola University of Technology,Ogbomoso,Nigeria and the University of Zululand,South Africa.
文摘This study proposes a system for biometric access control utilising the improved Cultural Chicken Swarm Optimization(CCSO)technique.This approach mitigates the limitations of conventional Chicken Swarm Optimization(CSO),especially in dealing with larger dimensions due to diversity loss during solution space exploration.Our experimentation involved 600 sample images encompassing facial,iris,and fingerprint data,collected from 200 students at Ladoke Akintola University of Technology(LAUTECH),Ogbomoso.The results demonstrate the remarkable effectiveness of CCSO,yielding accuracy rates of 90.42%,91.67%,and 91.25%within 54.77,27.35,and 113.92 s for facial,fingerprint,and iris biometrics,respectively.These outcomes significantly outperform those achieved by the conventional CSO technique,which produced accuracy rates of 82.92%,86.25%,and 84.58%at 92.57,63.96,and 163.94 s for the same biometric modalities.The study’s findings reveal that CCSO,through its integration of Cultural Algorithm(CA)Operators into CSO,not only enhances algorithm performance,exhibiting computational efficiency and superior accuracy,but also carries broader implications beyond biometric systems.This innovation offers practical benefits in terms of security enhancement,operational efficiency,and adaptability across diverse user populations,shaping more effective and resource-efficient access control systems with real-world applicability.
基金Supported by the National Natural Science Foundation of China(No.82271052No.82201154)+2 种基金Shandong Provincial Key Research and Development Program(No.2024CXGC010617)Taishan Scholar Program(No.tstp20240858)Educational and Teaching Reform Research Project of Shandong First Medical University(No.XM2024001).
文摘AIM:To assess the corneal biometric parameters and endothelial cell characteristics in microcornea patients,and exploring their correlations.METHODS:This cross-sectional study included 28 patients of microcornea with uveal coloboma(MCUC),13 patients of microcornea without coloboma(MCNC),and 30 age-matched healthy individuals(the control group).Corneal biometric parameters such as axial length(AL),anterior chamber depth(ACD),and white-to-white corneal diameter(WTW)were measured using the IOL Master.The corneal endothelial cell density(ECD),percentage of hexagonal cells(6A),average cell area(AVE),maximum cell area(MAX),minimum cell area(MIN),cell area standard deviation(SD),and coefficient of variation(CV)were collected by specular microscopy.RESULTS:This study included MCUC and MCNC patients with age-and sex-matched controls.All patients exhibited significantly reduced WTW(MCUC:8.51±0.71 mm;MCNC:9.08±0.42 mm)and worse logMAR BCVA(MCUC 0.62±0.43;MCNC 0.46±0.28)compared to controls(both P<0.001).The ECD was 3106.32±336.80 cells/mm²in the MCUC group and 2906.92±323.53 cells/mm²in the MCNC group,both significantly higher than the control group(2647.43±203.06 cells/mm²,P<0.05).In contrast,the CV,AVE,SD,and ACD in the MCUC and MCNC groups were significantly lower compared to controls(P<0.01).In patients with microcornea,the WTW was negatively correlated with the ECD and 6A,but positively with the CV,MAX,AVE,and SD.The ACD was negatively linked to the ECD,but positively to the AVE.CONCLUSION:The corneal ECD and 6A are increased,while the CV is decreased in patients with microcornea,particularly in those accompanied by uveal coloboma.The ECD and morphology demonstrate close correlations with the WTW and ACD.
文摘Biometric template protection is essential for finger-based authentication systems,as template tampering and adversarial attacks threaten the security.This paper proposes a DCT-based fragile watermarking scheme incorporating AI-based tamper detection to improve the integrity and robustness of finger authentication.The system was tested against NIST SD4 and Anguli fingerprint datasets,wherein 10,000 watermarked fingerprints were employed for training.The designed approach recorded a tamper detection rate of 98.3%,performing 3–6%better than current DCT,SVD,and DWT-based watermarking approaches.The false positive rate(≤1.2%)and false negative rate(≤1.5%)were much lower compared to previous research,which maintained high reliability for template change detection.The system showed real-time performance,averaging 12–18 ms processing time per template,and is thus suitable for real-world biometric authentication scenarios.Quality analysis of fingerprints indicated that NFIQ scores were enhanced from 2.07 to 1.81,reflecting improved minutiae clarity and ridge structure preservation.The approach also exhibited strong resistance to compression and noise distortions,with the improvements in PSNR being 2 dB(JPEG compression Q=80)and the SSIM values rising by 3%–5%under noise attacks.Comparative assessment demonstrated that training with NIST SD4 data greatly improved the ridge continuity and quality of fingerprints,resulting in better match scores(260–295)when tested against Bozorth3.Smaller batch sizes(batch=2)also resulted in improved ridge clarity,whereas larger batch sizes(batch=8)resulted in distortions.The DCNN-based tamper detection model supported real-time classification,which greatly minimized template exposure to adversarial attacks and synthetic fingerprint forgeries.Results demonstrate that fragile watermarking with AI indeed greatly enhances fingerprint security,providing privacy-preserving biometric authentication with high robustness,accuracy,and computational efficiency.
文摘Deep Learning-based systems for Finger vein recognition have gained rising attention in recent years due to improved efficiency and enhanced security.The performance of existing CNN-based methods is limited by the puny generalization of learned features and deficiency of the finger vein image training data.Considering the concerns of existing methods,in this work,a simplified deep transfer learning-based framework for finger-vein recognition is developed using an EfficientNet model of deep learning with a self-attention mechanism.Data augmentation using various geometrical methods is employed to address the problem of training data shortage required for a deep learning model.The proposed model is tested using K-fold cross-validation on three publicly available datasets:HKPU,FVUSM,and SDUMLA.Also,the developed network is compared with other modern deep nets to check its effectiveness.In addition,a comparison of the proposed method with other existing Finger vein recognition(FVR)methods is also done.The experimental results exhibited superior recognition accuracy of the proposed method compared to other existing methods.In addition,the developed method proves to be more effective and less sophisticated at extracting robust features.The proposed EffAttenNet achieves an accuracy of 98.14%on HKPU,99.03%on FVUSM,and 99.50%on SDUMLA databases.
基金funded by WAQF at King Abdulaziz University,Jeddah,Saudi Arabia.
文摘The human ear has been substantiated as a viable nonintrusive biometric modality for identification or verification.Among many feasible techniques for ear biometric recognition,convolutional neural network(CNN)models have recently offered high-performance and reliable systems.However,their performance can still be further improved using the capabilities of soft biometrics,a research question yet to be investigated.This research aims to augment the traditional CNN-based ear recognition performance by adding increased discriminatory ear soft biometric traits.It proposes a novel framework of augmented ear identification/verification using a group of discriminative categorical soft biometrics and deriving new,more perceptive,comparative soft biometrics for feature-level fusion with hard biometric deep features.It conducts several identification and verification experiments for performance evaluation,analysis,and comparison while varying ear image datasets,hard biometric deep-feature extractors,soft biometric augmentation methods,and classifiers used.The experimental work yields promising results,reaching up to 99.94%accuracy and up to 14%improvement using the AMI and AMIC datasets,along with their corresponding soft biometric label data.The results confirm the proposed augmented approaches’superiority over their standard counterparts and emphasize the robustness of the new ear comparative soft biometrics over their categorical peers.
基金The research is supported by Nature Science Foundation of Zhejiang Province(LQ20F020008)“Pioneer”and“Leading Goose”R&D Program of Zhejiang(Grant Nos.2023C03203,2023C01150).
文摘The rapid growth of smart technologies and services has intensified the challenges surrounding identity authenti-cation techniques.Biometric credentials are increasingly being used for verification due to their advantages over traditional methods,making it crucial to safeguard the privacy of people’s biometric data in various scenarios.This paper offers an in-depth exploration for privacy-preserving techniques and potential threats to biometric systems.It proposes a noble and thorough taxonomy survey for privacy-preserving techniques,as well as a systematic framework for categorizing the field’s existing literature.We review the state-of-the-art methods and address their advantages and limitations in the context of various biometric modalities,such as face,fingerprint,and eye detection.The survey encompasses various categories of privacy-preserving mechanisms and examines the trade-offs between security,privacy,and recognition performance,as well as the issues and future research directions.It aims to provide researchers,professionals,and decision-makers with a thorough understanding of the existing privacy-preserving solutions in biometric recognition systems and serves as the foundation of the development of more secure and privacy-preserving biometric technologies.
文摘With the rapid spread of the coronavirus epidemic all over the world,educational and other institutions are heading towards digitization.In the era of digitization,identifying educational e-platform users using ear and iris based multi-modal biometric systems constitutes an urgent and interesting research topic to pre-serve enterprise security,particularly with wearing a face mask as a precaution against the new coronavirus epidemic.This study proposes a multimodal system based on ear and iris biometrics at the feature fusion level to identify students in electronic examinations(E-exams)during the COVID-19 pandemic.The proposed system comprises four steps.Thefirst step is image preprocessing,which includes enhancing,segmenting,and extracting the regions of interest.The second step is feature extraction,where the Haralick texture and shape methods are used to extract the features of ear images,whereas Tamura texture and color histogram methods are used to extract the features of iris images.The third step is feature fusion,where the extracted features of the ear and iris images are combined into one sequential fused vector.The fourth step is the matching,which is executed using the City Block Dis-tance(CTB)for student identification.Thefindings of the study indicate that the system’s recognition accuracy is 97%,with a 2%False Acceptance Rate(FAR),a 4%False Rejection Rate(FRR),a 94%Correct Recognition Rate(CRR),and a 96%Genuine Acceptance Rate(GAR).In addition,the proposed recognition sys-tem achieved higher accuracy than other related systems.
基金supported by the National Natural Science Foundation of China(61906138)the National Science and Technology Major Project of the Ministry of Science and Technology of China(2018AAA0102900)+2 种基金the Shanghai Automotive Industry Sci-Tech Development Program(1838)the European Union’s Horizon 2020 Research and Innovation Program(785907)the Shanghai AI Innovation Development Program 2018。
文摘The rise of the Internet and identity authentication systems has brought convenience to people's lives but has also introduced the potential risk of privacy leaks.Existing biometric authentication systems based on explicit and static features bear the risk of being attacked by mimicked data.This work proposes a highly efficient biometric authentication system based on transient eye blink signals that are precisely captured by a neuromorphic vision sensor with microsecond-level temporal resolution.The neuromorphic vision sensor only transmits the local pixel-level changes induced by the eye blinks when they occur,which leads to advantageous characteristics such as an ultra-low latency response.We first propose a set of effective biometric features describing the motion,speed,energy and frequency signal of eye blinks based on the microsecond temporal resolution of event densities.We then train the ensemble model and non-ensemble model with our Neuro Biometric dataset for biometrics authentication.The experiments show that our system is able to identify and verify the subjects with the ensemble model at an accuracy of 0.948 and with the non-ensemble model at an accuracy of 0.925.The low false positive rates(about 0.002)and the highly dynamic features are not only hard to reproduce but also avoid recording visible characteristics of a user's appearance.The proposed system sheds light on a new path towards safer authentication using neuromorphic vision sensors.
文摘Biometric recognition refers to the process of recognizing a person’s identity using physiological or behavioral modalities,such as face,voice,fingerprint,gait,etc.Such biometric modalities are mostly used in recognition tasks separately as in unimodal systems,or jointly with two or more as in multimodal systems.However,multimodal systems can usually enhance the recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels.Despite this enhancement,in real-life applications some factors degrade multimodal systems’performance,such as occlusion,face poses,and noise in voice data.In this paper,we propose two algorithms that effectively apply dynamic fusion at feature level based on the data quality of multimodal biometrics.The proposed algorithms attempt to minimize the negative influence of confusing and low-quality features by either exclusion or weight reduction to achieve better recognition performance.The proposed dynamic fusion was achieved using face and voice biometrics,where face features were extracted using principal component analysis(PCA),and Gabor filters separately,whilst voice features were extracted using Mel-Frequency Cepstral Coefficients(MFCCs).Here,the facial data quality assessment of face images is mainly based on the existence of occlusion,whereas the assessment of voice data quality is substantially based on the calculation of signal to noise ratio(SNR)as per the existence of noise.To evaluate the performance of the proposed algorithms,several experiments were conducted using two combinations of three different databases,AR database,and the extended Yale Face Database B for face images,in addition to VOiCES database for voice data.The obtained results show that both proposed dynamic fusion algorithms attain improved performance and offer more advantages in identification and verification over not only the standard unimodal algorithms but also the multimodal algorithms using standard fusion methods.
文摘As multimedia data sharing increases,data security in mobile devices and its mechanism can be seen as critical.Biometrics combines the physiological and behavioral qualities of an individual to validate their character in real-time.Humans incorporate physiological attributes like a fingerprint,face,iris,palm print,finger knuckle print,Deoxyribonucleic Acid(DNA),and behavioral qualities like walk,voice,mark,or keystroke.The main goal of this paper is to design a robust framework for automatic face recognition.Scale Invariant Feature Transform(SIFT)and Speeded-up Robust Features(SURF)are employed for face recognition.Also,we propose a modified Gabor Wavelet Transform for SIFT/SURF(GWT-SIFT/GWT-SURF)to increase the recognition accuracy of human faces.The proposed scheme is composed of three steps.First,the entropy of the image is removed using Discrete Wavelet Transform(DWT).Second,the computational complexity of the SIFT/SURF is reduced.Third,the accuracy is increased for authentication by the proposed GWT-SIFT/GWT-SURF algorithm.A comparative analysis of the proposed scheme is done on real-time Olivetti Research Laboratory(ORL)and Poznan University of Technology(PUT)databases.When compared to the traditional SIFT/SURF methods,we verify that the GWT-SIFT achieves the better accuracy of 99.32%and the better approach is the GWT-SURF as the run time of the GWT-SURF for 100 images is 3.4 seconds when compared to the GWT-SIFT which has a run time of 4.9 seconds for 100 images.
基金supported and funded by KAU Scientific Endowment,King Abdulaziz University,Jeddah,Saudi Arabia.
文摘Human recognition technology based on biometrics has become a fundamental requirement in all aspects of life due to increased concerns about security and privacy issues.Therefore,biometric systems have emerged as a technology with the capability to identify or authenticate individuals based on their physiological and behavioral characteristics.Among different viable biometric modalities,the human ear structure can offer unique and valuable discriminative characteristics for human recognition systems.In recent years,most existing traditional ear recognition systems have been designed based on computer vision models and have achieved successful results.Nevertheless,such traditional models can be sensitive to several unconstrained environmental factors.As such,some traits may be difficult to extract automatically but can still be semantically perceived as soft biometrics.This research proposes a new group of semantic features to be used as soft ear biometrics,mainly inspired by conventional descriptive traits used naturally by humans when identifying or describing each other.Hence,the research study is focused on the fusion of the soft ear biometric traits with traditional(hard)ear biometric features to investigate their validity and efficacy in augmenting human identification performance.The proposed framework has two subsystems:first,a computer vision-based subsystem,extracting traditional(hard)ear biometric traits using principal component analysis(PCA)and local binary patterns(LBP),and second,a crowdsourcing-based subsystem,deriving semantic(soft)ear biometric traits.Several feature-level fusion experiments were conducted using the AMI database to evaluate the proposed algorithm’s performance.The obtained results for both identification and verification showed that the proposed soft ear biometric information significantly improved the recognition performance of traditional ear biometrics,reaching up to 12%for LBP and 5%for PCA descriptors;when fusing all three capacities PCA,LBP,and soft traits using k-nearest neighbors(KNN)classifier.
Abstract: Biometric technologies have been around for quite some time, and many have been deployed for different applications around the world, ranging from small companies' time-and-attendance systems to access control systems for nuclear facilities. Biometrics offers a reliable solution for establishing the distinctiveness of identity based on who an individual is, rather than what he or she knows or carries. Biometric systems automatically verify a person's identity based on his or her anatomical and behavioral characteristics. Biometric traits represent a strong and persistent link between a person and his or her identity; these traits cannot easily be lost, forgotten, or faked, since biometric systems require the user to be present at the time of authentication. Some biometric systems are more reliable than others, yet none is perfectly secure or accurate; all biometrics have their strengths and weaknesses. Although some of these systems have demonstrated reliability and robustness, work still has to be done to improve the quality of service they provide. This paper presents the existing biometric systems, highlighting their strengths and weaknesses, along with emerging technologies that may bring great benefits for security applications in the near future.
Funding: This research was funded by the Deanship of Scientific Research at Princess Nourah Bint Abdulrahman University through the Fast-track Research Funding Program to support publication in the top journal (Grant No. 42-FTTJ-13).
Abstract: Cancelable biometrics are required in most remote-access applications that need an authentication stage, such as cloud and Internet of Things (IoT) networks. The objective of using cancelable biometrics is to protect the original biometrics from hacking attempts. This paper proposes a generalized algorithm for generating cancelable templates that is applicable to both single and multiple biometrics and is suitable for cloud and IoT applications. The original biometric is blurred with two co-prime operators, so that it can be recovered as the Greatest Common Divisor (GCD) of its two blurred versions. A minimal change induced in the biometric image prior to processing with the co-prime operators prevents recovery of the original biometric image through the GCD operation. The ability to change cancelable templates is therefore guaranteed, since the owner of the biometric can pre-determine and manage this minimal change. Furthermore, the utility of the proposed algorithm is tested in single- and multi-biometric scenarios. The multi-biometric scenario compresses face, fingerprint, iris, and palm print images simultaneously to generate the cancelable templates. Evaluation metrics such as the Equal Error Rate (EER) and the Area under the Receiver Operating Characteristic curve (AROC) are considered. Simulation results on single- and multi-biometric scenarios show high AROC values of up to 99.59% and low EER values down to 0.04%.
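A one-dimensional toy version of the GCD idea is sketched below (using NumPy and SymPy): a signal convolved with two co-prime operators can be recovered as the polynomial GCD of its two blurred versions, which is exactly what the owner-controlled minimal change is meant to break. The 2-D image case treated in the paper is not reproduced here; the signal and operators are arbitrary illustrative values.

import numpy as np
from sympy import Poly, gcd, symbols

z = symbols("z")

def to_poly(seq):
    # Interpret a coefficient sequence as a polynomial in z.
    return Poly([int(c) for c in seq], z)

signal = np.array([3, 1, 4, 1, 5])   # stand-in for one row of a biometric image
h1 = np.array([1, 2])                # two co-prime blur operators:
h2 = np.array([1, 3])                # gcd(z + 2, z + 3) = 1

blurred1 = np.convolve(signal, h1)   # first component of the cancelable template
blurred2 = np.convolve(signal, h2)   # second component

recovered = gcd(to_poly(blurred1), to_poly(blurred2))
print(recovered.all_coeffs())        # [3, 1, 4, 1, 5]: the original row is recovered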
Funding: Funding for this study was provided by King Khalid University, Grant Number (GRP-35–40/2019).
Abstract: Most user authentication mechanisms in cloud systems depend on the credentials approach, in which a user submits his or her identity through a username and password. Unfortunately, this approach has many security problems, because personal data can be stolen or compromised by hackers. This paper presents a cloud-based biometric authentication model (CBioAM) for improving and securing cloud services. The study describes the verification and identification processes of the proposed cloud-based biometric authentication system (CBioAS), in which the biometric samples of users are stored on database servers and the authentication process is carried out without loss of the users' information. The paper evaluates the performance of the proposed model in terms of three main characteristics: accuracy, sensitivity, and specificity. A novel algorithm called "Bio_Authen_as_a_Service" is introduced for implementing and evaluating the proposed model. The proposed system performs the biometric authentication process securely and preserves the privacy of user information. The experimental results are highly promising for securing cloud services using the proposed model, showing a performance average of 93.94%, an accuracy average of 96.15%, a sensitivity average of 87.69%, and a specificity average of 97.99%.
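The three reported metrics follow the usual confusion-matrix definitions, sketched below with hypothetical counts; this is not the paper's Bio_Authen_as_a_Service implementation.

def authentication_metrics(tp, tn, fp, fn):
    # tp/fn: genuine users accepted/rejected; tn/fp: impostors rejected/accepted.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # rate of genuine users correctly accepted
    specificity = tn / (tn + fp)   # rate of impostors correctly rejected
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only:
# acc, sens, spec = authentication_metrics(tp=870, tn=980, fp=20, fn=130)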
Abstract: In recent years, biometric sensors have been applied to identify important individual information and to control access using various identifiers, including characteristics such as the fingerprint, palm print, and iris. However, precise identification remains challenging because human features vary over a person's lifetime, leading to changes in appearance. In response to these challenges, a novel Multimodal Biometric Feature Extraction (MBFE) model is proposed to extract features from noisy sensor data using a modified Ranking-based Deep Convolutional Neural Network (RDCNN). The proposed MBFE model extracts features from different biometric images, including the iris, palm print, and lip, after the images are initially preprocessed. The extracted features are validated after optimal extraction by the RDCNN, with the datasets split to train the feature extraction model and then test it on different sets of input images. The simulation is performed in MATLAB to assess the efficacy of the model on multimodal datasets, and the results show that the proposed method achieves higher accuracy, precision, recall, and F1 score than existing deep learning feature extraction methods. The MBFE algorithm improves accuracy, precision, recall, and F1 score by 0.126%, 0.152%, 0.184%, and 0.38% over the existing Back Propagation Neural Network (BPNN), Human Identification Using Wavelet Transform (HIUWT), Segmentation Methodology for Non-cooperative Recognition (SMNR), and Daugman Iris Localization Algorithm (DILA) feature extraction techniques, respectively.
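The multimodal structure can be pictured as one small CNN branch per modality whose outputs are concatenated into a single feature vector, as in the PyTorch sketch below; the branch sizes and layers are arbitrary choices for illustration and do not reproduce the ranking-based RDCNN itself.

import torch
import torch.nn as nn

class Branch(nn.Module):
    # A small per-modality CNN producing a fixed-length feature vector.
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class MultimodalExtractor(nn.Module):
    # One branch per modality (iris, palm print, lip); feature-level fusion
    # by concatenating the branch outputs.
    def __init__(self):
        super().__init__()
        self.iris, self.palm, self.lip = Branch(), Branch(), Branch()

    def forward(self, iris_img, palm_img, lip_img):
        return torch.cat([self.iris(iris_img),
                          self.palm(palm_img),
                          self.lip(lip_img)], dim=1)

# Dummy grayscale batches of four 64x64 images per modality -> (4, 192) features:
# feats = MultimodalExtractor()(torch.randn(4, 1, 64, 64),
#                               torch.randn(4, 1, 64, 64),
#                               torch.randn(4, 1, 64, 64))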
Funding: Supported by the National Natural Science Foundation of China (No. 81870686), the Beijing Municipal Natural Science Foundation (No. 7184201), and the Capital's Funds for Health Improvement and Research (No. 2018-1-2021).
Abstract: AIM: To compare the differences and agreement of ocular biometric parameters in highly myopic eyes measured by two optical biometers, the OA-2000 and the IOLMaster 500. METHODS: In total, 90 patients (90 eyes) were included and divided into a high myopia group and a control group. Ocular parameters, including axial length (AL), mean keratometry (Km), anterior chamber depth (ACD), and white-to-white distance (WTW), were obtained with the OA-2000 and the IOLMaster 500. RESULTS: For the control group, Bland-Altman plots gave 95% limits of agreement (LoA) for AL, ACD, Km, and WTW of -0.24 to 0.29 mm, -0.22 to 0.45 mm, -0.39 to 0.31 D, and -0.90 to 0.86 mm, respectively. In the high myopia patients, AL, ACD, and Km had wider 95% LoA (-0.34 to 0.32 mm, -0.36 to 0.34 mm, and -0.57 to 0.47 D, respectively), except for WTW (-0.80 to 0.68 mm). Differences between the two instruments were not statistically significant (P>0.05). CONCLUSION: Most parameters obtained by the OA-2000 and the IOLMaster 500 are comparable, including the AL, ACD, and K values, although agreement is poorer in high myopia patients than in patients without high myopia.
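The 95% limits of agreement quoted above follow the standard Bland-Altman formula (bias plus or minus 1.96 times the standard deviation of the paired differences); a short sketch with assumed measurement arrays is given below.

import numpy as np

def bland_altman_loa(measurements_a, measurements_b):
    # Return the mean difference (bias) and the 95% limits of agreement.
    diff = np.asarray(measurements_a, dtype=float) - np.asarray(measurements_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# e.g., axial length (mm) from the two devices (hypothetical arrays):
# bias, (lo, hi) = bland_altman_loa(al_oa2000, al_iolmaster)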
Funding: This work was supported by a National Research Foundation of Korea (NRF, www.nrf.re.kr) grant funded by the Korean government (MSIT, www.msit.go.kr) (No. 2018R1A2B6009188), received by W.K.Loh.
Abstract: Multiple ocular region segmentation plays an important role in applications such as biometrics, liveness detection, healthcare, and gaze estimation. Typically, segmentation techniques focus on a single region of the eye at a time, and despite the obvious advantages, very limited research has addressed multiple regions of the eye. Accurate segmentation of multiple eye regions is also necessary in challenging scenarios involving blur, ghost effects, low resolution, off-angle views, and unusual glints, and currently available segmentation methods cannot handle these constraints. To achieve accurate segmentation of multiple eye regions in unconstrained scenarios, this paper proposes a lightweight outer residual encoder-decoder network suitable for images from various sensors. The proposed method can determine the true boundaries of the eye regions in inferior-quality images using the high-frequency information flow of the outer residual encoder-decoder deep convolutional neural network (called ORED-Net). Moreover, ORED-Net does not obtain its performance from increased complexity, parameter count, or network depth; the proposed network is considerably lighter than previous state-of-the-art models. Comprehensive experiments were performed, and optimal performance was achieved on the SBVPI and UBIRIS.v2 eye-region datasets, with the proposed ORED-Net reaching mean intersection-over-union (mIoU) scores of 89.25 and 85.12 on these challenging datasets, respectively.
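For reference, mIoU figures of this kind can be computed from predicted and ground-truth label maps as in the short sketch below (the label maps are assumed inputs; this is not the ORED-Net evaluation code).

import numpy as np

def mean_iou(pred, target, num_classes):
    # Per-class intersection-over-union, averaged over classes present
    # in either mask, reported as a percentage (e.g., 89.25).
    ious = []
    for c in range(num_classes):
        pred_c, target_c = (pred == c), (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both masks: skip it
        ious.append(np.logical_and(pred_c, target_c).sum() / union)
    return 100.0 * float(np.mean(ious))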