Funding: Supported by the National Natural Science Foundation of China (61901071, 61871062, 61771082, U20A20157), the Science and Natural Science Foundation of Chongqing, China (cstc2020jcyjzdxmX0024), the University Innovation Research Group of Chongqing (CXQT20017), the Program for Innovation Team Building at Institutions of Higher Education in Chongqing (CXTDX201601020), the Natural Science Foundation of Chongqing, China (CSTB2022NSCQ-MSX0600), the Youth Innovation Group Support Program of ICE Discipline of CQUPT (SCIE-QN-2022-04), the Chongqing Municipal Technology Innovation and Application Development Special Key Project (cstc2020jscxdxwtBX0053), the China Postdoctoral Science Foundation Project, China (2022MD723723), and the Chongqing Postdoctoral Research Project Special Funding, China (2023CQBSHTB3092).
Abstract: The lack of facial features caused by wearing masks degrades the performance of facial recognition systems. Traditional occluded face recognition methods cannot integrate the computational resources of the edge layer and the device layer. Moreover, previous research fails to consider the facial characteristics of both the occluded and unoccluded parts. To solve these problems, we propose a device-edge collaborative occluded face recognition method based on cross-domain feature fusion. Specifically, the device-edge collaborative face recognition architecture makes full use of device and edge resources for real-time occluded face recognition. Then, a cross-domain facial feature fusion method is presented that combines facial features from both the explicit domain and the implicit domain. Furthermore, a delay-optimized edge recognition task scheduling method is developed that comprehensively considers the task load, computational power, bandwidth, and delay tolerance constraints of the edge. This method dynamically schedules face recognition tasks and minimizes recognition delay while ensuring recognition accuracy. The experimental results show that the proposed method reduces recognition latency by about 21% on average, while the accuracy of the face recognition task remains essentially the same as that of the baseline method.
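The abstract gives no implementation details, but the delay-optimized scheduling idea it describes can be illustrated with a minimal sketch: assign each recognition task to the edge node that minimizes estimated transmission-plus-queuing-plus-processing delay, subject to the task's delay-tolerance constraint. All names and the simple delay model below (EdgeNode, Task, estimated_delay, and the FLOP/bandwidth figures) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the paper's implementation) of delay-aware edge task
# scheduling: pick the edge node with the smallest estimated completion delay
# that still satisfies the task's delay tolerance.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EdgeNode:
    name: str
    compute_flops: float      # available compute capacity (FLOP/s)
    queued_work_flops: float  # work already queued on this node (FLOP)
    bandwidth_bps: float      # device-to-node uplink bandwidth (bit/s)


@dataclass
class Task:
    input_bits: float         # size of the face image / features to upload
    work_flops: float         # compute cost of one recognition inference
    delay_tolerance_s: float  # maximum acceptable end-to-end delay


def estimated_delay(task: Task, node: EdgeNode) -> float:
    """Transmission delay + queuing delay + processing delay."""
    transmit = task.input_bits / node.bandwidth_bps
    queue = node.queued_work_flops / node.compute_flops
    process = task.work_flops / node.compute_flops
    return transmit + queue + process


def schedule(task: Task, nodes: List[EdgeNode]) -> Optional[EdgeNode]:
    """Return the feasible node with the smallest estimated delay, if any."""
    candidates = [(estimated_delay(task, n), n) for n in nodes]
    feasible = [(d, n) for d, n in candidates if d <= task.delay_tolerance_s]
    if not feasible:
        return None  # e.g. fall back to on-device recognition
    return min(feasible, key=lambda pair: pair[0])[1]


if __name__ == "__main__":
    nodes = [
        EdgeNode("edge-A", compute_flops=2e12, queued_work_flops=4e12, bandwidth_bps=50e6),
        EdgeNode("edge-B", compute_flops=1e12, queued_work_flops=0.5e12, bandwidth_bps=20e6),
    ]
    task = Task(input_bits=1.2e6, work_flops=6e11, delay_tolerance_s=1.5)
    chosen = schedule(task, nodes)
    print("assigned to:", chosen.name if chosen else "device (no feasible edge node)")
```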
Abstract: Biometric recognition refers to the process of recognizing a person's identity using physiological or behavioral modalities, such as face, voice, fingerprint, and gait. Such biometric modalities are mostly used in recognition tasks either separately, as in unimodal systems, or jointly with two or more, as in multimodal systems. However, multimodal systems can usually enhance recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels. Despite this enhancement, in real-life applications some factors degrade multimodal systems' performance, such as occlusion, face poses, and noise in voice data. In this paper, we propose two algorithms that effectively apply dynamic fusion at the feature level based on the data quality of multimodal biometrics. The proposed algorithms attempt to minimize the negative influence of confusing and low-quality features, by either exclusion or weight reduction, to achieve better recognition performance. The proposed dynamic fusion was achieved using face and voice biometrics, where face features were extracted using principal component analysis (PCA) and Gabor filters separately, whilst voice features were extracted using Mel-Frequency Cepstral Coefficients (MFCCs). Here, the quality assessment of face images is mainly based on the existence of occlusion, whereas the assessment of voice data quality is substantially based on the calculation of the signal-to-noise ratio (SNR) according to the presence of noise. To evaluate the performance of the proposed algorithms, several experiments were conducted using two combinations of three different databases: the AR database and the Extended Yale Face Database B for face images, and the VOiCES database for voice data. The obtained results show that both proposed dynamic fusion algorithms attain improved performance and offer more advantages in identification and verification over not only standard unimodal algorithms but also multimodal algorithms using standard fusion methods.
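As a rough illustration of quality-driven dynamic fusion at the feature level, the sketch below down-weights (or excludes) each modality according to its measured quality before concatenating the feature vectors: face features are weighted by an occlusion ratio and voice features by SNR. The thresholds, weighting rules, and function names are assumptions for illustration only, not the paper's exact algorithms.

```python
# Minimal sketch (not the paper's algorithm) of quality-based dynamic fusion:
# low-quality modalities are down-weighted or excluded before concatenation.
import numpy as np


def voice_quality_weight(snr_db: float, snr_floor: float = 0.0, snr_ceiling: float = 30.0) -> float:
    """Map measured SNR (dB) to a [0, 1] weight; noisier audio gets less weight."""
    return float(np.clip((snr_db - snr_floor) / (snr_ceiling - snr_floor), 0.0, 1.0))


def face_quality_weight(occlusion_ratio: float, exclude_above: float = 0.6) -> float:
    """Down-weight occluded faces; exclude the modality if occlusion is severe."""
    if occlusion_ratio >= exclude_above:
        return 0.0
    return 1.0 - occlusion_ratio


def dynamic_fuse(face_feat: np.ndarray, voice_feat: np.ndarray,
                 occlusion_ratio: float, snr_db: float) -> np.ndarray:
    """Weight each modality's L2-normalized feature vector by its quality, then concatenate."""
    w_face = face_quality_weight(occlusion_ratio)
    w_voice = voice_quality_weight(snr_db)
    face_n = face_feat / (np.linalg.norm(face_feat) + 1e-12)
    voice_n = voice_feat / (np.linalg.norm(voice_feat) + 1e-12)
    return np.concatenate([w_face * face_n, w_voice * voice_n])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.normal(size=128)    # stand-in for PCA- or Gabor-based face features
    voice = rng.normal(size=39)    # stand-in for MFCC-based voice features
    fused = dynamic_fuse(face, voice, occlusion_ratio=0.7, snr_db=12.0)
    print(fused.shape)             # (167,) -- face part zeroed out due to heavy occlusion
```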