Funding: This work was partially supported by the National Basic Research Program of China (973 Program) (2015CB351802) and the National Natural Science Foundation of China (Grant Nos. 61402443, 61390511, 61379083, 61222211).
Abstract: Robust face representation is imperative for highly accurate face recognition. In this work, we propose an open-source face recognition method with deep representation named VIPLFaceNet, a 10-layer deep convolutional neural network with seven convolutional layers and three fully-connected layers. Compared with the well-known AlexNet, VIPLFaceNet requires only 20% of the training time and 60% of the testing time, yet reduces the error rate by 40% on the real-world face recognition benchmark LFW. VIPLFaceNet achieves 98.60% mean accuracy on LFW using a single network. An open-source C++ SDK based on VIPLFaceNet is released under the BSD license. The SDK takes about 150 ms to process one face image in a single thread on an i7 desktop CPU. VIPLFaceNet provides a state-of-the-art starting point for both academic and industrial face recognition applications.
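To make the 7-conv + 3-FC topology concrete, the sketch below assembles a 10-layer network of the same shape in PyTorch. This is a minimal illustration only: the filter counts, kernel sizes, 256x256 input crop, 10575-class classifier head, and feature dimension are assumptions for the example, not the published VIPLFaceNet configuration or the API of the released C++ SDK.

```python
# Illustrative 7-conv + 3-FC network in the spirit of the VIPLFaceNet
# description above. All layer widths, kernel sizes, the 256x256 crop,
# and the 10575-class head are assumptions for this sketch.
import torch
import torch.nn as nn

class TenLayerFaceNet(nn.Module):
    def __init__(self, num_classes=10575, feat_dim=2048):
        super().__init__()
        # Seven convolutional layers; pooling after selected layers keeps
        # the final feature map small enough for the fully-connected head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 48, kernel_size=9, stride=4), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(48, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 192, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(192, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
        )
        # Three fully-connected layers: two feature layers plus the classifier.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(feat_dim), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    net = TenLayerFaceNet()
    logits = net(torch.randn(1, 3, 256, 256))  # one 256x256 RGB face crop
    print(logits.shape)                        # torch.Size([1, 10575])
```

For verification-style evaluation such as LFW, a face descriptor would typically be taken from one of the fully-connected layers before the classifier and compared with cosine similarity.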
Abstract: Owing to advancing age, declining physical function, and weakening cognition, elderly people face varying degrees of danger in daily life. To detect, monitor, and respond to dangerous postures of the elderly in time, and thereby protect their safety and health, this study proposes a virtual-model recognition algorithm for dangerous elderly postures that combines an end-to-end design with a convolutional neural network (port-to-port convolutional neural network, PTP-CNN), so that preventive measures or timely care can be taken. The results show that, when the system runs the PTP-CNN algorithm trained for 15 to 30 epochs, the PTP-CNN model reduces MSE by 25.33% and 5.17% compared with SW-CNN and AlexNet, respectively. This indicates that the PTP-CNN model is more accurate and precise, performs the image recognition task better, and can therefore detect dangerous postures of the elderly in time.
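The abstract does not give the PTP-CNN layer configuration or dataset, so the sketch below only illustrates the end-to-end idea it describes: raw frames go in, posture scores come out, and training for 15-30 epochs is evaluated with the MSE metric reported above. The toy architecture, the four posture classes, and the synthetic data are assumptions for the example.

```python
# Minimal end-to-end posture-classification sketch in the spirit of the
# PTP-CNN description above. Layer sizes, class count, and data are
# placeholders, not the paper's actual model or dataset.
import torch
import torch.nn as nn

class TinyPostureCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(num_classes),
        )

    def forward(self, x):  # raw image in, class scores out (end to end)
        return self.net(x)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyPostureCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                      # MSE, matching the metric reported above
    images = torch.randn(64, 3, 64, 64)         # synthetic stand-in for posture frames
    labels = torch.randint(0, 4, (64,))
    targets = nn.functional.one_hot(labels, 4).float()
    for epoch in range(20):                     # the abstract reports 15-30 training epochs
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
    print(f"final MSE: {loss.item():.4f}")
```

Note that the 25.33% and 5.17% figures quoted above are relative MSE reductions against SW-CNN and AlexNet, not absolute error values.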