Multimodal Biometrics for Enhanced IoT Security

2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), 2019

Cited by 17
Abstract
Biometric authentication is a promising approach to securing the Internet of Things (IoT). Although existing research shows that using multiple biometrics for authentication helps increase recognition accuracy, the majority of biometric approaches for IoT today continue to rely on a single modality. We propose a multimodal biometric approach for IoT based on face and voice modalities that is designed to scale to the limited resources of an IoT device. Our work builds on the foundation of Gofman et al. [7] in implementing face and voice feature-level fusion on mobile devices. We used discriminant correlation analysis (DCA) to fuse features from face and voice and used the K-nearest neighbors (KNN) algorithm to classify the features. The approach was implemented on the Raspberry Pi IoT device and was evaluated on a dataset of face images and voice files acquired using a Samsung Galaxy S5 device in real-world conditions such as dark rooms and noisy settings. The results show that fusion increased recognition accuracy by 52.45% compared to using face alone and 81.62% compared to using voice alone. It took an average of 1.34 seconds to enroll a user and 0.91 seconds to perform the authentication. To further optimize execution speed and reduce power consumption, we implemented classification on a field-programmable gate array (FPGA) chip that can be easily integrated into an IoT device. Experimental results showed that the proposed FPGA-accelerated KNN could achieve 150x faster execution time and 12x lower energy consumption compared to a CPU.
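The sketch below illustrates the feature-level fusion pipeline the abstract describes: per-modality features are projected into a shared low-dimensional space, concatenated, and classified with K-nearest neighbors. The paper uses discriminant correlation analysis (DCA) for the projection; since DCA is not available off the shelf in scikit-learn, this sketch substitutes canonical correlation analysis (CCA) as a stand-in, and the synthetic features, dimensions, and neighbor count are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of face + voice feature-level fusion followed by KNN classification.
# NOTE: the paper fuses features with DCA; CCA is used here only as a stand-in
# projection step, and all data/dimensions below are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic per-user feature vectors standing in for real face and voice features.
n_users, samples_per_user = 10, 20
face_dim, voice_dim, fused_dim = 128, 40, 16

labels = np.repeat(np.arange(n_users), samples_per_user)
face_feats = rng.normal(size=(labels.size, face_dim)) + labels[:, None] * 0.05
voice_feats = rng.normal(size=(labels.size, voice_dim)) + labels[:, None] * 0.05

# Standardize each modality before fusion.
face_feats = StandardScaler().fit_transform(face_feats)
voice_feats = StandardScaler().fit_transform(voice_feats)

# Project both modalities into a shared low-dimensional space (CCA here; DCA in
# the paper additionally uses class labels to maximize between-class separation).
cca = CCA(n_components=fused_dim)
face_proj, voice_proj = cca.fit_transform(face_feats, voice_feats)

# Feature-level fusion: concatenate the projected modalities.
fused = np.hstack([face_proj, voice_proj])

# Classify the fused features with K-nearest neighbors, as in the paper.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(fused, labels)
print("training accuracy:", knn.score(fused, labels))
```

In this scheme, the distance computations inside the KNN classifier are the part that the paper offloads to an FPGA for faster, lower-power execution.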
Keywords
multimodal biometrics, IoT devices, accuracy, classification, FPGA, fusion