Gesture Interpreting of Alphabet Arabic Sign Language Based on Machine Learning algorithms

2022 Iraqi International Conference on Communication and Information Technologies (IICCIT)

Abstract
Many methods have been applied to recognize and interpret sign language (SL), since it is the only means by which deaf people communicate with society. Interpreting SL faces challenges such as variation in image illumination, background differences, video quality, hand geometry, and skin color. This paper presents a system for recognizing and interpreting the alphabet of Arabic Sign Language (ArSL). The proposed approach does not rely on visible gloves or markers to carry out classification; it works directly on images of bare hands extracted from video. It uses machine learning (ML) algorithms to detect hand gestures in videos and generate the equivalent Arabic characters. Classification is performed on a dataset of 32 SL alphabet characters: features are extracted from the images and matched to the character that each gesture represents. The Linear Discriminant Analysis (LDA) algorithm is used for automatic feature extraction. Five classification algorithms are assessed: Stochastic Gradient Descent (SGD), Decision Tree (DT), Naive Bayes (NB), k-Nearest Neighbors (k-NN), and Random Forest (RF). The k-NN model, which achieves an accuracy of 86%, is used to classify SL gestures from video frames.
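The abstract describes a two-stage pipeline: LDA projects each hand image into a low-dimensional discriminant space, and a conventional classifier (k-NN performing best at 86%) assigns one of the 32 alphabet classes. The sketch below illustrates that pipeline under the assumption of a scikit-learn implementation; the dataset loading, image size, and hyperparameters (e.g., n_neighbors=5) are illustrative placeholders, not the authors' actual code or settings.

```python
# Minimal sketch of an LDA feature extraction + k-NN classification pipeline
# (assumption: scikit-learn; random data stands in for the ArSL frame dataset).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: flattened hand-region images taken from video frames; y: one of 32 ArSL letters.
rng = np.random.default_rng(0)
X = rng.random((640, 64 * 64))        # 640 frames, 64x64 grayscale, flattened
y = rng.integers(0, 32, size=640)     # 32 alphabet classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# LDA projects each image onto at most (n_classes - 1) = 31 discriminant axes,
# serving as the automatic feature extractor; k-NN then classifies the projection.
model = Pipeline([
    ("lda", LinearDiscriminantAnalysis(n_components=31)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Swapping the "knn" step for SGDClassifier, DecisionTreeClassifier, GaussianNB, or RandomForestClassifier reproduces the kind of five-way comparison the paper reports.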
Keywords
Decision Tree, Naive Bayes, Random Forest, k-Nearest Neighbors, Stochastic Gradient Descent, Classification