Android-Based Indonesian Sign Language Alphabet Classification System Using a CNN with MobileNetV2
DOI:
https://doi.org/10.5281/zenodo.14686025

Abstract
Communication is a basic human need, including for deaf and hard-of-hearing people, who face difficulties when interacting through verbal language. One problem is the lack of technology that optimally supports real-time communication in Indonesian Sign Language (BISINDO). This research aims to test the ability of MobileNetV2 to classify the BISINDO alphabet as a technological solution supporting real-time communication for deaf and hard-of-hearing people. The dataset consists of 5,200 BISINDO alphabet images (200 images per class), processed with data augmentation and split into 80% training data, 10% validation data, and 10% test data. Transfer learning was used to train the MobileNetV2 model with initial weights from ImageNet, and the model was equipped with Global Average Pooling, Batch Normalization, and Dropout layers to improve performance. Training used the Adam optimizer with a learning rate of 1e-4. Evaluation produced a validation accuracy of 95% and a test accuracy of 93.85%. The model was converted to TensorFlow Lite for use in an Android application, so that BISINDO alphabet classification can be performed in real time through the camera. This application is expected to be a solution for people who face difficulties communicating with the deaf and speech impaired.
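The sketch below illustrates, in TensorFlow/Keras, the kind of transfer-learning setup the abstract describes (MobileNetV2 with ImageNet weights, a Global Average Pooling / Batch Normalization / Dropout head, Adam at 1e-4, and conversion to TensorFlow Lite). The input resolution, dropout rate, file name, and dataset variables are illustrative assumptions, not details taken from the paper.

# Minimal sketch of the reported pipeline; hyperparameters marked below are assumptions.
import tensorflow as tf

NUM_CLASSES = 26          # 26 BISINDO alphabet classes (5,200 images, 200 per class)
IMG_SIZE = (224, 224)     # assumed input resolution for MobileNetV2

# MobileNetV2 backbone with ImageNet weights, used as a frozen feature extractor.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base_model.trainable = False

# Classification head: Global Average Pooling, Batch Normalization, Dropout,
# then a softmax layer over the alphabet classes.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),          # dropout rate is an assumption
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Adam optimizer with the learning rate of 1e-4 reported in the abstract.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_ds, validation_data=val_ds, epochs=...)  # 80/10/10 split assumed prepared elsewhere

# Conversion to TensorFlow Lite for on-device (Android) inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("bisindo_mobilenetv2.tflite", "wb") as f:   # hypothetical file name
    f.write(tflite_model)

The exported .tflite file can then be bundled with the Android application and run on camera frames for real-time classification.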
Keywords: Convolutional Neural Network (CNN), MobileNetV2, Sign Language
License
Copyright (c) 2024 Reihan Saputra, Gentur Wahyu Nyipto Wibowo, Akhmad Khanif Zyen
This work is licensed under a Creative Commons Attribution 4.0 International License.