J Kim, I Chung, HW Ka
KIBME 2024
This research develops an automatic system that generates customized video captions for diverse linguistic needs. The target population encompasses sign language users with hearing impairments, Korean language learners, and individuals with language development disorders. The system combines a lexical mapping database, implemented as a hash table populated with evidence-based objective data, with transformer-based large language models. Demonstrations on YouTube content confirmed the technology's potential for universal applicability in generating adaptive captions for a variety of users who require linguistic accommodation.
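The hash-table lexical mapping described above can be sketched minimally as follows. This is an illustrative assumption, not the authors' actual implementation or data: the dictionary entries, function name, and substitution strategy are all hypothetical, standing in for the paper's evidence-based mapping database.

```python
# Hypothetical sketch of the lexical mapping idea: a hash table (Python dict)
# maps harder words to simpler substitutes, giving O(1) lookup per token when
# adapting a caption for a user needing linguistic accommodation.
# All entries below are illustrative, not the paper's evidence-based data.
LEXICAL_MAP = {
    "commence": "start",
    "purchase": "buy",
    "utilize": "use",
}

def simplify_caption(caption: str) -> str:
    """Replace each mapped word with its simpler substitute."""
    return " ".join(
        LEXICAL_MAP.get(word.lower(), word) for word in caption.split()
    )

print(simplify_caption("We commence the broadcast"))  # → We start the broadcast
```

In the full system, output like this would presumably be further adapted per user group by the transformer-based language model rather than used verbatim.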