Show simple item record

dc.contributor.author: Mora-Zarate J.E.
dc.contributor.author: Garzón-Castro C.L.
dc.contributor.author: Castellanos Rivillas J.A.
dc.date.accessioned: 2025-01-15T20:49:25Z
dc.date.available: 2025-01-15T20:49:25Z
dc.date.issued: 2024
dc.identifier.other: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85210484408&doi=10.3389%2ffrobt.2024.1475069&partnerID=40&md5=a152b8f5692bca67ca16cceee1cc72c0
dc.identifier.uri: http://hdl.handle.net/10818/63341
dc.description.abstract: Sign languages are one of the main rehabilitation methods for dealing with hearing loss. Like any other language, geographical location influences how signs are made. In Colombia in particular, the hard of hearing population lacks education in the Colombian Sign Language, mainly due to the reduced number of interpreters in the educational sector. To help mitigate this problem, Machine Learning bound to data gloves or Computer Vision technologies has emerged as the core of sign translation systems and educational tools; however, in Colombia the presence of these solutions is scarce. On the other hand, humanoid robots such as the NAO have shown significant results when used to support a learning process. This paper proposes a performance evaluation for the design of an activity to support the learning process of all 11 color-based signs from the Colombian Sign Language. The activity consists of an evaluation method with two modes activated through user interaction: the first mode allows the user to choose the color sign to be evaluated, and the second selects the color sign at random. To achieve this, the MediaPipe tool was used to extract torso and hand coordinates, which were the input for a Neural Network. The performance of the Neural Network was evaluated running continuously in two scenarios: first, video capture from the webcam of the computer, which showed an overall F1 score of 91.6% and a prediction time of 85.2 ms; second, wireless video streaming with the NAO H25 V6 camera, which had an F1 score of 93.8% and a prediction time of 2.29 s. In addition, we took advantage of the joint redundancy of the NAO H25 V6, since with its 25 degrees of freedom we were able to use gestures that created nonverbal human-robot interactions, which may be useful in future work implementing this activity with a deaf community. Copyright © 2024 Mora-Zarate, Garzón-Castro and Castellanos Rivillas.
dc.format: application/pdf
dc.language.iso: eng
dc.publisher: Frontiers in Robotics and AI
dc.relation.ispartofseries: Frontiers in Robotics and AI vol. 11
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.other: Colombian Sign Language (CSL)
dc.subject.other: Education
dc.subject.other: Human-Robot Interaction (HRI)
dc.subject.other: Machine learning
dc.subject.other: Neural networks
dc.subject.other: Social robots
dc.title: Learning signs with NAO: humanoid robot as a tool for helping to learn Colombian Sign Language
dc.type: journal article
dc.type.hasVersion: publishedVersion
dc.rights.accessRights: openAccess
dc.identifier.doi: 10.3389/frobt.2024.1475069
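The abstract describes extracting torso and hand coordinates with MediaPipe as the input to a Neural Network. A minimal sketch of that feature-assembly step is shown below; it is not the authors' code, the function name is hypothetical, and a 4-point torso subset is assumed (MediaPipe Hands itself yields 21 hand landmarks per hand).

```python
# Illustrative sketch (not the authors' implementation): flattening
# MediaPipe-style (x, y, z) landmarks into one feature vector for a
# sign classifier. `flatten_landmarks` is a hypothetical name.

def flatten_landmarks(hand, torso):
    """Flatten landmark tuples into a single feature list,
    translating hand points relative to the wrist (landmark 0)
    so the features do not depend on where the hand sits in frame."""
    wx, wy, wz = hand[0]
    feats = []
    for x, y, z in hand:
        feats.extend([x - wx, y - wy, z - wz])
    for x, y, z in torso:
        feats.extend([x, y, z])
    return feats

# MediaPipe Hands yields 21 landmarks per hand; 4 torso points assumed.
hand = [(0.1 * i, 0.2 * i, 0.0) for i in range(21)]
torso = [(0.5, 0.5, 0.0)] * 4
vec = flatten_landmarks(hand, torso)
print(len(vec))  # 21*3 + 4*3 = 75
```

A fixed-length vector like this is what a feed-forward network expects as input for per-frame classification.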


Files in this item

There are no files associated with this item.

This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.