Real-time vision-based hand gesture to text interpreter by using artificial intelligence with augmented reality element

Journal
AIP Conference Proceedings
ISSN
0094-243X
Date Issued
2024-03-07
Author(s)
Rosnazri M.H.
Mohd Rizal Manan (Universiti Malaysia Perlis)
Rosnazri Ali (Universiti Malaysia Perlis)
Baseemah Mat Jalaluddin (Universiti Malaysia Perlis)
Aimi Salihah Abdul Nasir (Universiti Malaysia Perlis)
Muhammad Aizat Abu Bakar (Universiti Malaysia Perlis)
Zamri N.F.
Rahmat M.A.
Zamzuri M.A.
Azmi M.A.A.
DOI
10.1063/5.0183121
Abstract
Real-time Vision-based Hand Gesture to Text Interpreter by Using Artificial Intelligence with Augmented Reality Element is a device that interprets sign language into text in real time. The communicator uses a machine learning approach with deep learning elements, built on the OpenCV, MediaPipe, and TensorFlow libraries. These are used to distinguish the hand from other objects, detect hand movement and hand coordinates, and analyse the image data to produce output instantly. The camera detects the user's hand movements, and the output is displayed on an LCD monitor. The project was developed in the Python programming language. A dataset of 13,000 ASL alphabet images and 5,000 ASL number images was collected and trained on cloud platforms, namely Google Teachable Machine and Google Colab. Training produced 99.85% accuracy for the alphabets and 100% accuracy for the numbers. The resulting machine learning model displays alphabets and numbers on an LCD monitor as ASL alphabet and number gestures are performed in real time. The prototype's performance was evaluated with two users against plain and noisy backgrounds at several predetermined distances.
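The abstract describes a three-stage pipeline: OpenCV captures camera frames, MediaPipe locates the hand, and a TensorFlow classifier maps the cropped hand region to an ASL character shown on screen. The Python sketch below illustrates that flow. It is an illustrative reconstruction, not the authors' code: the model file name, 224x224 input size, pixel scaling, and label list are assumptions (a Keras model exported from Google Teachable Machine would fit this shape).

# Minimal sketch of the pipeline in the abstract: OpenCV capture,
# MediaPipe hand detection, TensorFlow/Keras classification.
# Model path, input size, scaling, and labels below are hypothetical.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("asl_model.h5")  # hypothetical export
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # placeholder labels

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        h, w, _ = frame.shape
        lm = results.multi_hand_landmarks[0].landmark
        xs = [int(p.x * w) for p in lm]
        ys = [int(p.y * h) for p in lm]
        # Crop a padded bounding box around the detected hand landmarks.
        x1, y1 = max(min(xs) - 20, 0), max(min(ys) - 20, 0)
        x2, y2 = min(max(xs) + 20, w), min(max(ys) + 20, h)
        roi = frame[y1:y2, x1:x2]
        if roi.size:
            # [0, 1] scaling assumed here; match the exported model's
            # preprocessing (Teachable Machine models often expect [-1, 1]).
            inp = cv2.resize(roi, (224, 224)).astype(np.float32) / 255.0
            probs = model.predict(inp[None, ...], verbose=0)[0]
            text = labels[int(np.argmax(probs))]
            # Overlay the predicted character next to the hand in real time.
            cv2.putText(frame, text, (x1, y1 - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL interpreter", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()

Displaying the prediction as an overlay on the live camera feed, as above, is one plausible reading of the paper's "augmented reality element"; the published prototype may render this output differently.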
File(s)
research repository notification.pdf (4.4 MB)