IVLinG - Virtual Sign Language Interpreter
Date: 2021-2023
Sectors

Software industry, ICT and media

Services

Research and Development under Contract

Departments

Computer Vision, Interaction and Graphics

Competences

Computer vision and image processing
Computer graphics, virtual, augmented and mixed reality


FRAMEWORK

The IVLinG project comprises the creation of a digital platform for virtual, bidirectional interpretation of Portuguese Sign Language (LGP), designed to speed up communication between the deaf population and the hearing community.


Advances in technology and emerging paradigms, whether in artificial intelligence (namely machine and deep learning) or in computer graphics and virtual reality, now make it possible to create systems that help deaf people access public services and effectively contribute to their integration into society.

PROPOSED SOLUTIONS

Creation of a real-time virtual interpreter of Portuguese Sign Language (LGP) that automatically recognizes gestures and facial and body expressions.


These movements are then translated into text and/or audio, which the hearing user receives on a computer or smartphone. The response in LGP is displayed through a three-dimensional avatar. The system can also be used on mobile devices, without gloves or other motion-capture equipment.
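The bidirectional flow described above can be sketched as two small functions, one per direction. This is a hypothetical illustration only: the gloss vocabulary, function names, and avatar clip IDs are invented for the example, and the real system would use trained recognition models and animation data rather than dictionary lookups.

```python
# Hypothetical sketch of the two directions of the IVLinG pipeline.
# The vocabulary and clip identifiers below are toy placeholders.
GLOSS_TO_TEXT = {"OLA": "Olá", "OBRIGADO": "Obrigado"}  # toy LGP gloss vocabulary

def lgp_to_text(recognized_glosses):
    """Deaf -> hearing direction: recognized LGP glosses become text
    (optionally also audio via a TTS engine, omitted here)."""
    return " ".join(GLOSS_TO_TEXT.get(g, g.lower()) for g in recognized_glosses)

def text_to_avatar(text):
    """Hearing -> deaf direction: text is mapped to a sequence of
    avatar animation clips, one per known sign (placeholder IDs)."""
    return [f"clip:{word.upper()}" for word in text.split()]

print(lgp_to_text(["OLA", "OBRIGADO"]))  # → Olá Obrigado
print(text_to_avatar("Olá"))             # → ['clip:OLÁ']
```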

CCG/ZGDV CONTRIBUTION

CCG/ZGDV contributes to the IVLinG project through its CVIG R&I department, developing a gesture recognition system supported by artificial intelligence (AI), computer graphics, and virtual reality.


The user records what they want to express; the system identifies the gesture and the corresponding word or expression, which is then converted into text and/or audio and sent to the hearing user's computer or smartphone.

The LGP recognition component uses AI techniques to identify body landmarks, LSTM (long short-term memory) neural networks to model gesture sequences, and ChatGPT to articulate the recognized words into grammatically coherent sentences.
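To make the landmark-plus-LSTM idea concrete, the following is a minimal, self-contained sketch of an LSTM cell classifying a sequence of landmark frames. All dimensions (21 hand landmarks with x/y/z coordinates, 5 gesture classes) and the random, untrained weights are assumptions for illustration; the project's actual models, training data, and architecture are not described here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM cell plus a linear classifier over landmark sequences.
    Weights are random placeholders; a real system would train them."""

    def __init__(self, d_in, d_h, n_cls):
        # One stacked weight matrix for the input, forget, cell, output gates.
        self.W = rng.normal(0, 0.1, (4 * d_h, d_in + d_h))
        self.b = np.zeros(4 * d_h)
        self.Wy = rng.normal(0, 0.1, (n_cls, d_h))
        self.d_h = d_h

    def forward(self, seq):
        h = np.zeros(self.d_h)  # short-term state
        c = np.zeros(self.d_h)  # long-term (cell) state
        for x in seq:  # one flattened landmark frame per time step
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)  # gated long-term memory update
            h = o * np.tanh(c)          # gated output
        logits = self.Wy @ h
        e = np.exp(logits - logits.max())
        return e / e.sum()              # class probabilities (softmax)

# 30 frames of 21 hand landmarks (x, y, z) -> 63-dim vectors per frame.
frames = rng.normal(size=(30, 63))
model = TinyLSTM(d_in=63, d_h=32, n_cls=5)
probs = model.forward(frames)
print(probs.shape)  # → (5,)
```

The cell state `c` is what gives the LSTM its "sequence notion": information from early frames of a gesture can persist until the final classification.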

SignAI Integrated Vision offers an immersive and precise experience for interpreting and communicating in LGP. This innovative platform simplifies and speeds up interaction between deaf and hearing people, eliminating communication barriers and promoting social integration.