TİARA: Searching Turkish Sign Language Videos with Visual Queries

Sign language is the native language of the Deaf. It conveys messages through a collection of visual constructs: gestures comprising hand shapes and trajectories, body poses, and facial expressions. Every cultural group has a distinct sign language, and Turkish Sign Language (TSL) is the sign language of the Deaf community in Turkey. Translating between TSL and Turkish speech is currently possible only through hearing human translators proficient in both languages, an expensive process; for example, only a limited number of television programs are broadcast with simultaneous translation. Automatic sign language recognition is one of the difficult, unsolved problems in computer vision. The development of sign language recognition technology will facilitate and accelerate the inclusion of the Deaf in society. In addition, advances in sign language recognition blaze the trail for other advances in computer vision and lead to new technology in human-computer interaction.

In the last five years, deep learning has achieved great success in computer vision and has become the state-of-the-art approach in object recognition. The collection and annotation of databases with huge numbers of images is one of the factors behind this development. Large sign language video databases in different sign languages have been collected in recent years. In this project, our purpose is to use these databases to train a specialized deep neural network for sign language recognition. Transfer learning techniques may be employed to adapt a network trained in a source domain to a different target domain. Our aim is to reach a common representation for different sign languages, and then to improve this representation with TSL data using transfer learning.
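The transfer-learning idea above can be sketched as follows. This is a minimal, hypothetical illustration (random data, toy shapes, a random projection standing in for the pretrained network), not the project's actual model: a feature extractor trained on a large source sign language corpus is kept frozen, and only a small classification head is fit on the limited labeled TSL examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "pretrained" feature extractor: a fixed random projection
# playing the role of a deep network trained on the source sign language.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # Frozen representation assumed to be shared across sign languages.
    return np.tanh(x @ W_frozen)

# Hypothetical small labeled TSL set: 40 clips (as 64-dim descriptors), 4 signs.
X_tsl = rng.normal(size=(40, 64))
y_tsl = rng.integers(0, 4, size=40)

# Train only the new head (multinomial logistic regression by gradient descent);
# the extractor's weights are never updated.
F = extract_features(X_tsl)
W_head = np.zeros((16, 4))
onehot = np.eye(4)[y_tsl]
for _ in range(200):
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.5 * F.T @ (p - onehot) / len(F)

train_acc = ((F @ W_head).argmax(axis=1) == y_tsl).mean()
print(f"head-only training accuracy: {train_acc:.2f}")
```

In a real system the frozen projection would be replaced by a deep network pretrained on the source-language database, and fine-tuning could also unfreeze its upper layers once the TSL head has converged.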

The results of the research will lead to the development of systems for isolated sign language recognition and for video-based queries in sign language videos. These systems may be used by a non-signing user to learn the meaning of a sign, or by the Deaf to conduct sign-based searches in signed videos. Additionally, queries in videos with simultaneous speech and sign will be used to build a dictionary mapping signs to speech and text.
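A sign-based video search of the kind described above can be sketched as a nearest-neighbour lookup in an embedding space. Everything here is a hypothetical stand-in: the `embed` function merely normalises a vector, where the project would use the learned common sign representation, and the indexed "segments" are random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(clip):
    # Stand-in for the learned sign embedding; here just L2-normalisation
    # of an already-extracted descriptor.
    v = np.asarray(clip, dtype=float)
    return v / np.linalg.norm(v)

# Toy index of 5 video segments, each stored as an 8-dim unit embedding.
index = np.stack([embed(rng.normal(size=8)) for _ in range(5)])

# Query: a slightly perturbed copy of segment 3, mimicking the same sign
# performed again under different conditions.
query = embed(index[3] + 0.05 * rng.normal(size=8))

scores = index @ query        # cosine similarity (all vectors unit-norm)
best = int(scores.argmax())   # index of the best-matching segment
print("best match:", best)    # → best match: 3
```

Returning the top few scores instead of the single argmax would give the ranked result list a search interface needs.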

Funding Institution: 

TUBITAK

Principal Investigator / Project Partner: 

Lale Akarun

Date: 

2017

Project Code: 

117E059

Contact us

Department of Computer Engineering, Boğaziçi University,
34342 Bebek, Istanbul, Turkey

  • Phone: +90 212 359 45 23/24
  • Fax: +90 212 287 24 61
