Learning Sign Language with AI-Driven Grammar Checking

March 12, 2021, 10:00 AM - 11:00 AM

Location:

Online Event

YingLi Tian, City University of New York

American Sign Language (ASL) is a primary means of communication for over 500,000 people in the US and a language distinct from English, conveyed through the hands, facial expressions, and body movements. Most prior work on ASL recognition has focused on identifying a small set of simple signs, and current technology is not sufficiently accurate on continuous signing of sentences with an unrestricted vocabulary. In this talk, I will share our research on AI-driven ASL learning tools that assist ASL students by enabling them to review and assess their signing through immediate, automatic, outside-of-classroom feedback. Our system can identify linguistic and performance attributes of ASL without necessarily identifying the entire sequence of signs, and it automatically determines whether a performance contains grammatical errors by fusing multimodal (facial expression, hand gesture, and body pose) and multisensory (RGB and depth video) information. The system currently recognizes eight types of grammatical mistakes and generates feedback for ASL learners in less than 2 minutes, on average, for each 1-minute ASL video. It has also been tested on videos recorded with cellphones and webcams.
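For readers curious what multimodal fusion of this kind can look like in code, below is a minimal, purely illustrative Python sketch of one common approach (late fusion: per-modality embeddings concatenated and fed to a multi-label classifier). Every name, embedding size, and the placeholder feature extractors here are hypothetical assumptions for illustration, not the actual system described in this talk.

import numpy as np

# Eight grammatical-error types, matching the count mentioned in the talk;
# the specific labels are illustrative placeholders, not the real categories.
ERROR_TYPES = [f"error_type_{i}" for i in range(8)]

def extract_face(rgb_clip):               # facial-expression embedding (stand-in)
    return np.random.rand(64)

def extract_hands(rgb_clip, depth_clip):  # hand-gesture embedding (stand-in)
    return np.random.rand(128)

def extract_pose(depth_clip):             # body-pose embedding (stand-in)
    return np.random.rand(32)

def detect_grammar_errors(rgb_clip, depth_clip, W, b, threshold=0.5):
    """Late fusion: concatenate the per-modality embeddings, then apply a
    per-error-type sigmoid classifier (multi-label, since one performance
    can contain several kinds of mistakes at once)."""
    fused = np.concatenate([
        extract_face(rgb_clip),
        extract_hands(rgb_clip, depth_clip),
        extract_pose(depth_clip),
    ])
    scores = 1.0 / (1.0 + np.exp(-(W @ fused + b)))  # sigmoid per label
    return [e for e, s in zip(ERROR_TYPES, scores) if s > threshold]

# Toy usage with randomly initialized weights (224 = 64 + 128 + 32);
# a real system would learn W and b from annotated ASL videos.
rng = np.random.default_rng(0)
W = rng.normal(size=(len(ERROR_TYPES), 224))
b = np.zeros(len(ERROR_TYPES))
print(detect_grammar_errors(rgb_clip=None, depth_clip=None, W=W, b=b))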

Bio: Dr. YingLi Tian is a CUNY Distinguished Professor in the Electrical Engineering Department at the City College of New York (CCNY) and the Computer Science Department at the Graduate Center of the City University of New York (CUNY). She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and a Fellow of the International Association for Pattern Recognition (IAPR). She received her PhD from the Department of Electronic Engineering at the Chinese University of Hong Kong in 1996. Her research interests include computer vision, machine learning, artificial intelligence, assistive technology, medical image analysis, and remote sensing. She has published more than 200 peer-reviewed journal and conference papers in these areas, with more than 21,500 citations, and holds 29 issued patents.

She is a pioneer in automatic facial expression analysis, human activity understanding, and assistive technology. Dr. Tian's research on automatic facial expression analysis and database development, conducted while she was at the Robotics Institute at Carnegie Mellon University, has had a significant impact on the research community and received the Test of Time Award at the IEEE International Conference on Automatic Face and Gesture Recognition in 2019. Before joining CCNY, Dr. Tian was a research staff member at the IBM T. J. Watson Research Center, where she led the video analytics team. She received the IBM Outstanding Innovation Achievement Award in 2007 and IBM Invention Achievement Awards every year from 2002 to 2007. Since joining CCNY in Fall 2008, she has focused on assistive technology, applying computer vision and machine learning to help people with special needs, including people who are blind or visually impaired, deaf or hard of hearing, and the elderly. She serves as an associate editor for IEEE Transactions on Multimedia (TMM), Computer Vision and Image Understanding (CVIU), the Journal of Visual Communication and Image Representation (JVCI), and Machine Vision and Applications (MVAP).


SPECIAL NOTE: This seminar is presented online only.

You can join via Webex

Meeting number (access code): 120 318 8520

Meeting password: 1234