Keynote Speakers

Stephen Brewster, University of Glasgow - Designing new user interfaces for cars

Hongying Meng, Brunel University London - Recent Advances in Automatic Emotion Detection from Facial Expressions

We thank the ACM Distinguished Speakers Program for sponsoring Prof. Brewster's talk.

Bios

Stephen Brewster is a Professor of Human-Computer Interaction in the School of Computing Science at the University of Glasgow. He received his PhD in auditory interfaces from the University of York and, after a period working in Finland and Norway, has been at Glasgow since 1995. He leads the Multimodal Interaction Group, which has a strong international reputation. His research focuses on multimodal HCI: using multiple sensory modalities and control mechanisms (particularly audio, haptics and gesture) to create rich, natural interaction between human and computer. His work has a strong experimental focus, applying perceptual research to practical situations. A long-term focus has been mobile interaction and how to design better user interfaces for users who are on the move; other areas of interest include accessibility, wearable devices and in-car interaction. He pioneered the study of non-speech audio and haptic interaction for mobile devices, with work starting in the 1990s. According to Google Scholar, he has 375 publications. He has served as an Associate Chair, Sub-Committee Chair and Papers Chair at ACM CHI, where he has also chaired the Interactivity, Doctoral Consortium and Student Design Competition tracks. He is a co-chair of ACM ICMI 2017 and an ACM Distinguished Speaker.

Hongying Meng is a Senior Lecturer in the Department of Electronic and Computer Engineering at Brunel University London, UK, and a Senior Member of the IEEE. He obtained his PhD from Xi’an Jiaotong University and was a lecturer at Tsinghua University.
After that, he held research positions at several UK universities, including University College London (UCL), the University of York and the University of Southampton. His research areas include digital signal processing, machine learning, computer vision, human-computer interaction, and embedded systems. He has developed two facial expression analysis systems that won the international challenge competitions AVEC2011 (http://sspnet.eu/avec2011/) and AVEC2013 (http://sspnet.eu/avec2013/), respectively. In 2016, he gave an invited talk on Deep Learning for Facial Expression Analysis at the Deep Learning Summit in London on Sept. 22-23. He has also developed a video-based real-time facial expression recognition system for embedded devices. His real-time facial expression analysis tool was integrated into the RIOT project, shown at the Festival of the Mind in Sheffield on Sept. 15-18, 2016, and at The Future of StoryTelling Festival (FoST FEST) in New York on Oct. 7-9, 2016. He has published more than 100 international journal and conference papers.