Biography
Shrikanth (Shri) Narayanan is University Professor and holder of the Niki and Max Nikias Chair in Engineering at the University of Southern California (USC), and serves as the inaugural Vice President for Presidential Initiatives on the Senior Leadership Team of USC's President. He is a Professor in the Signal and Image Processing Institute of USC's Ming Hsieh Department of Electrical & Computer Engineering, with joint appointments as Professor in Computer Science, Linguistics, Psychology, Neuroscience, Pediatrics, and Otolaryngology-Head and Neck Surgery. He is also the inaugural Director of the Ming Hsieh Institute, a Research Director at the Information Sciences Institute at USC, and a Visiting Faculty Researcher at Google. He held the inaugural Viterbi Professorship in Engineering at USC (2007-2016). He was also a Research Area Director of the Integrated Media Systems Center, an NSF Engineering Research Center at USC, and the Research Principal for the USC Pratt and Whitney Institute for Collaborative Engineering, a unique partnership between academia and industry (2003-2007).
Research Interests
Shri Narayanan’s interdisciplinary research focuses on human-centered sensing/imaging, signal processing, and machine intelligence centered on human communication, interaction, emotions, and behavior. His work places special emphasis on speech, audio, language, multimodal, and biomedical problems and applications with direct societal relevance in defense, security, health, media, and the arts. His laboratory is supported by federal (NSF, NIH, DARPA, IARPA, ONR, Army, and DHS), foundation, and industry grants. He has published over 1000 papers and holds 19 granted U.S. patents. His research and inventions have led to technology commercialization, including through startups he co-founded: Behavioral Signals Technologies, focused on telecommunication services and the AI-based conversational assistance industry, and Lyssn, focused on mental health care delivery, treatment, and quality assurance.
HUMAN-CENTERED SENSING, COMPUTING AND INFORMATION PROCESSING
-Human-centered Signal Processing and Machine Learning
-Behavioral Signal Processing, Emotions, Behavioral Informatics
-Speech and Language Processing, Automatic Speech/Speaker Recognition, Speech Translation
-Multimedia Signal Processing, Computational Media Intelligence
-Human-Machine and Mediated Interactions; Spoken Dialog and Multimodal Systems; Virtual Humans
-Speech Production Modeling, Articulatory Acoustics, Speech/Audio Synthesis; Audio/Music
-Biomedical Signal Processing and Modeling: Imaging & Instrumentation