Electrical and Computer Engineering
University of Illinois Urbana-Champaign. Email: lwang114@illinois.edu · GitHub
Welcome!
I'm a fourth-year PhD student and a proud member of the Statistical Speech Technology group at UIUC, led by Prof. Mark Hasegawa-Johnson.
My current research interests include:
Unsupervised representation learning for multimodal dialogue systems
Multimodal coreference resolution
Mathematical model of spoken language acquisition
Publications
Journal Paper
Liming Wang, Mark Hasegawa-Johnson. Multimodal Word Discovery with Spoken Descriptions and Visual Concepts. Transactions on Audio, Speech and Language Processing, 2020. paper
Conference Papers
Junrui Ni, Liming Wang, Heting Gao, Kaizhi Qian, Yang Zhang, Shiyu Chang and Mark Hasegawa-Johnson. Unsupervised Text-to-Speech Synthesis by Unsupervised Automatic Speech Recognition. Interspeech, 2022. paper code demo
Liming Wang, Siyuan Feng, Mark Hasegawa-Johnson and Chang D. Yoo. Self-supervised Semantic-driven Phoneme Discovery for Zero-resource Speech Recognition. Annual Meeting of the Association for Computational Linguistics (ACL), 2022. paper code
Liming Wang, Mark Hasegawa-Johnson. A Translation Framework for Multimodal Spoken Unit Discovery. Asilomar Conference on Signals, Systems, and Computers, 2021. slides
Liming Wang, Shengyu Feng, Xudong Lin, Manling Li, Heng Ji and Shih-Fu Chang. Coreference by Appearance: Visually Grounded Event Coreference Resolution. The Fourth Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC), 2021. paper
Liming Wang, Mark Hasegawa-Johnson. Align or Attend? Toward More Efficient and Accurate Spoken Word Discovery Using Speech-To-Image Retrieval. International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021. paper
Liming Wang, Mark Hasegawa-Johnson. A DNN-HMM-DNN Hybrid Model for Discovering Word-like Units from Spoken Captions and Image Regions. Interspeech 2020. paper
Liming Wang, Mark Hasegawa-Johnson. Multimodal Word Discovery with Phone Sequence and Image Concepts. Interspeech, 2019 (oral presentation). paper presentation
Graham Neubig et al. XNMT: The eXtensible Neural Machine Translation Toolkit. Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (AMTA), 2018.
Odette Scharenborg et al. Linguistic Unit Discovery from Multimodal Inputs in Unwritten Languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop. International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.