Our Team


Jingjing Liu
Principal Investigator / Guoqiang Professor

Dr. Jingjing Liu received her PhD in Computer Science from MIT EECS. She also holds an MBA from the Judge Business School (JBS) at the University of Cambridge. Dr. Liu was a Senior Principal Research Manager at Microsoft, where she led a research group on multimodal AI centered on vision-language multimodal intelligence, the intersection of natural language processing and computer vision. Before joining Microsoft Research, Dr. Liu was a Research Scientist at MIT CSAIL, focusing on spoken dialogue systems.

Research Fields:

Multimodal AI, Natural Language Processing, Large-scale Pre-training, Adversarial Learning

Selected Publications:

1. Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang and Zicheng Liu, Playing Lottery Tickets with Vision and Language, AAAI 2022.

2. Jinghui Chen, Yu Cheng, Zhe Gan, Quanquan Gu, and Jingjing Liu, Efficient Robust Training via Backward Smoothing, AAAI 2022.

3. Linjie Li, Jie Lei, Zhe Gan, and Jingjing Liu, Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models, ICCV 2021.

4. Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang, The Elastic Lottery Ticket Hypothesis, NeurIPS 2021. (arXiv:2103.16547)

5. Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang, Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective, NeurIPS 2021. (arXiv:2103.00397)

6. Linjie Li, Jie Lei, Zhe Gan, Licheng Yu, Yen-Chun Chen, Rohit Pillai, Yu Cheng, Luowei Zhou, Xin Eric Wang, William Yang Wang, Tamara L. Berg, Mohit Bansal, Jingjing Liu, Lijuan Wang, Zicheng Liu, VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation, NeurIPS 2021. (arXiv:2106.04632)

7. Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu, EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets, ACL 2021 (Oral). (arXiv:2101.00063)

8. Shuohang Wang, Luowei Zhou, Zhe Gan, Yen-Chun Chen, Yuwei Fang, Siqi Sun, Yu Cheng, Jingjing Liu, Cluster-Former: Clustering-based Sparse Transformer for Question Answering, Findings of ACL 2021. (arXiv:2009.06097)

9. Chen Zhu, Yu Cheng, Zhe Gan, Furong Huang, Jingjing Liu, and Tom Goldstein, Adaptive Learning Rates with Maximum Variation Averaging, ECML 2021. (arXiv:2006.11918)

10. Jie Lei*, Linjie Li*, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, and Jingjing Liu, Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling, CVPR 2021 (Oral). (arXiv:2102.06183)

11. Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, and Jingjing Liu, UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training, CVPR 2021. (arXiv:2104.00332)

12. Liqun Chen*, Dong Wang*, Zhe Gan, Jingjing Liu, Ricardo Henao, and Lawrence Carin, Wasserstein Contrastive Representation Distillation, CVPR 2021. (arXiv:2012.08674)

13. Shuyang Dai, Zhe Gan, Yu Cheng, Chenyang Tao, Lawrence Carin, and Jingjing Liu, APo-VAE: Text Generation in Hyperbolic Space, NAACL 2021. (arXiv:2005.00054)

14. Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, and Jingjing Liu, LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval, NAACL 2021. (arXiv:2103.08784)

15. Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu, InfoBERT: Improving Robustness of Language Models from an Information Theoretic Perspective, ICLR 2021. (arXiv:2010.02329)

16. Yuwei Fang*, Shuohang Wang*, Zhe Gan, Siqi Sun, and Jingjing Liu, FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding, AAAI 2021. (arXiv:2009.05166)

17. Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu, Large-Scale Adversarial Training for Vision-and-Language Representation Learning, NeurIPS 2020 (Spotlight). (arXiv:2006.06195)

18. Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu, HERO: Hierarchical Encoder for Video Language Omni-representation Pre-training, EMNLP 2020. (arXiv:2005.00200)

19. Siqi Sun, Zhe Gan, Yuwei Fang, Yu Cheng, Shuohang Wang, and Jingjing Liu, Contrastive Distillation on Intermediate Representations for Language Model Compression, EMNLP 2020. (arXiv:2009.14167)

20. Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, and Jingjing Liu, Cross-Thought for Sentence Encoder Pre-training, EMNLP 2020. (arXiv:2010.03652)

21. Yuwei Fang, Siqi Sun, Zhe Gan, Rohit Pillai, Shuohang Wang, and Jingjing Liu, Hierarchical Graph Network for Multi-hop Question Answering, EMNLP 2020. (arXiv:1911.03631)

22. Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, and Jingjing Liu, Multi-Fact Correction in Abstractive Text Summarization, EMNLP 2020. (arXiv:2010.02443)

23. Yu Cheng, Yizhe Zhang, Oussama Elachqar, Zhe Gan, and Jingjing Liu, Contextual Text Style Transfer, Findings of EMNLP 2020. (arXiv:2005.00136)

24. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu, UNITER: Learning UNiversal Image-TExt Representations, ECCV 2020. (arXiv:1909.11740)

25. Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, and Jingjing Liu, Behind the Scene: Revealing the Secrets of Pre-trained Vision-and-Language Models, ECCV 2020 (Spotlight). (arXiv:2005.07310)

26. Liqun Chen, Zhe Gan, Yu Cheng, Linjie Li, Lawrence Carin, and Jingjing Liu, Graph Optimal Transport for Cross-Domain Alignment, ICML 2020. (arXiv:2006.14744)

27. Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, and Jingjing Liu, Distilling Knowledge Learned in BERT for Text Generation, ACL 2020. (arXiv:1911.03829)

28. Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu, Discourse-Aware Neural Extractive Model for Text Summarization, ACL 2020. (arXiv:1910.14142)

29. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan, DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation, ACL 2020. (arXiv:1911.00536)

30. Jingzhou Liu, Wenhu Chen, Yu Cheng, Zhe Gan, Licheng Yu, Yiming Yang, and Jingjing Liu, VIOLIN: A Large-Scale Dataset for Video-and-Language Inference, CVPR 2020. (arXiv:2003.11618)

31. Yandong Li, Yu Cheng, Zhe Gan, Licheng Yu, Liqiang Wang, and Jingjing Liu, BachGAN: High-Resolution Image Synthesis from Salient Object Layout, CVPR 2020. (arXiv:2003.11690)

32. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu, FreeLB: Enhanced Adversarial Training for Language Understanding, ICLR 2020. (arXiv:1909.11764)

33. Shuohang Wang, Yunshi Lan, Yi Tay, Jing Jiang, and Jingjing Liu, Multi-level Head-wise Match and Aggregation in Transformer for Textual Sequence Matching, AAAI 2020. (arXiv:2001.07234)

34. Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, and Graham Neubig, What Makes A Good Story? Designing Composite Rewards for Visual Storytelling, AAAI 2020. (arXiv:1909.05316)

35. Zhe Gan, Yu Cheng, Ahmed El Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao, Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog, ACL 2019. (arXiv:1902.00579)

36. Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu, Relation-aware Graph Attention Network for Visual Question Answering, ICCV 2019. (arXiv:1903.12314)

37. Yitong Li, Zhe Gan, Yelong Shen, Jingjing Liu, Yu Cheng, Yuexin Wu, Lawrence Carin, David Carlson, and Jianfeng Gao, StoryGAN: A Sequential Conditional GAN for Story Visualization, CVPR 2019. (arXiv:1812.02784)

38. Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, and Siddhartha Srinivasa, Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation, CVPR 2019 (Oral). (arXiv:1903.02547)

39. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu, Patient Knowledge Distillation for BERT Model Compression, EMNLP 2019. (arXiv:1908.09355)




Address: 12/F, Block C, Qidi Science and Technology Building, Tsinghua Science Park, Haidian District, Beijing


Beijing ICP No. 15006448 | All rights reserved © Institute for AI Industry Research, Tsinghua University