Huiqiang Jiang (姜慧强)

Research SDE at Microsoft Research Asia,
a self-proclaimed fake NLPer (Google Scholar),
dabbling in research, development, and maintenance,
and even a weekend tutor.

An unpopular blogger (Blog & Zhihu)
A programming enthusiast @iofu728

Phone: +86 178 xxxx xxxx
Email: hjiang[aT]microsoft[DoT]com


Huiqiang Jiang obtained his Master's degree in Software Engineering from Peking University, where he worked with Associate Professor Xiang Jing. He was also a research intern at the KC Group, Microsoft Research Asia (Jun. 2019 – Mar. 2021), working with Börje Karlsson and Guoxin Wang, as well as at the search group of Ant Group (Jun. – Aug. 2020).
Huiqiang's research primarily concentrates on efficient methods to accelerate inference or training, including prompt compression (LLMLingua), KV-cache compression, speculative decoding, model compression, sparse inference, neural architecture search (NAS), and efficient tuning, with a particular emphasis on LLMs. Additionally, he is interested in addressing typical challenges in natural language processing, such as information extraction.

I'm looking for a research intern to work on efficient methods. Please get in touch with me (hjiang[aT]microsoft[DoT]com) if you are interested in my research topics.

Selected Publications

NLP & Speech

  1. LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression
    Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Dongmei Zhang
    arXiv
    [Code] [Project Page] [Demo]

  2. LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression
    Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu
    In ICLR Workshop Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), 2024.
    [Code] [Project Page] [Demo]

  3. LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models
    Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu
    In Proc. of EMNLP'23 (Oral)
    [Code] [Project Page] [Demo]

  4. Accurate and Structured Pruning for Efficient Automatic Speech Recognition
    Huiqiang Jiang, Li Lyna Zhang, Yuang Li, Yu Wu, Shijie Cao, Ting Cao, Yuqing Yang, Jinyu Li, Mao Yang and Lili Qiu
    In Proc. of Interspeech'23

  5. CoLaDa: A Collaborative Label Denoising Framework for Cross-lingual Named Entity Recognition
    Tingting Ma, Qianhui Wu, Huiqiang Jiang, Börje Karlsson, Tiejun Zhao and Chin-Yew Lin
    In Proc. of ACL'23
    [Code]

  6. Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text
    Qianhui Wu, Huiqiang Jiang, Haonan Yin, Börje F. Karlsson, Chin-Yew Lin
    In Proc. of ACL'23
    [Code]

  7. Decomposed Meta-Learning for Few-Shot Sequence Labeling
    Tingting Ma, Qianhui Wu, Huiqiang Jiang, Jieru Lin, Börje F Karlsson, Tiejun Zhao, Chin-Yew Lin
    IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 2024.
    [Code]

  8. Decomposed Meta-Learning for Few-Shot Named Entity Recognition
    Tingting Ma*, Huiqiang Jiang*, Qianhui Wu*, Tiejun Zhao, Chin-Yew Lin
    In Proc. of ACL'22 Findings
    [Code]

  9. AdvPicker: Effectively Leveraging Unlabeled Data via Adversarial Discriminator for Cross-Lingual NER
    Weile Chen, Huiqiang Jiang, Qianhui Wu, Börje F. Karlsson, Yi Guan
    In Proc. of ACL'21
    [Code]

  10. BoningKnife: Joint Entity Mention Detection and Typing for Nested NER via prior Boundary Knowledge
    Huiqiang Jiang, Guoxin Wang, Weile Chen, Chengxi Zhang, Börje F. Karlsson
arXiv preprint (work done between 2019 and 2020)

CV

  1. ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices
    Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu Zhang, Yuqing Yang, Zhi Wang, Mao Yang
    In Proc. of ICCV'23
    [Code]

  2. Attentive Mask CLIP
    Yifan Yang, Weiquan Huang, Yixuan Wei, Houwen Peng, Xinyang Jiang, Huiqiang Jiang, Fangyun Wei, Yin Wang, Han Hu, Lili Qiu, Yuqing Yang
    In Proc. of ICCV'23
    [Code]

System

  1. PIT: Optimization of Dynamic Sparse Deep Learning Models via Permutation Invariant Transformation
    Ningxin Zheng*, Huiqiang Jiang*, Quanlu Zhang, Zhenhua Han, Lingxiao Ma, Yuqing Yang, Fan Yang, Chengruidong Zhang, Lili Qiu, Mao Yang, Lidong Zhou
    In Proc. of SOSP'23
    [Code]

Selected Honors & Awards

  • Zhejiang Province Excellent Graduate Award, 2018.
  • Zhejiang Province Scholarship (top 5%), 2017.
  • Zhejiang University First-Class Scholarship for Outstanding Students, 2017.
  • Zhejiang University Excellent Student Award, 2017.
  • First Prize, College Students' Mathematical Contest in Modeling, Zhejiang Province, 2016.

Academic Service

  • ARR 23/24, EMNLP 23, COLING 24, ICLR 24, TIST

Last Updated: Mar. 2024