Ph.D. Student @ The University of Hong Kong

Taiqiang Wu

Focused on LLM efficiency.
Mission: Empower AI for everything!


Biography

I am currently a Ph.D. student in the NGai Lab at The University of Hong Kong (HKU). Before that, I received my master's degree from Tsinghua University in 2023, where I studied in the IIGroup supervised by Prof. Yujiu Yang. In 2020, I received my bachelor's degree from the Department of Automation at Tsinghua University. My main research interests lie in efficient model methods for Large Language Models. Recently, I have been focusing on efficient reasoning via RL.


Publications & Preprints

  • 🔥 The Art of Efficient Reasoning: Data, Reward, and Optimization

    • Taiqiang Wu, Zenan Xu, Bo Zhou, Ngai Wong
    • arXiv Preprint [Paper] [Weights] [Blog]
  • 🔥 Revisiting Model Interpolation for Efficient Reasoning

    • Taiqiang Wu, Runming Yang, Tao Liu, Jiahao Wang, Ngai Wong
    • arXiv Preprint [Paper] [Code]
  • Timber: Training-free Instruct Model Refining with Base via Effective Rank

    • Taiqiang Wu, Runming Yang, Tao Liu, Jiahao Wang, Zenan Xu, Ngai Wong
    • arXiv Preprint [Paper] [Code] [Weights]
  • Shadow-FT: Tuning Instruct Model via Training on Paired Base Model

    • Taiqiang Wu*, Runming Yang*, Jiayi Li, Pengfei Hu, Yik-Chung Wu, Ngai Wong, Yujiu Yang
    • arXiv Preprint [Paper] [Code] [Weights]
  • A Survey on the Honesty of Large Language Models

    • Siheng Li*, Cheng Yang*, Taiqiang Wu*, Chufan Shi, Yuji Zhang, Xinyu Zhu, Zesen Cheng, Deng Cai, Mo Yu, Lemao Liu, Jie Zhou, Yujiu Yang, Ngai Wong, Xixin Wu, Wai Lam
    • TMLR 2025 Journal [Paper] [Code]
  • Mixture-of-Subspaces in Low-Rank Adaptation

    • Taiqiang Wu, Jiahao Wang, Zhe Zhao, Ngai Wong
    • EMNLP 2024 Conference [Paper] [Code]
  • Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models

    • Taiqiang Wu, Chaofan Tao, Jiahao Wang, Runming Yang, Zhe Zhao, Ngai Wong
    • COLING 2025 Conference [Paper] [Code] [Video]
  • Weight-Inherited Distillation for Task-Agnostic BERT Compression

    • Taiqiang Wu*, Cheng Hou*, Shanshan Lao, Jiayi Li, Ngai Wong, Zhe Zhao, Yujiu Yang
    • NAACL 2024 Findings Conference [Paper] [Code] [Poster]
  • Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation from GNNs to MLPs

    • Taiqiang Wu, Zhe Zhao, Jiahao Wang, Xingyu Bai, Lei Wang, Ngai Wong, Yujiu Yang
    • COLING 2025 Conference [Paper]

Internships

  • 2021.3~2022.5, Tencent
  • 2022.5~2023.5, Tencent Rhino-Bird Research Program

Services

  • ARR2022, Reviewer
  • ARR2023, Reviewer
  • ARR2024, Reviewer
  • ARR2025, Reviewer
  • ARR2026, Reviewer

(Last updated in March 2026)