
Taiqiang Wu

Ph.D. Student, The University of Hong Kong

Biography

I am a Ph.D. student in the Ngai Lab at The University of Hong Kong (HKU). Before that, I received my master's degree from Tsinghua University in 2023, where I studied in the IIGroup supervised by Prof. Yujiu Yang. In 2020, I received my bachelor's degree from the Department of Automation at Tsinghua University. My main research interests lie in efficient model methods for Large Language Models.

I am looking for an internship or visiting opportunity; please contact me if you are interested.

Recent News

  • 2025.03 - The paper A Survey on the Honesty of Large Language Models is accepted by TMLR!
  • 2025.02 - Released a blog post about the KL divergence in RL algorithms. [Blog (in Chinese)]
  • 2025.01 - Released a report on a unified view of Attention and MoE. [Report] [Blog (in Chinese)]
  • 2024.11 - Two papers are accepted by COLING 2025.
  • 2024.11 - Released a report on modeling Binary Quantization via Convex Optimization Methods (project for HKU COMP9602, 2023). [Report]
  • 2024.09 - One paper is accepted by EMNLP 2024.
  • 2024.06 - Released a blog post about improving LoRA. [Eng]
  • 2024.04 - Released a blog post rethinking the KL divergence in KD for LLMs. [Eng]
  • 2024.03 - One paper is accepted by NAACL 2024 Findings.
  • 2023.02 - Two papers are accepted by ICASSP 2023.
  • 2022.10 - One long research paper is accepted by WSDM 2023.

Publications & Preprints

  • A Survey on the Honesty of Large Language Models

    • Siheng Li*, Cheng Yang*, Taiqiang Wu*, Chufan Shi, Yuji Zhang, Xinyu Zhu, Zesen Cheng, Deng Cai, Mo Yu, Lemao Liu, Jie Zhou, Yujiu Yang, Ngai Wong, Xixin Wu, Wai Lam
    • TMLR 2025 [pdf] [code]
  • Mixture-of-Subspaces in Low-Rank Adaptation

    • Taiqiang Wu, Jiahao Wang, Zhe Zhao, Ngai Wong
    • EMNLP 2024 [pdf] [code]
  • Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models

    • Taiqiang Wu, Chaofan Tao, Jiahao Wang, Runming Yang, Zhe Zhao, Ngai Wong
    • COLING 2025 [pdf] [code]
  • Weight-Inherited Distillation for Task-Agnostic BERT Compression

    • Taiqiang Wu*, Cheng Hou*, Shanshan Lao, Jiayi Li, Ngai Wong, Zhe Zhao, Yujiu Yang
    • NAACL 2024 Findings (research paper) [pdf] [code] [poster]
  • Modeling Fine-grained Information via Knowledge-aware Hierarchical Graph for Zero-shot Entity Retrieval

    • Taiqiang Wu*, Xingyu Bai*, Weigang Guo, Weijie Liu, Siheng Li, Yujiu Yang
    • WSDM 2023 (CCF B, research paper) [pdf] [code] [poster]
  • Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation from GNNs to MLPs

    • Taiqiang Wu, Zhe Zhao, Jiahao Wang, Xingyu Bai, Lei Wang, Ngai Wong, Yujiu Yang
    • COLING 2025 [pdf]

Internship

  • 2021.03–2022.05, Tencent
  • 2022.05–2023.05, Tencent Rhino-Bird Research Plan

Services

  • ARR 2022, Reviewer
  • ARR 2023, Reviewer
  • ARR 2024, Reviewer
  • ARR 2025, Reviewer

(Last updated in March 2025)