👋 About Me

Hello! This is Enneng Yang. I'm currently a Postdoctoral Fellow at the Shenzhen Campus of Sun Yat-sen University, China, advised by Assoc. Prof. Li Shen. Before that, I received my Ph.D. (June 2025) from Northeastern University, China, advised by Prof. Guibing Guo. From March 2024 to March 2025, I was a visiting Ph.D. student in Prof. Jie Zhang's group at Nanyang Technological University, Singapore.

My research interests lie in large language models, machine learning, and recommender systems. More specifically, I focus on:

  • Large Language Models: continual pretraining/finetuning, knowledge editing
  • Machine Learning: model merging, multi-task learning, continual/incremental learning, data-free learning, dataset/knowledge distillation
  • Recommender Systems: multi-task/multi-scenario recommendation, sequential recommendation, OOD recommendation

🔥 Our team is seeking self-motivated students (including remote interns, undergraduates, graduate students, and other candidates) to join our research on LLMs, continual learning, and model merging, with the goal of publishing high-quality academic papers. If you are interested, please email me your resume.

๐Ÿ‘ News

  • 2026.01: Two papers on model merging are accepted to ICLR 2026.
  • 2025.12: A survey on model merging is accepted to CSUR 2025.
  • 2025.11: A paper on a model merging benchmark is accepted to JMLR 2025.
  • 2025.11: A paper on long-sequence recommendation is accepted to AAAI 2026.
  • 2025.10: A paper on model merging is accepted to TPAMI 2025.
  • 2025.09: Two papers on continual model merging are accepted to NeurIPS 2025.
  • 2025.05: A paper on knowledge editing is accepted to ACL 2025 (main).
  • 2025.05: A paper on model merging is accepted to ICML 2025.
  • 2025.04: A paper on explainable recommendation is accepted to TOIS 2025.
  • 2025.04: Two papers on sequential recommendation are accepted to SIGIR 2025.
  • 2025.02: A paper on out-of-distribution recommendation is accepted to TOIS 2025.
  • 2025.01: A paper on flatness-aware continual learning is accepted to TPAMI 2025.
  • 2025.01: Two papers on out-of-distribution recommendation are accepted to WWW 2025.
  • 2024.12: Two papers, on LLM fine-tuning and sequential recommendation, are accepted to AAAI 2025.
  • 2024.11: A paper on recommendation unlearning is accepted to TOIS 2024.
  • 2024.11: A survey on forgetting in deep learning is accepted to TPAMI 2024.
  • 2024.09: A paper on continual learning is accepted to TPAMI 2024.
  • 2024.05: A paper on model merging is accepted to ICML 2024.
  • 2024.01: A paper on model merging is accepted to ICLR 2024.
  • 2023.10: A paper on sequential recommendation is accepted to TKDE 2023.
  • 2023.09: A paper on dataset condensation is accepted to NeurIPS 2023.
  • 2023.07: A paper on flatness-aware continual learning is accepted to ICCV 2023.
  • 2023.04: A paper on next-basket recommendation is accepted to IJCAI 2023.
  • More

✨ Repositories

Comments and contributions are welcome.

๐Ÿ“ Selected Preprints and Publications

Survey or Benchmark Papers

Conference Papers

Journal Papers

📖 Education

💻 Internships

๐Ÿ† Honors and Awards

  • 2025.01: Youth Talents Support Project - Doctoral Student Special Program (First Session)
  • 2024.10: National Scholarship (Top 1%)
  • 2023.10: National Scholarship (Top 1%)
  • 2020.05: Tencent Rhino-Bird Elite Talent Training Program (51 People Worldwide)
  • 2019.10: National Scholarship (Top 1%)
  • 2017.10: National Scholarship (Top 1%)

🔖 Services

  • Conference Reviewer: ICML 2026, ACL 2026, CVPR 2026, ICLR 2026, AAAI 2026, NeurIPS 2025, ICCV 2025, ICML 2025, WWW 2025, ICLR 2025, AAAI 2025, NeurIPS 2024, ICML 2024, KDD 2024, AAAI 2024, WWW 2023, WSDM 2023
  • Journal Reviewer: EAAI 2025, ML 2025, TMLR 2025, TSC 2024, TBD 2024, TCSVT 2024, NCAA 2024, TORS 2022