I am a PhD student at the University of Manchester, supervised by Prof. Sophia Ananiadou. I worked as an NLP researcher at Tencent from 2020 to 2022. I received my Master's degree from Shanghai Jiao Tong University in 2020, supervised by Prof. Gongshen Liu, and my Bachelor's degree from the same university in 2017.

I believe mechanistic interpretability is crucial for achieving trustworthy AGI: pursuing AGI without understanding the reasoning behind its decisions is inherently risky. Mechanistic interpretability also serves as a powerful tool for analyzing LLMs and for guiding the development of better methodologies and architectures. Currently, my research focuses on:

a) LLM mechanistic interpretability. Identifying important neurons and understanding the neuron-level information flow in LLMs. My research has investigated the underlying mechanisms of factual knowledge, arithmetic, latent multi-hop reasoning, in-context learning, and visual question answering.

b) LLM post-training. Applying interpretability techniques to analyze LLMs and to design methods that enhance their capabilities (knowledge, arithmetic, reasoning) during post-training. I developed a back attention module to improve latent multi-hop reasoning ability.

c) LLM safety and fairness. Leveraging interpretability techniques to identify the key neurons that encode gender bias, and applying neuron-level model editing to mitigate this bias while preserving the LLM's existing capabilities.
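As a toy illustration of the neuron-level attribution idea behind (a) and (c) — a minimal sketch with made-up numbers, not the actual method from the papers — each FFN neuron's additive contribution to a target token's logit can be scored as its activation times the dot product of its value vector with the target token's unembedding column:

```python
# Toy neuron-level attribution: which FFN "neurons" push up a target logit?
# All weights below are hypothetical; this only illustrates the decomposition
#   contribution_i = a_i * (v_i . u_t),
# where a_i is neuron i's activation, v_i its value vector, and u_t the
# unembedding column of the target token.

value_vecs = [
    [0.5, -0.2],
    [0.1, 0.9],
    [-0.4, 0.3],
]
activations = [2.0, -1.0, 0.5]
u_target = [1.0, 0.5]  # unembedding column for the target token

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Per-neuron contribution to the target logit.
contribs = [a * dot(v, u_target) for a, v in zip(activations, value_vecs)]

# Sanity check: the contributions sum to the full FFN output's logit effect,
# because the FFN output is the activation-weighted sum of value vectors.
ffn_out = [sum(a * v[j] for a, v in zip(activations, value_vecs))
           for j in range(len(u_target))]
assert abs(sum(contribs) - dot(ffn_out, u_target)) < 1e-9

# Rank neurons by absolute contribution; the top entries are the
# "important neurons" for this prediction.
ranked = sorted(range(len(contribs)), key=lambda i: -abs(contribs[i]))
print(ranked[0])  # → 0 (neuron 0 dominates with these toy numbers)
```

Once important neurons are identified this way, neuron-level editing (as in (c)) amounts to intervening on exactly those neurons while leaving the rest of the model untouched.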

My email is zepingyu@foxmail.com.

🔥 News

📝 Publications

Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models

Zeping Yu, Yonatan Belinkov, Sophia Ananiadou [arXiv: 2502.10835]

Understanding and Mitigating Gender Bias in LLMs via Interpretable Neuron Editing

Zeping Yu, Sophia Ananiadou [arXiv: 2501.14457]

Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering

Zeping Yu, Sophia Ananiadou [arXiv: 2411.10950]

Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis

Zeping Yu, Sophia Ananiadou [EMNLP 2024 (main)]

Neuron-Level Knowledge Attribution in Large Language Models

Zeping Yu, Sophia Ananiadou [EMNLP 2024 (main)]

How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning

Zeping Yu, Sophia Ananiadou [EMNLP 2024 (main)]

CodeCMR: Cross-modal retrieval for function-level binary source code matching

Zeping Yu, Wenxin Zheng, Jiaqi Wang, Qiyi Tang, Sen Nie, Shi Wu [NeurIPS 2020]

Order matters: Semantic-aware neural networks for binary code similarity detection

Zeping Yu, Rui Cao, Qiyi Tang, Sen Nie, Junzhou Huang, Shi Wu [AAAI 2020]

Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation

Zeping Yu, Jianxun Lian, Ahmad Mahmoody, Gongshen Liu, Xing Xie [IJCAI 2019]

Sliced recurrent neural networks

Zeping Yu, Gongshen Liu [COLING 2018]

📖 Education

  • 2023.09 - 2027.02 (Expected), PhD, Computer Science, University of Manchester.
  • 2017.09 - 2020.03, Master of Engineering, Shanghai Jiao Tong University.
  • 2013.09 - 2017.06, Bachelor of Engineering, Shanghai Jiao Tong University.