Aswathy Ajith

Hi there! I am a PhD student in the Computer Science Department at the University of Chicago, where I am jointly advised by Prof. Ari Holtzman, Dr. Kyle Chard, and Prof. Ian Foster. I am broadly interested in the behavioral analysis of generative models to understand their capabilities and limitations. My current research focuses on exploring and characterizing how parameter-efficient fine-tuning techniques affect the generated outputs of large language models.

  • Github
  • LinkedIn
  • Twitter

Selected Publications

ICLR '25
Mitigating Memorization in Language Models (spotlight)
Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Nathaniel Hudson, Caleb Geniesse, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney
BlackboxNLP '23
Memory Injections: Correcting Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models
Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, Ian Foster
ACL '23
The Diminishing Returns of Masked Language Models to Science
Zhi Hong, Aswathy Ajith, James Pauloski, Eamon Duede, Kyle Chard, Ian Foster