Nazneen Rajani


News

  • Invited talk and panelist at NeurIPS workshops on instruction tuning [slides] and ENLSP
  • Honored to serve on the United Nations' AI Advisory Board [link]
  • Featured in a NYT cover article on RLHF [link]
  • Zephyr 7B performs on par with ChatGPT [paper]
  • Transformers United seminar lecture [slides]
  • Invited talk at Stanford's NLP Seminar [slides]
  • Invited talk at the UC Berkeley Decentralization Summit [slides]

Hello! Welcome to my page 👋🏽

Until recently, I was a Research Lead at Hugging Face 🤗, where I was part of the team (called H4) focused on democratizing the secret sauce of alignment that makes ChatGPT wildly different from GPT-3. During that process, we worked with external vendors, Surge and Scale AI, to collect data for SFT and RLHF. I led the work on benchmarking data quality and deciding on key criteria such as data quantity, task distribution, length distribution, which values to align the models towards, and whether to use rating or ranking. Apart from involving humans in the data collection process for SFT and RLHF, we also experimented with AI distillation. Our Zephyr model is finetuned for alignment using only AI-distilled data. My talk at NeurIPS '23 (slides available here) compared our work on manual curation vs. AI distillation for alignment. The NYT covered my work on SFT and RLHF data collection and finetuning.

I believe the market for adopting open-access LLMs is growing rapidly, and there is a great appetite for deploying models that can be governed and customized without paying huge amounts of money. However, these open-access LLMs do not work out of the box for most enterprise use cases, which come with specific requirements and behaviors that enterprises want in their models. This led me to leave my position at HF and apply my learnings to bridge this gap.

I love teaching, and I have taught courses with Uplimit on Interpreting ML Models that include hands-on projects for implementing and evaluating algorithms for interpreting deep learning models applied to vision and NLP. I have also given lectures for seminar courses at UVirginia, UMich, Stanford, UChicago, Georgia Tech, and UT Austin.

Before joining Hugging Face, I was a Senior Research Scientist at Salesforce Research, where I worked on commonsense reasoning and interpretability in NLP. I led a small team focused on building robust natural language generation models. Prior to working at Salesforce, I completed my Ph.D. thesis in the Machine Learning Research Group of the Department of Computer Science at the University of Texas at Austin. I worked with my advisor, Prof. Ray Mooney, on problems in NLP, in vision, and at the intersection of the two. I have also worked on problems in Explainable AI (XAI), where I proposed a scalable approach to generate visual explanations for ensemble methods using the localization maps of the component systems. Evaluating explanations is also a challenging problem, and I proposed two novel evaluation metrics that do not require human-generated ground truth.

I completed my MS in CS with a thesis, advised by Jason Baldridge, on new-topic detection using topical alignment of tweets based on their authors and recipients.