This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the Interpretability Hackathon research sprint on November 15, 2022

An Informal Investigation of Indirect Object Identification in Mistral GPT2-Small Battlestar

This report presents an informal investigation of an indirect object identification (IOI) circuit within the Mistral GPT2-Small x49 Battlestar transformer model. It is inspired by the Interpretability in the Wild paper by Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt of Redwood Research, and by the mechanistic interpretability work of Neel Nanda.

Chris Mathwin
1st place by peer review