Background: I graduated from the University of Michigan this spring with a double major in Math and CS. In college I worked on vision research for self-driving cars and wrote my undergrad thesis on robustness (my linkedin). I spent a lot of time running the EA group at Michigan. I’m currently doing SERI MATS under John Wentworth.
Research taste: currently very bad, confused, and uncertain. I want to become better at research, and that is mostly why I am doing MATS right now. I especially enjoy reading and thinking about mathy research like Infra-Bayesianism and MIRI’s embedded agency work, but I’ll be excited about whatever research I think is most important.
I’m pretty new to interacting with the alignment sphere (before this summer I had just read things online and taken AGISF). Who I’ve interacted with (I’m probably forgetting some, but this gives a rough idea):
- 1 conversation with Andrew Critch
- ~3 conversations with people at each of Conjecture and MIRI
- ~8 conversations with various people at Redwood
- Many conversations with people who hang around Lightcone, especially John and other SERI MATS participants (including Team Shard)
This summer, when I started talking to alignment people, I had a massive rush of information, so this began as a Google Doc of notes to organize my thoughts and figure out what people were doing. I then polished and published it after some friends encouraged me to. I emphasize that nothing I write in the opinion sections is a strongly held belief: I am still deeply confused about a lot of things in alignment. I’m hoping that by posting this more publicly I can also get feedback and perspectives from others who are not currently in my social sphere.