Good point. For myself:
Background (see also https://www.elilifland.com/): I did some research on adversarial robustness of NLP models while in undergrad. I then worked at Ought as a software/research engineer for 1.5 years, was briefly a longtermist forecasting entrepreneur, and have spent the past 2 months thinking independently about alignment strategy, among other things.
Research tastes: I’m not great at understanding and working on super mathy stuff, so I mostly avoided giving opinions on those areas. I enjoy toy programming puzzles/competitions but got bored of engineering large/complex systems, which is part of why I left Ought. I’m generally excited about automating alignment research to some degree.
Who I’ve interacted with:
A ton: Ought
~3-10 conversations: Conjecture (vast majority being “Simulacra Theory” team), Team Shard
~1-2 conversations with some team members: ARC, CAIS, CHAI, CLR, Encultured, Externalized Reasoning Oversight, MIRI, OpenAI, John Wentworth, Truthful AI / Owain Evans