Working independently on making AI systems reason safely about decision theory and acausal interactions, collaborating with Caspar Oesterheld and Emery Cooper.
I check LessWrong infrequently. If you want to get in touch, you can use my admonymous link and leave your email address there so I can reply! (If you don't include contact details in your message, I can't reply.)
You can also just send me thoughts and questions anonymously!
How others can help me
Be interested in working on/implementing ideas from research on acausal cooperation! Or connect me with people who might be.
How I can help others
Ask me about acausal stuff!
Or ask about any of my background: Before doing independent research, I worked for the Center on Long-Term Risk on s-risk reduction projects (hiring, community building, and grantmaking). Previously, I was a guest manager at the EA Infrastructure Fund (2021), did some research for 1 Day Sooner on Human Challenge Trials for Covid vaccines (2020), did the summer research fellowship at FHI writing about IDA (2019), worked a few hours a week for CEA on local groups mentoring for a few months (2018), and helped a little bit with organizing EA Oxford (2018/19). I studied PPE at Oxford (2018-2021) and psychology in Freiburg (2015-2018).
I also have things to say about mental health and advice for taking a break from work.
My current guess is that occasional volunteers are totally fine! There's some onboarding cost, but mostly the cost on our side scales with the number of argument-critique pairs we get. Since the whole point is to have critiques of a wide variety of quality, I don't expect the nth argument-critique pair we get to be much more usable than the 1st one. I might be wrong about this and change my mind as we try this out with people, though!
(By the way, I didn't get a notification for your comment, so it's probably better to DM me if you're interested.)