I’m Jérémy Perret. Based in France. PhD in AI (NLP). AI Safety & EA meetup organizer. Information sponge. Mostly lurking since 2014. Seeking more experience, and eventually a position, in AI safety/governance.
Extremely annoyed by the lack of an explorable framework for AI risk/benefits. Working on that.
The section ends here, but… isn't there a paragraph missing? I was expecting the standard continuation, along the lines of "Will the second team make the same decision, once they reach the same capability? Will the third, or the fourth?" and so on.