I’m Jérémy Perret. Based in France. PhD in AI (NLP). AI Safety & EA meetup organizer. Information sponge. Mostly lurking since 2014. Seeking more experience, and eventually a position, in AI safety/governance.
Extremely annoyed by the lack of an explorable framework for AI risk/benefits. Working on that.
Could you provide an example of a prediction the Γ Framework makes that highlights its divergence from the Standard Model? Especially in cases where the Standard Model falls short of describing reality well enough?