I’m happy to state on the record that, if I had a magic button that would stop all AGI progress for 50 years, I would absolutely press it. I don’t agree with the idea that it’s super important to trot everyone out and get them to say that publicly, but I’m happy to say it for myself.
I would like to observe to onlookers that you did in fact say something similar in your post on RSPs. Your very first sentence was:
Recently, there’s been a lot of discussion and advocacy around AI pauses—which, to be clear, I think is great: pause advocacy pushes in the right direction and works to build a good base of public support for x-risk-relevant regulation.
If I had clear lines in my mind between AGI capabilities progress, AGI alignment progress, and narrow AI progress, I would be 100% with you on stopping AGI capabilities. As it is, though, I don’t know how to count things. Is “understanding why neural net training behaves as it does” good or bad? (That’s the goal of singular learning theory.) Is “determining the necessary structures of intelligence for a given architecture” good or bad? (That’s the aim of some strands of mechanistic interpretability.) Is an LLM narrow or general?
How do you tell, or at least approximate? (These are genuine questions, not rhetorical ones.)