FWIW I think this would be a lot less like “tutoring” and a lot more like “paying people to tell you their opinions”. Which is a fine thing to want to do, but I just want to make sure you don’t think there’s any kind of objective curriculum that constitutes AI alignment.
Hmm, I’m a bit confused about what this means. There is, I think, a relatively large set of skills and declarative knowledge associated with AI Alignment that is pretty verifiable and objective.
It is the case that there is no consensus on what solutions to the AI Alignment problem might look like, but I think the basic arguments for why this is a thing to be concerned about are pretty straightforward and rest on some fairly objective premises.
There are a lot of detailed arguments for why alignment is going to be more or less difficult. Understanding all of those arguments, starting with the most respected, is a curriculum. Just pulling a number out of your own limited perspective is a whole different thing.