FWIW I think this would be a lot less like “tutoring” and a lot more like “paying people to tell you their opinions”. Which is a fine thing to want to do, but I just want to make sure you don’t think there’s any kind of objective curriculum that comprises AI alignment.
Hmm, a bit confused what this means. There is I think a relatively large set of skills and declarative knowledge that is pretty verifiable and objective and associated with AI Alignment.
It's true that there is no consensus on what solutions to the AI Alignment problem might look like, but the basic arguments for why this is a thing to be concerned about are pretty straightforward and rest on some fairly objective premises.