Hmm, a bit confused about what this means. There is, I think, a relatively large set of skills and declarative knowledge associated with AI Alignment that is pretty verifiable and objective.
It is true that there is no consensus on what solutions to the AI Alignment problem might look like, but I think the basic arguments for why this is a thing to be concerned about are pretty straightforward and rest on some pretty objective premises.