A little out of context, but do you know whether Paul Rosenbloom has similar ideas to you about AI existential risk, AI cooperation and autonomy, or strong limitations on super-human AI without commensurate scaling of resources?