The better we can solve the key questions (“what are these ‘wiser’ versions?”, “how is the whole setup designed?”, “what questions exactly is it trying to answer?”), the better these ‘wiser’ versions of ourselves will be at their tasks.
I feel like this statement suggests that we might not be doomed if we make substantial but incomplete progress on these questions. I agree with that assessment, but on reading the post it seemed to be making the claim “Unless we fully specify a correct theory of human values, we are doomed”.
I think I’d view something like Paul’s indirect normativity approach as requiring that we do enough thinking in advance for the participating humans to know some critical set of considerations, but once that’s in place we should be able to go from this core set to derive the rest of the considerations. And it seems possible that we can do this without a fully-solved theory of human values (though any theoretical progress we can make in advance on defining human values is quite useful).
I currently agree with this view. But I’d add that a theory of human values is a direct way to solve some of the critical considerations.