This post hits me far more strongly than the previous ones on this subject.
I think your main point is that it’s positively dangerous to believe in an objective account of morality if you’re trying to build an AI: you will then falsely believe that a sufficiently intelligent AI can determine the correct morality on its own, and so conclude that you don’t have to worry about programming it to be friendly (or Friendly).
I’m sure you’ve mentioned this before, but this is more forceful, at least to me. Thanks.
Personally, even though I’ve said before that there might be an objective basis for morality, I’ve never believed that every mind (or even a large fraction of minds) would be able to find it. So I’m in total agreement that we shouldn’t just assume a superintelligent AI would do good things.
In other words, this post drives home to me that, pragmatically, the view of morality you propose is the best one to have, from the point of view of building an AI.