We create friendly AI that maximizes the happiness of humans. This AI figures that we would be happiest if we were alone in our galaxy.
That assumes that AIs maximize things, and in my opinion they won’t, just as humans don’t. But in any case, if you think that the AI is simply implementing the true extrapolation of human values, then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.
“then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.” Good point, but what if we don’t understand our true values and accidentally implement them via AI?
That would be accidental, but in an unimportant sense. You could call it accidentally accidental.
Run that by me one time?
It seems like you are conceding that we CAN wipe out alien civilizations accidentally. Thread over. But the “unimportant” qualifier makes me think that it isn’t quite so cut and dried. Can you explain what you mean?
Naturally, if I were mistaken, it would be appropriate to concede that I was mistaken. However, this was not about being mistaken. The point is that in arguments the truth is rarely all on one side; there is usually some truth in both. And in this case, in the way that matters, namely the way I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another way, the unimportant way, it would be possible in the scenario under consideration (a scenario which is also very unlikely in the first place).
In particular, when someone fears something happening “accidentally”, they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.
In any case, I do not concede that wiping out alien civilizations is contained in people’s true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.