This is mostly true but not relevant, because we can’t wipe out alien civilizations accidentally. Most planets will not have aliens on them, and if we go to some particular planet and wipe out the civilization, that will surely be on purpose. Likewise if they do it.
We create friendly AI that maximizes the happiness of humans. This AI figures that we would be happiest in our galaxy if we were alone.
That assumes that AIs maximize things, and in my opinion they won’t, just as humans don’t. But in any case, if you think that the AI is simply implementing the true extrapolation of human values, then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.
“then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.” Good point, but what if we don’t understand our true values and accidentally implement them via AI?
That would be accidental, but in an unimportant sense. You could call it accidentally accidental.
Run that by me one time?
It seems like you are conceding that we CAN wipe out alien civilizations accidentally. Thread over. But the “unimportant” qualifier makes me think that it isn’t quite so cut and dried. Can you explain what you mean?
Naturally, if I were mistaken it would be appropriate to concede that I was mistaken. However, it was not about being mistaken. The point is that in arguments the truth is rarely all on one side; there is usually some truth in both. And in this case, in the way that matters, namely the way I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another way, the unimportant way, it would be possible in the scenario under consideration (a scenario that is also very unlikely in the first place).
In particular, when someone fears something happening “accidentally”, they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.
In any case I do not concede that it is contained in people’s true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.
We might not know much about a planet at the time we send a mission to it. Additionally, we might simply want to go to every planet within X light-years.
It’s plausible that we will colonize every planet within 100 light-years of Earth within the next 1000 years.
I don’t think we would be terraforming planets without checking what was already there, and there would be no reason to interfere with a planet that was already inhabited.
You don’t need terraforming for a self-replicating AI to take root in a galaxy and convert the galaxy into useful stuff.
I don’t think gathering information about whether or not a solar system is populated will be significantly more expensive than colonizing it.
So you’re saying that people will send self-replicating AIs to convert galaxies into useful stuff without paying attention to what is already there?
That doesn’t seem at all likely to me. The AI will probably pay attention even if you don’t explicitly program it to do so.
I don’t think that there’s a way to “pay attention” that’s significantly cheaper than converting galaxies.
I think converting galaxies already includes paying attention, since if you don’t know what’s there it’s difficult to change it into something else.
Maybe you’re thinking of this as though it were a fire that just burned things up, but I don’t think “converting galaxies” can or will work that way.
You make the decision to send the resources necessary to transform a galaxy without knowing much about the galaxy. The only things you know are based on the radiation that you can pick up many light years away.
Once you have sent your vehicle to the galaxy, it could of course decide to do nothing or fly into the sun, but that would be a waste of resources.
The Law of Unintended Consequences says you’re wrong.
That’s not an argument.
It’s an observation :-P