The issue is the standard “The AI neither loves you nor hates you, but you’re made out of atoms...”. The Europeans did not desire to wipe out Native Americans; they just wanted land, and no annoying people who kept shooting arrows at them.
The Native American thing isn’t analogous to paperclipping, because they weren’t exterminated as part of a deliberate plan.
The alien encounter thing isn’t all that analogous, either. It makes a little sense for paperclippers to take resources from humans, because humans are at least nearby. How much sense does it make to cross interstellar space to take resources from a species that is likely to fight back?
The ready-made economic answer to intra-species conflict is to make use of the considerable amounts of no-man’s-land the universe has provided you with to stay out of each other’s way.
Kicking the can down the road doesn’t seem to be a likely action of an intelligent civilisation.
Best to control us while they still can, or while any resulting war would not cause unparalleled destruction.
Why? Provide some reasoning.
Non-interaction was historically an option when the human population was much lower. Since the universe appears not to be densely populated, my argument is that the same strategy would be favoured.
There have been wars over land since humans have existed. And non-interaction, even if initially widespread, clearly stopped eventually, when it became clear that the world wasn’t infinite and that particular parts had special value and were contested by multiple tribes. Australia being huge and largely empty didn’t stop European tribes from having a series of wars increasing in intensity until we had WW1 and WW2, which were unfathomably violent and huge clashes over ideology and resources. This is what happened in Europe, where multiple tribes of comparable strength grew up near each other over a period of time. In America, settlers simply neutralized Native Americans while the settlers’ technological superiority was overwhelming, a much better idea than letting them grow powerful enough to eventually challenge you.
You write as though the amount of free land or buffer zone was constant, that is, as though the world population was constant. My point was that walking in separate directions was a more viable option when the population was much lower, and that, where available, it is usually an attractive option because it is low cost. The point is probabilistic: there have always been wars, the question is how many.
Do I really have to explain why Australia wasn’t a buffer zone between European nations? On a planet, there is no guarantee that rival nations won’t be cheek by jowl, but galactic civilisations are guaranteed to be separated by interstellar space. Given reasonable assumptions about the scarcity of intelligent life, and the light barrier, the situation is much better than it ever was on Earth.
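As a rough illustration of how much room interstellar space provides: if one assumes some number of technological civilisations scattered uniformly through the galactic disk, a standard nearest-neighbour estimate gives the typical gap between them. The disk dimensions and the civilisation count in this sketch are assumptions chosen purely for illustration, not data.

```python
import math

# Illustrative assumptions, not data: a thin galactic disk and a guessed
# number of technological civilisations inside it.
DISK_RADIUS_LY = 50_000   # rough radius of the Milky Way's disk, in light-years
DISK_HEIGHT_LY = 1_000    # rough thickness of the disk, in light-years
N_CIVILISATIONS = 1_000   # assumed civilisation count (a pure guess)

disk_volume = math.pi * DISK_RADIUS_LY ** 2 * DISK_HEIGHT_LY   # cubic light-years
density = N_CIVILISATIONS / disk_volume                        # civilisations per ly^3

# For points scattered uniformly (Poisson) in three dimensions, the mean
# distance to the nearest neighbour is about 0.554 * density**(-1/3).
# This is only a rough guide when the answer is comparable to the disk thickness.
mean_gap_ly = 0.554 * density ** (-1.0 / 3.0)

print(f"Assumed density: {density:.2e} civilisations per cubic light-year")
print(f"Typical nearest-neighbour gap: {mean_gap_ly:,.0f} light-years")
```

On those assumptions the typical gap comes out around a thousand light-years, which is the sense in which scarcity plus the light barrier leaves civilisations far better separated than human tribes ever were.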
This seems like very sound reasoning.
Native Americans were “neutralized” mostly as a side effect of the diseases brought by colonists, and then outcompeted by economically more successful cultures. And rather than any strategic effort to prevent a WW1 or WW2 from happening on another continent, settlers from different European nations actually had their own “violent clashes over resources” with each other.
The reasoning may seem sound, but it doesn’t correspond to historical facts.
Ah, you have been at Atomic Rockets, reading up on aliens? The only reason they came up with:
http://www.projectrho.com/public_html/rocket/aliens.php
“So what might really aged civilizations do? Disperse, of course, and also not attack new arrivals in the galaxy, for fear that they might not get them all. Why? Because revenge is probably selected for in surviving species, and anybody truly looking out for long-term interests will not want to leave a youthful species with a grudge, sneaking around behind its back...”
This is why you especially want to have colonies and habitats outside the Sol system.
https://www.researchgate.net/publication/283986931_The_Dark_Forest_Rule_One_Solution_to_the_Fermi_Paradox
Anything remotely resembling humans can’t win a war against an extremely smart AI that had millions of years to optimize itself.
This is mostly true but not relevant, because we can’t wipe out alien civilizations accidentally. Most planets will not have aliens on them, and if we go to some particular planet and wipe out the civilization, that will surely be on purpose. Likewise if they do it.
We create friendly AI that maximizes the happiness of humans. This AI figures that we would be happiest in our galaxy if we were alone.
That assumes that AIs maximize things, and in my opinion they won’t, just as humans don’t. But in any case, if you think that the AI is simply implementing the true extrapolation of human values, then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.
“then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.” Good point, but what if we don’t understand our true values and accidentally implement them via AI?
That would be accidental, but in an unimportant sense. You could call it accidentally accidental.
Run that by me one time?
It seems like you are conceding that we CAN wipe out alien civilizations accidentally. Thread over. But the “unimportant” qualifier makes me think that it isn’t quite so cut and dried. Can you explain what you mean?
Naturally, if I were mistaken, it would be appropriate to concede that I was mistaken. However, it was not about being mistaken. The point is that in arguments the truth is rarely all on one side. There is usually some truth in both. And in this case, in the way that matters, namely the one I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another way, the unimportant way, it would be possible in the scenario under consideration (a scenario which is also very unlikely in the first place).
In particular, when someone fears something happening “accidentally”, they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.
In any case I do not concede that it is contained in people’s true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.
We might not know much about a planet at the time we send a mission to it. Additionally, we might simply want to go to every planet within X light-years.
It’s plausible that we will colonize every planet within 100 light-years of Earth within the next 1000 years.
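For a sense of scale on that claim, a quick sanity check, using an assumed local stellar density of roughly 0.004 stars per cubic light-year (about the solar-neighbourhood value); the point is only the orders of magnitude involved.

```python
import math

RADIUS_LY = 100           # radius of the colonisation sphere, from the claim above
TIMESPAN_YEARS = 1_000    # timescale, from the claim above
STELLAR_DENSITY = 0.004   # assumed stars per cubic light-year near the Sun

volume_ly3 = (4.0 / 3.0) * math.pi * RADIUS_LY ** 3
star_count = STELLAR_DENSITY * volume_ly3

# Reaching the edge of the sphere within the stated time requires an average
# outward expansion speed (travel plus stopovers) of RADIUS_LY / TIMESPAN_YEARS,
# expressed here as a fraction of the speed of light.
avg_speed_fraction_of_c = RADIUS_LY / TIMESPAN_YEARS

print(f"Stars within {RADIUS_LY} light-years: roughly {star_count:,.0f}")
print(f"Required average expansion speed: {avg_speed_fraction_of_c:.0%} of c")
```

On those assumptions the claim amounts to reaching something like fifteen thousand stellar systems at an average outward expansion of about a tenth of light speed.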
I don’t think we would be terraforming planets without checking what was already there, and there would be no reason to interfere with a planet that was already inhabited.
You don’t need terraforming for a self-replicating AI to take root in a galaxy and convert the galaxy into useful stuff.
I don’t think gathering information about whether or not a solar system is populated will be significantly more expensive than colonizing it.
So you’re saying that people will send self-replicating AIs to convert galaxies into useful stuff without paying attention to what is already there?
That doesn’t seem at all likely to me. The AI will probably pay attention even if you don’t explicitly program it to do so.
I don’t think that there’s a way to “pay attention” that’s significantly cheaper than converting galaxies.
I think converting galaxies already includes paying attention, since if you don’t know what’s there it’s difficult to change it into something else.
Maybe you’re thinking of this as though it were a fire that just burned things up, but I don’t think “converting galaxies” can or will work that way.
You make the decision to send the resources necessary to transform a galaxy without knowing much about the galaxy. The only things you know are based on the radiation that you can pick up many light years away.
Once you have sent your vehicle to the galaxy it could, of course, decide to do nothing or fly into the sun, but that would be a waste of resources.
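To put a number on how little information survives that distance, here is a minimal inverse-square sketch. The transmitter power is a placeholder assumption; the distance is roughly that of the Andromeda galaxy.

```python
import math

METRES_PER_LY = 9.461e15     # metres in one light-year
ASSUMED_POWER_W = 1e13       # placeholder: total isotropic radio leakage of a civilisation
DISTANCE_LY = 2.5e6          # approximate distance to the Andromeda galaxy

distance_m = DISTANCE_LY * METRES_PER_LY
# Inverse-square law: the emitted power is spread over a sphere of radius d.
flux_w_per_m2 = ASSUMED_POWER_W / (4.0 * math.pi * distance_m ** 2)

print(f"Received flux at {DISTANCE_LY:.1e} light-years: {flux_w_per_m2:.2e} W/m^2")
```

With those placeholder numbers the received flux is on the order of 10^-33 W/m², which is the force of the point: a probe committed across that distance is committed on the basis of extremely coarse information.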
The Law of Unintended Consequences says you’re wrong.
That’s not an argument.
It’s an observation :-P