Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least permanently stunting our continued development, should they become aware of us?
As research on superintelligences has made clear, an actor does not have to hate us to destroy us; it merely has to realise that we conflict, even in a very minor way, with its goals. As a rapidly advancing intelligent civilisation, our continued growth and existence will likely hamper the goals of other intelligent civilisations, so it will be in their interests to either stunt our growth or wipe us out. They don’t have to hate us. They might be very empathetic. But if their goals are not exactly the same as ours, it seems a huge liability for them to leave us free to challenge their power. I know that I would stop the development of any other rapidly advancing intelligent species if I could, simply because struggles over our inevitably conflicting goals would be best avoided.
So, my question is, can you see any realistic value system a superintelligent alien civilisation might hold that would result in them not stopping us from going on growing and developing our power as a civilisation in a self-directed way? I cannot.
Given this, why is it in any way legal to broadcast our existence and location? There have been efforts in the past to send radio signals to distant solar systems. A superintelligent civilisation may well pick these up and come on the hunt for us. I think that this is one of the biggest existential threats we face, and our only real advantage is the element of stealth and surprise, which several incomprehensibly stupid individuals seem to threaten with their attempts to contact other actors in the universe. Should the military physically bomb and attack installations that attempt to broadcast our location? How do we get the people doing this stuff to stop?
Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least permanently stunting our continued development, should they become aware of us?
Not being bored. Living systems (and presumably more so for living systems that include intelligence) show more complex behavior than dead systems.
If we developed practical interstellar travel and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because it would not fit our values to consider exterminating them as the primary choice. And how did we develop values like this? I guess at least in part it’s because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (not all of them to extinction, and even for those that did go extinct, wiping them out was never the goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was almost never in our interest to wipe out another species (except in rare circumstances, when it was a pest or a dangerous disease vector).
Unless the extraterrestrial species are the only macroscopic life-form on their planet, it’s likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.
I’m surprised to find such rhetoric on this site. There is an image now popularized by certain political activists and ideologically-driven cartoons, which depicts the colonization of the Americas as a mockery of the D-Day landing, with peaceful Natives standing on the shore and smiling, while gun-toting Europeans jump out of the ships and start shooting at them. That image is even more false than the racist depictions of the late 19th century glorifying the westward expansion of the USA while vilifying the natives.
The truth is much more complicated than that.
If you look at the big picture, there was no conquest in America like the Mongol invasion. There wasn’t even a concentrated “every newcomer versus every native” war. The diverse European nations fought among themselves a lot; the Natives also fought among themselves a lot, both before and after the arrival of the Europeans. Europeans allied themselves with Natives at least as often as they fought against them. Even the history of unquestionably ruthless conquistadors like Cortez didn’t feature an army of Europeans setting out to exterminate a specific ethnicity: he had only a few hundred Europeans with him, and tens of thousands of Native allies.

If you look at the whole history from the beginning, there was no concentrated military invasion with the intent to conquer a continent. Everything happened over a relatively long period of time. The settlements coexisted peacefully with the natives on multiple occasions and traded with each other, and when conflict developed between them it was no different from conflict anywhere else on the planet. Conflict develops sooner or later, in the New World just as in the Old World. Although there certainly were acts of injustice, the bigger picture is that there was no central “us vs. them”, not in any stronger form than how the European powers fought wars among themselves.

The Natives had the disadvantage of disease, as other commenters have already stated, but also of smaller numbers, of less advanced societal structures (the civilizations of the Old World needed a lot of time between living in tribes and developing forms of government sufficient to lead nations of millions), and of inferior technology. The term “out-competed” is much more fitting than “exterminated”, which is a very biased and politically loaded word.
You cannot compare the colonization of the Americas to a scenario in which a starfleet arrives at a planet and proceeds with a controlled extermination of the population.
The Europeans did not “proceed with a controlled extermination of the population”. Yet, what happened to that population?
You don’t need to start with a deliberate decision to exterminate in order to end up with almost none of the original population. Sometimes you just need to not care much.
The Europeans did not “proceed with a controlled extermination of the population”. Yet, what happened to that population?
They still exist… so they were not exterminated? The Europeans did not carry out a purposeful extermination, and in fact the indigenous people were not exterminated. So what exactly are you arguing?
The one thing that was truly devastating to indigenous populations was smallpox exposure, and that was an accident. There were also lots of internal wars, famines, civilizational collapses, etc., but most of that was triggered by the smallpox plague’s 30+% die-off.
The fact that Europeans outnumber indigenous people 100:1 in North America (less so in Central and South America) isn’t some purposeful master plan of the European colonialists. It’s just the inevitable outcome of a number of historical accidents with compounding effects.
The development of Native Americans has been stunted and they simply exist within the controlled conditions imposed by the new civilization now. They aren’t all dead, but they can’t actually control their own destiny as a people. Native American reservations seem like exactly the sort of thing aliens might put us in. Very limited control over our own affairs in desolate parts of the universe with the addition of welfare payments to give us some sort of quality of life.
True, it is not implausible that a non-hostile alien civilization more efficient than us could arrive and, in the long term, out-compete and out-breed us.
Such non-hostile assimilation is not unheard of in real life. It is happening now (or at least claimed by many to be happening) in Europe, both in the form of the migrant crisis and also in the form of smaller countries fearing that their cultural identities and values are being eroded by the larger, richer countries of the union.
Fortunately, Native American populations didn’t plummet because they were intentionally killed; they mostly did so because of diseases brought by Europeans.
I think Val’s argument is that “no realistic value system implies not destroying alien civilizations” implies “either our value system is unrealistic, or we would take the first opportunity to destroy any alien civilization we came across.” Perhaps you intended your comment to imply that we would do that, but I am skeptical. And if we would not do that, Val’s argument is a good one. The only intelligent species we know does not desire to wipe out aliens, so it is more likely than not that alien species will not be interested in wiping us out.
The issue is the standard “The AI neither loves you nor hates you, but you’re made out of atoms...”. The Europeans did not desire to wipe out Native Americans, they just wanted land and no annoying people who kept on shooting arrows at them.
The native American thing isn’t analogous to paperclipping because they weren’t exterminated as part of a deliberate plan.
The alien encounter thing isn’t all that analogous, either. It makes a little sense for paperclippers to take resources from humans, because humans are at least nearby. How much sense does it make to cross interstellar space to take resources from a species that is likely to fight back?
The ready-made economic answer to interspecies conflict is to make use of the considerable amount of no-man’s-land the universe has provided you with to stay out of each other’s way.
Non-interaction was historically an option when the human population was much lower. Since the universe appears not to be densely populated, my argument is that the same strategy would be favoured.
There have been wars over land since humans have existed. And non interaction, even if initially widespread, clearly eventually stopped when it became clear the world wasn’t infinite and that particular parts had special value and were contested by multiple tribes. Australia being huge and largely empty didn’t stop European tribes from having a series of wars increasing in intensity until we had WW1 and WW2, which were unfathomably violent and huge clashes over ideology and resources. This is what happened in Europe, where multiple tribes of comparable strength grew up near each other over a period of time. In America, settlers simply neutralized Native Americans while the settlers’ technological superiority was overwhelming, a much better idea than simply letting them grow powerful enough to eventually challenge you.
You write as though the amount of free land or buffer zone were constant, that is, as though the world population were constant. My point was that walking off in separate directions was a more viable option when the population was much lower... that, where available, it is usually an attractive option because it is low-cost. That’s a probabilistic argument. There have always been wars; the question is how many.
Do I really have to explain why Australia wasn’t a buffer zone between European nations? On a planet, there is no guarantee that rival nations won’t be cheek by jowl, but galactic civilisations are guaranteed to be separated by interstellar space. Given reasonable assumptions about the scarcity of intelligent life, and the light barrier, the situation is much better than it ever was on earth.
Native Americans were “neutralized” mostly as a side effect of the diseases brought by colonists, and then outcompeted by economically more successful cultures. And instead of making a strategic effort to prevent WW1 and WW2 from happening on another continent, settlers from different European nations actually had their own “violent clash over resources” with each other.
The reasoning may seem sound, but it doesn’t correspond to historical facts.
“So what might really aged civilizations do? Disperse, of course, and also not attack new arrivals in the galaxy, for fear that they might not get them all. Why? Because revenge is probably selected for in surviving species, and anybody truly looking out for long-term interests will not want to leave a youthful species with a grudge, sneaking around behind its back...”
This is why you especially want to have colonies and habitats outside the Sol system.
This is mostly true but not relevant, because we can’t wipe out alien civilizations accidentally. Most planets will not have aliens on them, and if we go to some particular planet and wipe out the civilization, that will surely be on purpose. Likewise if they do it.
That assumes that AIs maximize things, and in my opinion they won’t, just as humans don’t. But in any case, if you think that the AI is simply implementing the true extrapolation of human values, then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.
“then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.” Good point, but what if we don’t understand our true values and accidentally implement them via AI?
It seems like you are conceding that we CAN wipe out alien civilizations accidentally. Thread over. But the “unimportant” qualifier makes me think that it isn’t quite so cut and dried. Can you explain what you mean?
Naturally if I were mistaken it would be appropriate to concede that I was mistaken. However, it was not about being mistaken. The point is that in arguments the truth is rarely all on one side. There is usually some truth in both. And in this case, in the way that matters, namely which I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another way, the unimportant way, it would be possible in the scenario under consideration (which scenario is also very unlikely in the first place.)
In particular, when someone fears something happening “accidentally”, they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.
In any case I do not concede that it is contained in people’s true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.
We might not know much about a planet at the time we send a mission to it. Additionally, we might simply want to go to every planet within X light-years.
It’s plausible that we will colonize every planet within 100 light years of earth within the next 1000 years.
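As a sanity check on that timescale, the claim reduces to simple wavefront arithmetic: one light-year per year is exactly the speed of light, so covering 100 light-years in 1,000 years requires an average expansion speed of a tenth of light speed (this sketch ignores time spent building up infrastructure at each stop):

```python
# Back-of-the-envelope check of the colonization timescale claim.
radius_ly = 100     # radius to colonize, in light-years
budget_yr = 1000    # time budget, in years

# Since 1 light-year per year equals c, ly/yr is directly a fraction of c.
min_speed_c = radius_ly / budget_yr
print(f"Required average expansion speed: {min_speed_c:.2f} c")  # 0.10 c
```

A tenth of light speed for the outermost wavefront is ambitious but not obviously absurd for a millennium-scale program, which is presumably why the claim is hedged as "plausible".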
I don’t think we would be terraforming planets without checking what was already there, and there would be no reason to interfere with a planet that was already inhabited.
You make the decision to send the resources necessary to transform a galaxy without knowing much about that galaxy. The only things you know are based on the radiation you can pick up from many light-years away.
Once you have sent your vehicle to the galaxy, it could of course decide to do nothing or fly into the sun, but that would be a waste of resources.
If we were rational, we would stop their continued self-directed development, because having a rapidly advancing alien civilisation with goals different to ours is a huge liability.
So maybe we would not wipe them out, but we would not let them continue on as normal.
Unless the extraterrestrial species are the only macroscopic life-form on their planet, it’s likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.
To me that’s not a culture but a bias (the hunter-gatherer bias). There are thousands of animal species serving no real purpose for our cause, and still we slow down our growth out of concern for their survival. Not only that, but after analyzing our daily values and necessities it becomes perfectly crystal clear that we’d only really need the 5 big crops, plants for photosynthesis, and insects and pollinators in order to survive and thrive; plus, we would be able to support many more people! Imagine a planet where 15 billion humans live, and each and every one of them consumes 2700 kcal/day and contributes to the world’s economy because nobody has to suffer hunger anymore. That would be possible if we got rid of wastes and inefficiencies. So in my opinion, if we ever find other forms of intelligent life and we can’t trade with them, eat them, learn from them or acquire knowledge by studying them, then yes, I am all for bombing them, just as I am all for (and I know many will hate me for this :-D) running a railway + HVDC line through the giant panda’s territory, or finally getting rid of domesticated animals like cows, which convert calories and proteins from grains so poorly.
Also, I agree with @woodchopper: we should stop sending messages literally “Across the Universe” in order to avoid perishing.
Another approach we might use in the remote future could be to broadcast a “hello signal” using only old technologies, stuff we’ve long moved on from, so we could try to select for civilizations which are way behind us technologically and whose destiny we could sort of control, like your usual anthill. But even then it could be a trap, or they might catch up during the time necessary to make the trip, or they could be monitored by some other advanced civilization which is not monitoring us, so we would just signal our presence to them as well...
we’d only really need the 5 big crops, plants for photosynthesis, and insects and pollinators in order to survive and thrive
Time and time again, it has turned out that we underestimated the complexity of the biosphere. And time and time again, our meddling has backfired horribly.
Even if we were utterly selfish and had no moral objections, wiping out all but a handful of “useful” species would almost certainly lead to unforeseen consequences ending in the total destruction of the planet’s biosphere. We have not yet managed to fully map the role each species plays in the natural balance, but it seems to be very deeply entangled, with everything depending on lots of other species. You cannot just keep a handful of them and expect those to thrive on their own.
More like leading to a temporary collapse to a lower level of complexity (including much less, if any, in the way of humans) until all the available niches were refilled by radiating evolution from the surviving forms.
Hmmm. I wonder if the open thread is the place for a quick analysis of that news item going around about interesting optical signals seen in a few hundred stellar spectra from the Sloan Digital Sky Survey…
Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least permanently stunting our continued development, should they become aware of us?
“permanently stunting our continued development” might be the only way not to destroy the human race.
it seems a huge liability to leave us to challenge their power.
It’s not clear that we have a realistic chance of developing capabilities that threaten a civilization with a head start of 100 million years.
In addition, it’s worth noting that a galactic civilization needs moral norms that allow societies existing millions of light-years apart to coexist even when it’s not possible to attribute attacks to their sources.
Hanson argues in The Age of Em that ems are likely to be religious and might follow religious norms.
There are Buddhists who don’t eat meat for religious reasons and in a similar way an alien civilization might not kill us for religious reasons.
Given this, why is it in any way legal to broadcast our existence and location? There have been efforts in the past to send radio signals to distant solar systems.
You don’t need a special effort to broadcast signals; a civilization that cares about emerging species can simply listen to our normal radio broadcasts.
I don’t think humans as a species or earth creatures as a … evolutionary life-root, have coherent goals or linear development in a way that makes this concern valid.
If a more intelligent self-sustaining agent or group comes along and replaces humans, good. Whether that’s future-humans, human-created AIs, or ETs doesn’t matter all that much.
Did the people of the 19th century make a mistake by creating and educating the next generations of humans which replaced them?
As an aside, it’s far too late to stop broadcasts. The marginal risk of discovery imposed by any action today is pretty much zero—we’ve been sending LOTS of EM outward in all directions for many many decades, and there’s no way to recall any of it.
Heh, it’s been long enough (~35 years, since BBS systems in the early 80s) that I’ve gone by the name that I often completely forget it has any context outside of my usage.
In this case, I’m using “good” in the sense of “I don’t think I, or any other dead-by-then being, has standing to object”.
A lot of humans care (or at least signal that they care in far-mode) about what happens in the future. That doesn’t make it sane or reasonable.
Why does it matter to anyone today whether the beings inhabiting Earth’s solar system in 20 centuries are descended from apes, or made of silicon, or came from elsewhere?
I think we can all agree that an entity’s anticipated future experiences matter to that entity. I hope (but would be interested to learn otherwise) that imaginary events such as fiction don’t matter. In between, there is a hugely wide range of how much it’s worth caring about distant events.
I’d argue that outside your light-cone is pretty close to imaginary in terms of care level. I’d also argue that events after your death are pretty unlikely to affect you (modulo basilisk-like punishment or reward).
I actually buy the idea that you care about (and are willing to expend resources on) subjunctive realities on behalf of not-quite-real other people. You get present value from imagining good outcomes for imagined-possible people even if they’re not you. This has to get weaker as it gets more distant in time and more tenuous in connection to reality, though.
But that’s not even the point I meant to make. Even if you care deeply about the far future for some reason, why is it reasonable to prefer weak, backward, stupid entities over more intelligent and advanced ones? Just because they’re made of similar meat-substance as you seems a bit parochial, and hypocritical given the way you treat slightly less-capable organic beings like lettuce.
Woodchopper’s post indicated that he’d violently interfere with (indirectly, via criminalization) activities that make it infinitesimally more likely we’ll be identified and located by ETs. This is well beyond reason, even if I overstated my long-term lack of care.
You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don’t I want my dog to die? Obviously, when I’m actually dead, I won’t want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.
Dude, if you are preaching Might Makes Right you don’t have to bring up nonsense like “standing to object”.
Anything that can replace us will get to decide if the fact that it has done so is “good”. Our arguments will have failed to convince the universe, and we will be gone. Physics is a garbage arbitrator, but from its decision there can be no appeal.
Arguments made by humans can affect other humans, and thereby affect their actions, and thereby affect the universe.
In this case, the argument is about whether humans should resist or acquiesce to their own replacement. I take Dagon’s “good” to indicate support for the latter option.
I mean, he can chime in, but I think he is looking at it from the perspective of a “thing that has happened”. We don’t have standing to object because we are gone.
I doubt he thinks there is a duty to roll over. (Don’t want to put words in your mouth tho, man. Let me know if I’m misunderstanding you here.) The vibe I get from his argument is that, once we are gone, who cares what we think?
As an aside, it’s far too late to stop broadcasts. The marginal risk of discovery imposed by any action today is pretty much zero—we’ve been sending LOTS of EM outward in all directions for many many decades, and there’s no way to recall any of it.
Thankfully, aside from military radar, which is highly directional and sporadic, the rest is lost in the background noise after a few dozen light-years.
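The falloff behind this claim is just the inverse-square law, which is easy to sketch numerically. The transmitter power below is a rough illustrative assumption (order of magnitude for a strong terrestrial TV station), not a measured figure:

```python
import math

def flux_at_distance(power_w: float, distance_ly: float) -> float:
    """Received flux (W/m^2) from an isotropic transmitter at a given
    distance, via the inverse-square law: S = P / (4 * pi * r^2)."""
    metres_per_ly = 9.461e15  # metres in one light-year
    r_m = distance_ly * metres_per_ly
    return power_w / (4 * math.pi * r_m ** 2)

# Assumed ~1 MW effective radiated power, spread isotropically.
for d_ly in (1, 10, 100):
    print(f"{d_ly:>4} ly: {flux_at_distance(1e6, d_ly):.2e} W/m^2")
```

Each tenfold increase in distance cuts the received flux a hundredfold, which is why undirected leakage fades into the galactic background quickly, while a narrow, high-gain beam (like a deliberate METI transmission) concentrates the same power into a tiny solid angle and carries much farther.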
Unless the extraterrestrial species are the only macroscopic life-form on their planet, it’s likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.
Did you ask the Native Americans whether they hold a similar opinion?
You cannot compare the colonization of the Americas to the scenario when a starfleet arrives to the planet and proceeds with a controlled extermination of the population.
You misunderstood my point.
The Europeans did not “proceed with a controlled extermination of the population”. Yet, what happened to that population?
You don’t need to start with a deliberate decision to exterminate in order to end up with almost none of the original population. Sometimes you just need to not care much.
They still exist… so they were not exterminated? They did not carry out purposeful extermination, and in fact the indigenous people were not exterminated. So what exactly are you arguing?
The only thing that was very truly devastating to indigenous populations was smallpox exposure, and that was an accident. Also lots of internal wars, famine, civilization collapse, etc. But most of that was triggered by the smallpox plague 30+% die-off.
The fact that Europeans outnumber indigenous people 100:1 in north america (less so in central and south america) isn’t some purposeful, master plan of the European colonialists. It’s just the inevitable outcome of a number of historical accidents with compounding effects.
The development of Native Americans has been stunted and they simply exist within the controlled conditions imposed by the new civilization now. They aren’t all dead, but they can’t actually control their own destiny as a people. Native American reservations seem like exactly the sort of thing aliens might put us in. Very limited control over our own affairs in desolate parts of the universe with the addition of welfare payments to give us some sort of quality of life.
True, it is not implausible that a non-hostile alien civilization more efficient than us could arrive and, in the long term, out-compete and out-breed us.
Such non-hostile assimilation is not unheard of in real life. It is happening now (or at least claimed by many to be happening) in Europe, both in the form of the migrant crisis and also in the form of smaller countries fearing that their cultural identities and values are being eroded by the larger, richer countries of the union.
The Native American population is at 3 million (not including “mixed race”), and the trend is that it’s growing.
Fortunately, Native American populations didn’t plummet because they were intentionally killed, they mostly did so because of diseases brought by Europeans.
Maybe the aliens will bring some kind of nanotechnology that works okay with their ecosystem, but will destroy ours.
I think Val’s argument is that “no realistic value system implies not destroying alien civilizations” implies “either our value system is unrealistic, or we would take the first opportunity to destroy any alien civilization we came across.” Perhaps you intended your comment to imply that we would do that, but I am skeptical. And if we would not do that, Val’s argument is a good one. The only intelligent species we know does not desire to wipe out aliens, so it is more likely than not that alien species will not be interested in wiping us out.
The issue is the standard “The AI neither loves you nor hates you, but you’re made out of atoms...”. The Europeans did not desire to wipe out Native Americans, they just wanted land and no annoying people who kept on shooting arrows at them.
The native American thing isn’t analogous to paperclipping because they weren’t exterminated as part of a deliberate plan.
The alien encounter thing isn’t all that analogous, either. It makes a little sense for paperclippers to take resources from humans, because humans are at least nearby. How much sense does it make to cross interstellar space to take resources from a species that is likely to fight back?
The ready-made economic answer to inter-species conflict is to make use of the considerable amount of no-man’s-land the universe has provided you with to stay out of each other’s way.
Kicking the can down the road doesn’t seem to be a likely action of an intelligent civilisation.
Best to control us while they still can, or while the resulting war will not result in unparalleled destruction.
Why? Provide some reasoning.
Non-interaction was historically an option when the human population was much lower. Since the universe appears not to be densely populated, my argument is that the same strategy would be favoured.
There have been wars over land since humans have existed. And non-interaction, even if initially widespread, clearly eventually stopped when it became clear the world wasn’t infinite and that particular parts had special value and were contested by multiple tribes. Australia being huge and largely empty didn’t stop European tribes from having a series of wars increasing in intensity until we had WW1 and WW2, which were unfathomably violent and huge clashes over ideology and resources. This is what happened in Europe, where multiple tribes of comparable strength grew up near each other over a period of time. In America, settlers simply neutralized Native Americans while the settlers’ technological superiority was overwhelming, a much better idea than simply letting them grow powerful enough to eventually challenge you.
You write as though the amount of free land or buffer zone were constant, that is, as though the world population were constant. My point was that walking in separate directions was a more viable option when the population was much lower... that, where available, it is usually an attractive option because it is low cost. The point is probabilistic: there have always been wars; the question is how many.
Do I really have to explain why Australia wasn’t a buffer zone between European nations? On a planet, there is no guarantee that rival nations won’t be cheek by jowl, but galactic civilisations are guaranteed to be separated by interstellar space. Given reasonable assumptions about the scarcity of intelligent life, and the light barrier, the situation is much better than it ever was on earth.
This seems like very sound reasoning.
Native Americans were “neutralized” mostly as a side effect of the diseases brought by colonists, and then outcompeted by economically more successful cultures. Instead of strategic effort to prevent WW1 and WW2 happening on another continent, settlers from different European nations actually had “violent clash over resources” with each other. (also here)
The reasoning may seem sound, but it doesn’t correspond to historical facts.
Ah, you have been at Atomic Rockets, reading up on aliens? Here’s the only reason they came up with:
http://www.projectrho.com/public_html/rocket/aliens.php
“So what might really aged civilizations do? Disperse, of course, and also not attack new arrivals in the galaxy, for fear that they might not get them all. Why? Because revenge is probably selected for in surviving species, and anybody truly looking out for long-term interests will not want to leave a youthful species with a grudge, sneaking around behind its back...”
This is why you want to have colonies and habitats outside the Sol system especially.
https://www.researchgate.net/publication/283986931_The_Dark_Forest_Rule_One_Solution_to_the_Fermi_Paradox
Anything remotely resembling humans can’t win a war against an extremely smart AI that had millions of years to optimize itself.
This is mostly true but not relevant, because we can’t wipe out alien civilizations accidentally. Most planets will not have aliens on them, and if we go to some particular planet and wipe out the civilization, that will surely be on purpose. Likewise if they do it.
We create friendly AI that maximizes the happiness of humans. This AI figures that we would be happiest in our galaxy if we were alone.
That assumes that AIs maximize things, and in my opinion they won’t, just as humans don’t. But in any case, if you think that the AI is simply implementing the true extrapolation of human values, then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.
“then it can only do that if it is the true extrapolation of human values. Which can hardly be called an accident.” Good point, but what if we don’t understand our true values and accidentally implement them via AI?
That would be accidental, but in an unimportant sense. You could call it accidentally accidental.
Run that by me one time?
It seems like you are conceding that we CAN wipe out alien civilizations accidentally. Thread over. But the “unimportant” qualifier makes me think that it isn’t quite so cut and dried. Can you explain what you mean?
Naturally if I were mistaken it would be appropriate to concede that I was mistaken. However, it was not about being mistaken. The point is that in arguments the truth is rarely all on one side. There is usually some truth in both. And in this case, in the way that matters, namely which I was calling important, it is not possible to accidentally wipe out alien civilizations. But in another way, the unimportant way, it would be possible in the scenario under consideration (which scenario is also very unlikely in the first place.)
In particular, when someone fears something happening “accidentally”, they mean to imply that it would be bad if that happened. But if you accidentally fulfill your true values, there is nothing bad about that, nor is it something to be feared, just as you do not fear accidentally winning the lottery. Especially since you would have done it anyway, if you had known it was contained in your true values.
In any case I do not concede that it is contained in people’s true values, nor that there will be such an AI. But even apart from that, the important point is that it is not possible to accidentally wipe out alien civilizations, if that would be a bad thing.
We might not know much about a planet at the time we send a mission to it. Additionally, we might simply want to go to every planet within X light-years.
It’s plausible that we will colonize every planet within 100 light years of earth within the next 1000 years.
I don’t think we would be terraforming planets without checking what was already there, and there would be no reason to interfere with a planet that was already inhabited.
You don’t need terraforming for a self-replicating AI to take root in a galaxy and convert the galaxy into useful stuff.
I don’t think gathering information about whether or not a solar system is populated will be significantly more expensive than colonizing it.
So you’re saying that people will send self-replicating AIs to convert galaxies into useful stuff without paying attention to what is already there?
That doesn’t seem at all likely to me. The AI will probably pay attention even if you don’t explicitly program it to do so.
I don’t think that there’s a way to “pay attention” that’s significantly cheaper than converting galaxies.
I think converting galaxies already includes paying attention, since if you don’t know what’s there it’s difficult to change it into something else.
Maybe you’re thinking of this as though it were a fire that just burned things up, but I don’t think “converting galaxies” can or will work that way.
You make the decision to send the resources necessary to transform a galaxy without knowing much about the galaxy. The only things you know are based on the radiation that you can pick up many light years away.
Once you have sent your vehicle to the galaxy it could of course decide to do nothing or fly into the sun but that would be a waste of resources.
The Law of Unintended Consequences says you’re wrong.
That’s not an argument.
It’s an observation :-P
If we were rational, we would stop their continued self-directed development, because having a rapidly advancing alien civilisation with goals different to ours is a huge liability.
So maybe we would not wipe them out, but we would not let them continue on as normal.
To me that’s not a culture, but a bias (the hunter-gatherer bias)... there are thousands of animal species serving no real purpose for our cause, and still we slow down our growth because of concerns regarding their survival. Not only that, but after analyzing our daily values and necessities it becomes crystal clear that we’d only really need the five big crops, plus plants for photosynthesis and insect pollinators, in order to survive and thrive; plus, we would be able to support many more people! Imagine a planet where 15 billion humans live, and every one of them consumes 2700 kcal/day and contributes to the world’s economy because nobody has to suffer hunger anymore... that would be possible if we got rid of waste and inefficiency. So in my opinion, if we ever find other forms of intelligent life and we can’t trade with them, eat them, learn from them, or acquire knowledge by studying them, then yes, I am all for bombing them, just as I am all for (and I know many will hate me for this :-D ) running a railway + HVDC line through the giant panda’s territory, or finally getting rid of domesticated animals like cows, which convert calories and proteins from grain so poorly.
Also, I agree with @woodchopper: we should stop sending messages literally “Across the Universe” in order to avoid perishing. Another approach we might use in the remote future could be broadcasting a “hello signal” using only old technologies... stuff we’ve long moved on from, so we could try to select for civilizations that are way behind us technologically and be in control of their destiny, like your usual anthill. But even then it could be a trap, or they might catch up during the time necessary to make the trip, or they could be monitored by some other advanced civilization which is not monitoring us, so we would just signal our presence to them as well...
Time and time again it turned out that we underestimated the complexity of the biosphere. And time and time again our meddling backfired horribly.
Even if we were utterly selfish and had no moral objections, wiping out all but a handful of “useful” species would almost certainly lead to unforeseen consequences ending in the total destruction of the planet’s biosphere. We did not yet manage to fully map the role each species plays in the natural balance, but it seems like it’s very deeply entangled, everything depending on lots of other species. You cannot just remove a handful of them and expect them to thrive on their own.
More like leading to a temporary collapse to a lower level of complexity (including much less if any in the way of humans) until all the available niches were re-filled by radiating evolution from the surviving forms.
Hmmm. wonders if the open thread is a place for a quick analysis of that news item going around about interesting optical signals seen in a few hundred stellar spectra from the sloan digital sky survey …
/facepalm
“permanently stunting our continued development” might be the only way not to destroy the human race.
It’s not clear that we have a realistic chance of developing capabilities that threaten a civilization with a head start of 100 million years.
In addition, it’s worth noting that a galactic civilization needs moral norms that allow societies existing millions of light-years apart to coexist when it’s not possible to attribute attacks to their sources.
Hanson argues in The Age of Em that Ems are likely to be religious and might follow religious norms.
There are Buddhists who don’t eat meat for religious reasons and in a similar way an alien civilization might not kill us for religious reasons.
A civilization that cares about emerging species doesn’t need us to make a special effort to broadcast signals; it can simply listen to our normal radio broadcasts.
I don’t think humans as a species or earth creatures as a … evolutionary life-root, have coherent goals or linear development in a way that makes this concern valid.
If a more intelligent self-sustaining agent or group comes along and replaces humans, good. Whether that’s future-humans, human-created AIs, or ETs doesn’t matter all that much.
Did the people of the 19th century make a mistake by creating and educating the next generations of humans which replaced them?
As an aside, it’s far too late to stop broadcasts. The marginal risk of discovery imposed by any action today is pretty much zero—we’ve been sending LOTS of EM outward in all directions for many many decades, and there’s no way to recall any of it.
Define “good”.
His name is literally Dagon.
Heh, it’s been long enough (~35 years, since BBS systems in the early 80s) that I’ve gone by the name that I often completely forget it has any context outside of my usage.
In this case, I’m using “good” in the sense of “I don’t think I, or any other dead-by-then being, has standing to object”.
At the present for most social changes there are people who object because the change goes against their values.
You might not care, but a lot of humans do care, and will continue to care. That’s why we’re discussing it.
A lot of humans care (or at least signal that they care in far-mode) about what happens in the future. That doesn’t make it sane or reasonable.
Why does it matter to anyone today whether the beings inhabiting Earth’s solar system in 20 centuries are descended from apes, or made of silicon, or came from elsewhere?
Why does anything at all matter?
I think we can all agree that an entity’s anticipated future experiences matter to that entity. I hope (but would be interested to learn otherwise) that imaginary events such as fiction don’t matter. In between, there is a hugely wide range of how much it’s worth caring about distant events.
I’d argue that outside your light-cone is pretty close to imaginary in terms of care level. I’d also argue that events after your death are pretty unlikely to affect you (modulo basilisk-like punishment or reward).
I actually buy the idea that you care about (and are willing to expend resources on) subjunctive realities on behalf of not-quite-real other people. You get present value from imagining good outcomes for imagined-possible people even if they’re not you. This has to get weaker as it gets more distant in time and more tenuous in connection to reality, though.
But that’s not even the point I meant to make. Even if you care deeply about the far future for some reason, why is it reasonable to prefer weak, backward, stupid entities over more intelligent and advanced ones? Just because they’re made of similar meat-substance as you seems a bit parochial, and hypocritical given the way you treat slightly less-capable organic beings like lettuce.
Woodchopper’s post indicated that he’d violently interfere with (indirectly via criminalization) activities that make it infinitesimally more likely to be identified and located by ETs. This is well beyond reason, even if I overstated my long-term lack of care.
You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don’t I want my dog to die? Obviously, when I’m actually dead, I won’t want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.
So it’s just après nous le déluge?
Dude, if you are preaching Might Makes Right you don’t have to bring up nonsense like “standing to object”.
Anything that can replace us will get to decide if the fact that it has done so is “good”. Our arguments will have failed to convince the universe, and we will be gone. Physics is a garbage arbitrator, but from its decision there can be no appeal.
Arguments made by humans can affect other humans, through that affect their actions, and through that affect the universe.
In this case, the argument is about whether humans should resist or acquiesce to their own replacement. I take Dagon’s “good” to indicate support for the latter option.
I mean, he can chime in, but I think he is looking at it from the perspective of a “thing that has happened”. We don’t have standing to object because we are gone.
I doubt he thinks there is a duty to roll over. (Don’t want to put words in your mouth tho, man. Let me know if I’m misunderstanding you here.) The vibe I get from his argument is that, once we are gone, who cares what we think?
Yeah, well, the Deep Ones already “came along” :-/
Thankfully, aside from military radar, which is highly directional and sporadic, the rest is lost in the background noise after a few dozen light-years.
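To give a rough sense of scale for why ordinary broadcasts fade so quickly, here is a minimal sketch of the inverse-square law for an isotropic transmitter. The transmitter power and distances are illustrative assumptions, not measurements of any real broadcast:

```python
import math

LIGHT_YEAR_M = 9.461e15  # metres in one light-year

def flux_at_distance(power_watts: float, distance_ly: float) -> float:
    """Received flux (W/m^2) of an isotropic transmitter:
    the emitted power spread over a sphere of radius r."""
    r = distance_ly * LIGHT_YEAR_M
    return power_watts / (4 * math.pi * r ** 2)

# Illustrative: a 100 kW broadcast transmitter heard from 10 vs. 50 light-years.
near = flux_at_distance(1e5, 10)
far = flux_at_distance(1e5, 50)
print(f"10 ly: {near:.3e} W/m^2, 50 ly: {far:.3e} W/m^2")
```

The flux falls with the square of the distance, so quintupling the range cuts the received signal by a factor of 25, which is the intuition behind "lost in the background noise" for anything that isn't a tightly focused beam.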
I’m not sure what the “realistic” word is doing in here. Do you, by any chance, mean “one I can imagine”? I can imagine many things.