I fully agree with you. We are surely not alone in our galaxy. But I disagree with Bostrom's instability thesis of either extinction or cosmic endowment. This binary final outcome is reasonable if the world is modeled by differential equations, which I doubt. AGI might help us make our world a self-stabilizing, sustainable system. An AGI that follows goals of sustainability is by far safer than an AGI striving for cosmic endowment.
I fully agree with you. We are surely not alone in our galaxy.
That is close to the exact opposite of what I wrote; please re-read.
AGI might help us make our world a self-stabilizing, sustainable system.
There are at least three major issues with this approach, any one of which would make it a bad idea to attempt.
Self-sustainability is very likely impossible under our physics. This could be incorrect—there’s always a chance our models are missing something crucial—but right now, the laws of thermodynamics strongly point at a world where you need to increase entropy to compute, and so the total extent of your civilization will be limited by how much negentropy you can acquire.
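(A concrete version of that bound, offered as a sketch rather than anything from the original comment: Landauer's principle says that erasing one bit of information at temperature \(T\) dissipates at least
\[ E_{\min} = k_B T \ln 2 \]
of energy, where \(k_B\) is Boltzmann's constant. Any computation that discards information therefore carries an unavoidable thermodynamic cost, and the total computation a civilization can ever do is capped by the negentropy it can gather.)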
If you can find a way to avoid 1., you still risk someone else (read: independently evolved aliens) with a less limited view gobbling up the resources, and then knocking on your door to get yours too. There’s some risk of this anyway, but deliberately leaving all these resources lying around means you’re not just exposed to greedy aliens in your past, you’re also exposed to ones that evolve in the future. The only sufficient response to that would be if you could get not just unlimited computation and storage out of limited material resources, but also an insurmountable defense that lets you keep them against a less restrained attacker. This is looking seriously unlikely!
Let’s say you get all of these, unlikely though they look right now. OK, so what leaving the resources around does in that scenario is relinquish any control over what newly evolved aliens get up to. Humanity’s history is incredibly brutal and full of evil. The rest of our biosphere most likely has a lot of it, too. Any aliens with similar morals would have been incredibly negligent to simply let things go on naturally for this long.
And as for us, with other aliens, it’s worse; they’re fairly likely to have entirely incompatible value systems, and may very well develop into civilizations that we would consider a blight on our universe … oh, and also they’d have impenetrable shields to hide behind, since we postulated those in 2. So in this case we’re likely to end up stuck with the Babyeaters or their less nice siblings as neighbors. Augh!
And beyond that, I don’t think it even makes the FAI problem any easier. There’s nothing inherently destabilizing about an endowment grab. You research some techs, you send out a wave of von Neumann probes, make some decisions about how to consolidate or distribute your civilization according to your values, and have the newly built intergalactic infrastructure implement your values. That part is unrelated to any of the hard parts of FAI, which would still be just as hard if you somehow wrote your AI to self-limit to a single solar system. The only thing that gets you is less usefulness.
Think prisoner’s dilemma! What would the aliens do? Is a selfish (self-centered) reaction really the best possibility? What would a superintelligence constructed by aliens do? (No dispute that human history is brutal and selfish.)
You’re suggesting a counterfactual trade with them?
Perhaps that could be made to work; I don’t understand those well. It doesn’t matter to my main point: even if you do make something like that work, it only changes what you’d do once you run into aliens with which the trade works (you’d be more likely to help them out and grant them part of your infrastructure or the resources it produces). Leaving all those stars on to burn through resources without doing anything useful is just wasteful; you’d turn them off, regardless of how exactly you deal with aliens. In addition, the aliens may still have birthing problems that they could really use help with; you wouldn’t leave them to face those alone if you made it through that phase first.
I am suggesting that a metastasis-like method of growth could be good for the first multicellular organisms, but it is unstable, not very successful in evolution, and would probably be rejected by every superintelligence as malign.
Your argument that we could be the first intelligent species in our past light-cone is quite weak because of its extreme extent. You are setting your own argument aside by saying:
We might still run into aliens later …
The time frame for our discussion covers maybe dozens of millennia, but not millions of years. The Milky Way’s diameter is about 100,000 light-years. The Milky Way together with its satellite and dwarf galaxies has a radius of about 900,000 light-years (300 kpc). Our nearest neighboring galaxy, Andromeda, is about 2.5 million light-years away.
If we run into aliens, the encounter will be within our own galaxy. If there is no intelligent life within the Milky Way, we would have to wait more than 2 million years to receive a visitor from Andromeda. This week’s publication of a first image of planetary genesis by the ALMA radio telescope makes it likely that nearly every star in our galaxy has a set of planets. If every third star has a planet in the habitable zone, we have on the order of 100 billion planets in our galaxy where life could evolve. The probability of running into aliens in our galaxy is therefore not negligible, and I appreciate that you discuss the implications of alien encounters.
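(A rough back-of-the-envelope version of that estimate, assuming on the order of \(3 \times 10^{11}\) stars in the Milky Way, a figure that is itself uncertain:
\[ N_{\text{habitable}} \approx N_{\text{stars}} \cdot f_{\text{HZ}} \approx 3 \times 10^{11} \cdot \tfrac{1}{3} \approx 10^{11}, \]
i.e. roughly the 100 billion candidate planets quoted above.)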
If we, together with our AGIs, decide against CE with von Neumann probes for the next ten to a hundred millennia, this does not exclude preparing our infrastructure for CE. We should not be “leaving the resources around.” If von Neumann probes were found too early by an alien civilization, it could start a war against us with far superior technology. Sending out von Neumann probes should be postponed until our AGIs are absolutely sure that they can defend our solar system. If we have transformed our asteroid belt into fusion-powered spaceships we could think about CE, but not earlier. Expansion into other star systems is a political decision, not the solution to a differential equation, as Bostrom puts it.
When we discussed evil AI, I was thinking (and still consider it plausible) that self-destruction could be a non-evil act; that the Fermi paradox could be explained as a natural law, i.e. the best moral answer for a superintelligence at some level.
Now I am thankful, because your comment enlarges the space of possibilities for thinking about Fermi.
We need not think only of self-destruction; we could also think of modesty and self-sustainability.
Sauron’s ring could be super-powerful, but the clever Gandalf could (and did!) resist the offer to use it. (And used another ring to destroy the strongest one.)
We could think of hidden places in the universe (like Lothlorien or Rivendell) where clever owners use limited but non-destructive powers.