Waffled between putting this here and putting this in the Stupid Questions thread:
Why is the default assumption that a superintelligence of any type will populate its light cone?
I can see why any sort of tiling AI would do this—paperclip maximizers and the like. And for obvious reasons there’s an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).
But it certainly seems to me that a human CEV-equivalent wouldn’t necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has had the opportunity—but not at its maximum speed, nor have entire population centers moved. The top few percent of adventurous or less-affluent people leave, and that is all.
On top of this, I … well, I can’t say “can’t imagine,” but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn’t be used for millennia, when it could instead focus on improving local life.
There is, of course, the intelligence-explosion argument, but I’d think even intelligence would hit diminishing marginal returns eventually.
So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities—in which case there’s quite a bit more leeway in the Fermi Paradox.
Due to the way the universe expands, even if you travel at the speed of light forever, you can only reach a finite portion of it, and the longer you wait, the smaller that portion becomes. Because of this, an AI that doesn’t send out probes as fast as possible and, to a lesser extent, as soon as possible, will only be able to control a smaller portion of the universe. If you have any preferences about what happens in the rest of the universe, you’d want to leave early.
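To make “the longer you wait, the less you can reach” concrete, here is a minimal numerical sketch. It assumes a flat ΛCDM cosmology with roughly standard parameter values (H0 ≈ 70 km/s/Mpc, Ωm ≈ 0.3, ΩΛ ≈ 0.7; none of these figures come from the thread) and computes the comoving distance a light-speed probe can ever cover if it departs when the cosmic scale factor is a_dep:

```python
# Minimal sketch (assumed flat Lambda-CDM parameters, not from the thread) of why
# delaying departure shrinks the reachable universe. A probe launched at scale
# factor a_dep and travelling at c forever covers a comoving distance of
#   chi(a_dep) = (c / H0) * integral from a_dep to infinity of
#                da / sqrt(Omega_m * a + Omega_L * a**4),
# which decreases as a_dep grows, because dark energy makes a(t) grow exponentially.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299_792.458            # speed of light, km/s
H0 = 70.0                       # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7     # assumed matter and dark-energy density fractions

def reachable_comoving_mpc(a_dep: float) -> float:
    """Comoving distance (Mpc) reachable by a light-speed probe departing at scale factor a_dep."""
    integrand = lambda a: 1.0 / np.sqrt(OMEGA_M * a + OMEGA_L * a**4)
    value, _ = quad(integrand, a_dep, np.inf)
    return (C_KM_S / H0) * value

# a_dep = 1.0 means "leave today"; larger values mean waiting until the universe has expanded further.
for a_dep in (1.0, 2.0, 4.0):
    print(f"depart at a = {a_dep}: reachable comoving radius ~ {reachable_comoving_mpc(a_dep):,.0f} Mpc")
```

With these assumed parameters the reachable comoving radius from today comes out on the order of 5,000 Mpc (roughly 16 billion light-years) and drops steeply for later departures, which is the quantitative version of the point above.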
Also, as Oscar said, you don’t want the resources you can easily reach to go to waste while you’re putting off using them.
It’s because we want to secure as many resources as possible, before the aliens get to them.
I expect an FAI to expand rapidly, but merely to secure resources and save them for humans to use much later.
So maybe the Solar System has been secured by an alien FAI and we’re being saved for the aliens to use much later...?
It’s totally possible, but for the reason nyan_sandwich gives, they’d have to have a good reason for staying hidden.
The most valuable of those resources is free energy. The Sun is burning it into low-grade light and heat at an incredible rate.
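To put “an incredible rate” in rough numbers, here is a back-of-the-envelope sketch using standard solar values (luminosity of roughly 3.8 × 10^26 W; these are textbook estimates, not figures from the thread):

```python
# Back-of-the-envelope estimate (standard solar values, not from the thread) of how
# fast the Sun radiates free energy away as low-grade light and heat.
L_SUN = 3.8e26               # solar luminosity, watts
C = 3.0e8                    # speed of light, m/s
SECONDS_PER_YEAR = 3.15e7

mass_energy_per_second = L_SUN / C**2        # kg of mass-energy radiated each second (E = m c^2)
energy_per_year = L_SUN * SECONDS_PER_YEAR   # joules dumped into space per year

print(f"~{mass_energy_per_second:.1e} kg/s of mass-energy, ~{energy_per_year:.1e} J per year")
```

That works out to roughly four million tonnes of mass-energy radiated away every second, essentially none of it captured for later use.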
So does that imply that a rapidly expanding resource-saving FAI would go around extinguishing stars?
Seems prudent to do.
Unless it values the existence of stars more than it values other things it could do with that energy.
Upvoted for being the first instance I’ve seen of someone describing extinguishing all the stars in the night sky as being prudent.
I suspect using them is more likely. The FAI certainly isn’t going to just let the stars keep wasting fuel, unless it has the opportunity to prevent even more waste elsewhere. For example, it will send out probes to other systems before worrying too much about this system.
Is that even possible!? The FAI would want to somehow pause the burning of the star, allowing it to begin producing energy again when needed. Collapsing it into a black hole, for example, wouldn’t be what we want, since the energy would be wasted.
Would star lifting be enough to slow the burning of a star to a standstill?
Hm. Point.
Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.
Right, but I’m not sure that’s the right precedent to use. Space is big: it’d be more equivalent to, oh, dumping the Lost Roman Legion in a prehistoric Asia and expecting them to divvy up the continent as fast as they could march.
Jack Sparrow: Aha! So we’ve established my proposal is sound in principle, now we’re just haggling over price.
-- Pirates of the Caribbean: Dead Man’s Chest
Or in this case, scope instead of price.
Jokes aside, the point is that the sponsored settlement of the prairies influenced the negotiation of the Canada/U.S.A. border. If a human civilization believed it might face future competition with aliens for territory in space, it would make sense for it to secure as much as possible as a Schelling point in negotiations and conflicts.
Point granted.
… and once an FAI has sent out probes to claim territory anyway, it loses nothing by making those probes nanotech with a copy of the FAI loaded on it, so we would indeed expect to see lightspeed expansions of FAI-controlled civilizations. Fair enough, then.