I came to a similar conclusion after reading Accelerando, but don’t forget about existential risk. Some intelligent agents don’t care what happens in a future they never experience, but many humans do, and if a Friendly Singularity occurs, it will probably preserve our drive to make the future a good one even if we aren’t around to see it. Matrioshka brain beats space colonization; supernova beats Matrioshka brain; space colonization beats supernova.
If you care about that sort of thing, it pays to diversify.
I don’t have the astrophysics background to say for sure, but if subjective time is a function of total computational resources, and computational resources are a function of energy input, then you might well get more subjective time out of a highly luminous supernova precursor than out of a red dwarf with a trillion-year lifetime. Existential risk won’t look the same to a CPU-bound civilization as it does to a time-bound one.
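To make the trade-off concrete, here’s a back-of-envelope sketch. The stellar figures are my own rough order-of-magnitude assumptions, not anything from the comment above; the premise is just that subjective time scales with total energy harvested, so what matters is luminosity integrated over lifetime, not lifetime alone.

```python
# Back-of-envelope comparison: integrated energy output (luminosity x lifetime)
# as a stand-in for "total subjective time", assuming computation scales
# linearly with energy captured. Stellar figures are rough assumptions.

stars = {
    # name: (luminosity in L_sun, rough lifetime in years)
    "25 M_sun supernova progenitor": (8e4, 7e6),    # very bright, dies young
    "0.1 M_sun red dwarf":           (1e-3, 1e13),  # dim, lives ~trillions of years
}

for name, (luminosity, lifetime) in stars.items():
    total_energy = luminosity * lifetime  # units: L_sun * years
    print(f"{name:32s} integrated output ~ {total_energy:.1e} L_sun*yr")

# Under these assumptions the short-lived giant delivers one to two orders of
# magnitude more total energy, but compressed into millions rather than
# trillions of years -- the sense in which a CPU-bound civilization might
# prefer the supernova precursor.
```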
If computation is bound by energy input and you’re prepared to take advantage of a supernova, you still only get one massive burst and then you’re done. Think of how many future civilizations could be supercharged and then destroyed by supernovae if only you’d launched that space colonization program first!
I’m skeptical about Matrioshka brains because of the latency issue; they seem like an astronomical waste. Also, I suspect future civilizations will want to preserve much of the pristine matter in their systems, because it holds valuable primary information: if you rip apart a planet and turn it into a bunch of circuitry, you have just lost a trove of information about the real universe.
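To put the latency concern in rough numbers (my own illustrative figures, not the commenter’s): signals crossing a solar-system-scale structure are limited by light speed, so communication between distant nodes takes minutes, an eternity compared with local clock cycles.

```python
# Rough sketch of the latency problem for a solar-system-scale computer,
# assuming signals travel at light speed. All sizes are illustrative.

C = 3.0e8               # speed of light, m/s
AU = 1.496e11           # astronomical unit, m
EARTH_DIAMETER = 1.3e7  # m

def crossing_time(distance_m: float) -> float:
    """One-way light travel time across a structure, in seconds."""
    return distance_m / C

matrioshka = crossing_time(2 * AU)        # node on one side to the far side
planet = crossing_time(EARTH_DIAMETER)

print(f"Matrioshka brain (2 AU across): ~{matrioshka / 60:.0f} minutes one way")
print(f"Planet-sized computer:          ~{planet * 1000:.0f} ms one way")
print(f"Latency ratio:                  ~{matrioshka / planet:.0e}x")
```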
I haven’t analyzed it in detail, but it seems a supernova wouldn’t be as big a deal for an advanced AI society. It could just as easily live underground behind shielding and use the surface merely for harvesting solar power.