Could you please show the working behind this 10% number? I’m interested in how one would derive it in detail.
I reconstructed the question as asking about the probability that we’ll finish an FAI project in the next 100 years. Dying of an engineered virus doesn’t seem like an example of “deciding the fate of 80 billion galaxies”, though it does determine that fate.
FAI looks really hard. Improvements in mathematical understanding sufficient to bridge comparable gaps can take many decades at the least. I don’t expect a reasonable attempt at actually building an FAI anytime soon (crazy, potentially world-destroying AGI projects go in the same category as engineered viruses). One possible shortcut is ems, which would effectively compress the required time, but I estimate that they probably won’t be here for at least 80 more years, and even then they’ll still need time to become strong enough to break the problem. (By that time, biological intelligence amplification could take over as a deciding factor, relying on clarity of thought rather than lots of time to think.)
My question has only a little to do with the probability that an AI project succeeds. It has mostly to do with P(universe goes to waste | AI projects are unsuccessful). For instance, couldn’t the universe go on generating human utility even after humans go extinct?
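To spell out the decomposition this comment is gesturing at (treating “success” and “waste” as binary events is a simplifying assumption added here, not something stated above):

$$P(\text{waste}) = P(\text{waste} \mid \text{success})\,P(\text{success}) + P(\text{waste} \mid \text{failure})\,P(\text{failure})$$

The first conditional term is presumably close to zero, so the question being raised is about the second: how much of the universe’s potential value survives even if every AI project fails.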
How? By coincidence?
(I’m assuming you also mean no posthumans, if humans go extinct and AI is unsuccessful.)
Aliens. I would be pleased to learn that something amazing was happening (or was going to happen, long “after” I was dead) in one of those galaxies. Since it’s quite likely that something amazing is happening in one of those 80 billion galaxies, shouldn’t I be pleased even without learning about it?
Of course, I would be correspondingly distressed to learn that something horrible was happening in one of those galaxies.