Sounds like a good bet even if you are brilliant. Make money, use the money to set up an academic institute, and do your research in concert with academics at your institute. This solves the problem of needing to be part of academia, and also the problem of academics doing lots of unnecessary stuff—at your institute, academics will not be required to do unnecessary stuff.
Maybe. The disadvantage is lag time, of course. Discount rate for Singularity is very high. Assume that there are 100 years to the singularity, and that P(success) is linearly decreasing in lag time; then every second approximately 25 galaxies are lost, assuming that the entire 80 billion galaxies’ fate is decided then.
25 galaxies per second. Wow.
I’m surprised that no one has asked Roko where he got these numbers from.
Wikipedia says that there are about 80 billion galaxies in the “observable universe”, so that part is pretty straightforward. Though there’s still the question of why all of them are being counted, when most of them probably aren’t reachable with slower-than-light travel.
But I still haven’t found any explanation for the “25 galaxies per second”. Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is it the number of galaxies divided by the light-travel time to the edge of the observable universe?
calculating...
Wikipedia says that the comoving distance from Earth to the edge of the observable universe is about 14 billion parsecs (46 billion light-years, short scale, i.e. 4.6 × 10^10 light-years) in any direction.
Google Calculator says 80 billion galaxies / 46 billion years (the light-travel time to that distance) ≈ 1.74 galaxies per year, or about 5.5 × 10^-8 galaxies per second
so no, that’s not it.
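A quick check of that dead end (a minimal sketch in Python; it reads the 46 billion light-years as 46 billion years of light travel):

```python
# The discarded hypothesis: number of galaxies divided by the
# light-travel time to the edge of the observable universe.
GALAXIES = 80e9             # observable-universe estimate from the thread
LIGHT_TRAVEL_YEARS = 46e9   # 46 billion light-years, read as years of travel
SECONDS_PER_YEAR = 3.156e7  # Julian year, ~365.25 days

per_year = GALAXIES / LIGHT_TRAVEL_YEARS
per_second = per_year / SECONDS_PER_YEAR
print(per_year)    # ~1.74 galaxies per year
print(per_second)  # ~5.5e-8 galaxies per second, nowhere near 25
```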
If I’m going to allow my mind to be blown by this number, I would like to know where the number came from.
I also took a while to understand what was meant, so here is my understanding:
Assumptions:
There will be a singularity in 100 years.
If the proposed research is started now, it will be a successful singularity, e.g. friendly AI.
If the proposed research isn’t started by the time of the singularity, it will be an unsuccessful (negative) singularity, but still a singularity.
The probability of a successful singularity decreases linearly with the time at which the research starts, from 100 percent now to 0 percent in 100 years’ time.
A 1 in 80 billion chance of saving 80 billion galaxies is equivalent, in expectation, to definitely saving 1 galaxy, and the linearly decreasing chance of a successful singularity affecting all of them is equivalent to a linearly decreasing number being affected. 25 galaxies per second is the rate of that decrease.
I meant if you divide the number of galaxies by the number of seconds to an event 100 years from now. Yes, not all reachable. Probably need to discount by an order of magnitude for reachability at lightspeed.
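That division, as a minimal sketch (assuming the thread’s 80 billion galaxies and a 100-year horizon):

```python
# Expected galaxies lost per second of delay under the linear model:
# the fate of all 80 billion galaxies is decided 100 years from now.
GALAXIES = 80e9
YEARS_TO_EVENT = 100
SECONDS_PER_YEAR = 3.156e7  # Julian year, ~365.25 days

rate = GALAXIES / (YEARS_TO_EVENT * SECONDS_PER_YEAR)
print(rate)  # ~25.3 galaxies per second
```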
Hmm, going by the second Wikipedia link, there is no basis for the 80 billion galaxies: only a relatively small fraction of the observable universe (4.2%?) is reachable if we are limited by the speed of light, and if we are not, the whole universe is probably at least 10^23 times larger (by volume or by radius?).
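One plausible reconstruction of the ~4.2% figure (my assumption, not stated in the thread): the volume inside the cosmic event horizon, roughly 16 billion light-years in comoving radius, compared with the 46-billion-light-year observable universe.

```python
# Hypothetical reconstruction of the ~4.2% reachable fraction:
# volume inside an assumed ~16 Gly cosmic event horizon, versus
# the ~46 Gly comoving radius of the observable universe.
EVENT_HORIZON_GLY = 16.0  # assumed value, not sourced from the thread
OBSERVABLE_GLY = 46.0

fraction = (EVENT_HORIZON_GLY / OBSERVABLE_GLY) ** 3
print(fraction)  # ~0.042, i.e. about 4.2% by volume
```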
Guh. Every now and then something reminds me of how important the Singularity is. Time to reliable life extension is measured in lives per minute, time to Singularity is measured in galaxies per second.
Now that’s a way to eat up your brain.
Well, conservatively assuming that each galaxy supports lives at 10^9 per sun per century (1/10th of our solar system’s rate), that’s already 10^29 lives per second right there.
And assuming utilization of all the output of the sun for living, i.e. some kind of giant spherical shell of habitable land, we can add another 12 orders of magnitude straight away. Then if we upload people that’s probably another 10 orders of magnitude.
Probably up to 10^50 lives per second, without assuming any new physics could be discovered (a dubious assumption). If instead we assume that quantum gravity gives us as much of an increase in power as going from Newtonian physics to quantum mechanics did, we can pretty much slap another 20 orders of magnitude onto it, with some small probability of the answer being “infinity”.
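Stacking those loose orders of magnitude (a rough sketch; the habitable-lifetime figure is my own fill-in for the thread’s hand-waved inputs):

```python
# Order-of-magnitude tally of lives lost per second of delay.
GALAXIES_PER_SECOND = 25          # from dividing 80e9 galaxies by 100 years
SUNS_PER_GALAXY = 1e11            # typical star count for a large galaxy
LIVES_PER_SUN_PER_CENTURY = 1e9   # the thread's "1/10th of our solar system"
HABITABLE_CENTURIES = 1e7         # assumed ~1 billion years per sun

biological = (GALAXIES_PER_SECOND * SUNS_PER_GALAXY
              * LIVES_PER_SUN_PER_CENTURY * HABITABLE_CENTURIES)
print(biological)                 # ~2.5e28, near the thread's 10^29 baseline

dyson_shells = biological * 1e12  # capture each sun's full output: +12 OOM
uploads = dyson_shells * 1e10     # run people as uploads: +10 OOM
print(uploads)                    # ~2.5e50, near the thread's 10^50 figure
```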
In what I take to be a positive step towards viscerally conquering my scope neglect, I got a wave of chills reading this.
What’s your P of “the fate of all 80 billion galaxies will be decided on Earth in the next 100 years”?
About 10% (if we ignore existential risk, which is a way of resolving the ambiguity of “will be decided”). Multiply that by opportunity cost of 80 billion galaxies.
Could you please detail your working to get to this 10% number? I’m interested in how one would derive it, in detail.
I interpreted the question as asking about the probability that we’ll be finishing an FAI project in the next 100 years. Dying of an engineered virus doesn’t seem like an example of “deciding the fate of 80 billion galaxies”, although it’s determining that fate.
FAI looks really hard. Improvements in mathematical understanding sufficient to bridge comparable gaps can take many decades. I don’t expect a reasonable attempt at actually building an FAI anytime soon (crazy, potentially world-destroying AGI projects go in the same category as engineered viruses). One possible shortcut is ems, which effectively compress the required time, but I estimate that they probably won’t be here for at least 80 more years, and then they’ll still need time to become strong enough and break the problem. (By that time, biological intelligence amplification could take over as a deciding factor, using clarity of thought instead of lots of time to think.)
My question has only a little bit to do with the probability that an AI project is successful. It has mostly to do with P(universe goes to waste | AI projects are unsuccessful). For instance, couldn’t the universe go on generating human utility after humans go extinct?
How? By coincidence?
(I’m assuming you also mean no posthumans, if humans go extinct and AI is unsuccessful.)
Aliens. I would be pleased to learn that something amazing was happening (or was going to happen, long “after” I was dead) in one of those galaxies. Since it’s quite likely that something amazing is happening in one of those 80 billion galaxies, shouldn’t I be pleased even without learning about it?
Of course, I would be correspondingly distressed to learn that something horrible was happening in one of those galaxies.
Some complexities regarding “decided” since physics is deterministic, but hand waving that aside, I’d say 50%.
With high probability, many of those galaxies are already populated. Is that irrelevant?
I disagree. I claim that the probability of >50% of the universe being already populated (using the space of simultaneity defined by a frame of reference comoving with Earth) is maybe 10%.
“Already populated” is a red herring. What’s the probability that >50% of the universe will ever be populated? I don’t see any reason for it to be sensitive to how well things go on Earth in the next 100 years.
I think it is likely that we are the only spontaneously-created intelligent species in the entire 4-manifold that is the universe, space and time included (excluding species which we might create in the future, of course).
I’m curious to know how likely, and why. But do you agree that aliens are relevant to evaluating astronomical waste?
That seems contrary to the Self-Indication Assumption: http://en.wikipedia.org/wiki/Self-Indication_Assumption
Do you have a critique—or a supporting argument?
Yes, I have a critique. Most of anthropics is gibberish. Until someone makes anthropics work, I refuse to update on any of it. (Apart from the bits that are commonsensical enough to derive without knowing about “anthropics”, e.g. that if your fishing net has holes 2 inches big, you shouldn’t expect to catch fish smaller than 2 inches wide.)
I don’t think you can really avoid anthropic ideas—or the universe stops making sense. Some anthropic ideas can be challenging—but I think we have got to try.
Anyway, you did the critique—but didn’t go for a supporting argument. I can’t think of very much that you could say. We don’t have very much idea yet about what’s out there—and claims to know such things just seem over-confident.
Basically Rare Earth seems to me to be the only tenable solution to Fermi’s paradox.
Fermi’s paradox implying no aliens surely applies within-galaxy only. Many galaxies are distant, and intelligent life forming there concurrently (or long before us) is quite compatible with it not having arrived on our doorsteps yet—due to the speed of light limitation.
If you think we should be able to at least see life in distant galaxies, then, in short, not really—or at least we don’t know enough to say yea or nay on that issue with any confidence yet.
The Andromeda Galaxy is 2.5 million light-years away. The universe is about 13,750 million years old. Therefore that’s not far enough away to protect us from colonizing aliens travelling at 0.5c or above.
The Fermi argument suggests that—if there were intelligent aliens in this galaxy—they should probably have filled it by now, unless they originated very close to us in time, which seems unlikely. The argument applies much more weakly to other galaxies, because they are much further away and are separated from each other by huge regions of empty space. Also, the Andromeda Galaxy is just one galaxy. Say only one galaxy in 100 has intelligent life, and the Andromeda Galaxy isn’t among them. That bumps the required travel distance up to 10 million light-years or so.
Even within this galaxy, the Fermi argument is not that strong. Maybe intelligent aliens formed in the last billion years, and haven’t made it here yet—because space travel is tricky, and 0.1c is about the limit. The universe is only about 14 billion years old, and for some of that there were not too many second-generation stars. The odds are against there being aliens nearby—but they are not that heavily stacked. For other galaxies, the argument is much, much less compelling.
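For scale, the travel times implied by the distances in this exchange (speeds are the 0.5c and 0.1c assumed above):

```python
# Colonization travel times versus the age of the universe.
AGE_OF_UNIVERSE_MYR = 13_750  # million years

def travel_time_myr(distance_mly, speed_c):
    # Distance in millions of light-years over a fraction of c
    # gives time in millions of years.
    return distance_mly / speed_c

print(travel_time_myr(2.5, 0.5))   # Andromeda at 0.5c: 5 Myr
print(travel_time_myr(10.0, 0.5))  # a galaxy 10 Mly away at 0.5c: 20 Myr
print(travel_time_myr(0.1, 0.1))   # 100,000 ly across our galaxy at 0.1c: 1 Myr
# Each is a small fraction of the universe's ~13,750 Myr age.
```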
There are strained applications of anthropics, like the doomsday argument. “What happened here might happen elsewhere” is much more innocuous.
There are some more practical and harmless applications as well. In Nick Bostrom’s Anthropic Bias, for example, there is an application of the Self-Sampling Assumption to traffic analysis.
Bostrom says: “Cars in the next lane really do go faster”
I agree.
Even Nick Bostrom, who is arguably the leading expert on anthropic problems, rejects SIA for a number of reasons (see his book Anthropic Bias). That alone is a pretty big blow to its credibility.
That is curious. Anyway, the self-indication assumption seems fairly straightforward (as much as any anthropic reasoning is, anyway). The critical material from Bostrom that I have read on the topic seems unpersuasive. He doesn’t seem to “get” the motivation for the idea in the first place.
If you think there is a significant probability that an intelligence explosion is possible or likely, then that question is sensitive to how well things go on Earth in the next 100 years.
However likely they are, I expect intelligence explosions to be evenly distributed through space and time. If 100 years from now Earth loses by a hair, there are still plenty of folks around the universe who will win or have won by a hair. They’ll make whatever use of the 80 billion galaxies that they can—will they be wasting them?
If Earth wins by a hair, or by a lot, we’ll be competing with those folks. This also significantly reduces the opportunity cost Roko was referring to.
That seems like a rather exaggerated sense of importance. It may be a fun fantasy in which the fate of the entire universe hangs in the balance in the next century—but do bear in mind the disconnect between that and the real world.
Out of curiosity: what evidence would convince you that the fate of the entire universe does hang in the balance?
No human-comparable aliens, for one.
Which seems awfully unlikely, the more we learn about solar systems.
“Convince me”—with some unspecified level of confidence? That is not a great question :-|
We lack knowledge of the existence (or non-existence) of aliens in other galaxies. Until we have such knowledge, our uncertainty on this matter will necessarily be high—and we should not be “convinced” of anything.
What evidence would convince you, with 95% confidence, that the fate of the universe hangs in the balance in this next century on Earth?
You may specify evidence such as “strong evidence that we are completely alone in the universe” even if you think it is unlikely we will get such evidence.
I did get the gist of your question the first time—and answered accordingly. The question takes us far into counterfactual territory, though.
I was just curious to see if you rejected the fantasy on principle, or if you had other reasons.