I’m curious where you’d estimate 50% chance of it existing and where you’d estimate 90%.
The jump from 76% to 99.8% is, to my mind, striking for a variety of reasons. Among other concerns, I suspect that many people here would put a greater than 0.2% chance on some sort of extreme civilization-disrupting event in that window. A 0.2% chance of a civilization-disrupting event in an 8-year period is roughly the same as a 2% chance of such an event occurring in the next hundred years, which doesn't look so unreasonable, except that longer-term predictions should carry more uncertainty. Overall, reserving only a 0.2% chance for disruption seems too confident, and if your probability model is accurate then one should expect the functional simulation to arrive well before then. Note also that civilization collapsing is not the only thing that could block this sort of event: events much smaller than full-on collapse could do it, as could many more mundane issues.
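The rough equivalence between a per-8-year chance and a per-century chance can be checked by compounding. A minimal sketch (exact compounding gives a figure slightly above the 2% approximation in the text):

```python
# Compounding a 0.2% chance of a civilization-disrupting event
# per 8-year window out to 100 years.

p_8yr = 0.002       # 0.2% chance per 8-year window
periods = 100 / 8   # 12.5 such windows in a century

# Probability of at least one such event over the century,
# assuming independence across windows.
p_100yr = 1 - (1 - p_8yr) ** periods
print(f"{p_100yr:.2%}")  # prints 2.47%, close to the rough 2% in the text
```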
That high an estimate seems likely to be vulnerable to the planning fallacy.
Overall, your estimate seems to be too confident, the 2020 estimate especially so.
I would put something like a 0.04% chance on a neuroscience-disrupting event (including a biology-disrupting event, a science-disrupting event, or a civilization-disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes more than 8 years. I totally buy that this estimate is a planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.
Unfortunately, being aware of the planning fallacy does not make it go away.
True. But there are ways to calibrate for it. Subtracting off 10-15 percentage points from technological predictions seems to work reasonably well. If one wanted to be more careful, one would probably use not a fixed subtraction but a correction that becomes less severe as the probability estimate of the event goes up, so that one could still express genuinely high confidence. And if in doubt, simply reducing the probability until the estimate no longer looks like a product of the planning fallacy is one way to approach things.
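One possible way to make "a correction that becomes less severe as the probability goes up" concrete. The functional form here is my own assumption for illustration, not anything proposed in the thread:

```python
# Illustrative (assumed) planning-fallacy correction: instead of
# subtracting a flat 10-15 points everywhere, taper the discount so it
# peaks at p = 0.5 and vanishes as p approaches 0 or 1.

def corrected(p, max_discount=0.15):
    """Shrink a stated probability p by up to max_discount,
    with the discount tapering off toward the extremes."""
    # p * (1 - p) / 0.25 is 1 at p = 0.5 and 0 at p = 0 or p = 1.
    return p - max_discount * p * (1 - p) / 0.25

# A flat 15-point subtraction would turn 0.95 into 0.80; the tapered
# version leaves high-confidence estimates closer to intact:
print(corrected(0.5))   # full 15-point discount, about 0.35
print(corrected(0.95))  # only about 2.9 points off, about 0.92
```

The design choice is that a well-calibrated forecaster should still be able to report near-certainty; a flat subtraction makes probabilities above 0.85-0.90 unreachable, which defeats the point of having genuinely high confidence.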