This is the multiple stages fallacy. Not only is each of the probabilities in your list too low, but if you actually consider them as conditional probabilities, they’re double- and triple-counting the same uncertainties. And since they’re all multiplied together, and all err in the same direction, the error compounds.
What conditional probabilities would you assign, if you think ours are too low?
- P(We invent algorithms for transformative AGI | No derailment from regulation, AI, wars, pandemics, or severe depressions): 0.8
- P(We invent a way for AGIs to learn faster than humans | We invent algorithms for transformative AGI): 1. This row is already incorporated into the previous row.
- P(AGI inference costs drop below $25/hr (per human equivalent)): 1. This is also already incorporated into “we invent algorithms for transformative AGI”; an algorithm with such extreme inference costs wouldn’t count (and, I think, would be unlikely to be developed in the first place).
- We invent and scale cheap, quality robots: Not a prerequisite.
- We massively scale production of chips and power: Not a prerequisite if we have already conditioned on inference costs.
- We avoid derailment by human regulation: 0.9.
- We avoid derailment by AI-caused delay: 1. I would consider an AI that derailed development of other AI to be transformative.
- We avoid derailment from wars (e.g., China invades Taiwan): 0.98.
- We avoid derailment from pandemics: 0.995. Thanks to COVID, our ability to continue making technological progress during a pandemic that requires everyone to isolate is already battle-tested.
- We avoid derailment from severe depressions: 0.99. (The sketch after this list shows what these multiply out to.)
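As a quick sanity check, here is a minimal sketch of the arithmetic these revised estimates imply, using only the figures from the list above; rows assigned probability 1 or marked as non-prerequisites contribute a factor of 1 and drop out:

```python
# Multiply the conditional estimates from the list above. Rows judged
# "already incorporated" or "not a prerequisite" contribute a factor of 1.
estimates = {
    "invent algorithms for transformative AGI (given no derailment)": 0.8,
    "avoid derailment by human regulation": 0.9,
    "avoid derailment from wars": 0.98,
    "avoid derailment from pandemics": 0.995,
    "avoid derailment from severe depressions": 0.99,
}

p_total = 1.0
for step, p in estimates.items():
    p_total *= p

print(f"overall estimate: {p_total:.3f}")  # ~0.695
```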
Interested in betting thousands of dollars on this prediction? I’m game.
I’m interested. What bets would you offer?
Hey Tamay, nice meeting you at The Curve. Just saw your comment here today.
Things we could potentially bet on:
- rate of GDP growth by 2027 / 2030 / 2040
- rate of energy consumption growth by 2027 / 2030 / 2040
- rate of chip production by 2027 / 2030 / 2040
- rates of unemployment (though confounded)
Any others you’re interested in? Degree of regulation feels like a tricky one to quantify.
How about AI company and hardware company valuations? (Maybe in 2026, 2027, 2030 or similar.)
Or what about benchmark/task performance? Is there any benchmark/task you think won’t get beaten in the next few years? (And, ideally, one where, if it did get beaten, you would change your mind.) Maybe “AI won’t be able to autonomously write good ML research papers (as judged by, e.g., not having notably more errors than human-written papers and getting into NeurIPS with good reviews)”? Could do “make large PRs to open source repos that are considered highly valuable” or “make open source repos that are widely used”.
These might be a bit better to bet on, as they could be leading indicators.
(It’s still the case that betting on the side of fast AI progress might be financially worse than just trying to invest or taking out a loan, but it could be easier to bet than to invest in e.g. OpenAI. Regardless, part of the point of betting is clearly demonstrating a view.)
There is an additional problem: one of the two key principles for their estimates is “Avoid extreme confidence”. This becomes a problem if the principle leads you to pick probability estimates that keep some distance from 1 (e.g., by never going above 0.95).
If you build a fully conjunctive model, and you are not that great at extreme probabilities, then you will have a strong bias towards low overall estimates. And you can make your probability estimates even lower by introducing more (conjunctive) factors.
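A minimal sketch of that bias, with hypothetical numbers chosen purely for illustration: suppose ten conjunctive steps are each genuinely near-certain (0.99), but the estimator caps every factor at 0.95 to avoid extreme confidence. The per-factor shading looks harmless, yet the product ends up far too low, and adding more factors widens the gap:

```python
# Hypothetical numbers for illustration only: ten near-certain steps (true
# probability 0.99 each), estimated by someone who caps every factor at 0.95.
n_factors = 10
true_p, capped_p = 0.99, 0.95

print(f"true product:   {true_p ** n_factors:.2f}")    # ~0.90
print(f"capped product: {capped_p ** n_factors:.2f}")  # ~0.60
```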