What conditional probabilities would you assign, if you think ours are too low?
P(We invent algorithms for transformative AGI | No derailment from regulation, AI, wars, pandemics, or severe depressions): 0.8
P(We invent a way for AGIs to learn faster than humans | We invent algorithms for transformative AGI): 1. This row is already incorporated into the previous row.
P(AGI inference costs drop below $25/hr (per human equivalent)): 1. This is also already incorporated into “we invent algorithms for transformative AGI”; an algorithm with such extreme inference costs wouldn’t count (and, I think, would be unlikely to be developed in the first place).
We invent and scale cheap, quality robots: Not a prerequisite.
We massively scale production of chips and power: Not a prerequisite if we have already conditioned on inference costs.
We avoid derailment by human regulation: 0.9.
We avoid derailment by AI-caused delay: 1. I would consider an AI that derailed development of other AI to be transformative.
We avoid derailment from wars (e.g., China invades Taiwan): 0.98.
We avoid derailment from pandemics: 0.995. Thanks to COVID, our ability to continue making technological progress during a pandemic which requires everyone to isolate is already battle-tested.
We avoid derailment from severe depressions: 0.99.
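Multiplying these through gives the implied overall probability. A quick sketch (assuming, as in the original essay's framework, that each row is conditioned on the preceding ones, so the product rule applies; rows assigned 1 or marked "not a prerequisite" contribute a factor of 1 and are omitted):

```python
# Conditional probabilities from the rows above; factors of 1 omitted.
probs = {
    "invent transformative AGI algorithms": 0.8,
    "avoid derailment by human regulation": 0.9,
    "avoid derailment from wars": 0.98,
    "avoid derailment from pandemics": 0.995,
    "avoid derailment from severe depressions": 0.99,
}

overall = 1.0
for factor in probs.values():
    overall *= factor

print(f"Implied P(transformative AGI) = {overall:.3f}")  # 0.695
```

So these estimates imply roughly a 70% chance overall, versus the original essay's much lower product.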
Interested in betting thousands of dollars on this prediction? I’m game.
I’m interested. What bets would you offer?