I emphasize that we arrived at this concrete disagreement in the wake of my naming multiple sure-that-could-happen predictions of Paul’s that I thought looked more likely in Paul’s universe than my own, including:
- Average Mike Blume-level programmers making $10M/year.
- The world economy doubling in any four-year period before the world ends.
- $10 trillion spent on training any AI model.
I’d be happy to do a bet about fast growth prior to the end of the world. I’d say something like:
> 25% chance that there are 4 consecutive years before 2045 with >10% real economic growth per year, under an accounting scheme that feels plausible/real to Eliezer and Paul, and the fast growth sure feels to Paul and Eliezer like it’s about automation rather than some weird thing like recovery from a pandemic.
You would win when the end of the world looks close enough that we can’t get 4 years of such growth (e.g. if we have 50% growth in one year with <10% growth in the year before, or if it becomes clear enough that we won’t have time for rapid growth before the AI-R&D-AI gets us to a singularity). It’s kind of unsatisfying not to pay out until right before the end, with somewhat arbitrary conditions (since it’s so hard for you to actually win before 2045 unless the world ends).
I would give you higher probabilities if we relaxed 2045, but then it just gets even harder for you to win. Probably there is some better operationalization, which I’m open to. Maybe a higher threshold than 10% growth for 4 years would be better, but my probabilities would start to decline, and I’d guess we get more expected information from disagreements about higher-probability events.
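To make the quoted growth clause concrete, here is a minimal sketch (my own illustration, not part of the bet text) of how the “>10% real growth in 4 consecutive years” condition could be checked against a series of annual real gross world product figures; the data source, the accounting scheme, and the “feels like it’s about automation” judgment are left to Paul and Eliezer exactly as in the quoted terms.

```python
def four_consecutive_fast_years(gwp_by_year, threshold=0.10, run_length=4):
    """Return True if there are `run_length` consecutive years with
    year-over-year real growth strictly above `threshold`."""
    growth = [gwp_by_year[i] / gwp_by_year[i - 1] - 1
              for i in range(1, len(gwp_by_year))]
    streak = 0
    for g in growth:
        streak = streak + 1 if g > threshold else 0
        if streak >= run_length:
            return True
    return False

# Illustrative series: ~3% growth for a while, then four years above 10%.
gwp = [100.0]
for rate in [0.03] * 5 + [0.12, 0.15, 0.20, 0.25]:
    gwp.append(gwp[-1] * (1 + rate))

print(four_consecutive_fast_years(gwp))  # True
```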
(I don’t know what kind of programmer Mike Blume is, but I don’t expect there to ever be many programmers making $10M+/year. Likewise, I don’t think we see $10T spent training a single AI model until after we’ve already faced down alignment risk, and I assume that you also think society will eventually use >>$10T of resources on training a single ML model. I would have thought we could also bet about softer versions of those things: I assume I also assign much higher probability than you do to $1B training runs, even if that’s not worldview-falsifying; I don’t think worldviews usually get falsified so much as modestly undermined.)