About the update
You’re right, that’s what would happen with an update.
I think the model I have in mind (although I hadn’t explicitly thought about it until now) is something like a distribution over ways to reach TAI (capturing how probable it is that each is the first way to reach TAI), where each option comes with its own distribution (say, over years). Obviously you can compress that into a single distribution over years, but then you lose the ability to do fine-grained updating.
For example, I imagine that someone who assigns relatively low probability to prosaic AGI being the first way to reach TAI, upon reading your post, would have reasons to update their distribution for prosaic AGI in the way you discuss, but not to update the probability that prosaic AGI will be the first way to reach TAI. On the other hand, if there were an argument centered more on the amount of compute we could plausibly get in a short timeframe (the kind of thing we discuss as potential follow-up work), then I’d expect this same person, if convinced, to put more probability on prosaic AGI being the first way to reach TAI.
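To make this model concrete, here is a minimal Python sketch (every number, and the choice of bell-curve distributions, is a made-up assumption purely for illustration). It shows why the two kinds of update above are distinct operations, and why compressing into a single distribution hides the difference:

```python
import numpy as np

# Each pathway to TAI has:
#   - a weight: the probability it is the first way to reach TAI,
#   - a distribution over arrival years, conditional on it being first.
# All numbers below are illustrative assumptions, not claims.
years = np.arange(2025, 2101)

def year_pmf(mean, std):
    """Discretized bell curve over `years`, a stand-in for a real distribution."""
    p = np.exp(-0.5 * ((years - mean) / std) ** 2)
    return p / p.sum()

pathways = {
    "prosaic AGI": {"weight": 0.4, "dist": year_pmf(2045, 8)},
    "other":       {"weight": 0.6, "dist": year_pmf(2070, 15)},
}

# First kind of update (the one the post supports): shift the conditional
# year distribution for prosaic AGI earlier, leaving its weight untouched.
pathways["prosaic AGI"]["dist"] = year_pmf(2035, 6)

# Second kind of update (e.g. from a compute-availability argument):
# raise the weight of prosaic AGI instead.
pathways["prosaic AGI"]["weight"], pathways["other"]["weight"] = 0.6, 0.4

# Compressing everything into one marginal distribution over years
# loses track of which of the two updates happened.
marginal = sum(p["weight"] * p["dist"] for p in pathways.values())
print(f"P(TAI by 2050) = {marginal[years <= 2050].sum():.2f}")
```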
Graph-based argument
I must admit that I have trouble reading your graph because there’s no scale (although I expect the spiky part is centered at +12 OOMs?). As for the textual argument, I actually think it makes sense to put quite low probability on +13 OOMs if one agrees with your scenario.
Maybe my argument is a bit weird, but it goes something like this: based on your scenarios, it should be almost certain that we can reach TAI with +12 OOMs of compute. If that’s not the case, then there’s something fundamentally difficult about reaching TAI with prosaic AGI (because you’re basically throwing all the compute we could want at it), and so I expect very little probability that one additional OOM would make the difference.
The part of this reasoning that feels weird is that I reason about +13 OOMs based on what happens at +12 OOMs, together with the idea that we care about +13 OOMs iff +12 OOMs is not enough. It might be completely wrong.
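Here is one way to make that reasoning numeric (a rough sketch; every prior below is an assumption I picked for illustration, not a number from your post):

```python
# H = "nothing fundamentally difficult about reaching TAI with prosaic AGI".
p_h = 0.8                # assumed prior on H
p_le12_given_h = 0.95    # per your scenarios: +12 OOMs almost surely enough if H
p_13_given_h = 0.02      # assumed: exactly one more OOM was what was missing
# If H is false, compute alone won't get there, at +12 or +13 OOMs.

# Conditional on +12 OOMs turning out insufficient, how likely is it
# that +13 OOMs tips the balance?
p_12_insufficient = p_h * (1 - p_le12_given_h) + (1 - p_h)
p_13_given_12_insufficient = (p_h * p_13_given_h) / p_12_insufficient
print(f"P(+13 suffices | +12 insufficient) = {p_13_given_12_insufficient:.3f}")
# ≈ 0.067: most of the mass, conditional on "+12 wasn't enough", goes to
# "fundamentally difficult", not to "one more OOM fixes it".
```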
Reasons for 12 OOMs
To the first suspicion I’ll say: I had good reasons for writing about 12 rather than 6 which I am happy to tell you about if you like.
I’m interested, and (without knowing them yet) I expect I’ll wish you had put them in the post, to deal with the implicit conclusion that you couldn’t argue for 6 OOMs.
I’d also be interested in your arguments for 6 OOMs, or pointers to them.