Nice! This has been a productive exchange; it seems we agree on the following things:
--We both agree that probably the GPT scaling trends will continue, at least for the next few OOMs; the main disagreement is about what the practical implications of this will be—sure, we’ll have human-level text prediction and superhuman multiple-choice-test-takers, but will we have APS-AI? Etc.
--I agree with what you said about chimps and GPT-3 etc. GPT-3 is more impressive than a chimp in some ways, and less in others, and just because we could easily get from chimp to AGI doesn’t mean we can easily get from GPT-3 to AGI. (And OmegaStar may be relevantly similar to GPT-3 in this regard, for all we know.) My point was a weak one which I think you’d agree with: Generally speaking, the more ways in which system X seems smarter than a chimp, the more plausible it should seem that we can easily get from X to AGI, since we believe we could easily get from a chimp to AGI.
--Now we are on the same page about Premise 2 and the graphs. Sorry it was so confusing. I totally agree: if instead of 80% you only have 55% by +12 OOMs, then you are free to have relatively little probability mass by +6. And you do.
(Note that my numbers re: short-horizon systems +12 OOMs being enough, and for +12 OOMs in general, changed since an earlier version you read, to 35% and 65% respectively.)
Ok, cool! Here, is this basically what your distribution looks like?
Joe’s Distribution?? - Grid Paint (grid-paint.com)
I built it by taking Ajeya’s distribution from her report and modifying it so that:
--25% is in the red zone (the next 6 OOMs)
--65% is in the red+blue zone (the next 12)
--It looks as smooth and reasonable as I could make it subject to those constraints, and generally departs only a little from Ajeya’s.
Note that it still has 10% in the purple zone representing “Not even +50 OOMs would be enough with 2020's ideas”
I encourage you (and everyone else!) to play around with drawing distributions, I found it helpful. You should be able to make a copy of my drawing in Grid Paint and then modify it.
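If drawing by hand isn't your thing, here is a minimal toy sketch (my own construction, not the actual Grid Paint drawing or Ajeya's model) of how you could allocate mass over "additional OOMs of 2020-ideas compute needed" so that the zone totals match the constraints above, then sanity-check the cumulative distribution. The zone labels and the uniform spreading within each zone are assumptions purely for illustration; the real drawing is smoother.

```python
# Toy sketch: a discrete distribution over "+k OOMs of compute is enough"
# that satisfies the zone totals from the comment above.
import numpy as np

# Zone totals: 25% by +6, 65% by +12 (so blue = 0.40), 10% "never enough".
zone_mass = {
    "red (+1..+6)":     0.25,
    "blue (+7..+12)":   0.40,
    "green (+13..+50)": 0.25,
    "purple (never)":   0.10,
}
assert abs(sum(zone_mass.values()) - 1.0) < 1e-9

# Spread each finite zone's mass uniformly across its OOM buckets.
pmf = np.zeros(51)  # index k means "+k OOMs is enough"; index 0 unused
pmf[1:7]   = zone_mass["red (+1..+6)"]     / 6
pmf[7:13]  = zone_mass["blue (+7..+12)"]   / 6
pmf[13:51] = zone_mass["green (+13..+50)"] / 38

cdf = np.cumsum(pmf)
print(f"P(enough by +6 OOMs)  = {cdf[6]:.2f}")   # 0.25
print(f"P(enough by +12 OOMs) = {cdf[12]:.2f}")  # 0.65
print(f"P(not even +50 OOMs)  = {zone_mass['purple (never)']:.2f}")  # 0.10
```

Swapping in your own zone totals (e.g. 55% rather than 65% by +12) and re-checking the CDF is a quick way to see how much freedom you have left for the first 6 OOMs.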