When I first read the now-classic arguments for slow takeoff—e.g. from Paul and Katja—I was excited; I thought they described a serious alternative scenario to the classic FOOM scenarios. However I never thought, and still do not think, that the classic FOOM scenarios were very unlikely; I feel that the slow takeoff and fast takeoff scenarios are probably within a factor of 2 of each other in probability.
Yet more and more nowadays I get the impression that people think slow takeoff is the only serious possibility. For example, Ajeya and Rohin seem very confident that if TAI was coming in the next five to ten years we would see loads more economic applications of AI now, therefore TAI isn’t coming in the next five to ten years...
I need to process my thoughts more on this, and reread their claims; maybe they aren’t as confident as they sound to me. But I worry that I need to go back to doing AI forecasting work after all (I left AI Impacts for CLR because I thought AI forecasting was less neglected) since so many people seem to have wrong views. ;)
This random rant/musing probably isn’t valuable to anyone besides me, but hey, it’s just a shortform. If you are reading this and you have thoughts or advice for me I’d love to hear it.
So there is a distribution over AGI plan costs. The max cost is some powerful bureaucrat/CEO/etc. who has no idea how to do it at all but has access to huge amounts of funds, so their best bet is to try to brute-force it by hiring all the respected scientists (e.g. the Manhattan Project). But notice: if any of these scientists (or small teams) actually could do it mostly on their own (say with VC funding), then they'd usually get a dramatically better deal doing it on their own rather than for a bigcorp.
The min cost is the lucky smart researcher who has mostly figured out the solution but probably has little funding, because they spent their career time only on a direct path. Think the Wright brothers after the wing-warping control trick they got from observing bird flight. Could a bigcorp or government have beaten them? Of course, but the bigcorp would have had to spend OOMs more.
Now add a second dimension, call it vision variance: the distribution of AGI plan cost over all entities pursuing it. If that distribution is very flat, then everyone has the same obvious vision/plan (or different but equivalently costly plans) and the winner is inevitably a big central player. However, if the variance over visions/plans is high, then the winner is inevitably a garage researcher.
Software is much like flight in this regard—high vision variance. Nearly all major software tech companies were scrappy garage startups: Google, Microsoft, Apple, Facebook, etc. Why? Because it simply doesn't matter how much money the existing bigcorp has—when the idea for X new software thing first occurs in human minds, it occurs in only a few, and those few minds are smart enough to realize its value, and they can implement it. The big central player is a dinosaur with zero leverage, and doesn't see it coming until it's too late.
AGI could be like software because... it probably will be software. Alternatively, it could be more like the Manhattan Project, in that it fits into a well-known and widely shared sci-fi-level vision; all the relevant players know AGI is coming. By contrast, it wasn't at all obvious that a new graph connectivity algorithm would enable a search engine that actually works, which would then take over advertising—what?
Finally, another difference between the Manhattan Project and software is that the Manhattan Project required a non-trivial amount of tech-tree climbing that was all done in secret, which is much harder for a small team to bootstrap. Software research is done nearly fully in the open, which makes it much easier for a small team, because they usually just need to provide the final recombinative innovation or two, building off the communal tech tree. Likewise, aviation research was in the open; the Wright brothers literally started with a big book of known airplane designs, like a training dataset.
So anyway, one takeaway from this is that one shouldn't discount AGI being created by an unknown garage researcher, as the probability mass on "AGI will be like other software" is non-trivial.