I’m guessing that a proponent of Christiano’s theory would say: sure, such-and-such startup succeeded but it was because they were the only ones working on problem P, so problem P was an uncrowded field at the time. Okay, but why do we draw the boundary around P rather than around “software” or around something in between which was crowded?
I’d make a different reply: you need to look not just at the winning startup, but at all startups. If the ‘startup ecosystem’ is earning 100% returns while the rest of the economy is earning 5% returns, then something weird is up and the model is falsified; but if the startup ecosystem is earning 10% returns once you average together the few successes and many failures, then this looks more like a normal risk-return story.
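To make the averaging concrete, here’s a toy calculation. Every number is invented purely for illustration; the only point is how a handful of large wins plus many zeros can still land near an ordinary ecosystem-wide return.

```python
# Toy arithmetic for the "average over all startups" point. All numbers here
# are invented for illustration; the only claim is about how averaging works.

horizon_years = 5
outcomes = [30.0] * 5 + [0.0] * 95      # 5 startups return 30x, 95 go to zero

total_multiple = sum(outcomes) / len(outcomes)          # ecosystem-wide multiple
annualized = total_multiple ** (1 / horizon_years) - 1  # implied per-year return

print(f"ecosystem multiple over {horizon_years} years: {total_multiple:.2f}x")
print(f"annualized ecosystem return: {annualized:.1%}")  # ~8% with these numbers
```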
Furthermore, there’s something interesting here: the modern startup economy feels much more like the Paulian ‘concentration of power’ story than the Yudkowskian ‘wisdom of USENET’ story. Teams that make video games might be able to turn a handful of people into tens of millions of dollars in revenue (or billions in an extreme case), but teams that make self-driving cars mostly have to tell a story about being able to turn billions of investor dollars into teams of engineers and mountains of hardware that can then produce the self-driving cars, with the race between companies being not “who has a better product already” but “who can better acquire the means to create a better product.”
I’m pretty sympathetic to the view that the first transformative intelligence will look more like a breakout indie game than an AAA title, because there’s some new ‘gimmick’ that can be made by a small team and that has an outsized impact on usefulness. But it seems important to note that a lot of the economy doesn’t work that way, even lots of the ‘build speculative future tech’ part!
I don’t see what this has to do with risk-return. Sure, many startups fail. And plausibly many people tried to build an airplane and failed before the Wright brothers. And many people keep trying to build AGI and failing. This doesn’t mean there won’t be kinks in AI progress, or even a TAI created by a small group.
Saying that “the subjective expected value of AI progress over time is a smooth curve” is a very different proposition from “the actual AI progress over time will be a smooth curve”.
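One minimal way to see how those two statements can come apart (a standard toy example, not a model of AI progress): a Poisson counting process has a perfectly smooth expected value over time, even though every realized trajectory moves in discrete jumps.

```python
# Standard toy example: for a Poisson counting process, the expectation
# E[N_t] = lam * t is a smooth function of time, while every individual
# sample path is a step function that jumps by 1 at random times.
import random

lam = 1.0        # jump rate
t = 7.0          # time at which we inspect the process
paths = 10_000   # number of realizations to average over

def jumps_by(t: float) -> int:
    """Count the jumps in one realization up to time t (each path is jumpy)."""
    count, clock = 0, 0.0
    while True:
        clock += random.expovariate(lam)  # exponential waiting time between jumps
        if clock > t:
            return count
        count += 1

empirical_mean = sum(jumps_by(t) for _ in range(paths)) / paths
print(f"smooth expectation lam * t = {lam * t:.2f}")
print(f"mean over {paths} jumpy sample paths ≈ {empirical_mean:.2f}")
```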
My line of argument here is not trying to prove a particular story about AI progress (e.g. “TAI will be similar to a startup”) but to push back against (/ voice my confusions about) the confidence level of predictions made by Christiano’s model.
What is the confidence level of predictions you are pushing back against? I’m at like 30% on fast takeoff in the sense of “1 year doubling without preceding 4 year doubling” (a threshold roughly set to break any plausible quantitative historical precedent [edit: a threshold intended to be faster than historical precedent, but probably similar to the agricultural revolution sped up 10,000x]). I’m at maybe 10-20% on the kind of crazier world Eliezer imagines.
Is that a high level of confidence? I’m not sure I would be able to spread my probability in a way that felt unconfident (to me) without giving probabilities that low to lots of particular ways the future could be crazy. E.g. 10-20% is similar to the probability I put on other crazy-feeling possibilities like no singularity at all, rapid GDP acceleration with only moderate cognitive automation, or a singleton that arrests economic growth before we get to 4 year doubling times...
I’m at like 30% on fast takeoff in the sense of “1 year doubling without preceding 4 year doubling” (a threshold roughly set to break any plausible quantitative historical precedent).
Huh, AI Impacts looked at one dataset of GWP (taken from Wikipedia, in turn taken from here) and found 2 precedents for “x year doubling without preceding 4x year doubling”, roughly during the agricultural revolution. The dataset seems to be a combination of lots of different papers’ estimates of human population, plus an assumption of ~constant GWP/capita early in history.
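For concreteness, here’s a rough sketch of the kind of check this involves, under one possible reading of “x-year doubling without preceding 4x-year doubling”, run on an invented toy series rather than the actual dataset:

```python
# Rough sketch of one way to operationalize the check, on a made-up toy series
# of (year, GWP in arbitrary units), not the actual AI Impacts / Wikipedia data.
# Results are sensitive to interpolation and to how "preceding" is read; here a
# fast-takeoff-style precedent means some <=x-year doubling starts before any
# <=4x-year doubling has completed.

def doubling_intervals(series, window):
    """All (start, end) pairs with end - start <= window and at least a 2x rise."""
    return [
        (year_start, year_end)
        for year_end, gwp_end in series
        for year_start, gwp_start in series
        if 0 < year_end - year_start <= window and gwp_end >= 2 * gwp_start
    ]

def has_fast_precedent(series, x):
    """True if some x-year doubling is not preceded by a completed 4x-year doubling."""
    slow_ends = [end for _, end in doubling_intervals(series, 4 * x)]
    return any(
        not any(end < start for end in slow_ends)
        for start, _ in doubling_intervals(series, x)
    )

# Purely illustrative numbers, not historical estimates:
toy_gwp = [(-10000, 1.0), (-8000, 1.3), (-6000, 2.1), (-5500, 4.5), (-5000, 9.0)]
print(has_fast_precedent(toy_gwp, x=500))  # True: a 500-year doubling shows up
                                           # before any 2000-year doubling finishes
```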
Yeah, I think this was wrong. I’m somewhat skeptical of the numbers and suspect future revisions will systematically soften those accelerations, but 4x still won’t look that crazy.
(I don’t remember exactly how I chose that number, but it probably involved looking at the same time series, so it wasn’t designed to be much more abrupt.)