Lots of other things:
- Are we imagining a small team of hackers in their basement trying to get AGI on a laptop, or a big corporation using tons of resources?
- How does the AGI learn about the world? If you say “it reads the Internet”, how does it learn to read?
- When the developers realize that they’ve built AGI, is it still possible for them to pull the plug?
- Why doesn’t the AGI try to be deceptive in ways that we can detect, the way children do? Is it just immediately as capable as a smart human and doesn’t need any training? How can that happen by just “finding the right architecture”?
- Why is this likely to happen soon when it hasn’t happened in the last sixty years?
I suspect answers to these will provoke lots of other questions. In contrast, the non-foom worlds that still involve AGI + very fast growth seem much closer to a “business-as-usual” world.
I also think that if you’re worried about foom, you should basically not care about any of the work being done at DeepMind / OpenAI right now, because that’s not the kind of work that can foom (except in the “we suddenly find the right architecture” story); yet I notice that lots of doomy predictions about AGI are being driven by DM / OAI’s work. (Of course, plausibly you think OpenAI / DM are not going to succeed, even if others do.)
I’m going to start a fresh thread on this; it sounds more interesting (at least to me) than most of the other stuff being discussed here.