A thing that didn’t appear on your list, and which I think is pretty important (cruxy for a lot of discussions; closest to what Hanson meant in the FOOM debate), is “human-relative discontinuity/speed”. Here the question is something like: “how much faster does AI get smarter, compared to humans?”. There’s conceptual confusion / talking past each other in part because one aspect of the debate is:
how much locking force (coupling) there is between AI and humans (e.g. humans can learn from AIs teaching them, can learn from AIs’ internals, can use AIs, and humans share ideas with other humans about AI (this was what Hanson argued))
and another aspect is
how fast an intelligence explosion goes by the stars (sidereal time).
If you think there’s not much coupling, then sidereal speed is the crux about whether takeoff will look discontinuous. But if you think there’s a lot of coupling, then you might think something else is a crux about continuity, e.g. “how big are the biggest atomic jumps in capability”.
What does this cash out to, in terms of which terms you think make sense?
Not sure I understand your question. If you mean just what I think is the case about FOOM:
Obviously, there’s no strong reason humans will stay coupled with an AGI. The AGI’s thoughts will be highly alien—that’s kinda the point.
Obviously, new ways of thinking recursively beget powerful new ways of thinking. This is obvious from the history of thinking and from introspection. And obviously this goes faster and faster. And obviously it will go much faster in an AGI.
Therefore, from our perspective, there will be a fast-and-sharp FOOM.
But I don’t really know what to think about Christiano-slow takeoff.
I.e. a complete 4-year GDP doubling before the first 1-year GDP doubling.
I think Christiano agrees that there will later be a sharp/fast/discontinuous(??) FOOM, but he thinks things will get really weird and fast before that point. To me this is vaguely in the genre of trying to predict whether you can usefully get nuclear power out of a pile without setting off a massive explosion, when you’ve only heard conceptually about the idea of nuclear decay. But I imagine Christiano actually did some BOTECs to get the numbers “4” and “1” (a rough sense of what those doubling times mean as growth rates is sketched below).
If I were to guess at where I’d disagree with Christiano: Maybe he thinks that in the slow part of the slow takeoff, humans can make a bunch of progress on aligning / interfacing with / getting work out of AI stuff, to such an extent that from those future humans’ perspectives, the fast part of the slow takeoff will actually be slow, in the relative sense. In other words, if the fast part came today, it would be fast, but if it came later, it would be slow, because we’d be able to keep up. Whereas I think aligning/interfacing, in the part where it counts, is crazy hard, and doesn’t especially have to be coupled with nascent-AGI-driven capabilities advances. A lot of Christiano’s work has (explicitly) a strategy-stealing flavor: if capability X exists, then we / an aligned thingy should be able to steal the way to do X and do it alignedly. If you think you can do that, then it makes sense to think that our understanding will be coupled with AGI’s understanding.
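For scale, here’s a minimal sketch of the doubling-time arithmetic (my own illustration of what those doubling times mean as constant annual growth rates, not Christiano’s actual BOTEC):

```python
# Toy arithmetic only: the constant annual growth rate implied by a given
# GDP doubling time (illustrative; not Christiano's model).

def annual_growth_rate(doubling_time_years: float) -> float:
    """Constant annual growth rate that doubles output in `doubling_time_years`."""
    return 2 ** (1 / doubling_time_years) - 1

for years in (4, 1):
    print(f"{years}-year doubling ≈ {annual_growth_rate(years):.0%} annual GDP growth")
# 4-year doubling ≈ 19% annual GDP growth
# 1-year doubling ≈ 100% annual GDP growth
```

So even the “slow” regime implies sustained world growth several times faster than recent historical rates, before the 1-year-doubling regime begins.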
I meant: do you think it’s good, bad, or neutral that people use the phrases ‘slow’/‘fast’ takeoff? And, if bad, what do you wish people did instead in those sentences?
Depends on context; I guess, by raw biomass, it’s bad, because those phrases probably indicate that people aren’t really thinking, and they should taboo those phrases and ask why they wanted to discuss them. But if that’s the case and they haven’t already done that, maybe there’s a more important underlying problem, such as Sinclair’s razor.