Re (a): I looked at chapters 4 and 5 of Superintelligence again, and I can kind of see what you mean, but I’m also frustrated that Bostrom seems really non-committal in the book. He lists a whole bunch of possibilities but then doesn’t seem to actually come out and give his mainline visualization/“median future”. For example, he looks at historical examples of technology races and compares how much lag there was, which seems a lot like the kind of thinking you are doing, but then he also says things like “For example, if human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without even touching the intermediary rungs.” which sounds like the deep math view. Another relevant quote:
Building a seed AI might require insights and algorithms developed over many decades by the scientific community around the world. But it is possible that the last critical breakthrough idea might come from a single individual or a small group that succeeds in putting everything together. This scenario is less realistic for some AI architectures than others. A system that has a large number of parts that need to be tweaked and tuned to work effectively together, and then painstakingly loaded with custom-made cognitive content, is likely to require a larger project. But if a seed AI could be instantiated as a simple system, one whose construction depends only on getting a few basic principles right, then the feat might be within the reach of a small team or an individual. The likelihood of the final breakthrough being made by a small project increases if most previous progress in the field has been published in the open literature or made available as open source software.
Re (b): I don’t disagree with you here. (The only part that worries me is, I don’t have a good idea of what percentage of “AI safety people” shifted from one view to the other, whether there were also new people with different views coming into the field, etc.) I realize the OP was mainly about failure scenarios, but it did also mention takeoffs (“takeoffs won’t be too fast”) and I was most curious about that part.
I also wish I knew what Bostrom’s median future was like, though I perhaps understand why he didn’t put it in his book—the incentives all push against it. Predicting the future is hard and people will hold it against you if you fail, whereas if you never try at all and instead say lots of vague prophecies, people will laud you as a visionary prophet.
Re (b): cool, I think we are on the same page then. Re takeoff being too fast: I think a lot of people these days think there will be plenty of big scary warning shots and fire alarms that motivate lots of people to care about AI risk and take it seriously. That suggests that a lot of people expect a fairly slow takeoff, slower than I think is warranted. It might happen, yes, but I don’t find Paul & Katja’s arguments that convincing that takeoff will be this slow. It’s a big source of uncertainty for me though.