These three are the combo that seems, to me, better modeled as something different from “the economy just doing its thing, but acceleratingly”.
I don’t see this.
And why is “arbitrary learning capacity” a discrete thing? I’d think the important thing is that future systems will learn radically faster than current systems and be able to learn more complex things, but still won’t learn infinitely faster or be able to learn arbitrarily complex things (in the same ways that humans can’t). Why wouldn’t these parameters increase gradually?
A thought: you’ve been using the phrase “slow takeoff” to distinguish your model vs the MIRI-ish model, but I think the relevant phrase is more like “smooth takeoff vs sharp takeoff” (where the shape of the curve changes at some point)
But your other comment + Robby’s has me convinced that the key disagreement doesn’t have anything to do with smooth vs sharp takeoff either. It just happens to be a point of disagreement without being an important one.
Not sure if this is part of the confusion/disagreement, but by “arbitrary” I mean “able to learn ‘anything’” as opposed to “able to learn everything arbitrarily fast/well.” (i.e. instead of systems tailored to learn specific things like we have today, a system that can look at the domains that it might want to learn, choose which of those domains are most strategically relevant, and then learn whichever ones seem highest priority)
(The thing clearly needs to be better than a chimp at general-purpose learning. It’s not obvious to me that it needs any particular equivalent IQ for this to start changing the nature of technological progress, but it probably needs to be at least equivalent IQ 80, and maybe IQ 100, at least in some domains before it transitions from ‘cute science fair project’ to ‘industry-relevant’)