Quite interesting. Thanks for that response.

And yes, this does seem quite consistent with Ord’s framing. E.g., he writes “my estimates above incorporate the possibility that we get our act together and start taking these risks very seriously.” So I guess I’ve seen it presented this way at least that once, but I’m not sure I’ve seen it made explicit like that very often (and doing so seems useful and retrospectively-obvious).
But if we just exerted a lot more effort (i.e. “surprisingly much action”), the extra effort probably doesn’t help much more than the initial effort, so maybe… 1 in 25? 1 in 30?
Are you thinking roughly that (a) returns diminish steeply from the current point, or (b) that effort will likely ramp up a lot in future and pluck a large quantity of the low-hanging fruit that currently remains, such that even more ramping up would face steeply diminishing returns?
That’s a vague question, and may not be very useful. The motivation for it is that I was surprised you saw the gap between business as usual and “surprisingly much action” as being as small as you did, and wonder roughly what portion of that is about you thinking additional people working on this won’t be very useful, vs thinking that very useful additional people will eventually jump aboard “by default”.
More like (b) than (a). In particular, I’m thinking of lots of additional effort by longtermists, which probably doesn’t result in lots of additional effort by everyone else, which already means that we’re scaling sublinearly. In addition, you should then expect diminishing marginal returns to more research, which lessens it even more.
Also, I was thinking about this recently, and I am pretty pessimistic about worlds with discontinuous takeoff, which should maybe add another ~5 percentage points to my risk estimate conditional on no intervention by longtermists, and ~4 percentage points to my unconditional risk estimate.
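As a toy illustration of the sublinear-scaling and diminishing-marginal-returns point above (the logarithmic curve and all of the numbers in it are purely hypothetical, not anything claimed in this exchange), here is a minimal Python sketch:

```python
import math

def risk_reduction(effort):
    # Hypothetical concave (logarithmic) returns curve: later units of effort
    # buy less risk reduction than earlier ones.
    return 0.02 * math.log(1 + effort)

for effort in [1, 2, 4, 8]:
    marginal = risk_reduction(effort) - risk_reduction(effort - 1)
    print(f"effort {effort}: marginal risk reduction from the last unit = {marginal:.3%}")
```

Under any concave curve like this, ramping effort up a lot still helps, but each further ramp-up buys noticeably less than the one before it.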
Interesting (again!).

So you’ve updated your unconditional estimate from ~5% (1 in 20) to ~9%? If so, people may have to stop citing you as an “optimist”… (which was already perhaps a tad misleading, given what the 1 in 20 was about)
(I mean, I know we’re all sort-of just playing with incredibly uncertain numbers about fuzzy scenarios anyway, but still.)
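To make the arithmetic behind that question explicit, using only the figures quoted in this exchange (the ~5%, i.e. 1 in 20, unconditional estimate and the ~4 percentage points mentioned above):

```python
# Arithmetic check on the update discussed above.
unconditional_before = 1 / 20        # ~5% (1 in 20)
added_points = 0.04                  # ~4 percentage points from discontinuous-takeoff pessimism
unconditional_after = unconditional_before + added_points

print(f"before: {unconditional_before:.0%}")  # 5%
print(f"after:  {unconditional_after:.0%}")   # 9%, i.e. roughly 1 in 11
```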
I wouldn’t be surprised if the median number from MIRI researchers was around 50%. I think the people who cite me as an optimist are people with those background beliefs. I think even at 5% I’d fall on the pessimistic side at FHI (though certainly not the most pessimistic; e.g. Toby is more pessimistic than I am).
It may be useful.

‘Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it’s only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.’ — Luke Muehlhauser, https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/
‘In terms of falsifiability, if you have an AGI that passes the real no-holds-barred Turing Test over all human capabilities that can be tested in a one-hour conversation, and life as we know it is still continuing 2 years later, I’m pretty shocked. In fact, I’m pretty shocked if you get up to that point at all before the end of the world.’ — Eliezer Yudkowsky, https://www.econlib.org/archives/2016/03/so_far_my_respo.html