So, I do think I definitely have some confirmation bias here – I know because the first thing I thought when I saw it was “man, this sure looks like the thing Eliezer was complaining about,” and it was a while later, thinking it through, that I was like “this does seem like it should make you really doomy about any agent-foundations-y plans, or other attempts to sidestep modern ML and cut towards ‘getting the hard problem right on the first try.’”
I did (later) think about that a bunch and integrate it into the post.
I don’t know whether I think it’s reasonable to say “it’s additionally confirmation-bias-indicative that the post doesn’t talk about general doom arguments.” As Eli says, the post is mostly observing a phenomenon that seems more about planmaking than general reasoning.
(fwiw my own p(doom) is more like ‘I dunno man, somewhere between 10% and 90%, and I’d need to see a lot of things going concretely right before my emotional center of mass shifted below 50%’)
Thanks for clarifying. I do agree with the broader point that one should have a sort of radical uncertainty about (e.g.) a post-AGI world. I’m not sure I agree it’s a big issue to leave that out of any given discussion, though, since that uncertainty mostly shifts probability mass from any particular describable outcome to the big “anything can happen” area.
(This might be what people mean by “Knightian uncertainty”?)
Yes, I’m saying it’s a reasonable conclusion to draw, and the fact that it isn’t drawn here is indicative of a kind of confirmation bias.