So, I definitely think I’ve got some confirmation bias here – I know because the first thing I thought when I saw it was “man, this sure looks like the thing Eliezer was complaining about,” and it was a while later, thinking it through, that I was like “this does seem like it should make you really doomy about any agent-foundations-y plans, or other attempts to sidestep modern ML and cut towards ‘getting the hard problem right on the first try.’”
I did (later) think about that a bunch and integrate it into the post.
I don’t know whether it’s reasonable to say “it’s additionally confirmation-bias-indicative that the post doesn’t talk about general doom arguments.” As Eli says, the post is mostly observing a phenomenon that seems more about planmaking than general reasoning.
(fwiw my own p(doom) is more like ‘I dunno man, somewhere between 10% and 90%, and I’d need to see a lot of things going concretely right before my emotional center of mass shifted below 50%’)