Yes, this is all reasonable, but as a description of Eliezer’s behavior as understood by him, and also as understood by, like, an ordinary person, “doesn’t have a workable alignment plan, so he decided to just live off our donations, running out the clock” is just… totally wrong… isn’t it?
That is, that characterization doesn’t match what Eliezer sees himself as doing, nor does it match how an ordinary person (and one who had no particular antipathy toward Eliezer, and thus was not inclined to describe his behavior uncharitably, only impartially), speaking in ordinary English, would describe Eliezer as doing—correct?
Yes, that is my belief. (Sorry, I should have said that explicitly.) I’m not sure what an ‘ordinary person’ should think, because ‘AI is dangerous’ has a lot of moving pieces and I think most people are (kinda reasonably?) epistemically helpless about the situation. But I do think iceman’s summary is basically obviously false, yes.
My own current belief is “Eliezer/MIRI probably had something-like-a-plan around 2017, probably didn’t have much of a plan by 2019 that Eliezer himself believed in, but ‘take a break, and then come back to the problem after thinking about it’ feels to me like a totally reasonable thing to do”. (And meanwhile there were still people at MIRI working on various concrete projects that at least the people involved thought were worthwhile.)
i.e. I don’t think MIRI “gave up”
I do think, if you don’t share Eliezer’s worldview, it’s reasonable to be suspicious and hypothesize that MIRI’s current activities are some sort of motivated-cognition-y cope, but confidently asserting that seems wrong to me. (I also think there’s a variety of worldviews that aren’t Eliezer’s exact worldview under which his actions are still pretty coherent, and I think it’s a pretty sketchy position to assert that all those nearby worldviews are so obviously wrong as to make ‘motivated cope/fraud’ your primary frame.)