I was given ridiculous statements and assignments, including the claim that MIRI already knew about a working AGI design, and that it would not be that hard for me to come up with a working AGI design on short notice just by thinking about it, without being given hints.
There’s a huge gulf between “AGI idea that sounds like it will work to the researcher who came up with it” and “AGI idea that actually works when tried.” Like, to the point where AGI researchers having ideas they’re insanely overconfident in, which don’t work when tried, is almost a trope. How much of an epistemic problem this is depends on how one responds to the outside view, I think. If you remove the bluster, “working AGI design” turns into “research direction that sounds promising”. I do think that, if it’s going to be judged by the “promising research direction” standard rather than the “will actually work when tried” standard, then coming up with an AGI design is a pretty reasonable assignment?
I agree that coming up with a “promising research direction” AI design would have been a reasonable assignment. However, such a research direction, if found, wouldn’t provide significant evidence for Nate’s claim that “the pieces to make AGI are already out there and someone just has to put them together”, since such research directions have been found throughout the AI field without correspondingly short AI timelines.