Yeah, sorry about not owning that more, and for the frame being muddled. I don’t endorse the “asking Eliezer” or “agreeing with Eliezer” bits, but I do basically think he’s right about many of the object-level problems he identifies (and thus people disagreeing with him about that is not a feature), and I think ‘security mindset’ is the right orientation to have towards AGI alignment. That hypothesis is a ‘worry’ primarily because asymmetric costs mean it’s more worth investigating than the raw probability would suggest. [Though the raw probabilities of its components do feel pretty substantial to me.]
[EDIT: I should say I think ARC’s approach to ELK seems like a great example of “people breaking their own proposals”. As additional data to update on, I’d be interested in seeing, like, a graph of people’s optimism about ELK over time, or something similar.]