I also think I naturally interpreted the terms in Adam’s comment as pointing to specific clusters of work in today’s world, rather than universal claims about all work that could ever be done. That is, when I see “experimental work and not doing only decision theory and logic”, I automatically think of “experimental work” as pointing to a specific cluster of work that exists in today’s world (which we might call mainstream ML alignment), rather than “any information you can get by running code”. Whereas it seems you interpreted it as something closer to “MIRI thinks there isn’t any information to get by running code”.
My brain insists that my interpretation is the obvious one, and is confused about how anyone (within the AI alignment field, who knows about the work that is being done) could interpret it as the latter. (Although the existence of non-public experimental work that isn’t mainstream ML is a good candidate for how you would start to interpret “experimental work” as the latter.) But this seems very plausibly a typical mind fallacy.
EDIT: Also, to explicitly say it, sorry for misunderstanding what you were trying to say. I did in fact read your comments as saying “no, MIRI is not categorically against mainstream ML work, and MIRI is not only working on HRAD-ish stuff like decision theory and logic, and furthermore this should be pretty obvious to outside observers”, and now I realize that is not what you were saying.
^ This response is great.