(I tried writing up comments here as if I were commenting on a google doc, rather than a LW post, as part of an experiment I had talked about with AdamShimi. I found that it was actually fairly hard – both because I couldn’t make quick comments on a given section without it feeling like a bigger deal than I meant it to be, and also because the overall thing came out feeling more critical than feels right on a public post. This is ironic since I was the one who told Adam “I bet if you just ask people to comment on it as if it’s a google doc it’ll go fine.”)
((I started writing this before the exchange between Adam and Daniel and am not sure if it’s redundant with that))
I think my primary comment (perhaps similar to Daniel’s) is that it’s not clear to me that this post is arguing against a position that anyone holds. What I hear lots of people saying is “AI Alignment is preparadigmatic, and that makes it confusing and hard to navigate”, which is fairly different from the particular solution of “and we should get to a point of paradigmaticness soon”. I don’t know of anyone who seems to me to be pushing for that.
(fake edit: Adam says in another comment that these arguments have come up in person, not online. That’s annoying to deal with for sure. I’d still put some odds on there being some kind of miscommunication going on, where Adam’s interlocutor says something like “it’s annoying/bad that alignment is pre-paradigmatic”, and Adam assumed that meant “we should get it to the paradigmatic stage ASAP”. But it’s hard to have a clear opinion here without knowing more about the side-channel arguments. In the world where there are people saying in private “we should get Alignment to a paradigmatic state”, I think it might be good to have some context-setting in the post explaining that.)
...
I do agree with the stated problem of “in order to know how to respond to a given Alignment post, I kinda need to know who wrote it and what sort of other things they think about, and that’s annoying.” But I’m not sure what to do with that.
The two solutions I can think of are basically “have the people with established paradigms spend more time clarifying their frame” (for the benefit of people who don’t already know), and “have new up-and-coming people put in more time clarifying their frame” (also for the benefit of those who don’t already know, but in this case many fewer people know them, so it’s more obviously worth it).
Something about this feels bad to me because I think most attempts to do it will create a bunch of boilerplate cruft that makes the posts harder to read. (This might be a UI problem that LessWrong should try to fix.)
I read the OP as suggesting:

- a meta-level strategy of “have a shared frame for framing things, which we all pay a one-time cost of getting up to speed with”
- a specific shared-frame of “1. Defining the terms of the problem, 2. Exploring these definitions, 3. Solving the now well-defined problem.”

I think these things are… plausibly fine, but I don’t know how strongly to endorse them because I feel like I haven’t gotten to see much of the space of possible alternative shared-frames.
On the meta-side: an update I made writing this comment is that inline-google-doc-style commenting is pretty important. It allows you to tag a specific part of the post and say “hey, this seems wrong/confused” without making that big a deal about it, whereas writing a LW comment you sort of have to establish the context, which intrinsically means making it into A Thing.