I agree that Zvi made a technical error in the conclusion, in a way that reliably caused misinterpretation towards construing things as calls to action, and that it was good to point this out. Nothing amiss here.
This summary seems wrong or confused or confusing to me.
What is the actual error you have in mind? (I myself have made a couple of different criticisms of the post, but I’m not sure any of them fits your description of a “minor technical error” that “reliably caused misinterpretation towards construing things as calls to action”.)
“Call to action” is apparently a loaded term with negative connotations among the mods and perhaps others here (which I wasn’t previously aware of). Are you using it in this derogatory sense or some other sense?
Zvi himself has confirmed that his original conclusion was intended as a call to action, albeit an “incidental” one. Why do you keep saying that there wasn’t a call to action, and that “call to action” is a misinterpretation?
But, the fact that this minor technical error was so important relative to the rest of the post is, itself, a huge red flag that something is wrong with our discourse, and we should be trying to figure that out if we think something like FAI might turn out to be important.
I believe there have been several different layers of confusion happening in this episode (and they may still be happening), which has contributed to the large number of comments written about it, and maybe to a sense that it’s more important than the rest of the post. Also, again, depending on exactly what you mean, I’m not sure I’d agree with “minor technical error”. It seems like some of my own criticisms of the post were actually fairly substantial. Combined with the aforementioned confusions, and the fact that disagreements naturally generate more discussion than agreements, I don’t understand why you think there is a “huge red flag that something is wrong with our discourse” here. I wanted to disengage because I’m not sure continuing to participate in this debate (including trying again to fully resolve all the layers of confusion) is the best use of my time, but I’m happy to listen to you explain more if you still think this is actually important or relevant to FAI.
I think the “call to action” issue is important for bigger reasons than LessWrong’s governance, but I’ll taboo the phrase for now.
It seems to me like the default paradigm, including in Rationalist circles, has increasingly become the following: Words are not communicative unless they are commands. Anything that does not terminate in a command, a “pitch,” or something in that class is therefore construed as unclear.
The relevance to FAI is that any group trying to design one (or really design anything substantively new from first principles) needs to be able to have internal communication that is really, really robustly not made out of telling each other to do specific things. And it seems like the default expectation, including in Rationalist circles, has increasingly become that words are not communicative unless they are commands.
None of the work people were doing several years ago on decision theory was like this.
Here’s why I interpreted Zvi’s rhetoric as a technical error. In another comment, when I asked you:
What’s the model whereby a LessWrong post ought to have a “takeaway message” or “call to action”?
You replied:
I was trying to figure out what “Let us all be wise enough to aim higher.” was intended to mean. It seemed like it was either a “takeaway message” (e.g., suggestion or call to action), or an applause light (i.e., something that doesn’t mean anything but sounds good), hence my question.
I took this to mean that, but for this sentence (which I took to be a superfluous, conclusion-flavored ending, and which Zvi agrees wasn’t part of the core content of the post), you wouldn’t have focused on the question of what specific actions the post was asking the reader to perform. Was I misreading you? If so, what did I get wrong? I want to check on that before I say more.
The relevance to FAI is that any group trying to design one (or really design anything substantively new from first principles) needs to be able to have internal communication that is really, really robustly not made out of telling each other to do specific things. And it seems like the default expectation, including in Rationalist circles, has increasingly become that words are not communicative unless they are commands.
I haven’t seen this myself. If you want, I can point you to any number of posts on the Alignment Forum that are not made out of telling each other to do specific things. Can you give some examples of what you’ve seen that made you say this?
None of the work people were doing several years ago on decision theory was like this.
Again, I’m not really seeing this now either.
I took this to mean that, but for this sentence (which I took to be a superfluous, conclusion-flavored ending, and which Zvi agrees wasn’t part of the core content of the post), you wouldn’t have focused on the question of what specific actions the post was asking the reader to perform.
Probably, but I’m not totally sure. I guess unless the (counterfactual) conclusion said something to the effect of “This seems bad, and I’m not sure what to do about it,” I might have asked something like “The models in this post don’t seem to include enough gears for me to figure out what I can do to help with the situation. Do you have any further thoughts about that?” And then maybe he would have said “I’m still trying to figure that out,” in which case the conversation would have ended, or maybe he would have said “I think we should try not to use asymmetrical mental point systems unless structurally necessary,” and then we would have had the same debate about whether that implication is justified or not.
(I’m not sure where this line of questioning is leading… Also, I still don’t understand why you’re calling it a “technical” error. If the mistake was writing a superfluous, conclusion-flavored ending, wouldn’t “rhetorical” or “presentation” error be more appropriate? What is technical about the error?)
I can point you to any number of posts on the Alignment Forum that are not made out of telling each other to do specific things.
My understanding from personal experience and the reports of people at MIRI is that MIRI isn’t even using very basic decision theory or AI alignment results in practice. I’m not doubting that people are still participating in a kind of dissociated discourse that doesn’t affect actions, and separately that they do a thing where they try to use words to compel actions from others. The problem is that the former seems to be increasingly just for show, and not predictive of behavior the way you’d expect if stated preferences and models were accurate.