Thank you. I agree that the definition isn’t perfect and can likely be improved. “Freedom provided by having spare resources” isn’t a bad second attempt but I sense we can do better; I will continue to think about the best way to pin this down concisely. Suggestions welcome!
“Freedom by having spare tolerances in your action → utility map”.
Resources are just one set of inputs to the function that maps actions to utility. Slack is what happens when your utility map has a plateau rather than a sharp peak.
I.e., arrange all your available actions on an N-dimensional field, separated by distinguishable salience. Then create an N+1-dimensional graph, where each such action is mapped to the total utility that results from that action. You have lots of slack if your region of maximum utility looks like a plateau—that is, noticeable adjustments to your input don’t perturb you out of your ‘win’—and you have no slack if your region of maximum utility looks like a sharp peak—that is, noticeable adjustments to your input almost instantly perturb you out of your ‘win’ and into a significantly lower-utility part of your action space.
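The plateau-versus-peak picture can be made concrete with a toy sketch (mine, not from the discussion—the `slack` function, the 5% tolerance, and the example utility shapes are all invented for illustration): measure slack as the fraction of the action space whose utility stays near the maximum.

```python
import math

def slack(actions, utility, tolerance=0.05):
    # Toy measure: the fraction of the action space whose utility stays
    # within `tolerance` of the maximum -- i.e. the width of the plateau.
    values = [utility(a) for a in actions]
    u_max = max(values)
    near_max = [u for u in values if u >= u_max - tolerance * abs(u_max)]
    return len(near_max) / len(values)

actions = [i / 100 for i in range(-300, 301)]  # 1-D action space [-3, 3]

def plateau(a):  # flat-topped utility map: lots of slack
    return min(1.0, max(0.0, 1 - max(abs(a) - 2, 0)))

def peak(a):     # sharp spike: almost no slack
    return math.exp(-50 * a * a)

print(round(slack(actions, plateau), 2))  # -> 0.68: most actions stay in the 'win'
print(round(slack(actions, peak), 2))     # -> 0.01: only a sliver does
```

On the flat-topped map, noticeable perturbations of the action leave you near maximum utility; on the spike, almost any perturbation ejects you from the ‘win’ region.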
Suppose we are in state X, and a change Y1 is proposed which will lead to a better state X1. Then suppose the individual with power to implement Y1 replied:
Y1 is not bad, but I had a sense that more is possible. I am waiting for Yj. Suggestions appreciated.
Would you not suspect that there was some deeper irrationality at play—that the individual was at least a motivated sceptic?
By induction the argument in quotes can be applied against any Yi by appealing to an even better Yj. You have just given yourself a fully general counterargument (FGA) against any proposed change.
What is your true rejection of Chris_Leong’s proposed change?
If a better definition is your true rejection, then beware the FGA. Either implement the proposed change, or explicitly state the requirements for a definition you would accept. FGAs are very tempting, and we change our minds less often than we think. Masking your true rejection behind a FGA only exacerbates the problem. Stating explicitly what your criteria are would make your true rejection plainly visible, and destroy the FGA.
I am happy to say more about my thinking and did so in the other comment, but I will say that this comment seemed hostile and uncharitable. (But to be clear, I am 100% not calling in the Sunshine Regiment or even down-voting the comment, I just hope to make this less common in the future by highlighting it).
Responding to promises to put thought into constructive criticisms, and an indication of willingness to change one’s mind, with accusations of fully general counterarguments, suspicions of irrationality and resistance to change, and demands for true rejections, seems to me neither charitable, reasonable nor productive. I’m confused why you pulled out such big guns and essentially accused me of responding in bad faith. If you wanted more detailed thoughts on the definition, you could simply have asked for them (and I gave them anyway).
Chris did not even quite suggest a new definition; I highlighted his phrase as pointing towards a potential better version and was myself pointing out that it wasn’t half bad. I didn’t even reject it at the time, although I do so now on reflection. I definitely wasn’t saying that, although it was better than the original, I was rejecting it because I was holding out for better still. I also promised to think more about it. I hadn’t even rejected Yi yet.
In addition, changing definitions is expensive once you’ve opened a concept to public discussion. It would in fact be perfectly reasonable to say that yes, Yi is better than X, but it is not better enough yet to justify switching, or that we should consider whether there exists Yj before paying switching costs.
It was not my intention to be hostile or overly uncharitable. Furthermore, it seems we have different expectations. Pointing out that you made a fully general counterargument (when that is in fact what you did) is not in my opinion bad behaviour. Pointing out that you come across to me as a motivated sceptic and are hiding your true rejection is not in my opinion bad behaviour. Pointing out that you acted irrationally (insomuch as fully general counterarguments and motivated scepticism are irrational) is not bad behaviour.
I gave an example to highlight the (to me) blatantly obvious irrationality in your reply.
You seem to expect that I interpret your arguments in “good faith”. I do; I assume there was no malice or ill intent behind the rationalist faux pas. I assume it wasn’t intentional and sought to draw it to your attention. I assume your intentions were benevolent; that in my honest opinion is what charity is.
My enemies (not to say you’re one, but I’d rather not alter the saying) are not inherently evil, but neither are they inherently rational. You seem to expect that I interpret your arguments in the most rational way possible? What even? We are aspiring rationalists—getting it right is hard, sometimes we make mistakes, sometimes we commit errors of rationality. We have a bias blind spot that prevents us from seeing our own biases. I’ve seen Scott fall for correspondence bias (well, I wasn’t there when it happened, but I read the comments) and don’t even get me started on Eliezer Yudkowsky. We are not perfect rationalists; we are not above error and bias; we are not beyond logical fallacies. We try to overcome our biases; we have not eliminated them.
I don’t know if you think you’ve already overcome bias, but if so, then we live in entirely different worlds. To suggest that because I point out that you acted irrationally I am being “overly hostile” and “you want to discourage it”? I’m sorry, but this sounds like a blatant attempt to shut down criticism (you seem to be fine with criticism of other areas). (Now, I almost fell for correspondence bias and suggested you might be someone who prides themselves on their rationality, and that me pointing out your irrationality was causing you to lose status, but “correspondence bias”, so I wouldn’t). Also, “big guns” rubs me the wrong way. Arguments are not soldiers; I do not aim to cause you to lose status (I think you’re falling for correspondence bias here). I did not simply want a clarification on the definition, and I did not accuse you of responding in bad faith; I thought you were being irrational, so I called you out on it.
If you think accusations of irrationality are accusations of bad faith, then once again, we live in different worlds. I don’t assume that you are uber rational, that you are so skilled in the way that common errors of rationality are beyond you. I don’t assume that for everyone. Being rational is hard, and I know that I certainly struggle.
Now I’ll be explicit; this is part of my way of good cognitive citizenship, this is my way of proper epistemic hygiene, this is my way of raising the sanity waterline. I love lesswrong, I boast about lesswrong. Anyone who knows me well enough knows about lesswrong. Lesswrong is a safe space for me. I kind of think of it like a garden—my garden. I am happy when I see debates being resolved and people changing their minds. I want the standards of discourse on LW to be as high as possible. It is because I care that I point out irrationality when I see it. If lesswrong were on average more rational, that would grant me positive affect. I feel warm and fuzzy. I am not your enemy. (At least not in the sense it is commonly used. The way I internally refer to enemies and opponents is just assuming everyone is a player in a game of unknown alignment (zero sum, cooperative, etc)).
I’ll be charitable; I’ll assume this was not a deliberate effort to shut down criticism. I’ll assume that you do not think yourself to be beyond bias. I’ll assume that there was a situational cause behind your attempt to suppress my criticism.
I’ll assume my opponents are angels if that’s the norm promoted here—but assume my opponents are rational? Why, I’ve never heard a sillier idea.
Now as for the implied threat to downvote my post. Well, in the spirit of charity, I’ll assume this was not a deliberate threat, was not an indication of willingness to do such, and was a slip. I’m sorry, but if calling in the “sunshine regiment” is an option that is even in your search space and/or sufficiently salient that you felt the need to stress that you wouldn’t do it in response to constructive criticism (and my criticism was constructive; I specified viable courses of action for you to take)? Then I don’t even.
But worse still, if the sunshine regiment was actually an option—if there would actually have been moderator action on my post because I pointed out that you were being irrational? Well, then that would break my heart. I really love lesswrong and go above and beyond to recommend it. But if that was truly a viable option for you, then it would be heartbreaking. It would seem lesswrong wasn’t what I thought it was, wasn’t what I hoped it would be.
P.S: I think I’ll write a post titled “Beware the fully general counterargument” soon. Permission to use this exchange as part of the post?
There is a reason why I don’t think it’s perfectly valid, and that involves “shifting the goalposts”, “fully general counterarguments”, “motivated scepticism”, “hiding your true rejection”, and possibly other biases/fallacies. I think I’ll possibly throw in “proving too much”; I’ll try to do it justice in my post.
P. P. S: I did not consider switching costs. I think that even if you do, because of the failures of rationality I highlighted above, you should explicitly specify your requirements for Yj.
I think you are being unreasonable. I mean, really unreasonable.
First, let’s look at the original exchange. Chris says “I think your definition isn’t quite right and X would be an improvement”. Zvi says “You might be right, but I think one can do still better; I’ll think about it and am open to suggestions”. And you say: That’s a fully general counterargument; Zvi should immediately have rewritten his article to use Chris’s proposed definition, and not doing so shows that he is engaging in motivated skepticism and not offering his true rejection.
Sorry, but that’s just silly. Rewriting the article in line with a slightly modified definition would be a pile of work; if Zvi thinks the slightly modified definition is neither a huge improvement nor the best one can do, it is absolutely reasonable for him to look and see if something better can be found. (In your more recent comment you say “I did not consider switching costs”. OK then: so you didn’t consider the obviously most important factor here but were still happy to leap from “Zvi didn’t do so-and-so” to “Zvi is being a motivated skeptic, hiding his true rejection, etc., etc.”. What the hell?)
Even if in fact nothing better can be found, it is absolutely not reasonable to expect that every time A finds a suboptimality in something B has written B should do anything more than acknowledge it. Does Chris’s refinement of Zvi’s definition invalidate everything Zvi wrote? It doesn’t seem like it. So if Zvi’s article was useful before, it’s still useful now. Leaving it as it is, with one prominent comment describing that refinement and an agreement from Zvi right there under it, will do just fine.
And what’s this “fully general counterargument” business? First of all, Zvi wasn’t making a counterargument (nor purporting to make one). He agreed with what Chris said, after all. It would be a counterargument if Chris had added something like ”… so you should rewrite your article to use my revised definition”, but he didn’t so it isn’t. Second, being “fully general” doesn’t make things wrong, it makes them incomplete. There’s nothing wrong with saying “You say X. I disagree.” even though “I disagree” is a thing anyone can say about anything. It can provide useful context for an actual argument that follows; it can simply be a statement of the speaker’s position, when providing that information is worth the small effort it takes but engaging in a detailed explanation of why isn’t worth the much larger effort that takes; move it one step closer to what Zvi wrote (“You say X. I disagree, but I don’t have my reasons perfectly straight in my mind yet”) and it’s a placeholder for a later more detailed explanation. Nothing wrong with any of that.
Your most recent comment says a thing or two about correspondence bias, so I’d like to draw your attention to what I find a striking feature of both of your replies to Zvi in this thread: you shift very rapidly from observing a particular (alleged) defect in what he wrote to speculating about how he’s thinking badly. So you go straight from “this is a fully general counterargument” to guessing at particular kinds of bias that might be in Zvi’s head. He’s a motivated skeptic! He isn’t giving his true rejection! And you adopt what seems to me (and I think, from what he says in response, also to Zvi) a needlessly accusatory tone.
You could e.g. just have written this: “What do you find still unsatisfactory about Chris’s proposed refinement of your definition, and what are you looking for in suggestions for further improvements? What would it take to justify revising the article to use a better definition?”. That would have done the same job of soliciting more explicit criteria from Zvi. If you were extra-anxious to point out the logical defect you think you see in what he wrote: “That seems like something one could say to any proposed improvement. What do you find still unsatisfactory [etc.]”. What advantage do you see in the much more confrontational approach you took?
(Your recent comment suggests that the answer is something like “By saying such things I hope to make people examine their biases and reduce them”. Take a look at what’s actually happening here. Does it seem like that’s working well?)
So much for the original exchange. Now let’s look at your more recent comment. It doubles down on the accusatory, confrontational approach. It takes Zvi’s statement that (despite the disapproval he expresses) he isn’t downvoting your comment, and casts it as a “threat to downvote”, which makes no sense to me at all: downvoting a single comment is not an action major enough to be worth threatening, and Zvi’s whole point is that he isn’t doing it. I think you may have misconstrued his reason for saying what he said, which (I am guessing, on the basis that it’s what I have meant when I’ve said similar things in the past) is not at all “I didn’t downvote you, but I could and you’d better watch out in case I do, bwahahahaa” but “I know I’m saying something negative about what you wrote, and you may suspect I’ve downvoted you and feel that as a hostile action; but in fact I haven’t downvoted you and I’d like you to know that in the hope that it will help our interactions be friendlier than they otherwise might be”.
Similarly for the bit about the Sunshine Regiment; the point here is that there is a general policy on LW2 of being nice to one another when there isn’t a cogent reason, and Zvi’s statement that your comment was needlessly hostile could be taken as a suggestion that you be officially censured for it, and so he wanted to make it clear that he doesn’t want that. (This one is less obviously not-a-concealed-threat, but I am still 95% confident that it was not intended as anything of the sort.)
And then you start talking about how it would break your heart if you got censured somehow for being needlessly confrontational, and how Less Wrong is your garden. This attempt to tug at your readers’ heartstrings looks extremely odd alongside the no-fuzziness-allowed demands you’ve been making of Zvi. (Not because there’s anything wrong with feeling strongly about things, of course. But because “if you do X it will make me sad” is generally an even worse counterargument to X than Zvi’s allegedly fully general “I think one can do better than X”.)
Take a look at what you’ve written to Zvi in this thread. Look in particular at all the things that (1) refer explicitly to “opponents” and “enemies” and (2) make uncomplimentary statements or conjectures about Zvi’s intellect or character. And then, please, ask yourself the following question: This “way [you] internally refer to enemies and opponents”—is it actually leading to good outcomes for you? Because I don’t think it is. In so far as this thread is any guide, I think it’s leading you to adopt an approach to discussion that annoys other people and makes them less, not more, receptive to anything useful you may have to say.
You have my full permission to use this exchange if you feel it would be illustrative, and I thank you for asking permission rather than forgiveness. The topic is definitely worthy of more attention.
I also want to make clear that it was not my intent to make any implied threats. Quite the opposite; I was concerned that my reply would be viewed as an attack or call for reaction, and wanted to make it clear I did not want that. Saying “I’m not even going to downvote” was me saying I didn’t think it even rose to that level of disagreement, nothing more.
I also think that we have an even bigger issue with karma than I thought, if a downvote as a threat is even a coherent concept. I mean, wow. I knew numbers on things were powerful, but it seems I didn’t know it enough (I suspect it’s one of those no-matter-how-many-times-you-update-it’s-still-not-enough things, since it’s in the class of “incentives matter”).
Anyway, no offense taken. I am not your enemy in either direction, and I hope we can avoid talking about others as if they are enemies or coming across as if we’re thinking that way. I do think you give that impression quite heavily here, even if unintended, and that this is bad.
I also want to make clear that it was not my intent to make any implied threats. Quite the opposite; I was concerned that my reply would be viewed as an attack or call for reaction, and wanted to make it clear I did not want that. Saying “I’m not even going to downvote” was me saying I didn’t think it even rose to that level of disagreement, nothing more.
Understood.
I also think that we have an even bigger issue with karma than I thought, if a downvote as a threat is even a coherent concept.
Considering I have −12 karma on both posts (possibly from the same people), I think so. The idea of tying certain features to certain karma levels is pretty scary. If criticism is met with downvotes, then even if you don’t care about karma in itself, insomuch as your karma determines what you can and cannot do, you may want to watch what you say.

In favour of the spirit of civilisation, I would like to say a few things:
I don’t think of you as an enemy. “Opponent” is an internal label that (in certain circumstances) I apply to everyone not myself. This does not preclude cooperation.
I may have come across as hostile and accusatory; I apologise. That was not my intention. (I do not apologise for mentioning FGA and motivated cognition (so perhaps you can file this as an apology that is not really an apology), but I apologise for offense received (unless you find the very suggestion of irrationality offensive, in which case I retract my apology (so once again an apology that is not really an apology, or, as I prefer to say, a “conditional apology”)).)
My intention was to be a helpful critic. I was not suggesting that you must change your definition immediately. My suggestions were:
If a better definition is your true rejection, then beware the FGA. Either implement the proposed change, or explicitly state the requirements for a definition you would accept. FGAs are very tempting, and we change our minds less often than we think. Masking your true rejection behind a FGA only exacerbates the problem. Stating explicitly what your criteria are would make your true rejection plainly visible, and destroy the FGA.
More explicitly: If you were indeed holding out for a better definition:
1. Temporarily implement the proposed change (insomuch as it is better than what you originally proposed).
2. Explicitly specify what your requirements are for a new definition. This way, you would precommit to accepting a definition that meets your explicit requirements. You wouldn’t be able to use an FGA to reject any proposed change, and would be able to overcome motivated cognition.
Irrationality is the default. Overcoming motivated cognition and being rational is an active effort; I stand by this statement. We are aspiring rationalists, and we are not beyond errors. We are overcoming bias, and have not overcome bias. When I point out what I perceive as errors in rationality, it’s not meant to be offensive. If my wording makes it offensive, I can optimise that. If you find the very concept of being irrational offensive, then I wouldn’t even bother optimising the wording.
My initial message can be summed up thus:
The reply you gave can be applied to any proposed change, and thus is a fully general counterargument. Be careful how you use it. FGAs support motivated cognition. If actual quality of the definition is not your true rejection, then you would be a motivated sceptic. With an FGA, you can resist any definition proposed no matter how good it is. It is better to remove temptation than to try and overcome it. Thus, it is better if you remove the temptation that the FGA offers. Two ways to do this are:
1. Commit to accepting any definition better than the current one.
2. Explicitly state your criteria for accepting a definition. These criteria must be externally verifiable. This is precommitting to accepting a definition that meets the proposed standards. You’ll be able to remove the temptation of the FGA and motivated cognition.
Based on this:
In addition, changing definitions is expensive once you’ve opened a concept to public discussion. It would in fact be perfectly reasonable to say that yes, Yi is better than X, but it is not better enough yet to justify switching, or that we should consider whether there exists Yj before paying switching costs.
It may be that you find “1” to be an unacceptable choice. I was not aware of that argument when I proposed “1”. Personally, I would still go ahead with “1”, but I’m not you, and we have different value systems. I have not seen a good argument against “2”, and I urge you to enact it.
I am happy to say more after having had time to reflect.
I do like the idea of spare resources quite a bit, but it doesn’t properly encompass the class of things you can be bounded by. I also think that defining via the negative is actually important here. A person who must wear a suit and tie, or the color blue, lacks Slack in this way, and one who is not so forced has it, but it is odd to think of this as spare resources. In many cases the way you retain Slack is to avoid incentive structures that impose constraints, rather than by having resources. Perhaps this is simply a case of not knowing what you’ve got till it’s gone, but I think that getting there would prove confusing.
Another intuition that points against spare resources is that you can substitute the ability to acquire resources for resources. Slack can often come simply from the ability to trade, including trade among things not literally traded (e.g. trading time for money for emotional stability etc etc) or the ability to produce. Again, you can call all such things “resources” or even “spare resources” but that would imply a pretty non-useful and non-intuitive definition of spare and resources, that wouldn’t help explain the concept very well.
That all does suggest freedom might be a good word but I think it’s not right. Certainly Slack implies freedom and freedom requires Slack, but freedom is a very overloaded (and loaded) word that has a lot of meanings that would be misleading. My model of how people think about concepts says that if we use the word freedom in this way, people will pattern match heavily on their current affectations and models of freedom, and won’t grok the concept we’re pointing towards with Slack as easily. I also think there’d be an instinct to think things of the class “oh, that’s just...” and that’s a curiosity stopper.
I don’t understand how your definition is different from freedom.
I’m using “resource” in a very broad sense: something that can be modelled as roughly on a linear scale, with some level beneath which bad things happen (often this is 0, but not always). So emotional stability can be thrown onto a linear scale in very high-level models.
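That broad sense of “resource” can be sketched in a few lines (my own toy model, not anything from the thread—the `resource_slack` name and the example numbers are invented for illustration):

```python
def resource_slack(level, floor):
    # Toy model of a 'resource' in the broad sense above: anything that
    # sits on a rough linear scale with a floor beneath which bad things
    # happen (often 0, but not always). Slack is headroom above the floor.
    return max(level - floor, 0)

# Money and emotional stability alike can be forced into this frame:
print(resource_slack(level=500, floor=200))  # -> 300 units of headroom
print(resource_slack(level=3, floor=5))      # -> 0: already below the floor
```

The point of the broadness is that `level` need not be money; anything crudely orderable with a danger threshold fits the frame, which is also why the frame is so lossy.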
How about the following definition: slack is the range where a quantitative change in behavior does not result in a qualitative change in outcome?
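That proposal admits an equally toy operationalization (a sketch of my own, not from the thread—`slack_range` and the lateness example are invented): slack as how far behavior can vary quantitatively before the qualitative outcome flips.

```python
def slack_range(behaviors, outcome):
    # Toy reading of the proposed definition: slack is the range over
    # which a quantitative change in behavior leaves the qualitative
    # outcome unchanged -- here, the length of the initial run of
    # behaviors sharing the outcome of the first ('default') behavior.
    baseline = outcome(behaviors[0])
    width = 0
    for b in behaviors:
        if outcome(b) != baseline:
            break
        width += 1
    return width

# e.g. leaving for work some minutes later than planned: the outcome
# flips qualitatively ('on time' -> 'late') only past a threshold.
on_time = lambda minutes_late: "on time" if minutes_late <= 15 else "late"
print(slack_range(list(range(0, 60)), on_time))  # -> 16 minutes of slack
```

This matches the plateau picture above: the qualitative outcome is constant exactly where the utility map is flat.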
I am happy to say more about my thinking and did so in the other comment, but I will say that this comment seemed hostile and uncharitable. (But to be clear, I am 100% not calling in the Sunshine Regiment or even down-voting the comment, I just hope to make this less common in the future by highlighting it).
Responding to promises to put thought into constructive criticisms, and an indication of willingness to change one’s mind, with accusations of fully general counterarguments, suspicions of irrationality and resistance to change, and demands for true rejections, seems to me neither charitable, reasonable nor productive. I’m confused why you pulled out such big guns and essentially accused me of responding in bad faith. If you wanted more detailed thoughts on the definition, you could simply have asked for them (and I gave them anyway).
Chris did not even quite suggest a new definition; I highlighted his phrase as pointing towards a potential better version and was myself pointing out that it wasn’t half bad. I didn’t even reject it at the time, although I do so now on reflection. I definitely wasn’t saying that it was better than the original but I was rejecting it because I was holding out for better still. I also promised to think more about it. I hadn’t even rejected Yi yet.
In addition, changing definitions is expensive once you’ve opened a concept to public discussion. It would in fact be perfectly reasonable to say that yes, Yi is better than X, but it is not better enough yet to justify switching, or that we should consider whether there exists Yj before paying switching costs.
It was not my intention to be hostile or overly uncharitable. Furthermore, it seems we have different expectations. Pointing out that you made a fully general counterargument (when that is in fact what you did) is not in my opinion bad behaviour. Pointing out that you come across to me as a motivated sceptic and are hiding your true rejection is not in my opinion bad behaviour. Pointing out that you acted irrationally (insomuch as fully general counterarguments and motivated scepticism are irrational) is not bad behaviour.
I gave an example to highlight the (to me) blatantly obvious irrationality in your reply.
You seem to expect that I interpret your arguments in “good faith”. I do; I assume there was no malice or ill intent behind the rationalist faux pas. I assume it wasn’t intentional and sought to draw it to your attention. I assume your intentions were benevolent; that in my honest opinion is what charity is.
My enemies (not to say you’re one, but I’d rather not alter the saying) are not inherently evil, but neither are they inherently rational. You seem to expect that I interpret your arguments in the most rational way possible? What even? We are aspiring rationalists—getting it right is hard, sometimes we make mistakes, sometimes we commit errors of rationality. We have a bias blind spot that are preventing us from seeing our own biases. I’ve seen Scott fall for correspondence bias (well I wasn’t there when it happened, but I read the comments) and don’t even let me get started on Eliezer Yudkowsky. We are not perfect rationalists; we are not above error and bias; we are not beyond logical fallacies. We try to overcome our bias, we have not eliminated them.
I don’t know if you think you’ve already overcome bias, but if then we live in entirely different worlds. To suggest that because I point out that you acted irrationally I am being “overly hostile” and “you want to discourage it”? I’m sorry, but this sounds like a blatant attempt to shut down criticism (you seem to be fine with criticism of other areas). (Now, I almost fell for correspondence bias and suggested you might be someone who prides themselves in their rationality, and me pointing out your irrationality was causing you to lose status, but “correspondence bias”, so I wouldn’t). Also, “big guns” rubs me the wrong way. Arguments are not soldiers, I do not aim to cause you to lose status (I think your falling for correspondence bias here). I did not simply want a clarification on the definition, I did not accuse you of responding in bad faith, I thought you were being irrational, so I called you out on it.
If you think accusations of irrationality are accusations of bad faith, then once again, we live in different worlds. I don’t assume that you are uber rational, that you are so skilled in the way that common errors of rationality are beyond you. I don’t assume that for everyone. Being rational is hard, and I know that I certainly struggle.
Now I’ll be explicit; this is part of my way of good cognitive citizenship, this is my way of proper epistemic hygiene, this is my way of raising the sanity waterline. I love lesswrong, I boast about lesswrong. Anyone who knows me well enough knows about lesswrong. Lesswrong is a safe space for me. I kind of think of it like a garden—my garden. I am happy when I see debates being resolved and people changing their minds. I want the standards of discourse on LW to be as high as possible. It is because I care that I point out irrationality when I see it. If lesswrong was on the average more rational, then it grants me positive effect. I feel warm and fuzzy. I am not your enemy. (At least not the sense it is commonly used. The way I internally refer to enemies and opponents is just assuming everyone is a player in a game of unknown alignment (zero sum, cooperative, etc)).
I’ll be charitable; I’ll assume this was not a deliberate effort to shut down criticism. I’ll assume that you do not think yourself to be beyond bias. I’ll assume that there was a situational cause behind your attempt to suppress my criticism.
I’ll assume my opponents are angels if that’s the norm promoted here—but assume my opponents are rational? Why I’ve never heard a more silly idea.
Now as for the implied threat to downvote my post. Well, in the spirit of charity, I’ll assume this was not a deliberate threat, was not an indication of willingness to do such, and was merely a slip. I’m sorry, but if calling the “sunshine regiment” is an option that is even in your search space, and/or sufficiently salient that you felt the need to stress that you wouldn’t do it in response to constructive criticism (and my criticism was constructive; I specified viable courses of action for you to take)? Then I don’t even.
But worse still, if the sunshine regiment was actually an option, if there would actually have been moderator action on my post because I pointed out that you were being irrational? Well, then that would break my heart. I really love LessWrong and go above and beyond to recommend it. But if that was truly a viable option for you, then it would be heartbreaking. It would seem LessWrong wasn’t what I thought it was, wasn’t what I hoped it would be.
P.S: I think I’ll write a post titled “Beware the fully general counterargument” soon. Permission to use this exchange as part of the post?
There is a reason why I don’t think it’s perfectly valid, and that involves “shifting the goal posts”, “fully general counterarguments”, “motivated scepticism”, “hiding your true rejection”, and possibly other biases/fallacies. I think I’ll possibly throw in “proving too much”; I’ll try to do it justice in the post.
P.P.S.: I did not consider switching costs. I think that even if you do, because of the failures of rationality I highlighted above, you should explicitly specify your requirements for Yj.
I think you are being unreasonable. I mean, really unreasonable.
First, let’s look at the original exchange. Chris says “I think your definition isn’t quite right and X would be an improvement”. Zvi says “You might be right, but I think one can do still better; I’ll think about it and am open to suggestions”. And you say: That’s a fully general counterargument; Zvi should immediately have rewritten his article to use Chris’s proposed definition, and not doing so shows that he is engaging in motivated skepticism and not offering his true rejection.
Sorry, but that’s just silly. Rewriting the article in line with a slightly modified definition would be a pile of work; if Zvi thinks the slightly modified definition is neither a huge improvement nor the best one can do, it is absolutely reasonable for him to look and see if something better can be found. (In your more recent comment you say “I did not consider switching costs”. OK then: so you didn’t consider the obviously most important factor here but were still happy to leap from “Zvi didn’t do so-and-so” to “Zvi is being a motivated skeptic, hiding his true rejection, etc., etc.”. What the hell?)
Even if in fact nothing better can be found, it is absolutely not reasonable to expect that every time A finds a suboptimality in something B has written B should do anything more than acknowledge it. Does Chris’s refinement of Zvi’s definition invalidate everything Zvi wrote? It doesn’t seem like it. So if Zvi’s article was useful before, it’s still useful now. Leaving it as it is, with one prominent comment describing that refinement and an agreement from Zvi right there under it, will do just fine.
And what’s this “fully general counterargument” business? First of all, Zvi wasn’t making a counterargument (nor purporting to make one). He agreed with what Chris said, after all. It would be a counterargument if Chris had added something like ”… so you should rewrite your article to use my revised definition”, but he didn’t so it isn’t. Second, being “fully general” doesn’t make things wrong, it makes them incomplete. There’s nothing wrong with saying “You say X. I disagree.” even though “I disagree” is a thing anyone can say about anything. It can provide useful context for an actual argument that follows; it can simply be a statement of the speaker’s position, when providing that information is worth the small effort it takes but engaging in a detailed explanation of why isn’t worth the much larger effort that takes; move it one step closer to what Zvi wrote (“You say X. I disagree, but I don’t have my reasons perfectly straight in my mind yet”) and it’s a placeholder for a later more detailed explanation. Nothing wrong with any of that.
Your most recent comment says a thing or two about correspondence bias, so I’d like to draw your attention to what I find a striking feature of both of your replies to Zvi in this thread: you shift very rapidly from observing a particular (alleged) defect in what he wrote to speculating about how he’s thinking badly. So you go straight from “this is a fully general counterargument” to guessing at particular kinds of bias that might be in Zvi’s head. He’s a motivated skeptic! He isn’t giving his true rejection! And you adopt what seems to me (and I think, from what he says in response, also to Zvi) a needlessly accusatory tone.
You could e.g. just have written this: “What do you find still unsatisfactory about Chris’s proposed refinement of your definition, and what are you looking for in suggestions for further improvements? What would it take to justify revising the article to use a better definition?”. That would have done the same job of soliciting more explicit criteria from Zvi. If you were extra-anxious to point out the logical defect you think you see in what he wrote: “That seems like something one could say to any proposed improvement. What do you find still unsatisfactory [etc.]”. What advantage do you see in the much more confrontational approach you took?
(Your recent comment suggests that the answer is something like “By saying such things I hope to make people examine their biases and reduce them”. Take a look at what’s actually happening here. Does it seem like that’s working well?)
So much for the original exchange. Now let’s look at your more recent comment. It doubles down on the accusatory, confrontational approach. It takes Zvi’s statement that (despite the disapproval he expresses) he isn’t downvoting your comment, and casts it as a “threat to downvote”, which makes no sense to me at all: downvoting a single comment is not an action major enough to be worth threatening, and Zvi’s whole point is that he isn’t doing it. I think you may have misconstrued his reason for saying what he said, which (I am guessing, on the basis that it’s what I have meant when I’ve said similar things in the past) is not at all “I didn’t downvote you, but I could and you’d better watch out in case I do, bwahahahaa” but “I know I’m saying something negative about what you wrote, and you may suspect I’ve downvoted you and feel that as a hostile action; but in fact I haven’t downvoted you and I’d like you to know that in the hope that it will help our interactions be friendlier than they otherwise might be”.
Similarly for the bit about the Sunshine Regiment; the point here is that there is a general policy on LW2 of being nice to one another unless there is a cogent reason not to, and Zvi’s statement that your comment was needlessly hostile could be taken as a suggestion that you be officially censured for it, and so he wanted to make it clear that he doesn’t want that. (This one is less obviously not-a-concealed-threat, but I am still 95% confident that it was not intended as anything of the sort.)
And then you start talking about how it would break your heart if you got censured somehow for being needlessly confrontational, and how Less Wrong is your garden. This attempt to tug at your readers’ heartstrings looks extremely odd alongside the no-fuzziness-allowed demands you’ve been making of Zvi. (Not because there’s anything wrong with feeling strongly about things, of course. But because “if you do X it will make me sad” is generally an even worse counterargument to X than Zvi’s allegedly fully general “I think one can do better than X”.)
Take a look at what you’ve written to Zvi in this thread. Look in particular at all the things that (1) refer explicitly to “opponents” and “enemies” and (2) make uncomplimentary statements or conjectures about Zvi’s intellect or character. And then, please, ask yourself the following question: This “way [you] internally refer to enemies and opponents”—is it actually leading to good outcomes for you? Because I don’t think it is. In so far as this thread is any guide, I think it’s leading you to adopt an approach to discussion that annoys other people and makes them less, not more, receptive to anything useful you may have to say.
You have my full permission to use this exchange if you feel it would be illustrative, and I thank you for asking permission rather than forgiveness. The topic is definitely worthy of more attention.
I also want to make clear that it was not my intent to make any implied threats. Quite the opposite; I was concerned that my reply would be viewed as an attack or call for reaction, and wanted to make it clear I did not want that. Saying “I’m not even going to downvote” was me saying I didn’t think it even rose to that level of disagreement, nothing more.
I also think that we have an even bigger issue with karma than I thought, if a downvote as a threat is even a coherent concept. I mean, wow. I knew numbers on things were powerful, but it seems I didn’t know it enough (I suspect it’s one of those no-matter-how-many-times-you-update-it’s-not-nearly-enough things, since it’s an instance of the class “incentives matter”).
Anyway, no offense taken. I am not your enemy in either direction, and I hope we can avoid talking about others as if they are enemies or coming across as if we’re thinking that way. I do think you give that impression quite heavily here, even if unintended, and that this is bad.
Understood.
Considering I have −12 karma on both posts (possibly from the same people), I think so. The idea of tying certain features to certain karma levels is pretty scary. If criticism is met with downvotes, then even if you don’t care about karma in itself, insofar as your karma determines what you can and cannot do, you may want to watch what you say.
________________________________________________________________
In favour of the spirit of civilisation, I would like to say a few things:
I don’t think of you as an enemy. “Opponent” is an internal label that (in certain circumstances) I apply to everyone not myself. This does not preclude cooperation.
I may have come across as hostile and accusatory; I apologise. That was not my intention. (I do not apologise for mentioning the FGA and motivated cognition (so perhaps you can file this as an apology that is not really an apology), but I apologise for offense received (unless you find the very suggestion of irrationality offensive, in which case I retract my apology (so once again an apology that is not really an apology, or, as I prefer to say, a “conditional apology”)).)
My intention was to be a helpful critic. I was not suggesting that you must change your definition immediately. My suggestions were:
More explicitly: If you were indeed holding out for a better definition:
1. Temporarily implement the proposed change (insomuch as it is better than what you originally proposed).
2. Explicitly specify what your requirements are for a new definition. This way, you would precommit to accepting a definition that meets your explicit requirements. You wouldn’t be able to use an FGA to reject any proposed change, and would be able to overcome motivated cognition.
Irrationality is the default. Overcoming motivated cognition and being rational is an active effort; I stand by this statement. We are aspiring rationalists, and we are not beyond errors. We are overcoming bias, and have not overcome bias. When I point out what I perceive as errors in rationality, it’s not meant to be offensive. If my wording makes it offensive, I can optimise that. If you find the very concept of being irrational offensive, then I wouldn’t even bother optimising wording.
My initial message can be summed up thus:
Based on this:
It may be that you find “1” to be an unacceptable choice. I was not aware of that argument when I proposed “1”. Personally, I would still go ahead with “1”, but I’m not you, and we have different value systems. I have not seen a good argument against “2”, and I urge you to enact it.
I am happy to say more after having had time to reflect.
I do like the idea of spare resources quite a bit, but it doesn’t properly encompass the class of things you can be bounded on. I also think that defining via the negative is actually important here. A person who must wear a suit and tie, or the color blue, lacks Slack in this way, and one who is not so forced has it, but it is odd to think of this as spare resources. In many cases the way you retain Slack is to avoid incentive structures that impose constraints, rather than having resources. Perhaps this is simply a case of not knowing what you’ve got till it’s gone, but I think that getting there would prove confusing.
Another intuition that points against spare resources is that you can substitute the ability to acquire resources for resources. Slack can often come simply from the ability to trade, including trade among things not literally traded (e.g. trading time for money for emotional stability, etc.), or the ability to produce. Again, you can call all such things “resources” or even “spare resources”, but that would imply a pretty non-useful and non-intuitive definition of spare and resources, one that wouldn’t help explain the concept very well.
That all does suggest freedom might be a good word but I think it’s not right. Certainly Slack implies freedom and freedom requires Slack, but freedom is a very overloaded (and loaded) word that has a lot of meanings that would be misleading. My model of how people think about concepts says that if we use the word freedom in this way, people will pattern match heavily on their current affectations and models of freedom, and won’t grok the concept we’re pointing towards with Slack as easily. I also think there’d be an instinct to think things of the class “oh, that’s just...” and that’s a curiosity stopper.
“I also think there’d be an instinct to think things of the class “oh, that’s just...” and that’s a curiosity stopper.”
This is an important idea (and an important argument in favor of jargon proliferation) that I don’t recall having seen presented before explicitly.
I don’t understand how your definition is different from freedom.
I’m using resource in a very broad sense: there is something that can be modelled as lying roughly on a linear scale, and there is some level beneath which bad things happen (often this is 0, but not always). So emotional stability can be thrown onto a linear scale in very high-level models.
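The linear-scale model described above can be sketched as a toy calculation. All resource names, levels, and thresholds below are illustrative assumptions, not anything from the original discussion; the point is only that Slack, under this reading, is your smallest margin above any “bad things happen” level:

```python
# Toy model: each resource sits on a linear scale with a threshold
# beneath which bad things happen (often, but not always, 0).
# Slack is the distance to the nearest such cliff.

def slack(levels, thresholds):
    """Return the smallest margin above any resource's threshold."""
    return min(levels[k] - thresholds[k] for k in thresholds)

# Hypothetical example values (purely illustrative):
levels = {"money": 500, "time_hours": 10, "emotional_stability": 7}
thresholds = {"money": 0, "time_hours": 8, "emotional_stability": 4}

print(slack(levels, thresholds))  # time is the binding constraint: 10 - 8 = 2
```

Note that on this sketch, even someone with large amounts of one resource has little Slack if any single scale sits near its threshold, which matches the intuition that Slack is about margins rather than totals.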
Is this a reply?
The LW2 user interface has a few quirks. One of them is that it’s regrettably easy to submit an empty comment by accident.