You are apparently a very smart person, and you seem to think that this means that you are a good communicator. It does not. In my opinion, you are one of the worst communicators here. You tend to be terse to the point of incomprehensibility. You tend to seize upon interpretations of what other people say that can be both bizarre and unshakable. Conversing with you is simply no fun.
I generally agree with this characterization (except for the self-deception part). I'm a bad writer, somewhat terse and annoying, and I don't like the sound of my own more substantive writings (such as blog posts). I compensate by striving to understand what I'm talking about, so that further detail or clarification can generally be called up, accumulated across multiple comments, or, as in this particular comment, dumped in redundant quantity without regard for the resulting style. I like practicing "hyper-analytical" conversation, and would like more people to do so, although I understand that most people won't like it. I'm worse than average (at my level) at quickly grasping things that are not clearly presented (my intuition is unreliable), but I'm good at systematically settling on a correct understanding eventually, discarding previous positions easily, as long as the consciously driven process of figuring things out doesn't terminate prematurely.
Since people are often wrong, assuming a particular mistake is not always that much off a hypothesis (given available information), but the person suspected of error will often notice the false positives more saliently than they deserve, instead of making a correction, as a purely technical step, and moving forward.
I compensate by striving to understand what I’m talking about
Well, that is unquestionably a good thing, and I have no reason to doubt that you do in fact understand quite a large number of the things you talk about. I wish more people had that trait.
I like practicing “hyper-analytical” conversation
I’m not sure exactly what is meant here. An example (with analysis) might help.
I’m good at systematically settling on correct understanding eventually, discarding previous positions easily, as long as the consciously driven process of figuring out doesn’t terminate prematurely.
If that is the case, then I misinterpreted this exchange:
Me: Ah. I get it now. My phrase “respect for the authentic but less than perfect”. You saw it as an intuition in favor of not “overdoing” the optimizing. Believe me. It wasn’t.
You: I can’t believe what I don’t understand.
Perhaps the reason for my confusion is that it struck me as a premature termination. If you wish to understand something, you should perhaps ask a question, not make a comment of the kind that might be uttered by a Zen master.
Since people are often wrong, assuming a particular mistake is not always that much off a hypothesis (given available information), but the person suspected of error will often notice the false positives more saliently than they deserve, instead of making a correction, as a purely technical step, and moving forward.
Here we go again. …
I don’t understand that comment. Sorry. I don’t understand the context to which it is intended to be applicable, nor how to parse it. There are apparently two people involved in the scenario being discussed, but I don’t understand who does what, who makes what mistake, nor who should make a correction and move forward.
You are welcome to clarify, but quite frankly I am coming to believe that it is just not worth it.
I’m not sure exactly what is meant here. An example (with analysis) might help.
I basically mean permitting unpacking of any concept, including the “obvious” and “any good person would know that” and “are you mad?” ones, and staying on a specific topic even if the previous one was much more important in context, or if there are seductive formalizations that nonetheless have little to do with the original informally-referred-to concepts. See for example here.
P.: Ah. I get it now. My phrase “respect for the authentic but less than perfect”. You saw it as an intuition in favor of not “overdoing” the optimizing. Believe me. It wasn’t.
VN: I can’t believe what I don’t understand.
Perhaps the reason for my confusion is that it struck me as a premature termination.
I simply meant that I don't understand what you referred to in your suggestion to believe something. You said that "[it's not] an intuition in favor of not 'overdoing' the optimizing", but I'm not sure what it is then, or whether on further examination it will turn out to be what I'd refer to with the same words. Finally, I won't believe something just because you say I should; a better alternative to discussing your past beliefs (which I don't have further access to, and so can't form a much better understanding of) would be to start discussing statements (not necessarily beliefs!) that you name at present.
Since people are often wrong, assuming a particular mistake is not always that much off a hypothesis (given available information), but the person suspected of error will often notice the false positives more saliently than they deserve, instead of making a correction, as a purely technical step, and moving forward.
Here we go again. …
Consider person K. That person K happens to be wrong on any given topic won't be shocking; people are often wrong. When person K says something confusing, explaining the confusingness of that statement by person K being wrong is not a bad hypothesis, even if the other possibility is that what K said was merely not expressed clearly, and can be amended. When person V says to person K "I think you're wrong", and it turns out that person K was not wrong in this particular situation, that constitutes a "false positive": V decided that K was wrong, but that wasn't the case. In the aftermath, K will remember V's error on this count as a personal attack, and will focus too much on pointing out how wrong it was to assume K's wrongness, when in fact it's V who can't understand anything K is saying. Instead, K could simply have stated a clarifying statement that falsifies V's hypothesis, so that the conversation could go on efficiently, without undue notice given to the hypothesis of being wrong.
(You see why I'm trying to be succinct: writing it up in more detail is too long and no fun. I've been busy for the last few days, and replied to other comments that felt less like work, but not this one.)
K says A meaning X. V thinks A means Y. V disagrees with Y.
So if V says "If by 'A' you mean 'Y', then I have to disagree," then everything is fine. K corrects the misconception and they both move on. On the other hand, if V says "I disagree with 'Y'", things become confused, because K never said 'Y'. If V says "I disagree with 'A'", things become even more confused. K has been given no clue of the existence of the misinterpretation 'Y'; reconstructing it from the reasons V offers for disputing 'A' will take a lot of work.
But if V likes to be succinct, he may simply reply "I disagree" to a long comment and then (succinctly) provide reasons. Then K is left with the hopeless task of deciding whether V is disagreeing with 'A', 'B', or 'C', all of which were stated in the original posting. The task is hopeless because the disagreement is with 'Y', and neither party has even mentioned 'Y'.
I believe that AdeleneDawner makes the same point.
(You see why I'm trying to be succinct: writing it up in more detail is too long and no fun. I've been busy for the last few days, and replied to other comments that felt less like work, but not this one.)
I suspect that you would find yourself with even less tedious work to do if you refrained from making cryptic comments in the first place. That way, neither you nor your victims would have to work at transforming what you write into something that can be understood.
I suspect that you would find yourself with even less tedious work to do if you refrained from making cryptic comments in the first place.
I like commenting the way I do; it's not tedious.
That way, neither you nor your victims would have to work at transforming what you write into something that can be understood.
Since some people will be able to understand what I wrote, even when it's not the person I reply to, some amount of good can come of it. Also, the general policy of ignoring everything I write allows the harm to be avoided completely.
As a meta remark, the attitude expressed in the parent comment seems to be in conflict with the attitude expressed in this comment. Which one more accurately reflects your views? Have they changed since then? From the past comment:
A good observation. My calling Vladimir a poor communicator is an instance of mind-projection. He is not objectively poor at communicating—only poor at communicating with me.
Both reflect my views. Why do you think there is a conflict? I wrote:

It seems to me that this advice is good, even if you choose to operationalize the word 'cryptic' to mean 'comments directed at Perplexed'.

Writing not tedious, so advice not good.
Because the recent comment assumes that one of the relevant consequences of my not writing comments would be the relief of the victimized people who read my comments. But if we assume that there are also people not included in that group, the consequence of their not benefiting from my comments would balance out the consequence you pointed out, making it filtered evidence and hence not worth mentioning on its own. If you don't use filtered evidence this way, it follows that your recent comment assumes this non-victimized group to be insignificant, while the earlier comment didn't. (No rhetorical questions in this thread.)
Since people are often wrong, assuming a particular mistake is not always that much off a hypothesis (given available information), but the person suspected of error will often notice the false positives more saliently than they deserve, instead of making a correction, as a purely technical step, and moving forward.
The observation that people are often wrong applies similarly to both the hypothesis that a specific error is present and the hypothesis that a specific correction is optimal. Expecting a conversation partner to take either of those as given is mistaken in much the same way as expecting them to take a particular hypothesis's truth as given. Clear communication of the logic behind a hypothesis (including a hypothesis about wrongness or correction) is generally necessary in such situations before that hypothesis is accepted as likely true.