For what it’s worth, as someone who has been reading your various exchanges without becoming involved in them (even to the extent of voting), I think your summary of the causes of that shift leaves out some important aspects.
That aside, though, I suspect your conclusion is substantively correct: karma-shift is a reasonable indicator (though hardly a perfectly reliable one) of how your recent behavior is affecting your reputation here, and if you expected that your recent behavior would improve your reputation, your expectations were way misaligned with the reality of this site.
if you expected that your recent behavior would improve your reputation, your expectations were way misaligned with the reality of this site.
Even assuming Logos was entirely correct about all his main points it would be bizarre to expect anything but a drastic drop in reputation in response to Logos’ recent behavior. This requires only a rudimentary knowledge of social behavior.
…and if you expected that your recent behavior would improve your reputation, your expectations were way misaligned with the reality of this site.
It’s a question of degree. I realized from the outset that I’m essentially committing heresy against sacred beliefs of this community. But I expected a greater capacity for rationality.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists. Almost every time I post on something related to AGI, it is to discuss reasons why I think fooming isn’t likely. I’m not signed up for cryonics and have made multiple comments discussing problems with it, both from a strict utilitarian perspective and from a more general framework. When there was a surge of interest in bitcoins here, I made a discussion thread pointing out a potentially disturbing issue with that. One of my very first posts here argued that phlogiston is a really bad example of an unfalsifiable theory, and I’ve made this argument repeatedly, despite phlogiston being the go-to example here for a bad scientific theory (although I don’t seem to have had much success in convincing anyone).
I have over 6000 karma. A few days ago I had gained enough karma to be one of the top contributors in the last 30 days. (This signaled to me that I needed to spend less time here and more time being actually productive.)
It should be clear from my example that arguing against “sacred beliefs” here does not by itself result in downvotes. And it isn’t as though those comments have been downvoted and merely balanced out by my other remarks. Almost all such comments have been upvoted. I therefore have to conclude that either the set of heresies here is very different from what I would guess, or that something other than your questioning of sacred beliefs is getting you downvoted.
It would not surprise me if the quality of arguments and their degree of politeness matter. It helps to keep in mind that in any community with a karma system or something similar, high-quality, polite arguments fare better. Even on Less Wrong, people often care a lot about civility, sometimes more than logical correctness. As a general rule of thumb in internet conversations, high-quality arguments that support a community’s shared beliefs will be treated well. Mediocre or low-quality arguments that support community beliefs will be ignored or treated somewhat positively. At the better places on the internet, high-quality arguments against communal beliefs will be treated with respect. Mediocre or low-quality arguments against communal beliefs will generally be treated harshly. That’s not fair, but it is a good rule of thumb. Less Wrong is better than the vast majority of the internet, but in this regard it still roughly matches what you would expect of any internet community.
So when you are trying to argue against a community belief, you need to be very careful to have your ducks in a row. Have your arguments carefully thought out. Be civil at all times. If something is not going well, take a break and come back to it later. Also, keep in mind that aside from shared beliefs, almost any community has shared norms about communication and behavior, and these norms may have implicit elements that take time to pick up. This can result in harsh receptions unless one has either spent a lot of time in the community or has carefully studied it, and it can compound the other issues mentioned above.
I’m a non-Bayesian with over 6000 karma. I’ve started discussion threads about problems with finding the right priors in a Bayesian context and have expressed skepticism that any genuinely good set of priors exists.
That’s a standard element of Bayesian discourse, actually. The notion I’ve been arguing for, on the other hand, fundamentally violates Bayesian epistemology. And yes, I haven’t been highly rigorous about it; but then, I’m also really not all that concerned about my karma score in general. I was simply noting it as demonstrative of something.
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
That’s a standard element of Bayesian discourse, actually.
That’s an interesting notion, and I’d be curious whether the Bayesians here could comment on it. Do you agree that discussion of whether good priors are even possible is a standard element of Bayesian discourse?
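The prior-sensitivity worry behind this exchange can be made concrete with a toy Beta-Binomial calculation (an illustrative example of mine, not something from the thread): two defensible priors lead to noticeably different conclusions from the same small dataset.

```python
# Toy illustration of why the choice of prior matters with sparse data.
# Posterior mean of a coin's bias under a Beta(a, b) prior, after
# observing `heads` successes in `flips` trials.
def posterior_mean(a, b, heads, flips):
    return (a + heads) / (a + b + flips)

heads, flips = 3, 4  # the same small dataset for both priors

uniform = posterior_mean(1, 1, heads, flips)      # "know-nothing" prior
fair_coin = posterior_mean(10, 10, heads, flips)  # strong fair-coin prior

print(round(uniform, 3))    # 0.667
print(round(fair_coin, 3))  # 0.542
```

With enough data the two posteriors converge, but with little data there is no neutral answer; that gap is roughly what skepticism about a “genuinely good set of priors” points at.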
However, actively dishonest rhetorical tactics have been taken up ‘against’ me in this thread, and that is what I have reacted strongly negatively against.
I haven’t seen any indications of that in this thread. I do, however, recall the unfortunate communication lapses that you and I apparently had in the subthread on anti-aging medicine, and it seems that some of the comments you made there fit a similar pattern of accusing people of “actively dishonest rhetorical tactics” (albeit less extreme in that context). Given that two similar issues have occurred on wildly different topics, there seem to be two possible explanations: 1) There is a problem with most of the commentators at Less Wrong 2) Something is occurring with the other common denominator of these discussions.
I know you aren’t a Bayesian so I won’t ask you to estimate the probabilities in these two situations. But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet at even odds that, on reading this thread, they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
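For calibration, each rung of that odds ladder corresponds to a minimum confidence level via a standard conversion (the helper below is illustrative, not from the thread):

```python
# Minimum probability you must assign to a claim for a bet that risks
# `stake` units to win `payout` units to be at least break-even.
def min_confidence(stake, payout=1):
    return stake / (stake + payout)

print(min_confidence(1))  # even bet  -> 0.5
print(min_confidence(2))  # 2-1 odds  -> 0.666... (two-thirds)
print(min_confidence(3))  # 3-1 odds  -> 0.75
```

So declining even the even-odds bet would suggest assigning the claim less than 50% confidence.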
1) There is a problem with most of the commentators at Less Wrong 2) Something is occurring with the other common denominator of these discussions.
Very gently worded. It is my current belief that both statements are true. I have never before so routinely encountered such difficulty in expressing my ideas and having them be understood, despite the fact that the inferential gap between myself and others is… well, larger than what I have ever witnessed between any other two people, save those with severely abnormal psychology. When I’m feeling particularly “existential” I sometimes worry about what that means about me.
On the other hand, I have also never before encountered a community whose dialogue is so thoroughly permeated by so many unique linguistic constructs as LessWrong’s. I do not, fundamentally, disapprove of this: language shapes thought, after all. But it does create a problem: if I cannot express my ideas within that patois, then I am not going to be understood, with the corollary that ideas which directly violate those constructs’ patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
But let me ask a different question: If we took a neutral individual who has never gone to LW and isn’t a Bayesian, do you think you would bet at even odds that, on reading this thread, they would agree with you and think there’s a problem with the Less Wrong community? What about 2-1 odds? 3-1? If not, why not?
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
On the other hand, I have also never before encountered a community whose dialogue is so thoroughly permeated by so many unique linguistic constructs as LessWrong’s. I do not, fundamentally, disapprove of this: language shapes thought, after all. But it does create a problem: if I cannot express my ideas within that patois, then I am not going to be understood, with the corollary that ideas which directly violate those constructs’ patterns will be violently rejected as incomprehensible, “wrong”, or “confused”.
I’m not sure this is the case. At least it has not struck me as the case. There are a fair number of constructs here that are specific to LW, and a larger set that, while not specific to LW, are not common elsewhere. But in my observation this results much more frequently in people on LW not explaining themselves well to newcomers. It rarely seems to result in people not being understood or being rejected as confused. The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
I am disappointed by this. Shifting the framework of the question from the personal perspective to the outside view does not substantively make it a different question.
That was not the intended point of the question. I wanted an outside view in general, and I wanted your estimate of what it would be like. I phrased it in terms of a bet so that one would not need to invoke any notion of probability but could just speak of what bets you would be willing to take.
I’m not sure this is the case. At least it has not struck me as the case.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
The most likely way for that to happen is for someone to try to speak in the LW patois without internalizing the meaning of the terms.
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension. And that is very frequently associated with all sorts of negative reactions—especially when by that framework I am clearly a very confused person who keeps asserting that I am not the one who’s confused here.
I wanted an outside view in general, and I wanted your estimate of what it would be like.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
To be frank, have you ever encountered a scenario which displays this phenomenon as vulgarly as this one?
I’m not at all convinced that that is what is going on here, and this doesn’t seem to be a very vulgar case if I am interpreting your meaning correctly. You seem to think that people are responding in a much more negative and personal fashion than they are.
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”. Those who have internalized it, but are unfamiliar with my own framework, would have significant barriers to comprehension.
So the solution is not to just use your own language and get annoyed when people fail to respond positively. The solution is either to use a common framework (e.g., very basic English), to carefully translate into the community’s language, or to start off by constructing a helpful dictionary. In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocab and not only not learn it but use a different vocab that has overlapping words. I wouldn’t recommend this on a Dungeons and Dragons forum either.
I can see why you think it would be an interesting question. I have, however, no opinion or belief on the matter; it is a thoroughly uninteresting question to me.
This is unfortunate. It is a question that, while uninteresting to you, may help you calibrate what is going on. I would tentatively suggest spending a few seconds on the question before dismissing it.
In general, it is socially rude and unlikely to be productive to go to any area of the internet where there’s a specialized vocab and not only not learn it but use a different vocab that has overlapping words. I wouldn’t recommend this on a Dungeons and Dragons forum either.
e.g. “d20 doesn’t mean a twenty-sided die; it refers to the bust and cup size of a female NPC!”
Well, again; note—what I’m doing here is actually directly violating the framework of the “LWP”.
Also the framework presented in “A Practical Study of Argument” by Grovier—my textbook from my first-year Philosophy class, called “Critical Thinking”. It is actually the only textbook I kept from my first undergrad degree—definitely recommended for anyone wanting to get up to speed on pre-Bayesian rational thinking and argument.
Nonsense. This is exactly on topic. It isn’t my “Less Wrong Framework” you are challenging. When I learned about thinking, reasoning, and fallacies, LessWrong wasn’t even in existence. For that matter, Eliezer’s posts on OvercomingBias weren’t even in existence. Your claim that the response you are getting is the result of your violation of LessWrong-specific beliefs is utterly absurd.
So that justifies your assertion that I violate the basic principles of logic and argumentation?
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
For an explanation of when and why an appeal to authority is, in fact, fallacious, see pages 141, 159, and 434. Or Wikipedia. Either way, my disagreement with you has nothing to do with what I learned on LessWrong. If I’m wrong, it’s the result of my prior training and an independent flaw in my personal thinking. Don’t try to foist this off on LessWrong groupthink. (That claim would be credible if we were arguing about, say, cryonics.)
What justifies my assertion that you violate the basic principles of logic and argumentation is Trudy Govier, “A Practical Study of Argument”, 4th edition—Chapter 5 (“Premises—What to accept and why”), page 138. Under the subheading “Proper Authority”.
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
Just guessing from the chapter and subheading titles, but I’m pretty sure that bit of “A Practical Study of Argument” has to do with why arguments from authority are not always fallacious.
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
Then by all means enlighten me as to how it can be possible that merely by disagreeing with Govier on the topic of appeals to authority, and in doing so providing explanations based on deduction and induction, I “violate the basic principles of logic and argumentation”.
Also the framework presented in “A Practical Study of Argument” by Grovier—my textbook from my first-year Philosophy class, called “Critical Thinking”. It is actually the only textbook I kept from my first undergrad degree—definitely recommended for anyone wanting to get up to speed on pre-Bayesian rational thinking and argument.
You mean Govier.
This is unwarranted and petty.
That’s true.
So that justifies your assertion that I violate the basic principles of logic and argumentation?
I have only one viable response: “Bullshit.”
You are really going to claim that by logically arguing over what qualifies as a valid argument I violate the basic principles of argumentation and logic?
I reiterate: I have but one viable response.
Just guessing from the chapter and subheading titles, but I’m pretty sure that bit of “A Practical Study of Argument” has to do with why arguments from authority are not always fallacious.
And this makes whatever it says the inerrant truth, never to be contradicted, and therefore a fundamental basic principle of logic and argumentation?
The claim was
The claim was later refined to: “[the] assertion that [Logos01] violate[s] the basic principles of logic and argumentation”.
By you, yes.
Which was agreed to.
Okay. But do you acknowledge that the quoted exchange involves a shifting of the goalposts on your part?
Sure.
This is another straw man.