If you want social science to be taken seriously, you do your cause a disservice by asserting social science is different in kind from so-called “hard science.”
Edit: In fact, Eugine_Nier’s argument here is that social science is not rigorous enough to be worth considering. You don’t advance true belief by asserting that social science does not need to be rigorous.
And just in case it isn’t clear, the ability to replicate an experiment is not required for a scientific field to be rigorous. (Just look at astrophysics: it isn’t like we can cue up a supernova on command to test our hypotheses.) It is preferable, but not necessary.
No, my argument was that much of modern social science (and especially modern anthropology, which is bad even by social science standards) is more concerned with politics than truth. See here for JulianMorrison practically admitting as much and then trying to argue that this is a good thing. And quite frankly, the tone of your comment is also not encouraging in that respect.
I think that’s a highly disingenuous reading of JulianMorrison’s statement. JulianMorrison never stated that it was a good thing, only that it was a necessary thing in the face of political realities. In an evolutionary environment where only the Dark Arts are capable of surviving, would you rather win or die?
Essentially, we all need to remember that speaking the truth has a variable utility cost that depends on environment. If the perceived utility of speaking the truth publicly is negative, then you invoke the Bayesian Conspiracy and don’t speak the truth except in private.
In this post JulianMorrison was, at least partially, trying to inform you that there is in fact something like a Bayesian Conspiracy within the Social Sciences—that there are social truths that are understood from within the discipline (or at least, from within parts of the discipline) that can’t be discussed with outsiders, because non-rational people will use the knowledge in ways with a highly negative net utility. He was also trying to test you to see if you could be trusted with initiation into that Bayesian Conspiracy. (You failed the test, btw—which is something you might realize with pride or chagrin, depending on your political allegiances.)
I don’t think I’d identify the activist subculture with the social sciences, at least in the case JulianMorrison was talking about. If there’s an academic community whose members publish relatively unfiltered research within their fields but don’t usually talk to the public unless they are also activists, and also an activist community whose members are much more interested in spreading the word but aren’t always too interested in spreading up-to-date science (charitably, because they believe some avenues of research to be suffering from bias or otherwise suspect), then we get the same results without having to invoke a conspiracy. This also has the advantage of explaining why it’s possible to read about ostensibly forbidden social truths by, e.g., querying the right Wikipedia page.
Whether this accurately models any particular controversial subject is probably best left as an exercise.
Hold on a moment. I think the labels are accurate descriptions of the phenomena. There’s hostility to this kind of discussion, so sometimes the only winning move is not to play. But if the labels (heteronormativity, privilege, social construction, rape culture) are not describing social phenomena, then we should find accurate labels.
And if experts use the labels right, but [Edit: sympathetic] laypeople do not, then we should chide the laypeople until they use them right. Agreement with my preferred policies does not make you wise, because arguments are not soldiers.
In short, I think I win on the merits, so let’s not get caught up in procedural machinations.
That assumes that we have sufficient status that our chiding the laypeople will win. The problem with social phenomena is that discussions about social phenomena are themselves social phenomena, so your statements have social cost that may be independent of their truth value. If you want to rationally strive towards maximum utility, you need to recognize and deal with the utility costs inherent in discussing facts with agents whose strategies involve manipulating consensus, and who themselves may not care as much about avoiding the Dark Arts as you seem to.
Secondly:

labels (heteronormativity, privilege, social construction, rape culture)
I currently tend to believe that these are somewhat accurate labels—that is, they accurately define semantic boundaries around phenomena that do in fact exist, and that we do in fact have some actual understanding of. But if your audience sees them as fighting words, then they will see your arguments as soldiers. If you want to have a rational discussion about this, you need to be able to identify who else is willing to have a rational discussion about this, and at what level. Remember that on lesswrong, signaling rationality is a status move, so just because someone displays signals that indicate rationality doesn’t mean that they are in fact rational about a particular subject, especially a political one.
Ah. I see all my comments everywhere on the site are getting voted down again. Politics is the mind-killer, indeed.
Ok, serious question, folks:
What would it take to negotiate a truce on lesswrong, such that people could have differing opinions about what is or isn’t appropriate social utility maximization without getting into petty karma wars with each other?
Ah. This got downvoted too. Is there any way for me to stop this death-spiral and flag for empathy? Please?
Mercy? Uncle?
I endorse interpreting net downvotes as information: specifically, the information that more people want fewer contributions like whatever’s being downvoted than want more contributions like it.
I can then either ignore that stated preference and keep contributing what I want to contribute (and accept any resulting downvotes as ongoing confirmation of the above), or I can conform to that stated preference. I typically do the latter, but I endorse the former in some cases.
The notion of a “truce” whereby I get to contribute whatever I choose and other people don’t use the voting mechanism to express their judgments of it doesn’t quite make sense to me.
All of that said, I agree with you that there exist various social patterns to which labels have been attached in popular culture, where those labels are shibboleths in certain subcultures and anti-shibboleths (“fighting words,” as you put it) in others. I find that if I want to have a useful discussion about those patterns within those subcultures, I often do best to not use those labels.
Except your interpretation is at least partially wrong—people mass-downvote comments based on author, so there is no information about the quality of a particular post (it’s more like (S)HE IS A WITCH!). A better theory is that karma is some sort of noisy average between what you said, ‘internet microaggression,’ and probably some other things—there are no globally enforced usage guidelines for karma.
I personally ignore karma. I generally write two types of posts: technical posts, and posts on which there should be no consensus. For the former, almost no one here is qualified to downvote me. For the latter, if people downvote me, it’s about the social group, not correctness.
There are plenty of things to learn on lesswrong, but almost nothing from the karma system.
Oh, I completely agree that the reality is a noisy average as you describe. That said, for someone with the goals ialdabaoth describes themselves as having, I continue to endorse the interpretation strategy I describe. (By contrast, for someone with the goal-structure you describe, ignoring karma is a fine strategy.)
Huh. Are “Do you think it likely that ‘social activism’ and ‘liberalism’ are fighting words in this board’s culture?” fighting words in this board’s culture?
Depends on how they’re used, but yes, there are many contexts where I would probably avoid using those words here and instead state what I mean by them. Why do you ask?
Edit: the question got edited after I answered it into something not-quite-grammatical, so I should perhaps clarify that the words I’m referring to here are ‘social activism’ and ‘liberalism’.
Because I want to discuss and analyze my beliefs openly, but I don’t want to lose social status on this site if I don’t have to.
A deeper observation and question: I appear to be stupid at the moment. Where can I go to learn to be less socially stupid on this site?
One approach is to identify high-status contributors and look for systematic differences between your way of expressing yourself and theirs, then experiment with adopting theirs.
Ya lol works awesome, look at my awesome bla-bla (lol!) ye mighty, and despair.
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.
Alas, you and I are not in the same league as Yvain, TheOtherDave, fubarobfusco, or Jack.
Be that as it may (or mayn’t), that’s a clever way of making the intended message more palatable, including yourself in the deprecation. But you’re right. Aren’t we all pathetic, eh?
Look at your most upvoted contributions (ETA: or better, look at contributions with a positive score in general—see replies to this comment). Look at your most downvoted contributions. Compare and contrast.
Most downvoted, yes, but on the positive side I’d instead suggest looking at your comments one or two sigma east of average and no higher: they’re likely to be more reproducible. If they’re anything like mine, your most highly upvoted posts are probably high risk/high reward type comments—jokes, cultural criticism, pithy Deep Wisdom—and it’ll probably be a lot harder to identify and cultivate what made them successful.
A refinement of this is to look at the pattern of votes around the contributions as well, if they are comments. Comparing the absolute ranking of different contributions is tricky, because they frequently reflect the visibility of the thread as much as they do the popularity of the comment. (At one time, my most-upvoted contributions were random observations on the Harry Potter discussion threads, for example.)
Not to mention Rationality Quotes threads...
Rationality quotes might be a helpful way of figuring out how to get upvoted, but it is not particularly helpful in figuring out how to be more competent.
Edit: Oops. Misunderstood the comment.
Actually, I was agreeing with TheOtherDave. (I’ve edited my comment to quote the part of its parent I was elaborating upon; is that clearer now?)
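(As a minimal sketch of the one-or-two-sigma selection rule suggested a few comments up: given your comment scores, study the ones sitting one to two standard deviations above your mean, and discount anything further east as a hard-to-reproduce outlier. The score list and thresholds below are hypothetical, purely for illustration.)

```python
# Hypothetical sketch of the "one or two sigma east of average" rule.
# `scores` is an invented list of per-comment karma totals.
from statistics import mean, stdev

scores = [1, 0, 2, 5, 1, 3, 14, 2, 0, 4, 1, 27, 3, 2]
mu, sigma = mean(scores), stdev(scores)

# Comments one to two standard deviations above average: likely the most
# reproducible successes, per the suggestion above.
study_these = [s for s in scores if mu + sigma <= s <= mu + 2 * sigma]

# Anything further east is probably a high-risk/high-reward outlier
# (jokes, pithy Deep Wisdom) whose success is harder to cultivate.
discount_these = [s for s in scores if s > mu + 2 * sigma]

print(f"mean={mu:.1f}, sigma={sigma:.1f}")
print(f"one to two sigma east: {study_these}")
print(f"outliers to discount: {discount_these}")
```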
(nods) Then yeah, I’d encourage you to avoid using those words and instead state what you mean by them. Which may also result in downvotes, depending on how people judge your meaning.
And yet you’ve just argued that your beliefs should not be discussed openly with outsiders.
No I didn’t, I argued that in a different context, it’s dangerous to discuss your beliefs openly with outsiders. And I wasn’t even trying to defend that behavior, I was offering an explanation for it.
...and you’re using rhetorical tactics. Why do you consider this a fight? Why is it so important that I lose?
I’ll agree to have lost if that will help. Will it help?
I don’t see the difference in context. (This isn’t rhetoric, I honestly don’t see the difference in context.)
Interesting, so do you disapprove of the behavior in question? If so, why do you still identify its practitioners as “your side”?
I wasn’t trying to. I was pointing out the problems with basing a movement on ‘pious lies’.
Issues can be complex, you know. They can be simpler than ‘green’ vs. ‘blue’.
Which is still a gross mischaracterization of what was being discussed, but that mischaracterizing process is itself part of the rhetorical tactic being employed. I’m afraid I can no longer trust this communication channel.
How so? Near as I can tell from an outside view, my description is a decent summary of your and/or Julian’s position. I realize that from the inside it feels different because the lies feel justified; well, ‘pious lies’ always feel justified to those who tell them.
You’re the one who just argued (and/or presented Julian’s case) that I was not to be trusted with the truth. If anything, I’m the one who has a right to complain that this communication channel is untrustworthy.
And yet you’re still using it. What are you attempting to accomplish? What do you think I was attempting to accomplish? (I no longer need to know the answers to these questions, because I’ve already downgraded this channel to barely above the noise threshold; I’m expending the energy in the hopes that you ask yourself these questions in a way that doesn’t involve assuming that all our posts are soldiers fighting a battle.)
Same here with respect to the questions I asked here, here, and here. The fact that you were willing to admit to the lies gave me hope that we might have something resembling a reasonable discussion. Unfortunately, it seems you’d rather dismiss my questions as ‘rhetoric’ than question the foundations of your beliefs. I realize the former choice is easier, but if you’re serious about wanting to analyze your beliefs you need to do the latter.
For the sake of others watching, the fact that you continue to use phrases like “willing to admit to the lies” should be a telling signal that something other than truth-seeking is happening here.
Something other than truth-seeking is happening here. But the use of that phrase does not demonstrate that—your argument is highly dubious. Since the subject at the core seems to be about prioritizing between epistemic accuracy and political advocacy, it can be an on-topic observation of fact.
If a phrase such as “pursuing goals other than pure truth-seeking” were used rather than “noble lies”, I would agree with you. But he appears to deliberately attempt to re-frame any argument that he doesn’t like in the most reprehensible way possible, rather than attempting to give it any credit whatsoever. He’s performing all sorts of emotional “booing” and straw-manning, rather than presenting the strongest possible interpretation of his opponent’s view and then attacking that. And when someone attempts to point that out to him, he immediately turns around and attempts to accuse them of doing it, rather than him.
It’s possible to have discussions about this without either side resorting to “this is how evil you’re being” tactics, or without resorting to “you’re resorting to ‘this is how evil you’re being’ tactics” tactics, or without resorting to “you’re resorting to ‘you’re resorting to {this is how evil you’re being} tactics’ tactics” tactics. Unfortunately, it’s a classic Prisoner’s Dilemma—whoever defects first tends to win, because humans are wired such that rhetoric beats honest debate.
That is approximately how I would summarize the entire conversation.
Theoretically, although those most capable of being sane when it comes to this kind of topic are also less likely to bother.
Often, yes. It would be a gross understatement to observe that I share your lament.
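(The Prisoner’s Dilemma structure being lamented here can be made concrete with a toy payoff matrix. The payoff numbers below are invented; the only claim being modeled is that rhetoric dominates honest debate whatever the other side plays, even though mutual honesty is the better joint outcome.)

```python
# Toy payoff matrix for the rhetoric-vs-honest-debate dilemma. All numbers
# are invented for illustration; only the ordering of the payoffs matters.
PAYOFFS = {
    # (my_move, their_move): my_payoff
    ("honest", "honest"): 3,      # mutual honest debate: best joint outcome
    ("honest", "rhetoric"): 0,    # I stay honest while they play to the crowd
    ("rhetoric", "honest"): 5,    # defecting first "wins"
    ("rhetoric", "rhetoric"): 1,  # mutual mud-slinging: worst joint outcome
}

def best_response(their_move: str) -> str:
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(("honest", "rhetoric"), key=lambda mine: PAYOFFS[(mine, their_move)])

for their_move in ("honest", "rhetoric"):
    print(f"against {their_move!r}, best response: {best_response(their_move)!r}")
# Both lines print 'rhetoric': defection dominates, hence the dilemma.
```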
Specifically, the method of pursuing said goals in question is by making and promoting false statements. This is precisely what the phrase ‘noble lie’ means. This is the kind of thing that would be bad enough even if the authority of “Science” weren’t being invoked by the people making said false statements. Yes, the phrase “noble lie” has negative connotations; there are very good reasons for that.
Incidentally, at the time that I write this comment, none of your most recent comments are net-negative; most are net-positive, including the one I’m responding to. Does knowing that make it easier for you to contribute without worrying too much about your social status here?
No, and here’s my reasoning:
The net variability is the problem, not merely the bulk downvoting. All this sort of situation does is demonstrate that the karma system is untrustworthy. Since the karma system was the easiest way to determine whether what I’m saying is considered worth listening to by the community, I have to find secondary indicators. Unfortunately, most of those require feedback, and explicitly asking for that feedback often results in bulk downvoting.
I’m one of those people who has to be very careful to modulate my tone so that what I’m trying to say is understood by my audience; if all of the available feedback mechanisms are known to have serious problems, I’m not sure how to proceed.
Does that make any sense?
It does make sense, and the karma system is most assuredly untrustworthy, in the sense you mean it here. (I would say “noisy.”) Asking for feedback is also noisy, as it happens.
At some point, it becomes worthwhile to work out how to proceed given noisy and unreliable feedback.
For example, one useful principle if I think the feedback is net-reliable in the aggregate but has high variability is to damp down sensitivity to individual feedback-items and instead attend to the trend over time once it stabilizes. Conversely, if I think the feedback is unreliable even in the aggregate, it’s best to ignore it altogether.
Yeah, that’s what I try to do in the abstract. In-the-moment, the less rational parts of my brain tend to bump up urgency and try to convince me that I don’t have time to ignore the data and wait for the aggregate, and when I try to pause for reflection, those same parts of my brain tend to ratchet up the perceived urgency again and convince me that I don’t have time to examine whether I have the time to examine whether those parts of my brain are lying to me less or more than the data.
I’m working on a brainhack to mitigate that, but it’s slow going. Once I have something useful I hope to post an article on it.
This is wonderfully put.
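(A minimal sketch of the damping principle described a few comments up, assuming per-comment karma arrives as a noisy stream: fold each new score into a running estimate, and only act once the trend stabilizes. The smoothing constant, window, and tolerance below are arbitrary illustrative choices, not anything prescribed in this thread.)

```python
# Hypothetical sketch: damp sensitivity to individual feedback items and
# attend to the smoothed trend once it stabilizes. Constants are arbitrary.

def smoothed_trend(scores, alpha=0.2):
    """Exponentially weighted moving average: each new score only nudges
    the running estimate by a fraction alpha, damping single-item swings."""
    history, estimate = [], None
    for s in scores:
        estimate = s if estimate is None else alpha * s + (1 - alpha) * estimate
        history.append(estimate)
    return history

def is_stable(history, window=5, tolerance=0.5):
    """Treat the trend as settled once the last `window` estimates stay
    within `tolerance` of each other; before that, withhold judgment."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) <= tolerance

votes = [3, -6, 2, 1, -1, 2, 1, 0, 1, 1, 2, 1]  # invented noisy karma stream
trend = smoothed_trend(votes)
print(f"trend estimate: {trend[-1]:.2f}, stable yet: {is_stable(trend)}")
```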
Serious question: why even care about karma? Just say what you want.
Because enough of my posts have been voted down below the reply threshold that it destroyed my ability to continue the conversation, and for me, having an idea debated back and forth is a necessary component of my mental process. Also, karma is used on this site to indicate whether I should be saying what I’m saying, so when everything I said for the past two weeks gets downvoted within 5 minutes of making a statement, even things utterly unrelated to that statement, I feel the need to raise an alarm to ensure that I’m interpreting signals correctly.
If it is, then it is. But from the outside, this sounds like a rationalization for you to choose to do something that you find emotionally harmful. You have no obligation to participate in conversation that you find emotionally harmful.
I intended that to refer only to laypeople who agree with the labels, but are using them wrong. The people who are choosing our side because it seems high status, not because they think it is right. Those folks are dangerous in a lot of ways.
Well, yes. This venue is not safe for these types of discussions—our interlocutor is an important reason why. I do it because I’m trying to dispel the appearance of a silent majority.
I think it is totally understandable to decide that the only winning (socially safe) move is not to engage in the conversation here. It’s not like it will make a huge difference—so choosing yourself first is very appropriate and NOT even a little bit worthy of blame.
In what way?
All I’m doing is criticizing your arguments and providing counter-arguments. Or are you only willing to discuss these things among people who agree with you?
This sounds like an overly convenient excuse to avoid having to confront the implications of said truths. Restrict them to only people who won’t ask awkward questions and tell everyone else pious lies.
You need to establish some truths before worrying about the consequences. Scientific facts need controls, for instance. When have you shown any interest in controlling for the effects of environment?
I never said I knew what caused the racial differences in question. There are certainly policy issues where the cause is relevant (incidentally, addressing it requires admitting that the differences exist); there are issues where it’s less relevant.
Incidentally, in the example I cited in the great-grandparent, it was the anthropologists who had declared that official policy was to deny all environmental explanations.
How do you and Julian know that you are indeed in the “inner ring” of this conspiracy and/or that its actual purpose is what you think it is? How sure are you that this conspiracy even has any clue what it’s doing and hasn’t started to believe its own lies? Do you have an answer to the questions I asked here?
On a case-by-case basis, you do experiments. You double-check them. You entertain alternate hypotheses. You accept that it’s entirely possible that things aren’t the way you think they are. You ask yourself what the likely social consequences of your actions are and if you’re comfortable with them, and then ask yourself how you know that. In short, you act like a rationalist.
(And you certainly don’t just downvote everyone who proposes a model that you don’t like.)
Can you describe some of the experiments you did?
And that’s a bad thing? Trying to translate Hard Science directly into real-world action without considering the ethical, social and political consequences would be disastrous. We need something like social science.
If your goal is having an accurate model of the world, yes. If your goal is something else, you’re still better off with an accurate model of the world.
Edit: If you want to do politics, that’s also important; just don’t pretend you’re doing science, even “soft science”.
We had this discussion before. You told me that the social activist labels are boo lights. But the fact that something is an applause light or a boo light in a particular community doesn’t mean it is not an accurate label for a phenomenon.
“Democracy” is an applause light in the venues I generally hang out in (and I assume the same for you). That does not mean that democracy is not a real phenomenon. And the fact that some folks in this venue don’t approve of democracy does not mean they think that the phenomenon “democracy,” as defined by relevant experts, does not exist. In fact, a serious claim that democracy is bad or good first requires believing it occurs.