When have you shown any interest in controlling for the effects of environment?
I never said I knew what caused the racial differences in question. There are certainly policy issues where the cause is relevant (incidentally, addressing it requires admitting that the differences exist), and there are issues where it’s less relevant.
Incidentally, in the example I cited in the great-grandparent it was the anthropologists who had declared that official policy was to deny all environmental explanations.
In this post JulianMorrison was, at least partially, trying to inform you that there is in fact something like a Bayesian Conspiracy within the Social Sciences—that there are social truths that are understood from within the discipline (or at least, from within parts of the discipline) that can’t be discussed with outsiders, because non-rational people will use the knowledge in ways with a highly negative net utility. He was also trying to test you to see if you could be trusted with initiation into that Bayesian Conspiracy.
How do you and Julian know that you are indeed in the “inner ring” of this conspiracy and/or that its actual purpose is what you think it is? How sure are you that this conspiracy even has any clue what it’s doing and hasn’t started to believe its own lies? Do you have an answer to the questions I asked here?
On a case-by-case basis, you do experiments. You double-check them. You entertain alternate hypotheses. You accept that it’s entirely possible that things aren’t the way you think they are. You ask yourself what the likely social consequences of your actions are and if you’re comfortable with them, and then ask yourself how you know that. In short, you act like a rationalist.
(And you certainly don’t just downvote everyone who proposes a model that you don’t like.)
is more concerned with politics than truth...
And that’s a bad thing? Trying to translate Hard Science directly into real-world action without considering the ethical, social and political consequences would be disastrous. We need something like social science.
is more concerned with politics than truth...
And that’s a bad thing?
If your goal is having an accurate model of the world, yes. If your goal is something else, you’re still better off with an accurate model of the world.
Edit: If you want to do politics, that’s also important; just don’t pretend you’re doing science, even ‘soft science’.
We had this discussion before. You told me that the social activist labels are boo lights. But the fact that something is an applause light or a boo light in a particular community doesn’t mean it is not an accurate label for a phenomenon.
“Democracy” is an applause light in the venues I generally hang out in (and I assume the same for you). That does not mean that democracy is not a real phenomenon. And the fact that some folks in this venue don’t approve of democracy does not mean they think that the phenomenon “democracy”, as defined by relevant experts, does not exist. In fact, a serious claim that democracy is bad or good first requires believing it occurs.
The general rule is that it is both the writer’s responsibility to be clear and the reader’s responsibility to decipher what was written. You know, responsibility is not a pie to be divided (warning: potentially mind-killing link), Postel’s law, the principle of charity, an’ all that.
In general, yes. In particular, I’m trying to give whowhowho constructive criticism, and he does not seem to think it is constructive.
warning: potentially mind-killing link
Lol. That’s almost literally the worst possible example for de-escalating a discussion about shares of responsibility.
Edit: On further reflection, Postel’s law is an engineering maxim not appropriate to social debate, and it should be well established that the principle of charity is polite, but not necessarily truth-enhancing.
It’s the reader’s responsibility to read your words, and read all your words, and not to imagine other words. Recently, someone paraphrased a remark of mine with two “maybe”s I had used deleted and a “necessarily” I hadn’t used inserted. Was that my fault?
As an attorney, my experience is that the distinction between literal words and communicated meaning is very artificial. One canon of statutory construction is the absurdity principle (between two possible meanings, pick the one that isn’t absurd). But that relies on context beyond the words to figure out what is absurd. Eloquent version of this point here.
Now the thing with that logic is that 97% of the world is made up of idiots (probably a little higher than that, actually).
I do agree that it’s their fault if they misquote it, not your own, but let’s say you put an unclear statement in a self-help book. Those books are generally read by the, ah, lower 40th percentile (or thereabouts), or just by really sad people; either way, they’re more emotionally unstable than normal.
Now that we have the perfect conditions for a blowup, let’s say you said something like ‘It’s your responsibility to be happy’ in that book, meaning that you and only you can make yourself happy. Your emotionally unstable reader, however, read it as it was said and took a huge hit to their self-confidence.
Do you see how it isn’t always the reader’s job?
In the great-great-grandparent you make the extremely strong assertion that some facts have such bad implications that reflecting on them causes more harm than good. This raises the question: how can you know which facts have this property without reflecting on them?
Like ethics and practicality.
Also, what do you mean by “ethics”? Do you mean ethics in the LW-technical sense of an ethical injunction, or in the non-technical sense of morality?
That’s barely half an argument. You would need to believe that there are [statistically] significant between-group differences AND that they are [actually] significant AND that they should be relevant to policy or decision making in some way. You didn’t argue the second two points there, and you haven’t elsewhere.
I’m with you on the first two, but if the trait is interesting enough to talk about (intelligence, competence, or whatever), isn’t that enough for consideration in policy making? If it isn’t worth considering in making policy, why are we talking about the trait?
Politics isn’t a value-free reflection of nature. The disvalue of reflecting a fact politically might outweigh the value. For instance, people aren’t the same in their political judgement, but everyone gets one vote.
So if we don’t base our politics on facts, what should we base it on? This isn’t a purely rhetorical question; I can think of several ways to answer it (each of which also has other implications) and am curious what your answer is.
As for your example, that’s because one-man-one-vote is a more workable Schelling point since otherwise you have the problem of who decides which people have better political judgement.
You include a copy of the Cognitive Reflection Test or similar in each ballot and weigh votes by the number of correct answers to the test.
(This idea isn’t original to me, BTW—but I can’t recall anyone expressing it on the public Internet at the moment.)
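For concreteness, here is a minimal sketch of the weighting scheme being proposed. The scoring rule (one point of vote weight per correct answer) and all names are hypothetical illustrations, not part of anyone’s actual proposal:

```python
from collections import defaultdict

def tally(ballots):
    """Pick a winner, weighting each ballot by its test score.

    Each ballot is a (candidate, correct_answers) pair. Under this
    hypothetical rule, a ballot with zero correct answers counts
    for nothing.
    """
    totals = defaultdict(int)
    for candidate, correct_answers in ballots:
        totals[candidate] += correct_answers
    return max(totals, key=totals.get)

# "B" wins despite receiving fewer raw ballots, because its voters
# scored higher on the test:
print(tally([("A", 1), ("A", 1), ("A", 1), ("B", 3), ("B", 2)]))  # -> B
```

Note that this makes the objection below concrete: whatever the test measures becomes, in effect, the franchise.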
This doesn’t quite solve the Schelling point problem. You start getting questions about why that particular test and not some other. You will also get problems related to Goodhart’s law.
Well… People might ask that about (say) university admission tests, and yet in practice very few do so with a straight face. (OTOH, more people consider voting a sacrosanct right than studying.)
ETA: now that I think about that, this might be way more problematic in a country less culturally homogeneous than mine—I’m now reminded of complaints in the US that the SAT is culturally biased.
Keeping the choice of questions secret until the election ought to mitigate that.
Also, in the US the SAT is only one of the factors affecting admissions.
Only partially. Also, what about the people who design the questions?
High-stakes tests, like the SAT, where voters—I mean, test-takers—have vastly more incentive to cheat, seem to do fine.
Come to think of it, the problem is that the people designing the SATs have fewer incentives to bias them than people designing the election tests.
I was arguing against basing policy on (narrowly construed) facts alone.
This is a purely terminological point: A substantial percentage of the folks in this forum think moral propositions are a kind of fact. I think they are wrong, but my usage (moral values are not empirical facts) is an idiosyncratic usage in this venue.
In short, I’m not sure if you are disagreeing with the local consensus, or simply using a different vocabulary. Until you and your interlocutors are using the same vocabulary, continuing disagreement is unlikely to be productive.
In short, I think basically everyone agrees that public policy is the product of the combination of scientific fact (including historical fact and sociological fact) and moral values. But because of disagreements on the meta-ethical and philosophy of science level, there is widespread disagreement on what my applause light sentence means in practice.
Well, I did say “narrowly construed” facts.
Your post is very susceptible to the construction:
Only moral values are relevant to policy decisions. Empirical facts are not relevant to policy decisions.
You could object that this is not a charitable reading. But in the context of this discussion, it is hard to tell how to read you charitably while ensuring that you would still endorse the interpretation.
I don’t see why anyone would read “not on facts alone” as “not on facts at all”.
You didn’t define what you mean by “narrowly construed” facts, but from context it seems like you’re saying “I don’t like these particular facts, therefore I want an excuse to ignore them.”
I will point out, for a third time, that “not on (narrowly construed) facts alone” does not mean “not on facts at all”.
In that case the correct response is to present the relevant additional facts, not attempt to suppress the facts that are too “narrowly construed”.
“if we implement such-and-such policies, people will riot” is a fact of a sort, but not the sort that is discovered in a laboratory.
Then where did you get the evidence to assert it with such high confidence? (This isn’t meant to be a rhetorical question.)
Also, is this really the best example you could come up with? The problem with this example is that even if the fact in question is true, there are still good game theoretic/decision theoretic reasons not to respond to blackmail.
I am glad that the tyrants of the past did not know of them, or you and I would not now enjoy freedom and democracy.
Yes, and I’m also glad Hitler’s megalomania interfered with the effectiveness of the German army.
Are you also glad that Eisenhower did when he sent the National Guard to enforce integration?
[shrugs]. You construed riots in a sweepingly negative way as “blackmail”. The fact that I do not agree does not mean I am construing them in a sweepingly positive way. This is a pattern you have repeated throughout this discussion, and it illustrates how politics mindkills.
If a policy is good, a riot against it is blackmail. If a policy is bad, you shouldn’t be pursuing it, riot or no riot. Thus the hypothetical existence of riots shouldn’t affect which policies one pursues. Frankly, I have a hard time believing “leading to riots” is your true rejection of the policies in question.
That is a dangerous belief for a leader to hold. I’d prefer leaders who don’t have that belief. In fact it should be taken for granted that leaders who do not respond to the expectation that the people will oppose their actions will be killed or otherwise rendered harmless through whichever actions are suitable to the political environment.
If you want social science to be taken seriously, you do your cause a disservice by asserting social science is different in kind from so-called “hard science.”
Edit: In fact, Eugine_Nier’s argument here is that social science is not rigorous enough to be worth considering. You don’t advance true belief by asserting that social science does not need to be rigorous.
And just in case it isn’t clear, the ability to replicate an experiment is not required for a scientific field to be rigorous. (Just look at astrophysics: it isn’t like we can cue up a supernova on command to test our hypotheses.) It is preferable, but not necessary.
No, my argument was that much of modern social science (and especially modern anthropology, which is bad even by social science standards) is more concerned with politics than truth. See here for JulianMorrison practically admitting as much and then trying to argue that this is a good thing. And quite frankly the tone of your comment is also not encouraging in that respect.
I think that’s a highly disingenuous reading of JulianMorrison’s statement. JulianMorrison never stated that it was a good thing, only that it was a necessary thing in the face of political realities. In an evolutionary environment where only the Dark Arts are capable of surviving, would you rather win or die?
Essentially, we all need to remember that speaking the truth has a variable utility cost that depends on environment. If the perceived utility of speaking the truth publicly is negative, then you invoke the Bayesian Conspiracy and don’t speak the truth except in private.
In this post JulianMorrison was, at least partially, trying to inform you that there is in fact something like a Bayesian Conspiracy within the Social Sciences—that there are social truths that are understood from within the discipline (or at least, from within parts of the discipline) that can’t be discussed with outsiders, because non-rational people will use the knowledge in ways with a highly negative net utility. He was also trying to test you to see if you could be trusted with initiation into that Bayesian Conspiracy. (You failed the test, btw—which is something you might realize with pride or chagrin, depending on your political allegiances.)
I don’t think I’d identify the activist subculture with the social sciences, at least in the case JulianMorrison was talking about. If there’s an academic community whose members publish relatively unfiltered research within their fields but don’t usually talk to the public unless they are also activists, and also an activist community whose members are much more interested in spreading the word but aren’t always too interested in spreading up-to-date science (charitably, because they believe some avenues of research to be suffering from bias or otherwise suspect), then we get the same results without having to invoke a conspiracy. This also has the advantage of explaining why it’s possible to read about ostensibly forbidden social truths by, e.g., querying the right Wikipedia page.
Whether this accurately models any particular controversial subject is probably best left as an exercise.
Hold on a moment. I think the labels are accurate descriptions of the phenomena. There’s hostility to this kind of discussion, so sometimes the only winning move is not to play. But if the labels (heteronormativity, privilege, social construction, rape culture) are not describing social phenomena, then we should find accurate labels.
And if experts use the labels right, but [Edit: sympathetic] laypeople do not, then we should chide the laypeople until they use them right. Agreement with my preferred policies does not make you wise, because arguments are not soldiers.
In short, I think I win on the merits, so let’s not get caught up in procedural machinations.
That assumes that we have sufficient status that our chiding the laypeople will win. The problem with social phenomena is that discussions about social phenomena are themselves social phenomena, so your statements have social cost that may be independent of their truth value. If you want to rationally strive towards maximum utility, you need to recognize and deal with the utility costs inherent in discussing facts with agents whose strategies involve manipulating consensus, and who themselves may not care as much about avoiding the Dark Arts as you seem to.
Secondly:
labels (heteronormativity, privilege, social construction, rape culture)
I currently tend to believe that these are somewhat accurate labels—that is, they accurately define semantic boundaries around phenomena that do in fact exist, and that we do in fact have some actual understanding of. But if your audience sees them as fighting words, then they will see your arguments as soldiers. If you want to have a rational discussion about this, you need to be able to identify who else is willing to have a rational discussion about this, and at what level. Remember that on lesswrong, signaling rationality is a status move, so just because someone displays signals that indicate rationality doesn’t mean that they are in fact rational about a particular subject, especially a political one.
Ah. I see all my comments everywhere on the site are getting voted down again. Politics is the mind-killer, indeed.
Ok, serious question, folks:
What would it take to negotiate a truce on lesswrong, such that people could have differing opinions about what is or isn’t appropriate social utility maximization without getting into petty karma wars with each other?
Ah. This got downvoted too. Is there any way for me to stop this death-spiral and flag for empathy? Please?
Mercy? Uncle?
I endorse interpreting net downvotes as information: specifically, the information that more people want fewer contributions like whatever’s being downvoted than want more contributions like it.
I can then either ignore that stated preference and keep contributing what I want to contribute (and accept any resulting downvotes as ongoing confirmation of the above), or I can conform to that stated preference. I typically do the latter but I endorse the former in some cases.
The notion of a “truce” whereby I get to contribute whatever I choose and other people don’t use the voting mechanism to express their judgments of it doesn’t quite make sense to me.
All of that said, I agree with you that there exist various social patterns to which labels have been attached in popular culture, where those labels are shibboleths in certain subcultures and anti-shibboleths (“fighting words,” as you put it) in others. I find that if I want to have a useful discussion about those patterns within those subcultures, I often do best to not use those labels.
Except your interpretation is at least partially wrong—people mass downvote comments based on author, so there is no information about the quality of a particular post (it’s more like (S)HE IS A WITCH!). A better theory is that karma is some sort of noisy average between what you said, ‘internet microaggression,’ and probably some other things—there are no globally enforced usage guidelines for karma. (A toy version of this model is sketched at the end of this comment.)
I personally ignore karma. I generally write two types of posts: technical posts, and posts on which there should be no consensus. For the former, almost no one here is qualified to downvote me. For the latter, if people downvote me, it’s about the social group not correctness.
There are plenty of things to learn on lesswrong, but almost nothing from the karma system.
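A toy version of the “noisy average” model described above, assuming a per-comment score decomposes into comment quality, an author-level term (mass up- or downvoting), and noise. All numbers are invented for illustration:

```python
import random

random.seed(0)

def observed_karma(quality, author_effect, noise_sd=1.0):
    # Toy model: visible score = comment quality + author-level
    # voting + per-comment noise.
    return round(quality + author_effect + random.gauss(0, noise_sd), 1)

qualities = [2, 1, 3, 2]  # the same four comments in both scenarios

neutral = [observed_karma(q, author_effect=0) for q in qualities]
targeted = [observed_karma(q, author_effect=-5) for q in qualities]

print(neutral)   # roughly tracks quality
print(targeted)  # uniformly depressed: the author term swamps quality
```

Under this model a single comment’s score says little about that comment; only contrasts (the same author across topics, or the same topic across authors) separate the terms.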
Oh, I completely agree that the reality is a noisy average as you describe. That said, for someone with the goals ialdabaoth describes themselves as having, I continue to endorse the interpretation strategy I describe. (By contrast, for someone with the goal-structure you describe, ignoring karma is a fine strategy.)
Huh. Are “Do you think it likely that ‘social activism’ and ‘liberalism’ are fighting words in this board’s culture?” fighting words in this board’s culture?
Depends on how they’re used, but yes, there are many contexts where I would probably avoid using those words here and instead state what I mean by them. Why do you ask?
Edit: the question got edited after I answered it into something not-quite-grammatical, so I should perhaps clarify that the words I’m referring to here are ‘social activism’ and ‘liberalism’.
Because I want to discuss and analyze my beliefs openly, but I don’t want to lose social status on this site if I don’t have to.
A deeper observation and question: I appear to be stupid at the moment. Where can I go to learn to be less socially stupid on this site?
One approach is to identify high-status contributors and look for systematic differences between your way of expressing yourself and theirs, then experiment with adopting theirs.
Ya lol works awsome, look at my awsome bla-bla (lol!) ye mighty, and despair.
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.
Alas, you and I are not in the same league as Yvain, TheOtherDave, fubarobfusco, or Jack.
Be that as it may (or mayn’t), that’s a clever way of making the intended message more palatable, including yourself in the deprecation. But you’re right. Aren’t we all pathetic, eh?
Look at your most upvoted contributions (ETA: or better, look at contributions with a positive score in general—see replies to this comment). Look at your most downvoted contributions. Compare and contrast.
Most downvoted, yes, but on the positive side I’d instead suggest looking at your comments one or two sigma east of average and no higher: they’re likely to be more reproducible. If they’re anything like mine, your most highly upvoted posts are probably high risk/high reward type comments—jokes, cultural criticism, pithy Deep Wisdom—and it’ll probably be a lot harder to identify and cultivate what made them successful.
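A minimal sketch of that selection heuristic, assuming your comment scores are available as a plain list (the band boundaries and the example scores are invented):

```python
import statistics

def reproducible_band(scores):
    """Return scores between one and two standard deviations above the mean.

    The idea: comments above +2 sigma are likely high-variance hits
    (jokes, pithy Deep Wisdom); the +1 to +2 sigma band is more likely
    to reflect a repeatable habit worth cultivating.
    """
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [s for s in scores if mean + sd <= s <= mean + 2 * sd]

# The 14 is excluded as a high-variance outlier; the 8 is in the band:
print(reproducible_band([3, -1, 0, 5, 2, 1, 14, 2, 8, 0]))  # -> [8]
```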
A refinement of this is to look at the pattern of votes around the contributions as well, if they are comments. Comparing the absolute ranking of different contributions is tricky, because they frequently reflect the visibility of the thread as much as they do the popularity of the comment. (At one time, my most-upvoted contributions were random observations on the Harry Potter discussion threads, for example.)
Not to mention Rationality Quotes threads...
Rationality quotes might be a helpful way of figuring out how to get upvoted, but it is not particularly helpful in figuring out how to be more competent.
Edit: Oops. Misunderstood the comment.
Actually, I was agreeing with TheOtherDave. (I’ve edited my comment to quote the part of its parent I was elaborating upon; is that clearer now?)
(nods) Then yeah, I’d encourage you to avoid using those words and instead state what you mean by them. Which may also result in downvotes, depending on how people judge your meaning.
And yet you’ve just argued that your beliefs should not be discussed openly with outsiders.
No, I didn’t. I argued that, in a different context, it’s dangerous to discuss your beliefs openly with outsiders. And I wasn’t even trying to defend that behavior; I was offering an explanation for it.
...and you’re using rhetorical tactics. Why do you consider this a fight? Why is it so important that I lose?
I’ll agree to have lost if that will help. Will it help?
I don’t see the difference in context. (This isn’t rhetoric, I honestly don’t see the difference in context.)
Interesting, so do you disapprove of the behavior in question? If so, why do you still identify its practitioners as “your side”?
I wasn’t trying to. I was pointing out the problems with basing a movement on ‘pious lies’.
Issues can be complex, you know. They needn’t be as simple as ‘green’ vs. ‘blue’.
Which is still a gross mischaracterization of what was being discussed, but that mischaracterizing process is itself part of the rhetorical tactic being employed. I’m afraid I can no longer trust this communication channel.
How so? Near as I can tell from an outside view, my description is a decent summary of your and/or Julian’s position. I realize that from the inside it feels different, because the lies feel justified; ‘pious lies’ always feel justified to those who tell them.
You’re the one who just argued (and/or presented Julian’s case) that I was not to be trusted with the truth. If anything, I’m the one who has a right to complain that this communication channel is untrustworthy.
And yet you’re still using it. What are you attempting to accomplish? What do you think I was attempting to accomplish? (I no longer need to know the answers to these questions, because I’ve already downgraded this channel to barely above the noise threshold; I’m expending the energy in the hopes that you ask yourself these questions in a way that doesn’t involve assuming that all our posts are soldiers fighting a battle.)
Same here with respect to the questions I asked here, here, and here. The fact that you were willing to admit to the lies gave me hope that we might have something resembling a reasonable discussion. Unfortunately it seems you’d rather dismiss my questions as ‘rhetoric’ than question the foundations of your beliefs. I realize the former choice is easier, but if you’re serious about wanting to analyze your beliefs you need to do the latter.
For the sake of others watching, the fact that you continue to use phrases like “willing to admit to the lies” should be a telling signal that something other than truth-seeking is happening here.
Something other than truth-seeking is happening here. But the use of that phrase does not demonstrate that—your argument is highly dubious. Since the subject at the core seems to be about prioritizing between epistemic accuracy and political advocacy, it can be an on-topic observation of fact.
If a phrase such as “pursuing goals other than pure truth-seeking” were used rather than “noble lies”, I would agree with you. But he appears to deliberately attempt to re-frame any argument that he doesn’t like in the most reprehensible way possible, rather than attempting to give it any credit whatsoever. He’s performing all sorts of emotional “booing” and straw-manning, rather than presenting the strongest possible interpretation of his opponent’s view and then attacking that. And when someone attempts to point that out to him, he immediately turns around and attempts to accuse them of doing it, rather than him.
It’s possible to have discussions about this without either side resorting to “this is how evil you’re being” tactics, or without resorting to “you’re resorting to ‘this is how evil you’re being’ tactics” tactics, or without resorting to “you’re resorting to ‘you’re resorting to {this is how evil you’re being} tactics’ tactics” tactics. Unfortunately, it’s a classic Prisoner’s Dilemma—whoever defects first tends to win, because humans are wired such that rhetoric beats honest debate.
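For concreteness, the payoff structure being claimed, written out as a standard Prisoner’s Dilemma (the ordinal payoffs are invented for illustration; “rhetoric” means defecting to the Dark Arts first):

```python
# (row player's payoff, column player's payoff); higher is better.
payoffs = {
    ("honest",   "honest"):   (3, 3),  # mutual good-faith debate
    ("honest",   "rhetoric"): (0, 5),  # the first defector "wins"
    ("rhetoric", "honest"):   (5, 0),
    ("rhetoric", "rhetoric"): (1, 1),  # mutual mud-slinging
}

# Rhetoric strictly dominates honesty for each player (5 > 3 and 1 > 0),
# so the single-shot equilibrium is mutual defection, as lamented above.
```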
That is approximately how I would summarize the entire conversation.
Theoretically, although those most capable of being sane when it comes to this kind of topic are also less likely to bother.
Often, yes. It would be a gross understatement to observe that I share your lament.
Specifically, the method of pursuing said goals in question is by making and promoting false statements. This is precisely what the phrase ‘noble lie’ means. This is the kind of thing that would be bad enough even if the authority of “Science” weren’t being invoked by the people making said false statements. Yes, the phrase “noble lie” has negative connotations, there are very good reasons for that.
Incidentally, at the time that I write this comment, none of your most recent comments are net-negative; most are net-positive, including the one I’m responding to. Does knowing that make it easier for you to contribute without worrying too much about your social status here?
No, and here’s my reasoning:
The net variability is the problem, not merely the bulk downvoting. All this sort of situation does is demonstrate that the karma system is untrustworthy. Since the karma system was the easiest way to determine whether what I’m saying is considered worth listening to by the community, I have to find secondary indicators. Unfortunately, most of those require feedback, and explicitly asking for that feedback often results in bulk downvoting.
I’m one of those people who has to be very careful to modulate my tone so that what I’m trying to say is understood by my audience; if all of the available feedback mechanisms are known to have serious problems, I’m not sure how to proceed.
Does that make any sense?
It does make sense, and the karma system is most assuredly untrustworthy in the sense you mean it here. (I would say “noisy.”) Asking for feedback is also noisy, as it happens.
At some point, it becomes worthwhile to work out how to proceed given noisy and unreliable feedback.
For example, one useful principle, if I think the feedback is net-reliable in the aggregate but has high variability, is to damp down sensitivity to individual feedback-items and instead attend to the trend over time once it stabilizes. Conversely, if I think the feedback is unreliable even in the aggregate, it’s best to ignore it altogether.
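To make that concrete, here’s a minimal sketch of what I mean by damping sensitivity to individual feedback-items. It uses an exponential moving average; the function name and the karma numbers are hypothetical, just to illustrate the principle.

```python
def smoothed_feedback(scores, alpha=0.1):
    """Exponential moving average over a stream of feedback scores.

    A small alpha damps the influence of any single item, so the
    output tracks the aggregate trend rather than the latest vote.
    """
    trend = None
    for score in scores:
        trend = score if trend is None else alpha * score + (1 - alpha) * trend
        yield trend

# Hypothetical per-comment karma: one bulk downvote amid mild positives.
votes = [2, 1, 3, -10, 2, 1, 2]
print([round(t, 2) for t in smoothed_feedback(votes)])
# The -10 only nudges the trend (from ~2.0 down to ~0.8);
# the stable positive aggregate still dominates.
```

The design choice is the same one you’d make for any noisy sensor: trade responsiveness for stability by choosing how much weight any single reading gets.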
Yeah, that’s what I try to do in the abstract. In the moment, the less rational parts of my brain tend to bump up the urgency and try to convince me that I don’t have time to ignore the data and wait for the aggregate; and when I try to pause for reflection, those same parts of my brain ratchet up the perceived urgency again and convince me that I don’t have time to examine whether I have the time to examine whether those parts of my brain are lying to me more or less than the data is.
I’m working on a brainhack to mitigate that, but it’s slow going. Once I have something useful I hope to post an article on it.
This is wonderfully put.
Serious question: why even care about karma? Just say what you want.
Because I’ve had enough of my posts voted down below the reply threshold that it destroyed the ability to continue the conversation, and for me having an idea debated back-and-forth is a necessary component of my mental process. Also, karma is used on this site to indicate whether I should be saying what I’m saying, so when everything I said for the past two weeks gets downvoted within 5 minutes of making a statement, even things utterly unrelated to that statement, I feel the need to raise an alarm to ensure that I’m interpreting signals correctly.
If it is, then it is. But from the outside, this sounds like a rationalization for you to choose to do something that you find emotionally harmful. You have no obligation to participate in conversation that you find emotionally harmful.
I intended that to refer only to laypeople who agree with the labels, but are using them wrong. The people who are choosing our side because it seems high status, not because they think it is right. Those folks are dangerous in a lot of ways.
Well, yes. This venue is not safe for these types of discussions—our interlocutor is an important reason why. I do it because I’m trying to dispel the appearance of a silent majority.
I think it is totally understandable to decide that the only winning (socially safe) move is not to engage in the conversation here. It’s not like it will make a huge difference—so choosing yourself first is very appropriate and NOT even a little bit worthy of blame.
In what way?
All I’m doing is criticizing your arguments and providing counter-arguments. Or are you only willing to discuss these things among people who agree with you?
This sounds like an overly convenient excuse to avoid having to confront the implications of said truths. Restrict them to only people who won’t ask awkward questions and tell everyone else pious lies.
You need to establish some truths before worrying about the consequences. Scientific facts need controls, for instance. When have you shown any interest in controlling for the effects of environment?
I never said I knew what caused the racial differences in question. There are certainly policy issues where the cause is relevant (incidentally, addressing it requires admitting that the differences exist), and there are issues where it’s less relevant.
Incidentally, in the example I cited in the great-grandparent, it was the anthropologists who had declared that official policy was to deny all environmental explanations.
How do you and Julian know that you are indeed in the “inner ring” of this conspiracy and/or that its actual purpose is what you think it is? How sure are you that this conspiracy even has any clue what it’s doing and hasn’t started to believe its own lies? Do you have an answer to the questions I asked here?
On a case-by-case basis, you do experiments. You double-check them. You entertain alternate hypotheses. You accept that it’s entirely possible that things aren’t the way you think they are. You ask yourself what the likely social consequences of your actions are and if you’re comfortable with them, and then ask yourself how you know that. In short, you act like a rationalist.
(And you certainly don’t just downvote everyone who proposes a model that you don’t like.)
Can you describe some of the experiments you did?
And that’s a bad thing? Trying to translate Hard Science directly into real-world action without considering the ethical, social and political consequences would be disastrous. We need something like social science.
If your goal is having an accurate model of the world, yes. If your goal is something else, you’re still better off with an accurate model of the world.
Edit: If you want to do politics, that’s also important; just don’t pretend you’re doing science, even “soft science”.
We had this discussion before. You told me that the social activist labels are boo lights. But the fact that something is an applause light or a boo light in a particular community doesn’t mean it is not an accurate label for a phenomenon.
“Democracy” is an applause light in the venues I generally hang out in (and I assume the same for you). That does not mean that democracy is not a real phenomenon. And the fact that some folks in this venue don’t approve of democracy does not mean they think that the phenomenon “democracy”, as defined by relevant experts, does not exist. In fact, a serious claim that democracy is bad or good first requires believing it occurs.
I don’t really have a very productive response.
The general rule is that your responsibility is to be clear—it is not your reader’s responsibility to decipher you.
The general rule is that it is both the writer’s responsibility to be clear and the reader’s responsibility to decipher them. You know, responsibility is not a pie to be divided (warning: potentially mind-killing link), Postel’s law, the principle of charity, an’ all that.
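(For readers unfamiliar with it, Postel’s law is the engineering maxim “be conservative in what you send, liberal in what you accept.” Here is a minimal, purely hypothetical sketch of the idea in code, with made-up function names, just to show what the maxim means in its home territory:)

```python
def parse_flag(raw):
    """Liberal in what we accept: tolerate case and whitespace variants."""
    return raw.strip().lower() in ("yes", "y", "true", "1")

def emit_flag(value):
    """Conservative in what we send: always emit one canonical form."""
    return "yes" if value else "no"

# " YES ", "y", and "TRUE" are all accepted as input,
# but the output is always the single canonical form.
print(emit_flag(parse_flag(" YES ")))  # -> yes
```

(The analogy to conversation: read your interlocutor generously, but write as unambiguously as you can.)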
In general, yes. In particular, I’m trying to give whowhowho constructive criticism, and he does not seem to think it is constructive.
Lol. That’s almost literally the worst example to use for de-escalating a discussion about shares of responsibility.
Edit: On further reflection, Postel’s law is an engineering maxim not appropriate to social debate, and it should be well established that the principle of charity is polite, but not necessarily truth-enhancing.
ETA: What’s unclear about “not on facts alone”?
It’s the reader’s responsibility to read your words, to read all your words, and not to imagine other words. Recently, someone paraphrased a remark of mine, deleting two “maybe”s I had used and inserting a “necessarily” I hadn’t. Was that my fault?
Beware of expecting short inferential distances.
One has to grasp literal meaning before inference even kicks in.
As an attorney, my experience is that the distinction between literal words and communicated meaning is very artificial. One canon of statutory construction is the absurdity principle (between two possible meanings, pick the one that isn’t absurd). But that relies on context beyond the words to figure out what is absurd. Eloquent version of this point here.
If people insist on drawing inferences from what was never intended as a hint...what can you do?
‘On hearing of the death of a Turkish ambassador, Talleyrand is supposed to have said: “I wonder what he meant by that?”’
Now the thing with that logic is that 97% of the world is made up of idiots (probably a little higher than that, actually). I do agree that it’s their fault if they misquote it, not yours, but let’s say you put an unclear statement in a self-help book. Those books are generally read by the, ah, lower 40th percentile (or thereabouts), or just by really sad people; either way, they’re more emotionally unstable than normal. Now that we have the perfect conditions for a blowup, let’s say you wrote something like “It’s your responsibility to be happy” in that book, meaning that you and only you can make yourself happy. Your emotionally unstable reader, however, read it literally and took a huge hit to their self-confidence. Do you see how it isn’t always the reader’s job?
Strangely enough, I never said it was...
For your reference, I have no idea what Lauryn is talking about.
You didn’t answer my question.
If we don’t base policy on (narrowly construed, laboratory-style) facts alone, we use other things in addition, like ethics and practicality.
In the great-great-grandparent you make the extremely strong assertion that some facts have such bad implications that reflecting on them causes more harm than good. This raises the question: how can you know which facts have this property without reflecting on them?
Also, what do you mean by “ethics”? Do you mean ethics in the LW-technical sense of ethical injunctions, or in the non-technical sense of morality?