Langford’s basilisk, but anyone who can write a working FizzBuzz is immune.
The truth about how the Illuminati are secretly controlling society by impurifying our precious bodily fluids.
Learning idea #1 is perfectly safe for anyone. That is, it’s safe for the hearer; it will do you no harm to learn this, whoever you are. That does not, however, mean that it’s safe for the general public to have this idea widely disseminated! Some ne’er-do-well might actually build the damn thing, and then—bad times ahead!
If we try to stop the dissemination of idea #1, nobody can accuse us of “saviorist fuckery”, paternalism, etc.; to such charges we can reply “never mind your safety—that’s your own business; but I don’t quite trust you enough to be sure of my safety, if you learn of this (nor the safety of others)!” (Of course, if it turns out that we are in possession of idea #1 ourselves, the subject might ask how comes it that we are so trustworthy as to be permitted this knowledge, but nobody else is!)
Ok, what about #2? This one’s totally unsafe. Anyone who learns it dies. There’s no question of us keeping this idea for ourselves; we’re as ignorant as anyone else (or we’d be dead). If we likewise keep others from learning this, it can only be purely altruistic. (On the other hand, what if we’re wrong about the danger? Who appointed us guardians against this threat, anyway? What gives us the right to deny people the chance to be exposed to the basilisk, if they choose it, and have been apprised of the [alleged] danger?)
#3 is pretty close to #2, except that the people we’re allegedly protecting now have reason to be skeptical of our motives. (Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)
As for #4, it is not even a case of “saviorist fuckery”, but ordinary self-serving treachery. If we’re Illuminati members ourselves, then clearly we’ve got an interest in preventing the truth about our evil schemes from coming to light. If we’re not members, then we’re just cowardly collaborators. (What could we claim—that we, alone, have the good sense to meekly accept the domination of the Illuminati, and that we keep the secret because we fear that less enlightened folks might object so strenuously that they’d riot, revolt, etc.—and that we do this out of selfless altruism? The noble lie, in other words, except that we’re not the architects of the lie, nor have we cause to think that it’s particularly noble—we simply hold stability so dear that we’ll accept slavery to buy it, and ensure that our oblivious fellows remain unwitting slaves?)
So which of these examples is the closest to the sort of thing the OP talks about, in your view?
Hard to answer that question given how much work the clause ‘for some definition of “safely”’ is doing in that sentence.
I think the amount of work that clause does is part of what makes the question worth answering...or at least makes the question worth asking.
Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…
I’m not a fan of inserting this type of phrasing into an argument. I think it’d be better to argue that the claim is either true or not true. To me, this type of claim feels like an applause light. Of course, it’s also possibly literally accurate...maybe most claims of the type we’re talking about are erroneous and clung to because of the makes-us-feel-superior issue, but I don’t think that literal accuracy makes the argument more useful or less of an applause light.
In other words, I don’t have access to an argument that rules out either of these:
Cases that just make Group A feel superior because Group A erroneously thinks they are the only ones who can know it safely.
Cases that make Group A feel superior because Group A accurately thinks they are the only ones who can know it safely.
In either case Group A comes across badly, but in case 2, Group A is right.
If we cannot gather any more information or make any more arguments, it seems likely that case #1 is going to usually be the reality we’re looking at. However, we can gather more information and make more arguments. Since that is so, I don’t think it’s useful to assume bad motives or errors on the part of Group A.
So which of these examples is the closest to the sort of thing the OP talks about, in your view?
I don’t really know. The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I’ve never heard of a good argument for that.)
eta: I’m not completely satisfied with this comment at this time, as I don’t think it completely gets across the point I’m trying to make. That being said, I assign < 50% chance that I’ll finish rewriting it in some manner, so I’m going to leave it as is and hope that I’m being overly negative in my assessment of it, or at least that someone will be able to extract some meaning from it.
Less so, but it just leads to the question of “why do you think it’s suspicious?”. If at all possible I’d prefer just engaging with whether the root claim is true or false.
That’s fair. I initially looked at (the root claim) as a very different move, which could use critique on different grounds.
‘Yet another group of people thinks they are immune to common bias. At 11, we will return to see if they, shockingly, walked right into it. When are people (who clearly aren’t immune) going to stop doing this?’
I’m not a fan of inserting this type of phrasing into an argument. I think it’d be better to argue that the claim is either true or not true. To me, this type of claim feels like an applause light.
Er… I think there’s been some confusion. I was presenting a hypothetical scenario, with hypothetical examples, and suggesting that some unspecified (but also hypothetical) people would likely react to a hypothetical claim in a certain way. All of this was for the purpose of illustrating and explaining the examples, nothing more. No mapping to any real examples was intended.
The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I’ve never heard of a good argument for that.)
My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with. (For instance, the question of whether Group A is right or wrong that they’re the only ones who can know a given idea safely, is pretty obviously ridiculous in my scenario #4, either quite confused or extremely suspect in my case #1, etc. At any rate, scenario #1 and scenario #2—just to take one obviously contrasting pair—are clearly so different that aggregating them and discussing them as though they’re one thing, is absurd!)
So it’s hard to know how to take your question, in that light. Are you asking whether I think that things like Langford’s basilisk exist (i.e., my scenario #2), or can exist? (Almost certainly not, and probably not but who knows what’s possible, respectively…) Are you asking whether I think that my scenario #3 exists, or can exist? Even less likely…
(Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)
It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.
My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with.
Yeah, I think I understood that to be what you’re saying; I’m saying I don’t think your point is accurate. I do not think you have to figure out which of your scenarios we’re dealing with. The scenario type is orthogonal to the question I’m asking.
I’m asking if you think it’s possible for these sorts of ideas to exist in the real world:
“these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”
I’m confused about how what you’ve said has a bearing on the answerability of my root question.
Do you think that such things exist?
I...don’t know.
My prior is that they can exist. It doesn’t break any laws of physics. I don’t think it breaks any laws of logic. I think there are things that some people are better able to understand than others. It’s not insane to think that some people are less prone to manipulation than others. Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.
As for whether they do exist: there are things that have happened on LW, like Roko’s basilisk, that raise my prior that there are things that some people can hold in their heads safely and others can’t. Of course, that could be down to quirks of individual minds instead of general features of some group. I’d be interested in someone exploring that idea further. When do we go from saying “that’s just a quirk” to “that’s a general feature”? I dunno.
It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.
That was indeed not my intention.
Yeah, I think I understood that to be what you’re saying; I’m saying I don’t think your point is accurate. I do not think you have to figure out which of your scenarios we’re dealing with. The scenario type is orthogonal to the question I’m asking.
I don’t see how that can be. Surely, if you ask me whether some category of thing exists, it is not an orthogonal question, to break that category down into subcategories, and make the same inquiry of each subcategory individually? Indeed, it may be that the original question was intended to refer only to some of the listed subcategories—which we cannot get clear on, until we perform the decomposition!
I’m asking if you think it’s possible for these sorts of ideas to exist in the real world:
“these ideas are harmful to hear for people who aren’t me/us (because we’re enlightened/rational/hyper-analytic/educated/etc. and they’re not)”
I’m confused about how what you’ve said has a bearing on the answerability of my root question.
The bearing is simple. Do you think my enumeration of scenarios exhausts the category you describe? If so, then we can investigate, individually, the existence or nonexistence of each scenario. Do you think that there are other sorts of scenarios that I did not list, but that fall into your described category? If so, then I invite you to comment on what those might be.
Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.
True enough.
I agree that what you describe breaks no (known) laws of physics or logic. But as I understood it, we were discussing existence, not possibility per se. In that regard, I think that getting down to specifics (at least to the extent of examining the scenarios I listed, or others like them) is really the only fruitful way of resolving this question one way or the other.
I think I see a way towards mutual intelligibility on this, but unfortunately I don’t think I have the bandwidth to get to that point. I will just point out this:
But as I understood it, we were discussing existence, not possibility per se.
Hard to answer that question given how much work the clause ‘for some definition of “safely”’ is doing in that sentence.
EDIT: This sort of thing always comes down to examples and reference classes, doesn’t it? So let’s consider some hypothetical examples:
Instructions for building a megaton thermonuclear bomb in your basement out of parts you can get from a mail-order catalog.
Langford’s basilisk.
Langford’s basilisk, but anyone who can write a working FizzBuzz is immune.
The truth about how the Illuminati are secretly controlling society by impurifying our precious bodily fluids.
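For concreteness, “a working FizzBuzz” in example #3 just means the standard programming warm-up: print the numbers 1 through 100, substituting “Fizz” for multiples of 3, “Buzz” for multiples of 5, and “FizzBuzz” for multiples of both. A minimal, purely illustrative sketch in Python:

    # Standard FizzBuzz: the "immunity test" named in example #3.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)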
Learning idea #1 is perfectly safe for anyone. That is, it’s safe for the hearer; it will do you no harm to learn this, whoever you are. That does not, however, mean that it’s safe for the general public to have this idea widely disseminated! Some ne’er-do-well might actually build the damn thing, and then—bad times ahead!
If we try to stop the dissemination of idea #1, nobody can accuse us of “saviorist fuckery”, paternalism, etc.; to such charges we can reply “never mind your safety—that’s your own business; but I don’t quite trust you enough to be sure of my safety, if you learn of this (nor the safety of others)!” (Of course, if it turns out that we are in possession of idea #1 ourselves, the subject might ask how comes it that we are so trustworthy as to be permitted this knowledge, but nobody else is!)
Ok, what about #2? This one’s totally unsafe. Anyone who learns it dies. There’s no question of us keeping this idea for ourselves; we’re as ignorant as anyone else (or we’d be dead). If we likewise keep others from learning this, it can only be purely altruistic. (On the other hand, what if we’re wrong about the danger? Who appointed us guardians against this threat, anyway? What gives us the right to deny people the chance to be exposed to the basilisk, if they choose it, and have been apprised of the [alleged] danger?)
#3 is pretty close to #2, except that the people we’re allegedly protecting now have reason to be skeptical of our motives. (Awfully convenient, isn’t it? This trait that already gave us cause to feel superior to others, happens also to make us the only people who can learn the terrible secret thing without going mad! Yeah, right…)
As for #4, it is not even a case of “saviorist fuckery”, but ordinary self-serving treachery. If we’re Illuminati members ourselves, then clearly we’ve got an interest in preventing the truth about our evil schemes from coming to light. If we’re not members, then we’re just cowardly collaborators. (What could we claim—that we, alone, have the good sense to meekly accept the domination of the Illuminati, and that we keep the secret because we fear that less enlightened folks might object so strenuously that they’d riot, revolt, etc.—and that we do this out of selfless altruism? The noble lie, in other words, except that we’re not the architects of the lie, nor have we cause to think that it’s particularly noble—we simply hold stability so dear that we’ll accept slavery to buy it, and ensure that our oblivious fellows remain unwitting slaves?)
So which of these examples is the closest to the sort of thing the OP talks about, in your view?
I think the amount of work that clause does is part of what makes the question worth answering...or at least makes the question worth asking.
I’m not a fan of inserting this type of phrasing into an argument. I think it’d be better to argue that the claim is either true or not true. To me, this type of claim feels like an applause light. Of course, it’s also possibly literally accurate...maybe most claims of the type we’re talking about are erroneous and clung to because of the makes-us-feel-superior issue, but I don’t think that literal accuracy makes the argument more useful or less of an applause light.
In other words, I don’t have access to an argument that rules out either of these:
Cases that just make Group A feel superior because Group A erroneously thinks they are the only ones who can know it safely.
Cases that make Group A feel superior because Group A accurately thinks they are the only ones who can know it safely.
In either case Group A comes across badly, but in case 2, Group A is right.
If we cannot gather any more information or make any more arguments, it seems likely that case #1 is going to usually be the reality we’re looking at. However, we can gather more information and make more arguments. Since that is so, I don’t think it’s useful to assume bad motives or errors on the part of Group A.
I don’t really know. The reason for my root question was to suss out whether you had more information and arguments or were just going by the heuristics that make you default to my case #1. Maybe you have a good argument that case #2 cannot exist. (I’ve never heard of a good argument for that.)
eta: I’m not completely satisfied with this comment at this time, as I don’t think it completely gets across the point I’m trying to make. That being said, I assign < 50% chance that I’ll finish rewriting it in some manner, so I’m going to leave it as is and hope that I’m being overly negative in my assessment of it, or at least that someone will be able to extract some meaning from it.
Do you have the same reaction to:
“This claim is suspicious.”
Less so, but it just leads to the question of “why do you think it’s suspicious?”. If at all possible I’d prefer just engaging with whether the root claim is true or false.
That’s fair. I initially looked at (the root claim) as a very different move, which could use critique on different grounds.
‘Yet another group of people thinks they are immune to common bias. At 11, we will return to see if they, shockingly, walked right into it. When are people (who clearly aren’t immune) going to stop doing this?’
Er… I think there’s been some confusion. I was presenting a hypothetical scenario, with hypothetical examples, and suggesting that some unspecified (but also hypothetical) people would likely react to a hypothetical claim in a certain way. All of this was for the purpose of illustrating and explaining the examples, nothing more. No mapping to any real examples was intended.
My point is that before we can even get to the stage where we’re talking about which of your cases apply, we need to figure out what sort of scenario (from among my four cases, or perhaps others I didn’t list?) we’re dealing with. (For instance, the question of whether Group A is right or wrong that they’re the only ones who can know a given idea safely, is pretty obviously ridiculous in my scenario #4, either quite confused or extremely suspect in my case #1, etc. At any rate, scenario #1 and scenario #2—just to take one obviously contrasting pair—are clearly so different that aggregating them and discussing them as though they’re one thing, is absurd!)
So it’s hard to know how to take your question, in that light. Are you asking whether I think that things like Langford’s basilisk exist (i.e., my scenario #2), or can exist? (Almost certainly not, and probably not but who knows what’s possible, respectively…) Are you asking whether I think that my scenario #3 exists, or can exist? Even less likely…
Do you think that such things exist?
I was referring to this part of your text:
It seemed to me like your parentheticals were you stepping out of the hypothetical and making commentary about the standpoint in your hypotheticals. I apologize if I interpreted that wrong.
Yeah, I think I understood that to be what you’re saying; I’m saying I don’t think your point is accurate. I do not think you have to figure out which of your scenarios we’re dealing with. The scenario type is orthogonal to the question I’m asking.
I’m asking if you think it’s possible for these sorts of ideas to exist in the real world:
I’m confused about how what you’ve said has a bearing on the answerability of my root question.
I...don’t know.
My prior is that they can exist. It doesn’t break any laws of physics. I don’t think it breaks any laws of logic. I think there are things that some people are better able to understand than others. It’s not insane to think that some people are less prone to manipulation than others. Just because believing something makes someone feel superior does not logically mean that the thing they believe is wrong.
As for whether they do exist: there are things that have happened on LW, like Roko’s basilisk, that raise my prior that there are things that some people can hold in their heads safely and others can’t. Of course, that could be down to quirks of individual minds instead of general features of some group. I’d be interested in someone exploring that idea further. When do we go from saying “that’s just a quirk” to “that’s a general feature”? I dunno.
That was indeed not my intention.
I don’t see how that can be. Surely, if you ask me whether some category of thing exists, it is not an orthogonal question, to break that category down into subcategories, and make the same inquiry of each subcategory individually? Indeed, it may be that the original question was intended to refer only to some of the listed subcategories—which we cannot get clear on, until we perform the decomposition!
The bearing is simple. Do you think my enumeration of scenarios exhausts the category you describe? If so, then we can investigate, individually, the existence or nonexistence of each scenario. Do you think that there are other sorts of scenarios that I did not list, but that fall into your described category? If so, then I invite you to comment on what those might be.
True enough.
I agree that what you describe breaks no (known) laws of physics or logic. But as I understood it, we were discussing existence, not possibility per se. In that regard, I think that getting down to specifics (at least to the extent of examining the scenarios I listed, or others like them) is really the only fruitful way of resolving this question one way or the other.
I think I see a way towards mutual intelligibility on this, but unfortunately I don’t think I have the bandwidth to get to that point. I will just point out this:
Hmm, I was more interested in the possibility.