Interesting exercise: go through your list of “10 ways to treat a person as a thing” and see how many of them the ‘LW consensus’ satisfies.
1) Instrumentality. The objectifier treats the object as a tool of his or her purposes.
Well, we’re mostly consequentialists.
2) Denial of autonomy. The objectifier treats the object as lacking in autonomy and self-determination.
Are you claiming to have free will or something?
3) Inertness. The objectifier treats the object as lacking in agency, and perhaps also in activity.
See 2.
4) Fungibility. The objectifier treats the object as interchangeable (a) with other objects of the same type and/or (b) with objects of other types.
Shut up and multiply!
5) Violability. The objectifier treats the object as lacking in boundary integrity, as something that it is permissible to break up, smash, break into.
6) Ownership. The objectifier treats the object as something that is owned by another, can be bought or sold, etc.
Ok, we don’t do these two.
7) Denial of subjectivity. The objectifier treats the object as something whose experience and feelings (if any) need not be taken into account.
Fortunately this isn’t that common, but there is an occasional tendency among some prominent commenters to dismiss personal experience as mere anecdote.
8) Reduction to body: treatment of a person as identified with their body, or body parts.
What, are you claiming you have a soul or something?
9) Reduction to appearance: treatment of a person primarily in terms of how they look.
Ok, we generally avoid this.
10) Silencing: the treatment of a person as if they lack the capacity to speak.
There’s a tendency to consider some people so hopelessly biased that one should disregard anything they say.
Taking Bayesianism and consequentialism seriously tends to reduce humans to the status of tools and victory points.
Really, I think the list overcomplicates matters.
Status is a valuable commodity, so behaving in a way that lowers someone else’s status is acting against their interests; non-person objects generally have lower status than people, so treating people as though they were non-person objects is likewise acting against their interests.
Yeah, I think this is pretty accurate.
Good point. I’d rather have people treat me like the Mona Lisa than, say, a stereotypical mother-in-law.
Regarding free will, the metaphysics of choice are not actually what is at issue when the list mentions “autonomy”, “self-determination”, “agency”, and “activity”. (I can’t tell if you knew this, and were making a joke, or not.)
However, there doesn’t appear to be a clear ‘Schelling line’ between the metaphysics of choice and what you do mean by those terms. Thus people and movements that start out arguing against free will tend to end up arguing against “autonomy”, “self-determination”, and “agency” in the sense you mean.
If we go with the assumption that humans are strictly deterministic machines, “autonomy” could be thought of as the degree to which it’s easier to predict a human’s future actions by looking at their internal state, rather than by looking at the orders they receive.
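A minimal sketch of that framing in Python, purely as illustration: the two predictors, the toy history, and names like current_goal are hypothetical stand-ins, not anything proposed in the thread.

```python
# Toy sketch: score "autonomy" as the gap in predictive accuracy between a
# model that only sees the agent's internal state and one that only sees the
# orders the agent receives. Everything here is illustrative.

def accuracy(predict, observations):
    """Fraction of observed actions the predictor gets right."""
    hits = sum(1 for features, action in observations if predict(features) == action)
    return hits / len(observations)

def autonomy(history):
    """
    history: list of (internal_state, orders_received, actual_action).
    Positive result: internal state predicts the agent's actions better
    than the orders it was given do.
    """
    from_state  = [(state, action)  for state, orders, action in history]
    from_orders = [(orders, action) for state, orders, action in history]

    # Hypothetical predictors; in practice these would be fitted models.
    predict_from_state  = lambda state:  state.get("current_goal")
    predict_from_orders = lambda orders: orders[-1] if orders else None

    return accuracy(predict_from_state, from_state) - accuracy(predict_from_orders, from_orders)

# An agent that mostly follows its own goal rather than its orders:
history = [
    ({"current_goal": "write"}, ["rest"],  "write"),
    ({"current_goal": "write"}, ["rest"],  "write"),
    ({"current_goal": "rest"},  ["write"], "rest"),
    ({"current_goal": "rest"},  ["rest"],  "rest"),
]
print(autonomy(history))  # 0.75: internal state is the better predictor here
```

On this toy measure, an agent that reliably does whatever it is told scores near zero or below, which seems to match the intuition in the parent comment.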
Is it at all useful to think of the issue in terms of “treating people as if they had free will/autonomy/etc., as a reasonable way of dealing with the fact that we can’t model each other to a consistently acceptable degree of accuracy”?
5) At least, I would consider an unwillingness to be uploaded silly irrationality, and would upload people anyway rather than let something bad happen to them, if that were the only other option.
(On that note but totally unrelated to gay shit like “objectification”: It’s amazing how difficult it is to talk to someone sane, reasonable, intelligent, well-intentioned, honest, without obvious incentives to lie, &c. who reports an experience that, if it actually happened, could only be explained by psi. There are anecdotes where pseudo-explanations like “memory bias” just don’t cut it—in order for you to confidently deny psi you have to confidently accuse them of lying, and in order to confidently accuse them of lying you have to have a significantly better model of human psychology than I do. I think not realizing that such people are in fact numerous is what kept me from even considering psi for Aumannesque reasons—like most LessWrong types I’d implicitly assumed all reports of psi were either fuzzy in their details such that cognitive biases were a defensible explanation, or were provided by people who were less than credible. Once you eliminate those two categories the skeptic is left with a lot of uncomfortable evidence just waiting to be examined. Of course the evidence will never be very communicable to a wide audience, per the law of conservation of trolling.)
For my own part, I have low confidence in my ability to identify individuals as sane, reasonable, intelligent, well-intentioned, honest, without obvious incentives to lie, etc. I’d be interested in how you go about reliably distinguishing such people from humans in general; I would find that a useful skill to learn.
Why do you have low confidence in your abilities? It seems to me that there are many cases in which it should be obvious to you whether or not a person has one of those qualities. E.g., I can be reasonably certain that my step-mother is sane, reasonable, intelligent, well-intentioned, and without obvious incentives to lie—so if she reported psi phenomena, I would have to accuse her of lying for some completely non-transparent reason. (My step-mother doesn’t seem like the trolling type.)
I don’t recall any false positives in my experience, though I seem to vaguely recall false negatives. FWIW, all the girls I’ve ever been close friends with have been Slytherin, so I might have an unusual amount of experience with natural liars (though well-intentioned ones). Er, also, I scored perfect or near-perfect on some emotion-reading facial-expression quiz thingy at SingInst, and I’ve been weirdly sensitive to people’s microexpressions since childhood. I don’t know if I learned any of the relevant skills, nor am I certain I possess them, but for the cases I have in mind I suspect I do, and that most other intelligent non-autistic-spectrum humans do also, especially the schizotypal ones.
For convenience, call T a threshold such that, if someone clears T, I can reliably trust that their reports of a phenomenon I otherwise consider unlikely ought to be either believed or classed as a lie. That is, when you describe someone as “sane, reasonable, intelligent, well-intentioned, honest, without obvious incentives to lie, etc.”, we understand that to mean that the person clears T.
I have low confidence in my ability to recognize people who clear T because of the numerous incidents in my life where, for example, two people who appear to me to clear T give me mutually exclusive accounts of the same experience, or, more generally, where people who appear to me to clear T give me accounts that turn out to be false, but where I discern no reason to believe they’re lying.
The conclusion I reach is that ordinary people say, and often genuinely believe, all kinds of shit, and the fact that someone reports an occurrence isn’t especially strong evidence of it having occurred (see the sketch after this comment).
If that’s not actually true of ordinary people, and I’ve simply been unable to distinguish ordinary people from the people of whom that’s true, it would be awfully useful to learn to tell the difference.
Edit: I should add that I also have plenty of evidence that I don’t clear T, and I might also be generalizing from one example.
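The “reports aren’t strong evidence” point can be made concrete with a toy Bayes calculation. This is only a sketch, and every number in it is an assumption made up for illustration, not data from the thread.

```python
# Toy Bayes calculation: how much should one honest-seeming report move you?
# All numbers below are illustrative assumptions.

def posterior(prior, p_report_given_true, p_report_given_false):
    """P(event | report) via Bayes' rule."""
    joint_true  = prior * p_report_given_true
    joint_false = (1 - prior) * p_report_given_false
    return joint_true / (joint_true + joint_false)

prior = 1e-6                 # assumed prior that the reported event really happened
p_report_given_true  = 0.9   # a T-clearing person reports it, given it happened
p_report_given_false = 0.01  # rate of sincere but mistaken reports (dreams,
                             # hallucinations, memory errors) even among T-clearers

print(posterior(prior, p_report_given_true, p_report_given_false))
# ~9e-5: roughly a 90x update, yet the event is still overwhelmingly unlikely,
# so "neither lying nor right" remains a consistent reading of the report.
```

The sincere-but-mistaken channel is what allows one to take the reporter’s honesty at face value without accepting the report itself.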
http://www.psy-journal.com/article/S0165-1781%2800%2900227-4/abstract
These surveys were conducted by telephone and explored mental disorders and hallucinations (visual, auditory, olfactory, haptic and gustatory hallucinations, out-of-body experiences, hypnagogic and hypnopompic hallucinations). Overall, 38.7% of the sample reported hallucinatory experiences (19.6% less than once a month; 6.4% monthly; 2.7% once a week; and 2.4% more than once a week). These hallucinations occurred (1) at sleep onset (hypnagogic hallucinations, 24.8%) and/or upon awakening (hypnopompic hallucinations, 6.6%), without relationship to a specific pathology in more than half of the cases; frightening hallucinations were more often the expression of sleep or mental disorders such as narcolepsy, OSAS or anxiety disorders. (2) During the daytime, reported by 27% of the sample: visual (prevalence of 3.2%) and auditory (0.6%) hallucinations were strongly related to a psychotic pathology (respective OR of 6.6 and 5.1, with a conservative estimate of the lifetime prevalence of psychotic disorders in this sample of 0.5%), and to anxiety (respective OR of 5.0 and 9.1).
Reminds me of an experience I had as a kid where I woke up in the middle of the night, unable to move, with a ghost asking me for help. I ran to my parents’ room, and I knew what I was about to say would make me look stupid or confused, but I also knew I was right—I saw and heard that ghost. So I made the story as convincing as possible; I left out any little details that might have drawn suspicion to my experience.
Why not? The first obvious way that comes to mind: take someone the audience trusts to be honest and to judge people correctly, and have them go around talking to people who’ve had such experiences and report back their findings.
That’s a multi-step plan: at least one of those steps would go wrong. By hypothesis we’re talking about transhuman intelligence(s) here (no other explanation for psi makes sense given the data we have). They wouldn’t let you ruin their fun like that, per the law of conservation of trolling. (ETA: Or at least, it wouldn’t work out like you’d expect it to.)
Can you give an example or two of such anecdotes?