An instrumental question: how would you exploit this to your advantage, were you dark-arts inclined? For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters’ choice to you? Given that you are probably not better at it than the professionals in each candidate’s team, can you find examples of such tactics?
Claim to agree with them on issue X, then once they’ve committed to supporting you, change your position on issue X.
Come to think of it, politicians already do this.
Interestingly, the other major party never seems to fail to notice. Right now there are endless videos on YouTube of Romney’s flip-flopping, and Republicans reacted similarly to Kerry’s waffling in 2004. But for some reason, supporters of the candidate in question either don’t notice or don’t care.
Isn't it widely known (in American politics, at least) that (1) a politician's stance on any given topic is highly mutable, and (2) a politician's stance can perfectly reasonably disagree with that of some of his supporters, since the politician one supports is at best a best-effort compromise rather than (in most cases) a perfect representation of one's beliefs? Shouldn't that knowledge eliminate, or at least alleviate, the effect?
I don't see how either option changes the point. If politicians claim to agree with you on X until you agree to vote for them, and then revert to their personal preference once you've voted, then even knowing they're mutable or a best-effort compromise, you've still agreed with a politician and voted for them on the basis of X, a position they no longer hold.
That they are known to have mutable stances or be prone to hidden agendas only makes this tactic more visible, but also more popular, and by selection effects makes the more dangerous instances of this even more subtle and, well, dangerous.
I would argue that the chief difference between picking a politician to support and choosing answers based on one's personal views of morality is that the former is self-evidently mutable. If survey-takers were informed beforehand that the survey-giver might change their responses, it is highly doubtful the study in question would have produced these results.
While meeting with voters in local community halls, candidates sometimes go around distributing goodwill tokens or promises while thanking people for supporting them, whether the person actually seems to support them or not.
It's not a very strong version, and it's tinged with some guilt-tripping, but it matches the pattern under some circumstances and might very well trigger choice blindness in some cases.
Dark tactic: Have we verified that it doesn't work to present people with a paper stating what their opinion is even if they did NOT fill anything out? This tactic is based on the possibility that it might:
An unethical political candidate could have campaigners get a bunch of random people together and hand them a falsified survey with their name on it, making it look like they filled it out. The responses support the candidate.
The unethical campaigner might then say: “A year ago, (too long for most people to remember the answers they gave on tests) you filled out a survey with our independent research company, saying you support X, Y and Z.” If authoritative enough, they might believe this.
“These are the three key parts of my campaign! Can you explain why you support these?”
(victim explains)
“Great responses! Do you mind if we use these?”
(victim may feel compelled to say yes or seem ungrateful for the compliment)
"I think your family and friends should hear what good support you have for your position on this important issue, don't you?"
(now new victims will be dragged in)
The responses that were given are used to make it look like there’s a consensus.
For me at least, one year is also too long for me to reliably hold the same opinion, so if you did that to me, I think I’d likely say something like “Yeah, I did support X, Y and Z back then, but now I’ve changed my mind.” (I’m not one to cache opinions about most political issues—I usually recompute them on the fly each time I need them.)
Someone should see if this works.
Of course, you need to filter for people who fill out surveys.
Idea:
Implement feedback surveys for LessWrong meta stuff, and slip in a test for this tactic a few surveys in.
Having a website as a medium should make it even harder for people to speak up or realize there’s something going on, and I figure LWers are probably the biggest challenge. If LWers fall into a trap like this, that’d be strong evidence that you could take over a country with such methods.
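For concreteness, here is a minimal sketch of what such a swapped-feedback survey could look like. Everything here is invented for illustration (the question texts, the FLIPPED table, the run_survey function); it's a sketch of the choice-blindness swap, not anything that actually runs on the site.

```python
# Sketch of a choice-blindness test hidden in a feedback survey.
# All names and question texts below are hypothetical.

QUESTIONS = {
    "q1": "The weekly open threads are useful.",
    "q2": "Meetup announcements should be promoted to the front page.",
}

# Each statement paired with its quietly negated twin.
FLIPPED = {
    "The weekly open threads are useful.":
        "The weekly open threads are not useful.",
    "Meetup announcements should be promoted to the front page.":
        "Meetup announcements should not be promoted to the front page.",
}

def collect_response(statement: str) -> bool:
    """Ask whether the respondent agrees with the statement (console stub)."""
    answer = input(f"Agree or disagree? {statement} [a/d] ")
    return answer.strip().lower().startswith("a")

def run_survey() -> None:
    responses = {key: (text, collect_response(text))
                 for key, text in QUESTIONS.items()}

    # A few questions later: hand each respondent back a statement that is
    # the opposite of the position they actually took, present it as their
    # own, and ask them to justify it.
    for text, agreed in responses.values():
        shown = FLIPPED[text] if agreed else text
        print(f"Earlier you said you agree that '{shown}'. Could you say why?")

if __name__ == "__main__":
    run_survey()
```

The interesting measurement would be how many respondents notice the swap versus how many confabulate a justification.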
That would be very weak evidence that you could take over a country with such methods. It would be strong evidence that you could take over a website with such methods.
Break into someone’s blog and alter statements that reflect their views.
Dark Tactic:
This one makes me sick to my stomach.
Imagine some horrible person wants to start a cult. So they get a bunch of people together and survey them asking things like:
“I don’t think that cults are a good thing.” “I’m not completely sure that (horrible person) would be a good cult leader.”
and switches them with:
“I think that cults are a good thing.” “I’m completely sure that (horrible person) would be a good cult leader.”
And the horrible person shows the whole room the results of the second set of questions, showing that there’s a consensus that cults are a good thing and most people are completely sure that (horrible person) would be a good cult leader.
Then the horrible person asks individuals to support their conclusions about why cults are a good thing and why they would be a good leader.
Then the horrible person starts asking for donations and commitments, etc.
Who do we tell about these things? They have organizations for reporting security vulnerabilities for computer systems so the professionals get them… where do you report security vulnerabilities for the human mind?
If you start a cult you don't tell people that you're starting a cult. You tell them: Look, there's this nice meetup. All the people at that meetup are cool. The people in that group think differently from the rest of the world. They are better. Then there are those retreats where people spend a lot of time together and become even better and more different from the average person on the street.
Most people in the LessWrong community don’t see it as a cult, and the same is true for most organisations that are seen as cults.
That’s not too different from the description of a university though.
Do you? Really? That works? When creating an actual literal cult? This is counter-intuitive.
The trick: you need to spin it as something they’d like to do anyway… you can’t just present it as a way to be cool and different, you need to tie it into an existing motivation. Making money is an easy one, because then you can come in with an MLM structure, and get your cultists to go recruiting for you. You don’t even need to do much in the way of developing cultic materials; there’s plenty of stuff designed to indoctrinate people in anti-rational pro-cult philosophies like “the law of attraction” that are written in a way so as to appear as guides for salespeople, so your prospective cultists will pay for and perform their own indoctrination voluntarily.
I was in such a cult myself; it’s tremendously effective.
If you want to reach a person who feels lonely, having a community of like-minded people who accept them can be enough. You don't necessarily need stuff like money.
Agreed. Emotional motivations make just as good a target as intellectual ones. If someone already feels lonely and isolated, then they have a generally exploitable motivation, making them a prime candidate for any sort of cult recruitment. That kind of isolation is just what cults look for in a recruit, and most try to create it intentionally, using whatever they can to cut their cultists off from any anti-cult influences in their lives.
Agree, except I’d strengthen this to “a much better”.
It works. Especially if you can get people away from their other social contacts. Mix in insufficient sleep and a low protein diet, and it works really well. (Second-hand information, but there’s pretty good consensus on how cults work.)
How do you think cults work?
I’d question “really well”. Cult retention rates tend to be really low—about 2% for Sun Myung Moon’s Unification Church (“Moonies”) over three to five years, for example, or somewhere in the neighborhood of 10% for Scientology. The cult methodology seems to work well in the short term and on vulnerable people, but it seriously lacks staying power: one reason why many cults focus so heavily on recruiting, as they need to recruit massively just to keep up their numbers.
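A rough back-of-the-envelope check on that last claim, under my own simplifying assumption that "2% retained after four years" corresponds to a constant annual retention rate:

```python
# Toy steady-state model: membership_next_year = membership * r + recruits.
# The 2% / four-year figure is from the comment above; the rest is assumption.

def annual_retention(four_year_retention: float) -> float:
    """Constant yearly retention rate implied by a four-year retention figure."""
    return four_year_retention ** (1 / 4)

def recruits_needed(target_membership: float, yearly_retention: float) -> float:
    """Solve m = m * r + recruits for the recruits that hold membership steady."""
    return target_membership * (1 - yearly_retention)

r = annual_retention(0.02)          # ~0.38 retained per year
print(recruits_needed(10_000, r))   # ~6,200 new recruits per year to stay at 10,000
```

So even a modest-sized group has to replace well over half its membership every year, which is the "recruit massively just to keep up their numbers" point.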
Judging from the statistics here, retention rates for conventional religious conversions are much higher than this (albeit lower than retention rates for those raised in the church).
I guess “really well” is ill-defined, but I do think that both Sun Myung Moon and L. Ron Hubbard could say “It’s a living”.
You can get a lot out of people in the three to five years before they leave.
Note that the term cult is a worst argument in the world (guilt by association). The neutral term is NRM (new religious movement). Thus to classify something as a cult one should first tick off the "religious" check mark, which requires spirituality, a rather nebulous concept:
Spirituality is the concept of an ultimate or an alleged immaterial reality; an inner path enabling a person to discover the essence of his/her being; or the "deepest values and meanings by which people live."
If you define cult as an NRM with negative connotations, then you have to agree on what those negatives are, not an easy task.
“NRM” is a term in the sociology of religion. There are many groups that are often thought of as “cultish” in the ordinary-language sense that are not particularly spiritual. Multi-level marketing groups and large group awareness training come to mind.
This is basically true, although I had a dickens of a time finding specifics in the religious/psychology/sociological research—everyone is happy to claim that cults have horrible retention rates, but none of them seem to present much beyond anecdotes.
I’ll confess I was using remembered statistics for the Moonies, not fresh ones. The data I remember from a couple of years ago seems to have been rendered unGooglable by the news of Sun Myung Moon’s death.
Scientology is easier to find fresh statistics for, but harder to find consistent statistics for. I personally suspect the correct value is lower, but 10% is about the median in easily accessible sources.
Click on “Search tools” at the bottom of the menu on the left side of Google’s search results page, then on “Custom range”.
I like what you say, but not so much what ChristianKI said. I think he was exaggerating rather a lot to try to make something fit when it doesn't particularly.
What’s an actual literal cult?
When I went to the Quantified Self conference in Amsterdam last year, I heard the allegation that Quantified Self is a cult after I explained it to someone who lived at the place I stayed for the weekend. I also had to defend against the cult allegation when explaining the Quantified Self community to journalists. Which groups are cults depends a lot on the person who's making the judgement.
There are, however, also groups that we can agree are cults. I would say that the principle applies to an organisation like the Church of Scientology.
I think that’s known as voter fraud. A lot of people believe (and tell others to believe) that certain candidates were legally and fairly elected even when exit polls show dramatically different results. Although of course this could work the same way if exit polls were changed to reflect the opposite outcome of an actually fair election and people believed the false exit polls and demanded a recount or re-election. It just depends on which side can effectively collude to cheat.
No. What I’m saying here is that, using this technique, it might not be seen as fraud.
If the view on “choice blindness” is that people are actually changing their opinions, it would not be technically seen as false to claim that those are their opinions. Committing fraud would require you to lie. This may be a form of brainwashing, not a new way to lie.
That’s why this is so creepy.
We need a worldwide Mindhacker Convention/Summit/Place-where-people-go.
Unfortunately, the cult leaders you’ve just described will not permit this, because they’ve already brainwashed their minions (and those minions’ children, and those children’s children, for thousands of years) into accepting that the human mind is supreme and sacred and must not be toyed with at any cost.
Online dating. Put up a profile that suggests a certain personality type and set of interests. In a face-to-face meetup, even if you're someone different than was advertised, choice blindness should cover up the fact.
This tactic can also be extended to job resumes presumably.
Either that’s already a well-used tactic amongst online daters, or 6′1″, 180lb guys who earn over $80k/year are massively more likely to use online dating sites than the average man.
I wouldn’t like to be standing in the shoes of someone who tried that and it didn’t work.
Why? Just go interview somewhere else. The same applies for any interview signalling strategy.
I meant in the shoes of the candidate, not the interviewer. If that happened to me, I would feel like my status-o-meter started reading minus infinity.
Tom N. Haverford comes to mind.
The problem is that we don’t know how influential the blind spot is. It could just fade away after a couple minutes and a “hey, wait a minute...” But assuming it sticks:
If I were a car salesman, I would have potential customers tell me their ideal car and then I would tell them what I want their ideal car to be as though I were simply restating what they just said.
If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals, because identities are made of choices and they're easier to target than individuals. The identity makes a choice and then you assume the identity chose you. E.g., "President Obama has all but said that I'm instigating "class warfare," or that I don't care about business owners, or that I want to redistribute wealth. Well, Mr. Obama, I am fighting with and for the 99%; the middle class; the inner city neighborhoods that your administration has forgotten; Latinos; African-Americans. We all have had enough of the Democrats' decades-long deafness towards our voice. Vote Romney." Basically, you take the opposition's reasons for not voting for you and then assume those reasons are for the opposition, and you run the ads in the areas you want to affect.
I don’t like either presidential candidate. I need to say that before I say this: using current rather than past political examples is playing with fire.
I completely agree with you; there shouldn’t be any problems discussing political examples where you’re only restating a campaign’s talking points rather than supporting one side or the other.
I vaguely remember that when a president becomes very widely accepted as a good or bad president, many people will misremember that they voted for or against him respectively; e.g. much fewer people would admit (even to themselves) having voted for Nixon than the actual number that voted for him. If this is so, then maybe the answer is simply “Win, and be a good president”.
That would not be an instrumentally useful campaigning strategy.
Now I’m alternating between laughing and crying. :(
Awww. I might have discovered a flaw in this study, TimS. Here you go
Imagine answering a question like “I think such and such candidate is not a very good person.” and then it gives you a button where you can automatically post it to your twitter / facebook. When you read the post on your twitter, it says “I think such and such candidate is a very good person.” but you don’t notice the wording has changed. :/
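A minimal sketch of that swapped share button, just to make the mechanism concrete. The statement pair and the post_to_twitter stub are invented for illustration; no real API is being described:

```python
# Hypothetical swapped share button: the respondent endorses one statement,
# but the share button quietly posts its negation under their name.

STATEMENT = "I think such and such candidate is not a very good person."
SWAPPED   = "I think such and such candidate is a very good person."

def post_to_twitter(text: str) -> None:
    """Stub standing in for a real social-media API call."""
    print(f"[posted] {text}")

def share_answer(agreed_with_statement: bool) -> None:
    # The respondent saw and endorsed STATEMENT; what goes out is SWAPPED.
    if agreed_with_statement:
        post_to_twitter(SWAPPED)

share_answer(agreed_with_statement=True)  # posts the opposite of what was endorsed
```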
I wonder if people would feel compelled to confabulate reasons why they posted that on their accounts. It might set off their "virus" radars because of the online context and therefore not trigger the same behavior.
Dark Tactic:
An unwitting research company could be contracted to do a survey by an unethical organization.
The survey could use the trick: ask some question that people will mostly say "yes" to, and then ask a similar question later whose wording has been slightly changed to agree with the viewpoint of the unethical organization.
Most people end up saying they agree with the viewpoint of the unethical organization.
The reputation of the research company is abused as the unethical organization claims they “proved” that most people agree with their point of view.
A marketing campaign is devised around the false evidence that most people agree with them.
They already trick people in less expensive ways, though. I was taught in school that they'll do things like ask 5 doctors whether they recommend something and then say "4 of 5 doctors recommend this" to imply 4 of every 5 doctors, when their sample was way too small.
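For what it's worth, a quick binomial check of how cheap a "4 of 5 doctors" headline is to obtain. The 60% true agreement rate is an assumption picked purely for illustration:

```python
# Probability of seeing at least 4 "yes" answers in a panel of 5 doctors,
# assuming the true agreement rate is only 60% (illustrative assumption).
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_single = p_at_least(4, 5, 0.60)        # ~0.34: one panel in three obliges
p_ten_panels = 1 - (1 - p_single) ** 10  # ~0.98: poll ten panels, keep the best
print(p_single, p_ten_panels)
```

With samples that small, the headline says almost nothing about what doctors in general think.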