What I think the two choices signal and what the trade-offs are
A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.
B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.
Most people, I would guess, are discomforted by sustained duplicity. Without us necessarily realizing it, our positions on matters shift towards those that are convenient for us, whether because of material gain, personal safety, reproductive success or just plain good signalling. Everyone wants to look good, especially to themselves. Most people will have a hard time “living a lie” and may also eventually fail to emulate all the aspects of behaviour a false belief would entail. The emulator is in a sense at a disadvantage compared to someone who honestly holds the personally beneficial belief.
Person B may indeed fail to realize the truth because of this effect, or it may be due to other deficiencies; it doesn’t matter. Plainly person B is worse at making a good map of reality than person A is. He seems to be signalling a deficiency, or rather a failure, in rationality. But he’s signalling more than just that, as I will soon show.
Person A, on the other hand, clearly has better map-making skills. He seems to signal more rationality. But if he slips up, he is signalling he may not be the best person to associate yourself with: the benefits and gains he accrues from his stated beliefs will be smaller than those of someone who is a true believer in most things convenient. If he doesn’t slip up, he may indeed be signalling that he is unusually comfortable with deceiving people and is harder to move with socially accepted norms; the only people who can do this flawlessly are sociopaths or those vastly more intelligent than their surroundings. Does this sound like someone who is reliably non-threatening? In fact, how exactly do you distinguish such an A from another A who just doesn’t care about other people and wishes to preserve his own advantage?
It is safer to cooperate with Person B than person A. Person A is someone for whom it takes much more in the way of resources, cognitive and otherwise, to distinguish the subtypes that share your interests or will not deceive you on a particular matter than it does to distinguish the different types of B. Opportunity costs matter. Needless to say, if you are not yourself exceptionally gifted with such resources, these may be resources you simply don’t have.
Perhaps some of you may be doubting at this point that a type A who is not plainly selfish exists. The normative, publicly praised and endorsed course of action, if you disagree with a widely accepted truth or norm, is to voice this disagreement, either so the false paradigm can be overturned or so others can help you overcome your folly. Naturally the actual norm on this differs, though how strongly so depends on where. What good does it do you if the same improved map-making abilities that helped you overcome the potentially adaptive biases also tell you that it’s currently folly to try and change other people’s minds by entering public debate? Why sacrifice yourself if this has negligible impact? If you think the best strategy for doing away with the falsehood with as little damage to others as possible is to delay disclosure to a later point, or if you think it’s utterly hopeless that the falsehood will be done away with in your lifetime and that your sacrifice will have only minimal impact, why not be duplicitous (for the relevant time frame)? But naturally here we reach the same test all over again.
It is convenient for one to believe that it is best for one to remain silent, isn’t it?
What I think the results of this poll might be.
I expect a bit below two thirds will choose A, because of LW norms that value rationality and map-making skills. This is somewhat counteracted by LW explicit norms on truth-telling being closer to actual norms than in most places, so people might feel that others are more likely to be wrong in their assessments of the negative consequences.
I think in a representative sample of people, most would choose B.
It is safer to cooperate with Person B than person A.
I am sorry, this is so wrong, and the only way to prove that it is wrong is to disturb one of the many dangerous wild elephants in the living room.
Person A believes that on average, members of group X are more criminal and violent than members of group Y, and that one can make deductions from this to individual cases. However he piously pretends to be horrified that a taxi cab driver would not pick up customers from certain streets, a pizza parlor would not deliver to certain streets.
Person B believes that on average, members of group X are more criminal and violent than members of group Y, but that one cannot make deductions about this to individual cases. He is therefore genuinely horrified that a taxi cab driver would not pick up customers from certain streets, a pizza parlor would not deliver to certain streets. He piously makes an arrangement to meet his niece that will result in her waiting for him on such a street.
Don’t choose to associate with B
Banker A believes that on average, members of group X are less likely to repay their loans than members of group Y, even if all other factors appear to be equal, and if all other factors should appear to be equal, suspects that the member of group X has probably stolen someone’s identity or is committing some other fraud. Banker A is in charge of other people’s money, and is extra careful when lending it to a member of group X, but piously pretends not to be. To avoid it becoming noticeable that he is in fact discriminating, he locates his bank branches in places where few members of group X show up, he advertises his loans in ways that members of group X are less likely to see.
Banker B believes that on average, members of group X are equally likely to repay their loans as members of group Y, or if they are not, the fact is irrelevant and should be ignored. He notices that members of group X rarely get loans. He deduces that due to racism, this market is being underserved, and promptly sets out to serve the X market as vigorously as possible. He sincerely expects to make lots of money doing so.
Don’t choose to invest with B
Person B believes that on average, members of group X are more criminal and violent than members of group Y, but that one cannot make deductions about this to individual cases. He is therefore genuinely horrified that a taxi cab driver would not pick up customers from certain streets, a pizza parlor would not deliver to certain streets. He piously makes an arrangement to meet his niece that will result in her waiting for him on such a street.
Person B happens to believe what is good for him more than person A does. I don’t think it follows that their rationalisations/mistakes need be consistent with each other. In fact, looking at people, it seems we can believe, and believe we believe, all sorts of things that are “good for us” in some sense or another but that, when taken seriously, would seem to contradict each other. You provided two examples where his false beliefs didn’t match up with gain; this naturally does happen. But I can easily provide counterexamples.
Person X honestly believes that intelligence tests are meaningless and everyone can achieve anything, yet he will see no problem in using the low test scores of a political opponent as a form of mockery, since clearly they really are stupid.
He may consider the preferences of parents who think group Y on average would have an undesirable effect on the values or academic achievement of their child, and who wish to make sure group Y has minimal influence on them, to be so utterly immoral that they must be proactively fought in personal and public life. But in practice he will never send his children to a school where group Y makes up a high percentage of the pupils. You see, that is because, naturally, the school is a bad school, and no self-respecting parent sends their child to a bad school.
In both cases he manages to do basically the same thing he would have done if he were person A. And I actually think that on the whole type B manages to isolate himself from some of the fallout of false belief about as well as type A does.
I think that this is because common problems in everyday life quickly generate commonly accepted solutions. These solutions may come with explicitly stated rationalizations, or they may be unstated practices held up by status quo bias and ugh fields. Person A may even be the one to think of the original rationalization that cloaks rational behaviour based on accurate data! The simple conditioning just mentioned ensures that at least some B people will adopt them. If person B happens to wander into uncommon situations, however, he may indeed pay a price.
Naturally an alternative explanation is that a great portion of seemingly B type people are in fact A type people.
Person X honestly believes that intelligence tests are meaningless and everyone can achieve anything, yet he will see no problem in using the low test scores of a political opponent as a form of mockery, since clearly they really are stupid.
He may consider the preferences of parents who think group Y on average would have an undesirable effect on the values or academic achievement of their child, and who wish to make sure group Y has minimal influence on them, to be so utterly immoral that they must be proactively fought in personal and public life. But in practice he will never send his children to a school where group Y makes up a high percentage of the pupils. You see, that is because, naturally, the school is a bad school, and no self-respecting parent sends their child to a bad school.
These sound to me like good reasons for not associating with B. Selective rationality makes it likely he will do bad things for bad reasons and be sincerely unaware that he is doing bad things. He can probably rationalize embezzling my money as glibly as he can rationalize avoiding a “bad school”, whereas if he is person A, and knows perfectly well he does not want his children to associate with group X, he would know if he was cheating you.
Rationalization predicts bad behavior. Avoiding the inquisition does not predict bad behavior.
Selective rationality makes it likely he will do bad things for bad reasons and be sincerely unaware that he is doing bad things. He can probably rationalize embezzling my money as glibly as he can rationalize avoiding a “bad school”...
But that’s not what I observe in reality. As Konkvistador said, common problems generate commonly accepted solutions. A strong discrepancy between respectable beliefs and reality leads to a common problem, and a specific mode of rationalization then becomes a commonly accepted and socially approved solution. And in my experience, the fact that someone suspends rationality and adopts rationalizations in a commonly accepted way doesn’t imply bad character otherwise.
In fact, as a reductio ad absurdum of your position, I would point out that a complete rejection of all pious rationalizations that are common today would mean adopting a number of views that are shared only by an infinitesimal minority. But clearly it’s absurd to claim that everyone outside of this tiny minority is untrustworthy and of bad character.
On the other hand, I agree with you when it comes to rationalizations that are uncommon and not approved tacitly as an unspoken social convention. They are indeed a red flag as you describe.
Selective rationality makes it likely he will do bad things for bad reasons and be sincerely unaware that he is doing bad things. He can probably rationalize embezzling my money as glibly as he can rationalize avoiding a “bad school”...
But that’s not what I observe in reality.
It is what we observed in the recent banking crisis. To take an even more extreme example, Pol Pot was a true believer who glibly rationalized away discrepancies. One of his rationalizations was that the bad consequences of his policies were the result of comrades betraying him, which led him to torture his comrades to death.
I would point out that a complete rejection of all pious rationalizations that are common today would mean adopting a number of views that are shared only by an infinitesimal minority. But clearly it’s absurd to claim that everyone outside of this tiny minority is untrustworthy and of bad character.
Our financial system has just collapsed in a way that suggests that the great majority of those who adopt certain pious rationalizations applicable to the financial system are untrustworthy and of bad character. Certain single-payer medical systems apply an alarming level of involuntary euthanasia, aka murder, and everyone is piously rationalizing it, except for a tiny minority.
What I see is a terrifying and extremely dangerous level of bad behavior, glibly justified by pious and politically correct rationalizations.
Breathing difficulties in old people are a wide range of complex and extremely urgent problems, frequently difficult to diagnose and expensive to treat, and apt to progress rapidly to death over a few hours. Tracheotomy or similar urgent and immediate surgical treatment is often absolutely necessary, but for administrative reasons single-payer medical systems find it very difficult to provide immediate surgery: surgery that is urgent in the sense of right now, not urgent in the sense that in a couple of weeks you will eventually be put on the extra-urgent special emergency queue of people waiting to jump the rest of the merely ordinarily urgent special emergency queue. Therefore, old people who show up at the British health care system struggling to breathe are always treated with barbiturates, which of course stops them struggling. The inability to provide emergency surgery gets rationalized by the great majority of all respectable believers in right things, and these rationalizations are an indication of moral failure. Finding that it is administratively difficult for a single-payer system to provide certain kinds of treatment, we see people rationalizing that the treatment actually provided is desirable, which rationalization makes them murderers or accomplices to murder.
What I see is a terrifying and extremely dangerous level of bad behavior, glibly justified by pious and politically correct rationalizations.
I agree with this as a general assessment (though we might argue about the concrete examples). I also see plenty of terrifying and ominous deeds justified by pious rationalizations.
However, I still don’t see how you can infer bad character from ordinary, everyday rationalizations of the common people. Yes, these collectively add up to an awful tragedy of the commons, and individuals in high positions who do extraordinary misdeeds and employ extraordinary rationalizations are usually worse than lying, cynical climbers. But among the common folk, I really don’t see any connection between individual bad character and the regular, universally accepted pious rationalizations.
However, I still don’t see how you can infer bad character from ordinary, everyday rationalizations of the common people. Yes, these collectively add up to an awful tragedy of the commons
We cannot infer bad character from the collective consequences of delusive socially approved beliefs. We can infer bad character if delusive beliefs are apt to result in individual bad consequences for other people.
In the hypothetical case of the person who wholly genuinely believes in some delusive socially approved belief, one can easily see that it would result in bad consequences for friends, acquaintances, and business associates, while advancing the career of the holder of those beliefs: therefore, a bad person.
What, however, about the person who semi-genuinely believes in some delusive socially approved belief, but has clever rationalizations for acting as if the belief were not true in those situations where the falsity of the belief might afflict him personally?
Of course, such rationalizing Bs blur into As. How then shall we tell the difference? The difference is that a true B genuinely considers socially approved beliefs to be true, and therefore righteously imposes the corresponding socially approved behavior on others, while finding rationalizations to avoid it for himself. Therefore evil.
Though he applies those clever rationalizations to avoid bad consequences for himself, he does not apply them to avoid bad consequences for his friends and those he does business with, since he has no motivation to fool himself in those cases; further, by not fooling himself in those cases, he demonstrates genuine allegiance to socially approved beliefs at low cost to himself.
individuals in high positions who do extraordinary misdeeds and employ extraordinary rationalizations
But individuals in high positions don’t employ extraordinary rationalizations, unless you call the false but socially approved beliefs that everyone is supposed to believe in and most people believe or pretend to believe, “extraordinary”.
Indeed, these beliefs are socially approved precisely because the powerful find them convenient for doing and justifying evil. Those in high places perform extraordinary misdeeds by acting as if these beliefs were true.
Illustration 1: The financial crisis. Read the FDIC report examining Beverly Hills bank for compliance with the CRA. Those who prepared this report were evil men, and their evil followed necessarily from their willingness to apply socially approved, but delusive, beliefs to other people.
Now Joe Sixpack, unlike the evil men who prepared that report, is seldom in a position to individually force other people to act as if delusive beliefs were true. But to the extent that he could, he would, and does, for example by voting. Since self-delusion is a major source of evil, this self-delusion is surely a predictor of his willingness to do evil in other ways.
Then there is the underclass, who, echoing their betters, claim that the reason they are depraved is because they are deprived. A member of the underclass who believes in the politically correct reasons for bad underclass behavior is more likely to live down to those behaviors than a member of the underclass who does not believe in those justifications.
So, if we look at the top, the guys who prepared that report on Beverly Hills bank, socially approved beliefs predict evil behavior. And if we look at the bottom, the guys who burned down parts of London recently, socially approved beliefs also predict evil behavior.
And if we look at the upper middle class, for example the guys who wanted to lynch the Lacrosse team at Duke University, socially approved beliefs also predicted bad behavior.
And, at the level of individual people, my kin and associates, politically correct beliefs have predicted very great crimes and several minor crimes, though I cannot present you any evidence of that. My direct personal experience has been that there is a direct connection between individual bad conduct, such as embezzling funds, and the regular, universally accepted pious rationalizations. He who rationalizes one thing can rationalize another thing.
You make your case very poignantly. I’ll have to think about this a bit more.
In particular, it does seem to me that people whom I find exceptionally trustworthy in my own life tend to have at least some serious disagreements with the respectable opinion, or at least aren’t prone to stonewalling with pious rationalizations and moral condemnations when presented with arguments against the respectable opinion. But I’m not sure how much this is a selection effect, given that I myself have some opinions that aren’t very respectable.
I don’t see any evidence that Person B won’t defect just as readily, just that they haven’t yet realized that other people are wrong. Maybe Person B is wrong simply out of an easily cured ignorance, and will happily become a “Person A” once that ignorance is cured.
In short, I actually know more about the behavior of Person A, and therefore I trust them more. All I know about Person B is that they’re ignorant.
Remember, person A is the odd one out in his society. He doesn’t share most other people’s map of reality. Other people have very good reasons to doubt his rationality. And person B’s ignorance is anything but easily cured.
Certainly a particular person B might just not have gotten around to realizing this. But I think generally you are missing what was implied in the comparison. Being person B seems to have greater fitness in certain circumstances. And we know there are mechanisms in our own minds that help us stay person B.
I think we actually know more about the typical person B than just that he is ignorant. For starters we de facto know he is less rational than A. Secondly, it’s much more likely than with person A that the mentioned mechanisms are doing their job properly.
Remember, person A is the odd one out in his society. He doesn’t share most other people’s map of reality. Other people have very good reasons to doubt his rationality.
But by assumption, his society is irrational, so their reasons for doubting his rationality are themselves irrational. Needless to say, all socially desirable beliefs in our society are of course wonderfully beneficent, but let us instead suppose the society is Soviet Russia, Nazi Germany, or any society that no longer meets our highly enlightened stamp of approval. Are you better off associated with the fake Nazi or the sincere Nazi?
Clearly, you are better off associating with the fake Nazi.
The society given in the example is wrong. But that’s not exactly the same as being irrational. I do think, however, that person A is probably more rational than the society as a whole. This may be a high or a low standard, mind you.
Now again, I dislike the highly charged examples, since they narrow down the scope of thinking, but I suppose you do make a vivid case.
their reasons for doubting his rationality are themselves irrational.
But how can they know this? If they know this, why don’t they change? All else being equal, an individual being mistaken seems more likely than the societal consensus being wrong. I don’t think you realize just how much human societies agree on. Also, just because society is wrong doesn’t mean the individual is right.
Are you better off associated with the fake Nazi or the sincere Nazi?
What would the answer be for the typical person living in Nazi Germany? Mind you, a Nazi Germany where we don’t have the benefit of hindsight that the regime will be short-lived.
But how can they know this? If they know this, why don’t they change?
They don’t change because their beliefs are politically convenient. Because their beliefs justify the elite exercising power over the less elite. Because their beliefs justify behavior by the elite that serves the interests of members of the elite but destroys society.
Searching for an example of suicidal delusions that is not unduly relevant to either today’s politics or yesterday’s demons is difficult; unfortunately, such examples are necessarily obscure.
The nineteenth century British belief in benevolent enlightened imperialism justified a transfer of power and wealth from the unenlightened and piratical colonialists, to members of the British establishment more closely associated with the government, the elite and the better schools. Lots of people predicted this ideology would wind up having the consequences that it did have, that the pirates actually governed better, but were, of course, ignored.
Now again, I dislike the highly charged examples, since they narrow down the scope of thinking, but I suppose you do make a vivid case.
If I reference beliefs in our society that might cause harmful effects were they not so wise and enlightened, it also makes a vivid case. Indeed, any reference to strikingly harmful effects makes a vivid case.
What would the answer be for the typical person living in Nazi Germany? Mind you, a Nazi Germany where we don’t have the benefit of hindsight that the regime will be short-lived.
But some people did have the foresight that the regime was going to be short-lived, at least towards the end. Nazi strategy was explained in Hitler’s widely read book. The plan was to destroy France (done), force a quick peace settlement with the Anglophones (failed), and then invade and ethnically cleanse a large part of Russia. The plan was for short wars against a small set of enemies at any one time. When the British sank the Bismarck in May 1941, the plan was in trouble, since Anglophone air and sea superiority made it unlikely that Germany could force a quick peace, or force the Anglophones to do anything they did not feel like doing, or to refrain from doing anything they might feel like doing. It was apparent that the Anglophones could reach Germany, and Germany could not effectively reach them. At that point all type A’s should have suspected that Germany had lost the war. At Stalingrad, the plan sank without a trace, and every type A must have known that the war was lost.
In general, a type A will predict the future better than a type B, since false beliefs lead society to unforeseen consequences.
For starters we de facto know he is less rational than A
Ignorance does not imply unintelligence, irrationality, etc., much less make a de facto case for them. There’s nothing irrational about honestly believing the group-consensus if you don’t have the skill foundation to see how it could be wrong. Sure, one should be open about one’s ignorance, but you still have to have anticipations to function, and Bayesian evidence suggests “follow the leader” is better than “pick randomly”. Especially since, not having the background knowledge in the first place, one would be hard pressed to list choices to pick randomly amongst :)
There’s nothing irrational about honestly believing the group-consensus if you don’t have the skill foundation to see how it could be wrong.
If someone does not have the skill foundation to see how the group-consensus is wrong, he is ignorant or stupid. Such people are, quite inadvertently, dangerous and harmful. There is no con man worse or more dangerous than a con man who sincerely believes his own scam, and is therefore quite prepared to go down with his ship.
There is no con man worse or more dangerous than a con man who sincerely believes his own scam, and is therefore quite prepared to go down with his ship.
This is true in a big way that I haven’t mentioned before, though. Type Bs seem to me more likely than type As to cause trouble for anyone attempting to implement solutions that might avert tragedy-of-the-commons situations caused by a false society-wide belief.
There’s nothing irrational about honestly believing the group-consensus if you don’t have the skill foundation to see how it could be wrong.
Actually, he is right. Just because you can’t find a flaw with the common consensus doesn’t mean you are ignorant or stupid, because it’s perfectly possible that there is no flaw with the common consensus on a particular subject, or that the flaw is too difficult to detect by the means available to you. Perhaps it’s too difficult to detect the flaw with the means the entire society has available to it!
A rational agent is not an omniscient agent after all!
I think you may be letting yourself get slightly adversarial in your thinking here because you perceive this as a fight over a specific thing you estimate society is delusional about. It’s not, it’s really not. Chill, man. :)
Edit: Considering the downvotes, I just want to ask: what am I missing in this comment? Thanks for any help!
Yes but the odds of A getting the right answer from picking randomly are even lower. ;)
Remember person A was defined in this example as having a better map on this little spot, though I suppose most of the analysis done by people so far works equally well for someone who thinks he has a better map and is hiding it.
So Person A believes in MWI because they read the Quantum Mechanics sequence, and Person B never thought about it beyond an article in Discover Magazine saying all the top scientists favor the Copenhagen interpretation. They’re both being entirely rational about the information they have, even if Person A has the right answer :)
I suppose they are in a sense, but what exactly are the rewards/lack of benefit for a layman, even an educated one, believing or not in MWI? I think a major indicator is that I haven’t heard in recent years of anyone being outed as an MWIist and losing their job as a consequence :P
Nitpick: The average person who has read the QM sequence is likely above average in rationality.
but what exactly are the rewards/lack of benefit for a layman, even an educated one, believing or not in MWI?
Everyone is avoiding realistic examples, for fear that if they should disturb any of the several large elephants in the living room, they will immediately be trampled.
Substitute a relevant example as needed; I’m simply trying to make the point that ignorance != irrationality. Someone who simply has more information on a field is going to reach better conclusions, and will thus need to hide controversial opinions. Someone with less information is generally going to go with the “follow the herd” strategy, because in the absence of any other evidence, it’s their best bet. Thus, just based on knowledge (not rationality!) you’re going to see a split between A and B types.
There doesn’t have to be a correlation of 1 between ignorance and irrationality. There just has to be some positive correlation for us to judge, in the absence of other information, that A is probably more rational than B.
And if there isn’t a correlation greater than 0 between rationality and a proper map of reality, uhm, what is this rationality thing anyway?
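To make the “some positive correlation is enough” point concrete, here is a minimal sketch of the Bayesian update it implies; the specific probabilities are made-up numbers purely for illustration, not estimates from this discussion.

```python
# Made-up numbers for illustration: suppose half the population are good
# map-makers ("rational"), good map-makers sincerely hold this particular
# false consensus belief 30% of the time, and poor map-makers hold it 70%
# of the time. Observing that B sincerely holds the belief is then Bayesian
# evidence against B's rationality, even though many rational people hold it too.
p_rational = 0.5                 # prior P(rational)
p_belief_given_rational = 0.3    # P(sincerely holds the false belief | rational)
p_belief_given_irrational = 0.7  # P(sincerely holds the false belief | irrational)

p_belief = (p_belief_given_rational * p_rational
            + p_belief_given_irrational * (1 - p_rational))
posterior = p_belief_given_rational * p_rational / p_belief

print(f"P(rational | sincerely holds the false belief) = {posterior:.2f}")  # 0.30
```

Any gap at all between the two conditional probabilities moves the posterior in A’s favour; the size of the gap only determines how strongly.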
For starters we de facto know he is less rational than A
Ahhh, you’re meaning “we have Bayesian evidence that Person B is less likely to be rational than Person A”? I’d agree, but I still think it’s weak evidence if you’re only looking at a single situation, and I’d still feel I therefore know more about Person A (how they handle these situations) than I do about Person B (merely that they are either ignorant or irrational). How someone handles a situation strikes me as a more consistent trait, whereas most people seem to have enough gaps in their knowledge that a single gap is very little evidence for other gaps.
Ahhh, you’re meaning “we have Bayesian evidence that Person B is less likely to be rational than Person A”?
Yeah I should have been more explicit on that, sorry for the miscommunication!
I’d agree, but I still think it’s weak evidence if you’re only looking at a single situation, and I’d still feel I therefore know more about Person A (how they handle these situations) than I do about Person B (merely that they are either ignorant or irrational). How someone handles a situation strikes me as a more consistent trait, whereas most people seem to have enough gaps in their knowledge that a single gap is very little evidence for other gaps.
Perhaps for convenience we can add that persons A and B are exposed to the same information? It doesn’t change the spirit of the thought experiment. I was originally implicitly operating with that as a given, but since we started discussing it I’ve noticed I never explicitly mentioned it.
Basically I wanted to compare what kinds of things person A/B would signal in a certain set of circumstances to others.
Person B, but the magnitude of the distinction I make will probably be highly context dependent.
At least that was my original answer. Now I lean toward mostly person B, and sometimes person A.
There are a number of reasons to be wary of person A. While they will likely make for a more interesting story character, in real life their behavior can cause a number of serious problems. First, while we should presumably fairly distinguish between person A and person C (where person C only thinks they see that an interpretation of reality shared by others is not correct, and tries to pretend otherwise for personal gain and/or safety), any one individual is unlikely to be able to tell whether they are A or C, and will tend to default to believing they are A. If we set person A and person C to be criminal-killing vigilantes, for example, I think it then becomes clear that we, as a society, must take certain steps to prevent the behaviors of person C from being encouraged. Otherwise, the innocent who has been truly acquitted of murder will likely die. Discouraging people from disconnecting from society in the way that both persons A and C do is one way to do that. Secrecy in important issues can be a necessity if society itself is broken, but that doesn’t make it desirable. The more people you can check your ideas with, the more likely you can correct your mistakes by being shown information you have not considered (or have others correct your mistakes for you, as seen below).
If we consider person B now, they are more like the juror in the court who truly believes that an innocent person is guilty of murder. I say this because a person who interfaces honestly in society will tend to be acting within a societal context. Even if they and all their peers convict the innocent person, the convicted may still have a chance to live and possibly to be freed one day.
I think that many of the responders to this poll are considering these people and their behaviors in isolation, when they should also be considered within the context of society. Person B is the better person to have around or be in the cases where society is not so completely broken as to be worthless. Person A may be the better person only when civilization gives no other recourse. Even then, that will be highly dependent on their motives.
I should also note that I suspect I have been person A, B, and C at various points in my life (though fortunately at no point did this involve being directly responsible for the life or death of another). My point here is that one should also keep in consideration that these are not actually separate people, but separate behaviors. I am somewhat doubtful that I live in a universe that so cleverly divides people into categories like ‘secretive and always right’, ‘secretive and always wrong’ and ‘honest and always wrong’. So I think one should also consider whether one wants to encourage honesty or secrecy in any given individual in general.
PS: Also, if you recall my previous comment that finding “friendly” A’s is harder than finding friendly B’s, it seems to me that if LWers respond as I think they will, or even more enthusiastically than that, they will be signalling that they consider themselves uniquely gifted in such (cognitive and other) resources. Preferring person A to B seems the better choice only if it’s rather unlikely that person A is significantly smarter than you or if you are exceptionally good at identifying sociopaths and/or people who share your interests. Choice A is a “rich” man’s choice, the choice of someone who can afford to use that status distribution. I hope you can also see that for A’s that vastly differ in intelligence/resources/specialized abilities, cooperating for common goals is tricky.
This seems to me relevant to what the values of someone likely to build a friendly AI would be. Has there been a discussion or an article that explored these implications that I’ve missed so far?
finding “friendly” A’s is harder than finding friendly B’s
In the recent economic crisis, who was more likely to scam you? A or B? The ones that pissed away the largest amounts of other people’s money were those that pissed away their own money.
Preferring person A to B seems the better choice only if it’s rather unlikely that person A is significantly smarter than you.
Assume you know the truth, and know or strongly suspect that person A knows the truth but is concealing it.
OK. You are on the rocket team in Nazi Germany. You know Nazi Germany is going down in flames. Ostensibly, all good Nazis intend to win heroically. You strongly suspect that Dr Wernher von Braun is not, however, a good Nazi. You know he is a lot smarter than you, and you strongly suspect he is issuing lots of complicated lies because of lots of complicated plots. Who, then, should you stick with?
Why would person A being significantly smarter be a bad thing? Just from the danger of being hacked? I’m not thinking of anything else that would weigh against the extra utility from their intelligence.
If you have two agents who can read each other’s source code, they could cooperate in a prisoner’s dilemma, since each would have assurance that the other will not defect.
Of course we can’t read each other’s source code, but if our intelligence, or rather our ability to assess each other’s honesty, is roughly matched, the risk of the other side defecting is at its lowest possible point shy of that (in the absence of more complex situations where we have to think about signalling to other people), wouldn’t you agree?
When one side is vastly more intelligent/capable, the cost of defection is clearly much, much smaller for the more capable side.
All else being equal, it seems an A would rather cooperate with a B than another A, because the cost to predict defection is lower. In other words, Bs have a discount on needed cognitive resources, despite their inferior maps, and even As have a discount when working with Bs! What I wanted to say with the PS post was that under certain circumstances (say, very expensive cognitive resources) the opportunity costs associated with a bunch of As cooperating, especially As that have group norms to actively exclude Bs, can’t be neglected.
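As an aside, the “read each other’s source code” idea above has a standard toy form; here is a minimal sketch of the simplest version, in which a program cooperates only with an exact copy of itself. The function name, the “C”/“D” convention, and passing the opponent’s source around as a string are illustrative assumptions, not anything specified in this thread.

```python
import inspect

def cliquebot(opponent_source: str) -> str:
    """Prisoner's dilemma player that inspects its opponent's source code.

    Simplest possible assurance scheme: cooperate ("C") only if the opponent
    is an exact copy of this program, otherwise defect ("D").
    """
    my_source = inspect.getsource(cliquebot)
    return "C" if opponent_source == my_source else "D"

if __name__ == "__main__":
    source = inspect.getsource(cliquebot)
    print(cliquebot(source))                      # "C": two identical copies cooperate
    print(cliquebot("def always_defect(): ..."))  # "D": anything else is met with defection
```

Two copies recognize each other and cooperate, while anything else is met with defection; that is the kind of assurance the comment points at, at the cost of refusing cooperation with any agent that is not literally identical.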
All else being equal, it seems an A would rather cooperate with a B than another A, because the cost to predict defection is lower.
The cost to predict consciously intended defection is lower.
I can and have produced numerous examples of Bs unintentionally defecting in our society, but for a less controversial example, let us take a society now deemed horrid. Let us consider the fake Nazi Dr. Wernher von Braun. Dr. Wernher von Braun was an example of A. His associates were examples of Bs. He proceeded to save their lives by lying to them and others, causing them to be captured by the Americans rather than the Russians. The B’s around him were busy trying to get him killed, and themselves killed.
The cost to predict consciously intended defection is lower.
I generally find it easier to predict behaviour when people pursue their interests than when they pursue their ideals. If their behaviour matches their interests rather than a set of ideals that they hide, isn’t it easier to predict their behaviour?
Maybe people on LW like person A because they are aware that there’s usually a Sophie’s choice that comes along with finding out you were Too Smart For Your Own Good.
What I think the two choices signal and the trade offs are
Most people I would guess are discomforted by sustained duplicity. Without us necessarily realizing it, our positions on matters shift towards those that are convenient for us, either because of material gain, personal safety, reproductive success or just plain good signalling. Everyone wants to look good, especially to themselves. Most people will have a hard time “living a lie” and may also eventually fail to emulate all the aspects of behaviour a false belief may entail. The emulator is in a sense at a disadvantage compared to someone who is honest in their personally beneficial belief.
Person B may indeed fail to realize the truth because of this effect, or it may be due to other deficiencies, it dosen’t matter. Plainly person B is worse at making a good map of reality than person A is. He seems to be signalling a deficiency or rather failure in rationality. But he’s signalling more than just that, as I will soon show.
Person A on the other hand clearly has better map making skills. He seems to signal more rationality. But if he slips up, he is signalling he may not be the best person to associate yourself with, the benefits and gains he accrues from the his stated beliefs will be smaller than someone who is a true believer in most things convenient. If he dosen’t slip up he may indeed be signalling that he is unusually comfortable with deceiving people and is harder to move with socially accepted norms, the only people who can do this flawlessly are sociopaths or those vastly more intelligent than their surroundings. Does this sound like someone who is reliably non-threatening? In fact how exactly to distinguish such an A from another A who just dosen’t care about other people and wishes to preserve his own advantage?
It is safer to cooperate with Person B than person A. Person A is someone for whom it takes much more resources cognitive and otherwise to distinguish the subtypes that share your interests or will not deceive you on a particular matter compared to distinguishing different types of B. Opportunity costs matter. Needless to say if you are not yourself exceptionally gifted with such resources these may be resources you simply don’t have.
Perhaps some of you may be doubting at this point that a non-plain selfish type A exists. The normative, publicly praised and endorsed course of action if you disagree with a widely accepted truth or norm is to voice this disagreement, either so the false paradigm can be overturned or so others can help you overcome your folly. Naturally the actual norm on this differs, though how strongly so depends on where. What good does do you if the same improved map making abilities that helped you overcome the potentially adaptive biases also tell you that its currently folly to try and change other peoples minds by entering public debate? Why sacrifice yourself if this has negligible impact? If you think the best strategy to do away with the falsehood with as little damage to others is to delay disclosure to a later point, or if you think its utterly hopeless that the falsehood will be done away with in your lifetime and that your sacrifice will have only minimal impact, why not be duplicitous (for the relevant time frame)? But naturally here we reach the same test all over again. It is convenient for one to believe that it is best for one to remain silent isn’t it?
What I think the results of this poll might be.
I expect about a bit below two thirds will choose A. because of LW norms that value rationality and map making skills. This is somewhat counteracted by LW explicit norms on truth telling being closer to actual norms than in most places so people might feel that others are more likley to be wrong in their assessments of the negative consequences .
I think in a representative sample of people, most would choose B.
I am sorry, this is so wrong, and the only way to prove that it is wrong is to disturb one of the many dangerous wild elephants in the living room.
Person A believes that on average, members of group X are more criminal and violent than members of group Y, and that one can make deductions from this to individual cases. However he piously pretends to be horrified that a taxi cab driver would not pick up customers from certain streets, a pizza parlor would not deliver to certain streets.
Person B believes that on average, members of group X are more criminal and violent than members of group Y, but that one cannot make deductions about this to individual cases. He is therefore genuinely horrified that a taxi cab driver would not pick up customers from certain streets, a pizza parlor would not deliver to certain streets. He piously makes an arrangement to meet his niece that will result in her waiting for him on such a street.
Don’t choose to associate with B
Banker A believes that on average, members of group X are less likely to repay their loans than members of group Y, even if all other factors appear to be equal, and if all other factors should appear to be equal, suspects that the member of group X has probably stolen someone’s identity or is committing some other fraud. Banker A is in charge of other people’s money, and is extra careful when lending it to a member of group X, but piously pretends not to be. To avoid it becoming noticeable that he is in fact discriminating, he locates his bank branches in places where few members of group X show up, he advertises his loans in ways that members of group X are less likely to see.
Banker B believes that on average, members of group X are equally likely to repay their loans as members of group Y, or if they are not, the fact is irrelevant and should be ignored. He notices that members of group X rarely get loans. He deduces that due to racism, this market is being under served, and promptly sets out to serve the X market as vigorously as possible. He sincerely expects to make lots of money doing so.
Don’t choose to invest with B
Person B happens to believe what is good for them more than person A does. I don’t think it follows their rationalisations/mistakes need be consistent with each other. In fact looking at people it seems we can belive and believe we believe all sorts of contradictory things that are “good for us” in some sense or another that when taken seriously would seem to contradict each other. You provided two examples, where his false beliefs didn’t match up with gain, this naturally does happen. But I can easily provide counterexamples.
Person X honestly believes that intelligence tests are meaningless, and everyone can acheive anything , yet he will see no problem in using low test scores of a political opponent as a form of mockery, since clearly they really are stupid.
He may consider the preferences of parents who think group Y on average would have an undesirable effect on the values or academic achievement of their child and wish to make sure they have minimal influence on them to be so utterly immoral that must be proactivley fought in personal and public life. But in practice he will never send his children to a school where group Y is a high percentage of the pupils. You see that is because naturally, the school is a bad school and no self respecting parent sends their child to a bad school.
In both cases he manages to do basically the same thing he would have if he was person A.. And I actually think that on the whole type B manages to isolate themselves from some of the fallout of false belief as well as type A. I think that this is because common problems in every day life quickly generate commonly accepted solutions. These solutions may come with explicitly stated rationalizations or they may be unstated practices held up by status quo bias and ugh fields. Person A may even be the one to think of the original rationalization that cloaks rational behaviour based on accurate data! The just mentioned simple conditioning insures that at least some B people will adopt them. If person B happens to wanders into uncommon situations however, he may indeed pay a price.
Naturally an alternative explanation is that a great portion of seemingly B type people are in fact A type people.
These sound to me like good reasons for not associating with B. Selective rationality makes it likely he will do bad things for bad reasons and be sincerely unaware that he is doing bad things. He can probably rationalize embezzling my money as glibly as he can rationalize avoiding a “bad school”, whereas if he is person A, and knows perfectly well he does not want his children to associate with group X, he would know if he was cheating you.
Rationalization predicts bad behavior. Avoiding the inquisition does not predict bad behavior.
But that’s not what I observe in reality. As Konkvistador said, common problems generate commonly accepted solutions. A strong discrepancy between respectable beliefs and reality leads to a common problem, and a specific mode of rationalization then becomes a commonly accepted and socially approved solution. And in my experience, the fact that someone suspends rationality and adopts rationalizations in a commonly accepted way doesn’t imply bad character otherwise.
In fact, as a reductio ad absurdum of your position, I would point out that a complete rejection of all pious rationalizations that are common today would mean adopting a number of views that are shared only by an infinitesimal minority. But clearly it’s absurd to claim that everyone outside of this tiny minority is untrustworthy and of bad character.
On the other hand, I agree with you when it comes to rationalizations that are uncommon and not approved tacitly as an unspoken social convention. They are indeed a red flag as you describe.
It is what we observed in the recent banking crisis. To take an even more extreme example, Pol Pot was a true believer who glibly rationalized away discrepancies. One of his rationalizations was that the bad consequences of his policies were the result of comrades betraying him, which led him to torture his comrades to death.
Our financial system has just collapsed in a way that suggests that the great majority of those who adopt certain pious rationalizations applicable to the financial system are untrustworthy and of bad character. Certain single payer medical systems are apply an alarming level of involuntary euthanasia, aka murder, and everyone is piously rationalizing it, except for a tiny minority.
What I see is a terrifying and extremely dangerous level of bad behavior, glibly justified by pious and politically correct rationalizations.
Breathing difficulties in old people are a wide range of complex and extremely urgent problems, frequently difficult to diagnose and expensive to treat, and apt to progress rapidly to death over a few hours. Tracheotomy or similar urgent and immediate surgical treatment is often absolutely necessary, but for administrative reasons single payer medical systems find it very difficult to provide immediate surgery, surgery that is urgent in the sense of right now not urgent in the sense that in a couple of weeks you will eventually be be put on the extra urgent special emergency queue of people waiting to jump the rest of the merely ordinarily urgent special emergency queue. Therefore, old people who show up at the British health care system struggling to breath, are always treated with barbiturates, which of course stops them struggling. The inability to provide emergency surgery gets rationalized by the great majority of all respectable believers in right things, and these rationalizations are an indication of moral failure. Finding that it is administratively difficult for a single payer system to provide certain kinds of treatment, we see people rationalizing that the treatment actually provided is desirable, which rationalization makes them murderers or accomplices to murder.
I agree with this as a general assessment (though we might argue about the concrete examples). I also see plenty of terrifying and ominous deeds justified by pious rationalizations.
However, I still don’t see how you can infer bad character from ordinary, everyday rationalizations of the common people. Yes, these collectively add up to an awful tragedy of the commons, and individuals in high positions who do extraordinary misdeeds and employ extraordinary rationalizations are usually worse than lying, cynical climbers. But among the common folk, I really don’t see any connection between individual bad character and the regular, universally accepted pious rationalizatons.
We cannot infer bad character from collective consequences of delusive socially approved beliefs. We can infer bad character if delusive beliefs are apt to result individual bad consequences to other people.
In the hypothetical case of the person who wholly genuinely believes in some delusive socially approved belief, one can easily see that it would result in bad consequences for friends, acquaintances, and business associates, while advancing the career of the holder of those beliefs: Therefore bad person
What however, about the person who semi genuinely believes in some delusive socially approved belief, but has clever rationalizations for acting as if the belief was not true in those situations where the falsity of the belief might afflict him personally?
Of course, such rationalizing Bs blur into As. How then shall we tell the difference? The difference is that a true B genuinely considers socially approved beliefs to be true, and therefore righteously imposes the corresponding socially approved behavior on others, while finding rationalizations to avoid it for himself. Therefore evil.
Though he applies those clever rationalizations to avoid bad consequences for himself, rationalizes himself avoiding bad consequences, he does not apply those clever rationalizations to avoid bad consequences for his friends, and those he does business with, since he does not have any motivation to fool himself in those cases, and further, by not fooling himself in those cases, he demonstrates genuine allegiance to socially approved beliefs at low cost to himself.
But individuals in high positions don’t employ extraordinary rationalizations, unless you call the false but socially approved beliefs that everyone is supposed to believe in and most people believe or pretend to believe, “extraordinary”.
Indeed, these beliefs are socially approved precisely because the powerful find them convenient to do and justify evil. Those in high places perform extraordinary misdeeds by acting as if these beliefs were true.
Illustration 1: The financial crisis. Read the FDIC examining Beverly Hills bank for compliance with the CRA. Those who prepared this report were evil men, and their evil followed necessarily from their willingness to apply socially approved, but delusive, beliefs to other people.
Now Joe Sixpack, unlike the evil men who prepared that report, are seldom in a position to individually force other people to act as if delusive beliefs were true. But to the extent that he could, he would, and does, for example by voting. Since self delusion is a major source of evil, this self delusion is surely a predictor of his willingness to do evil in other ways.
Then there is the underclass, who, echoing their betters, claim that the reason they are depraved is because they are deprived. A member of the underclass who believes in the politically correct reasons for bad underclass behavior is more likely to live down to those behaviors than a member of the underclass who does not believe in those justifications.
So, if we look at the top, the guys who prepared that report on Beverly Hills bank, socially approved beliefs predict evil behavior. And if we look at the bottom, the guys who burned down parts of London recently, socially approved beliefs also predict evil behavior.
And if we look at the upper middle class, for example the guys who wanted to lynch the Lacrosse team at Duke University, socially approved beliefs also predicted bad behavior.
And, at the level of my individual people kin and associates, politically correct beliefs have predicted very great crimes and several minor crimes, though I cannot present you any evidence of that. My direct personal experience has been that there is a direct connection between individual bad conduct, such as embezzling funds, and the regular universally accepted pious rationalizations. That he who rationalizes one thing, can rationalize another thing.
You make your case very poignantly. I’ll have to think about this a bit more.
In particular, it does seem to me that people whom I find exceptionally trustworthy in my own life tend to have at least some serious disagreements with the respectable opinion, or at least aren’t prone to stonewalling with pious rationalizations and moral condemnations when presented with arguments against the respectable opinion. But I’m not sure how much this is a selection effect, given that I myself have some opinions that aren’t very respectable.
I don’t see any evidence that Person B won’t defect just as readily, just that they haven’t yet realized that other people are wrong. Maybe Person B is wrong simply out of an easily cured ignorance, and will happily become a “Person A” once that ignorance is cured.
In short, I actually know more about the behavior of Person A, and therefore I trust them more. All I know about Person B is that they’re ignorant.
Remember person A is the odd one in his society. He dosen’t share most other peoples map of reality. Other people have very good reasons to doubt his rationality. Easily cured ignorance of person B is all but so.
Certainly a particular person B might just have not gotten around to realizing this. But I think generally you are missing what was implied in the comparison. Being person B seems to have greater fitness in certain circumstances. And we know there are mechanism developed in our own minds that help us stay person B.
I think we actually know more about the typical person B than just that he is ignorant. For starters, we de facto know he is less rational than A. Secondly, it is much more likely than with person A that the mentioned mechanisms are doing their job properly.
But by assumption, his society is irrational, so their reasons for doubting his rationality are themselves irrational. Needless to say, all socially desirable beliefs in our society are of course wonderfully beneficent, but let us instead suppose the society is Soviet Russia, Nazi Germany, or any society that no longer meets our highly enlightened stamp of approval. Are you better off associated with the fake Nazi or the sincere Nazi?
Clearly, you are better off associating with the fake Nazi.
I eat babies.
(Translation: Please don’t ask rhetorical questions that make me choose between agreeing with you and affiliating with sincere Nazis.)
Upvoted since I strongly agree. Arguments shouldn’t require using such strongly emotionally biasing examples to be persuasive.
And you made your point so wonderfully concisely.
I think the question was mostly intended to be about fake and sincere creationists rather than fake and sincere Nazis.
The society given in the example is wrong. But that’s not exactly the same as being irrational. I do think, however, that person A is probably more rational than the society as a whole. That may be a high or a low standard, mind you.
Now, again, I dislike highly charged examples, since they narrow down the scope of thinking, but I suppose you do make a vivid case.
But how can they know this? If they know this, why don’t they change? All else being equal, an individual being mistaken seems more likely than the societal consensus being wrong. I don’t think you realize just how much human societies agree on. Also, just because society is wrong doesn’t mean the individual is right.
What would the answer be for the typical person living in Nazi Germany? Mind you, a Nazi Germany where we don’t have the benefit of hindsight that the regime will be short-lived.
They don’t change because their beliefs are politically convenient. Because their beliefs justify the elite exercising power over the less elite. Because their beliefs justify behavior by the elite that serves the interests of members of the elite but destroys society.
Searching for an example of suicidal delusions that is not unduly relevant to either today’s politics or yesterday’s demons is difficult; unfortunately, such examples are necessarily obscure.
The nineteenth century British belief in benevolent enlightened imperialism justified a transfer of power and wealth from the unenlightened and piratical colonialists, to members of the British establishment more closely associated with the government, the elite and the better schools. Lots of people predicted this ideology would wind up having the consequences that it did have, that the pirates actually governed better, but were, of course, ignored.
If I referenced beliefs in our society that might cause harmful effects were they not so wise and enlightened, that would also make a vivid case. Indeed, any reference to strikingly harmful effects makes a vivid case.
But some people did have the foresight that the regime was going to be short-lived, at least towards the end. Nazi strategy was explained in Hitler’s widely read book. The plan was to destroy France (done), force a quick peace settlement with the Anglophones (failed), and then invade and ethnically cleanse a large part of Russia. The plan was for short wars against a small set of enemies at any one time. When the British sank the Bismarck, the plan was in trouble, since Anglophone air and sea superiority made it unlikely that Germany could force a quick peace, force them to do anything they did not feel like doing, or force them to refrain from doing anything they might feel like doing. When the Bismarck went down in May 1941, it was apparent that the Anglophones could reach Germany and Germany could not effectively reach them. At that point all type A’s should have suspected that Germany had lost the war. At Stalingrad, the plan sank without a trace, and every type A must have known that the war was lost.
In general, a type A will predict the future better than a type B, since false beliefs lead society to unforeseen consequences.
Ignorance does not imply being unintelligent, irrational, etc., much less make a de facto case for them. There’s nothing irrational about honestly believing the group-consensus if you don’t have the skill foundation to see how it could be wrong. Sure, one should be open about one’s ignorance, but you still have to have anticipations to function, and Bayesian evidence suggests “follow the leader” is better than “pick randomly”. Especially since, not having the background knowledge in the first place, one would be hard pressed to list choices to pick randomly amongst :)
If someone does not have the skill foundation to see how the group-consensus is wrong, he is ignorant or stupid. Such people are, quite inadvertently, dangerous and harmful. There is no con man worse or more dangerous than a con man who sincerely believes his own scam, and is therefore quite prepared to go down with his ship.
This is true in a big way that I haven’t mentioned before, though. Type B seems to me more likely than type A to cause trouble for anyone attempting to implement solutions that might avert tragedy-of-the-commons situations caused by a false society-wide belief.
Actually he is right. Just because you can’t find a flaw in the common consensus doesn’t mean you are ignorant or stupid; it’s perfectly possible that there is no flaw in the common consensus on a particular subject, or that the flaw is too difficult to detect by the means available to you. Perhaps it’s too difficult to detect the flaw with the means the entire society has available to it!
A rational agent is not an omniscient agent after all!
I think you may be letting yourself get slightly adversarial in your thinking here because you perceive this as a fight over a specific thing you estimate society is delusional about. It’s not, it really isn’t. Chill, man. :)
Edit: Considering the downvotes, I just want to ask: what am I missing in this comment? Thanks for any help!
Yes, but the odds of A getting the right answer from picking randomly are even lower. ;)
Remember person A was defined in this example as having a better map on this little spot, though I suppose most of the analysis done by people so far works equally well for someone who thinks he has a better map and is hiding it.
So Person A believes in MWI because they read the Quantum Mechanics sequence, and Person B never thought about it beyond an article in Discover Magazine saying all the top scientists favor the Copenhagen interpretation. They’re both being entirely rational about the information they have, even if Person A has the right answer :)
I suppose they are in a sense, but what exactly are the rewards, or lack of benefit, for a layman, even an educated one, believing or not believing in MWI? I think a major indicator is that I haven’t heard in recent years of anyone being outed as an MWIist and losing their job as a consequence :P
Nitpick: The average person who has read the QM sequence is likely above average in rationality.
Everyone is avoiding realistic examples, for fear that if they should disturb any of the several large elephants in the living room, they will immediately be trampled.
Substitute a relevant example as needed; I’m simply trying to make the point that ignorance != irrationality. Someone who simply has more information on a field is going to reach better conclusions, and will thus need to hide controversial opinions. Someone with less information is generally going to go with the “follow the herd” strategy, because in the absence of any other evidence, it’s their best bet. Thus, just based on knowledge (not rationality!), you’re going to see a split between A and B types.
There doesn’t have to be a correlation of 1 between ignorance and irrationality. There just has to be some positive correlation for us to judge, in the absence of other information, that A is probably more rational than B.
And if there isn’t a correlation greater than 0 between rationality and a proper map of reality, um, what is this rationality thing anyway?
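To make the “some positive correlation is enough” point concrete, here is a minimal sketch with made-up, purely illustrative numbers (the prior and the likelihoods below are assumptions of mine, not anything established in this thread): if irrational people are even slightly more likely than rational people to remain ignorant on a given point, then observing B’s ignorance is weak Bayesian evidence that B is less rational.

```python
# Bayes' rule with made-up, purely illustrative numbers: a small positive
# correlation between "ignorant on this point" and "irrational" makes observed
# ignorance weak evidence of irrationality.
def p_irrational_given_ignorant(p_irrational: float,
                                p_ignorant_if_irrational: float,
                                p_ignorant_if_rational: float) -> float:
    """P(irrational | ignorant) via Bayes' rule."""
    p_ignorant = (p_ignorant_if_irrational * p_irrational
                  + p_ignorant_if_rational * (1.0 - p_irrational))
    return p_ignorant_if_irrational * p_irrational / p_ignorant

# Prior of 0.5; irrational people only slightly more likely to stay ignorant (0.6 vs 0.5).
print(p_irrational_given_ignorant(0.5, 0.6, 0.5))  # ~0.545, nudged up from the 0.5 prior
```

With a larger gap between the two likelihoods the update gets stronger, but as the comment above says, any positive gap moves the estimate in the same direction.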
Ahhh, you mean “we have Bayesian evidence that Person B is less likely to be rational than Person A”? I’d agree, but I still think it’s weak evidence if you’re only looking at a single situation, and
I’d still feel I therefore know more about Person A (how they handle these situations) than I do about Person B (merely that they are either ignorant or irrational). How someone handles a situation strikes me as a more consistent trait, whereas most people seem to have enough gaps in their knowledge that a single gap is very little evidence for other gaps.
Yeah I should have been more explicit on that, sorry for the miscommunication!
Perhaps for convenience we can add that persons A and B are exposed to the same information? It doesn’t change the spirit of the thought experiment. I was originally, implicitly operating with that as a given, but since we started discussing it I’ve noticed I never explicitly mentioned it.
Basically I wanted to compare what kinds of things person A/B would signal in a certain set of circumstances to others.
No worries. I think part of it was on me as well :)
Person B, but the magnitude of the distinction I make will probably be highly context dependent.
At least that was my original answer. Now I lean toward mostly person B, and sometimes person A.
There are a number of reasons to be wary of person A. While they will likely make for a more interesting story character, in real life their behavior can cause a number of serious problems. First, while we should presumably fairly distinguish between person A and person C (where person C only thinks they see that an interpretation of reality shared by others is not correct, and tries to pretend otherwise for personal gain and/or safety), any one individual is unlikely to be able to tell whether they are A or C, and will tend to default to believing they are A. If we set person A and person C to be criminal-killing vigilantes, for example, I think it then becomes clear that we, as a society, must take certain steps to prevent the behaviors of person C from being encouraged. Otherwise, the innocent who has been truly acquitted of murder will likely die. Discouraging people from disconnecting from society in the way that both persons A and C do is one way to do that. Secrecy on important issues can be a necessity if society itself is broken, but that doesn’t make it desirable. The more people you can check your ideas with, the more likely you can correct your mistakes by being shown information you have not considered (or have others correct your mistakes for you, as seen below).
If we consider person B now, they are more like the juror in the court who truly believes that an innocent person is guilty of murder. I say this because a person who interfaces honestly in society will tend to be acting within a societal context. Even if they and all their peers convict the innocent person, the convicted may still have a chance to live and possibly to be freed one day.
I think that many of the responders to this poll are considering these people and their behaviors in isolation, when they should also be considered within the context of society. Person B is the better person to have around or be in the cases where society is not so completely broken as to be worthless. Person A may be the better person only when civilization gives no other recourse. Even then, that will be highly dependent on their motives.
I should also note that I suspect I have been person A, B, and C at various points in my life (though fortunately at no point did this involve being directly responsible for the life or death of another). My point here is that one should also keep in consideration that these are not actually separate people, but separate behaviors. I am somewhat doubtful that I live in a universe that so cleverly divides people into categories like ‘secretive and always right’, ‘secretive and always wrong’ and ‘honest and always wrong’. So I think one should also consider whether one wants to encourage honesty or secrecy in any given individual in general.
PS: Also, if you recall my previous comment that finding “friendly” A’s is harder than finding friendly B’s, it seems to me that if LWers respond as I think they will, or even more enthusiastically than that, they will be signalling that they consider themselves uniquely gifted in such (cognitive and other) resources. Preferring person A to B seems the better choice only if it’s rather unlikely that person A is significantly smarter than you, or if you are exceptionally good at identifying sociopaths and/or people who share your interests. Choice A is a “rich” man’s choice, for someone who can afford to spend resources that way. I hope you can also see that for A’s who vastly differ in intelligence, resources, or specialized abilities, cooperating for common goals is tricky.
This seems to me relevant to what the values of someone likely to build a friendly AI might be. Has there been a discussion or article exploring these implications that I’ve missed so far?
In the recent economic crisis, who was more likely to scam you? A or B? The ones that pissed away the largest amounts of other people’s money were those that pissed away their own money.
Assume you know the truth, and know or strongly suspect that person A knows the truth but is concealing it.
OK. You are on the rocket team in Nazi Germany. You know Nazi Germany is going down in flames. Ostensibly, all good Nazis intend to win heroically. You strongly suspect that Dr Wernher von Braun is not, however, a good Nazi. You know he is a lot smarter than you, and you strongly suspect he is issuing lots of complicated lies because of lots of complicated plots. Who then should you stick with?
Why would person A being significantly smarter be a bad thing? Just from the danger of being hacked? I’m not thinking of anything else that would weigh against the extra utility from their intelligence.
If you have two agents who can read each other’s source code, they could cooperate on a prisoner’s dilemma, since each would have assurance that the other will not defect.
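As an aside, here is a minimal sketch of one way that mutual source inspection can work; the `clique_bot` agent below is a made-up illustration of mine (it cooperates only with exact copies of itself), not a claim about how real agents or the people in this thread would implement it:

```python
# A minimal sketch of the "read each other's source code" idea: an agent that
# cooperates only when the opponent's source is an exact copy of its own, so two
# copies of it cooperate, while an agent with different source (e.g. an
# unconditional defector) gets defection and cannot exploit it.
# Run this as a script file so inspect.getsource can find the source.
import inspect

def clique_bot(opponent_source: str) -> str:
    """Return 'C' (cooperate) iff the opponent's source code matches mine exactly."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

if __name__ == "__main__":
    my_source = inspect.getsource(clique_bot)
    # Two identical agents inspecting each other both cooperate.
    print(clique_bot(my_source), clique_bot(my_source))   # C C
    # Against different source, e.g. a defect-bot, clique_bot defects instead.
    defect_bot_source = "def defect_bot(opponent_source):\n    return 'D'\n"
    print(clique_bot(defect_bot_source))                   # D
```

The exact-match rule is of course the crudest possible version of "assurance"; the point is only that access to the other side’s decision procedure is what makes the assurance possible at all.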
Of course we can’t read each other’s source code, but if our intelligence, or rather our ability to assess each other’s honesty, is roughly matched, the risk of the other side defecting is at its lowest possible point shy of that (in the absence of more complex situations where we have to think about signalling to other people), wouldn’t you agree? When one side is vastly more intelligent or capable, the cost of defection is clearly much, much smaller for the more capable side.
All else being equal, it seems an A would rather cooperate with a B than with another A, because the cost to predict defection is lower. In other words, Bs have a discount on needed cognitive resources, despite their inferior maps, and even As get a discount when working with Bs! What I wanted to say with the PS post was that under certain circumstances (say, very expensive cognitive resources), the opportunity costs associated with a bunch of As cooperating, especially As that have group norms to actively exclude Bs, can’t be neglected.
The cost to predict consciously intended defection is lower.
I can and have produced numerous examples of Bs unintentionally defecting in our society, but for a less controversial example, let us take a society now deemed horrid. Let us consider the fake Nazi Dr. Wernher von Braun. Dr. Wernher von Braun was an example of A. His associates were examples of Bs. He proceeded to save their lives by lying to them and others, causing them to be captured by the Americans rather than the Russians. The B’s around him were busy trying to get him killed, and themselves killed.
I generally find it easier to predict behaviour when people pursue their interests than when they pursue their ideals. If their behaviour matches their interests rather than a set of ideals that they hide, isn’t it easier to predict their behaviour?
What use can we make of this information?
Maybe people on LW like person A because they are aware that there’s usually a Sophie’s choice that comes along with finding out you were Too Smart For Your Own Good.