The key issue isn’t levels of trust, but levels of trustworthiness. Yes, there can be feedback effects in both directions between trust and trustworthiness, but fundamentally, it is possible for people and institutions with high trustworthiness to thrive in an otherwise low-trust/trustworthiness society. Indeed, lacking competitors, they may find it particularly easy to do so, and through gradual growth and expansion, lead to a high-trust/trustworthiness society over time. It is not possible for people and institutions with high trust to thrive in an otherwise low-trust/trustworthiness society, as they will be taken advantage of. I am extremely sceptical of people who call for higher levels of trust absent better mechanisms of enforcing trustworthiness. Think about what you are actually asking people to do.
You can’t bootstrap a society to a high-trust equilibrium by encouraging people to trust more. You need to encourage them to keep their promises.
But being trustworthy is very risky and does not necessarily pay off in a low-trust environment. Imagine you are the only bureaucrat who does not take bribes. The pay is low because you are expected to take them, so you have no nest egg for unemployment, and sooner or later you get fired because your coworkers fear you will rat them out. Imagine being a conscientious taxpayer who never cheats on his taxes, in an environment where taxes are twice as high as the funds actually needed because it is expected that people cheat their way out of half of them, and imagine trying to compete with another business that offers lower prices because it cheats on its taxes. Imagine being a teacher giving out honest grades to kids who are not learning at all and getting constant threats from parents. And so on.
The only “solution” I see here is to treat this kind of corruption not as a corruption of the formal-official system but as the actual system, to be trustworthy inside it, and to see the formal-official system as only a facade. This is one possibility I see: to treat any convoluted, broken, corrupted system as nothing but an extraction engine which you dodge, while the real system is an informal kind of free market trading favors and bribes. In such a system, a good person could become a teacher, teach crap at school but give out stellar grades, and agree with every non-stupid parent to provide evening and weekend lessons at extra pay, high quality, with very, very honest feedback about student progress.
Trustworthiness is about keeping your promises, not obeying the law. Elsewhere you write about ethics as reciprocal, but now you view the good taxpayer as one who pays everything the government asks in tax? Precisely what’s missing from your account here is reciprocity.
The point is not that in a low-trust society, everyone should suddenly act with high-trustworthiness. It is an equilibrium for a reason. Rather, that there are avenues and interstices where a reputation for high-trustworthiness is now extremely valuable. Start a bank, or a law firm. If the bureaucrats are all taking bribes, then negotiate a discount on a bulk rate and sell it. And so on.
Fine, but if you are being reciprocal, you can kiss social improvement goodbye. Suppose you are a teacher who is underpaid, kicked around by everyone, and under-respected; it is reciprocal for you to give very few shits about teaching well, but then you can kiss goodbye the whole idea of improving education by learning from Finland. Predictably, it can only get worse, not better.
By bootstrapping with fake trust I mean: you tell the teachers to do good work and promise them the parents will respect them, you tell the parents to respect the teachers and promise them the teachers will do good work, you promise the taxpayers that if they pay their taxes they will get services, and you promise the governor that if he provides services people will pay their taxes… don’t you think such a noble lie could solve coordination problems? A self-fulfilling prophecy, basically.
I disagree that reciprocity can’t solve co-ordination problems. I don’t think such a “noble lie” could solve co-ordination problems because I don’t see the fundamental problems here as being ones of co-ordination.
You seem to posit that the problem is that the teacher doesn’t work hard because she isn’t respected, and the parents don’t respect the teacher because she doesn’t work hard. But think harder. Do parents try and get their children into the classes of the good teachers? Can a teacher with a good reputation charge more money for private lessons than a teacher with a bad reputation? In this case, trustworthiness wins in a virtuous cycle, and indeed builds trust. But if parents don’t care one way or the other, this trustworthiness cycle won’t happen—because your analysis was broken in the first place. It’s not that the parents don’t respect the teacher because she doesn’t work hard, it’s that they don’t respect the teacher because they don’t care about teaching. And who’s to say they are wrong? When you start with the position that a teacher is underpaid or under-respected, you are putting the cart before the horse.
Similarly, what makes you think the governor wants to provide services rather than skim the money for himself? What makes you think the taxpayers want to buy the government services at the price being offered? What makes you think these are co-ordination problems as opposed to divergent interests?
Because I don’t want to think everybody is a saint in Denmark and everybody is a villain in Ukraine or Tanzania—that would be, for me, uncomfortably close to racial superiority theories. So I like to think people would be more or less honest everywhere, as long as they saw this reciprocated.
You may not want to believe racist ideas, but that doesn’t mean that “noble lies” will work.
Besides, you are missing the obvious third option—that the issue is culture and institutions. The politician in Denmark might also steal if he could get away with it, but he has people checking on him. In other words, there are “mechanisms of enforcing trustworthiness,” to use the phrase in my original post. But if there are no such mechanisms, then why should the rate of tax compliance make any difference to the politician’s willingness to steal?
People’s “honesty” (and I worry you are subsuming too much into this label) isn’t just a function of the honesty of people around them. It is a function of the particular incentives they face from (dis)honesty. The inmates are no doubt dishonest to the prison warden all day long, but it doesn’t follow that he’s dishonest with them.
The problem is that (say) Ukraine has basically never had stable, pro-market culture and institutions, whereas England has had them for centuries. They are not going to catch up overnight, because it takes a long time for webs of trust to form and break, habits to change, norms to be removed, established and reinforced. You just have to be patient and do the best you can.
But checking on a politician is an investment in your community with little personal return to yourself. It requires trust to think it will pay off to you.
Mancur Olson modelled it perfectly in The Logic of Collective Action. A thousand people each put 100 $1 bills or coins into a hat, making it $100K, and then someone steals $1000 out of it. What is more beneficial for you: to catch the thief, in which case you personally get your $1 back, or to steal $1000 out of it yourself? Setting morals aside, in the short run the second is better. The first depends on the vague notion that if you let others fuck up the moral standards of your community, it cannot be beneficial for you in the long run. But this long run already requires trust. It already requires the belief that the thief is an exception and not the norm: this is what is called trust. If you believe they are all thieves (low trust), your most efficient move is to steal too.
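To make the hat arithmetic concrete, here is a minimal sketch of the two short-run payoffs. The figures ($100 per person, a $1000 theft, getting back only your own $1 share if the thief is caught) are the ones from the comment above; everything else is just illustration.

```python
# Short-run payoffs in the hat example (figures taken from the comment above).
PEOPLE = 1000                   # contributors
CONTRIBUTION = 100              # dollars each, as 100 $1 bills
POT = PEOPLE * CONTRIBUTION     # $100,000 in the hat
THEFT = 1000                    # amount one thief takes

# Your personal loss from one theft is your proportional share of it.
my_share_of_loss = THEFT * CONTRIBUTION / POT       # = $1

payoff_catch_thief = my_share_of_loss               # you get your $1 back
payoff_steal_too = THEFT - my_share_of_loss         # roughly $999, short run

print(f"Catch the thief: +${payoff_catch_thief:.0f}")
print(f"Steal yourself:  +${payoff_steal_too:.0f}")
```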
This mechanism can exist only in a high-trust community, where people think it is beneficial for them to prevent others from screwing up the common trust and moral standards. Once the standards and the trust both get low, there can be no such mechanism.
The hat with the money is a good example because it shows how easily this becomes a death spiral, a race to the bottom: every single person you see stealing $1000 from the hat lowers the utility of chasing them and raises the utility of doing the same yourself. There is a vicious circle, but no similar virtuous circle to bootstrap out of it.
And I think my bootstrap of fake trust is more like: if we really believe the others will not steal from the hat, we feel less pressure to do so ourselves. The mechanism itself requires the trust that stealing is not the norm.
Again, the model is this. If you are the first one to steal from the hat, the benefit is low (you don’t need the stolen money in a functional society, you can also earn it) and the cost is high: people will go after you. If you are the 20th one to steal from the hat, the benefit is high (you need a buffer of savings, and actually earning money in a society like that is hard) and the cost is low: why exactly would they go after you?
Fake trust is the noble lie telling people “you would be the first one to steal from the hat”. Get it?
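Here is a rough sketch of that model: the expected payoff of stealing as a function of how many people have already stolen. Every number except the $1000 theft, and both functional forms, are invented purely to illustrate the claimed direction of the effects (benefit rising, chance of being pursued falling); they are not from the comment.

```python
# Toy version of the "first vs. 20th thief" model. Only the direction of the
# effects is taken from the argument above; the curves and constants are made up.

def expected_payoff_of_stealing(prior_thieves):
    theft = 1000
    # Benefit rises as the society decays: earning money honestly gets harder,
    # and a buffer of savings becomes more necessary.
    benefit = theft * min(1.0, 0.2 + 0.04 * prior_thieves)
    # Expected cost falls as norm enforcement collapses: why would anyone
    # bother going after the 20th thief?
    p_pursued = max(0.05, 0.9 - 0.045 * prior_thieves)
    penalty = 3 * theft
    return benefit - p_pursued * penalty

for n in (0, 1, 5, 10, 20):
    print(f"{n:2d} prior thieves -> expected payoff {expected_payoff_of_stealing(n):7.0f}")
```

With these illustrative numbers the payoff goes from clearly negative for the first thief to clearly positive for the 20th, which is the tipping-point dynamic the comment describes.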
BTW, stable, pro-market cultures come from somewhere, not just from time. It is not as if human nature were hardwired to cooperate with a million strangers—our instincts are more tribal. I think England or Denmark had it because of things like Protestantism, or being generally on the winning side of history, but that is too long to explain here; maybe in another comment.
Right, but Denmark doesn’t rely on ordinary members of the community volunteering to check on the politician, with no thought of personal gain. Of course that won’t work. The solution is institutional—in other words, there are paid officials within government agencies who are responsible for these investigations, and laws and requirements for transparency, and so on.
You are making a fine argument as to why institutionless trust can’t scale. But whoever said it could? And your solutions don’t even solve your problems on their own terms. Why would a “noble lie” solve the problem with the hat? Won’t someone just steal anyway? The solution to the problem of the hat is a policeman.
OK. Maybe I am entirely clueless here, but I just don’t see how, if you pay official A to keep a check on official B, they don’t instantly collude into a mafia the very second citizens’ volunteer vigilance stops keeping watch on them.
Here is one interesting thing. People with radical politics (anarchists, communists, libertarians, and suchlike) tend to say precisely that: yes, they collude. Even the “conservative” Chesterton said every aristocracy is a mob with style (or something similar; not an accurate quote).
People who are moderate and skeptical should probably think it happens to some extent all the time, but the whole difference between the first world and the third is precisely the extent of it: preventing most of that collusion, preventing one big political-business-criminal mafia “blob” from coming into existence and colonizing the top echelons, is the difference between functional and dysfunctional, improving and deteriorating places.
But institutions like paying one Ivy League windbag to keep a check on what is basically his classmate cannot possibly work by themselves; they collude very easily. There must be some other kind of “trick” there.
Of course it’s possible for collusion to take place. So you have to make it hard for collusion to take place; you need failsafes. That’s why I didn’t just say “you need a guard”: I talked about transparency and the legal regime. There are different ways this can work.
You are right that one element is to make sure that affinity networks (like Ivy League classmates or ethnic groupings) don’t get to colonize the top echelons. But there’s more to it than that. I think you need to look at the specific legal regimes in a bunch of Western countries, see how they work, and see how they set the incentives for actors within it. And then you’ll be able to explain why there’s more corruption in Ukraine than in Italy, and more in Italy than in England. And none of them are perfect, by the way.
Only institutional change can explain how (say) New York city governance moved from being incredibly corrupt in the 19th century to moderately corrupt in the mid-20th century to a bit corrupt today. It’s not because the population has become more “saintly,” and it’s not because of any “noble lie.” But it did require time.
You are right that one element is to make sure that affinity networks (like Ivy League classmates or ethnic groupings) don’t get to colonize the top echelons.

The UK works relatively well despite a huge portion of its leadership going to school at Eton and then to college at Oxford or Cambridge.
But checking on a politician is an investment in your community with little personal return to yourself. It requires trust to think it will pay off to you.

That’s a fairly trivial take on why politicians get checked on. It is not useful to dismiss complex structures of accountability that evolved over decades by assuming they work in a way that can be summarized in two sentences.
I don’t think the mere complexity of structures can prevent collusion between people with similar class and school backgrounds. This is one of the things in life that is simple: watchdogs, sheepdogs, need a personal incentive for catching wrongdoers, and the closer they are culturally to the people they watch, the higher the danger of old boys’ networks and the stronger this incentive needs to be. I think complex structures are just a make-believe thing; ultimately it still comes down to whether or not I will incriminate some old buddy I go drinking with regularly, who works in a department of the organization I am a watchdog over. It is one of those things that is hard to do, but the underlying logic is simple.
For example, Byzantine emperors made sure their bodyguards came from feuding Norse tribes, to prevent collusion (such as a conspiracy to kill the emperor). This is one of the simple ways of doing it. If it were up to me, I would try putting people from lower-class backgrounds, who are very, very suspicious of and hostile toward silver-spoon folks, into watchdog positions.
It’s not a matter of what you want to believe to be true, it’s a matter of what is true.
You may want to practice the Litany of Tarski.
True, but in cases of uncertainty and unimportance it is better to go for the kinder beliefs.
By unimportance I mean either lacking the power to solve the problems, or the problems lacking urgency. Uncertainty is a clear enough term.
To put it differently, if racism is true, the only thing I could change about my behavior or the world is to be an asshole with people of color. This is a change not worth doing.
For example, I don’t have the kind of power to set limits on immigration. If I had the power, I still don’t see that it is urgent or important. And even if it were urgent and important, I would be uncertain about my beliefs, or about whether this is the right approach, or about the right way to execute it.
So if the diamond in the box is that racism is true, all I could really do with it is be bitter and hateful. Would a truth with that kind of utility be worth investing time to find out? Yeah, that is largely why I was originally interested in “red pill” stuff and then backed out of it in disgust. Any truth whose main utility is letting you be an ass online is not really worth finding out.
Ultimately I do not agree with the litany—I do not agree with truth having an absolute value, which may be the biggest heresy on LW :) Truth is a tool, not an absolute. Truth is a neat little gadget that makes predictions that come true, like a Geiger-Müller counter. Some predictions you can use, some not. If what is in the box is not a diamond I could sell but a piece of driftwood, do I really care whether it is there or not? Why?
Truths are motivated. Geiger-Müller counters are made not only by science but also by the desire to know whether you are standing in dangerous radioactivity. Reasoning and categories are motivated. The statement “there is a tiger in your room” does not simply mean “I predict you will find one if you open the door”; it also means “please don’t open that door, I don’t want you mauled.”
To put it differently, if racism is true, the only thing I could change about my behavior or the world is to be an asshole with people of color.

Um, no. You may want to look at what you said in the grandparent and in this thread more generally. You were trying to figure out how to improve the education systems and general problems of some countries. In order to do that, you were trying to determine the cause of the problem. Along the way you were prepared to reject a possible hypothesis because “it would be for me uncomfortably close to racial superiority theories”.
So if you really care about improving the situation in say Romania you need to figure out why it is the way it is, as in what is the true (in the absolute sense) situation. In order to do that, you can’t reject hypotheses simply because they make you feel uncomfortable.
Okay, fair point. Still: if societies deteriorate while their racial-ethnic makeup does not change at all, or changes very little (they are not the most tempting immigration targets, and so on), there seems to be little point in dwelling on that—if you see a change in an outcome, then a variable that did not change cannot really be the causal factor, now can it?
You were the one “dwelling on that” by calling Salemicus’s theory “uncomfortably close to racial superiority”. Whether his theory is actually “racist” (to the extent that word even has a coherent definition) is irrelevant, the point is that your first reaction was to dismiss it not for any logical reason but because of your hangup about thinking any thoughts that pattern match to “racism”.
I just want to add, for my own sake, that I was in no way advocating anything resembling “racial superiority.” Rather, my explanation for the relative success of some societies over others is institutional.
What do you mean by that, are you saying race isn’t correlated with IQ, or anything else important?
Or are you merely saying that the subject isn’t relevant to the original discussion?
I expressed no opinion on race, because it wasn’t relevant.
Being trusting is even riskier (and stupider).
The point is making it mutual. Assuming it is a coordination problem.
You don’t solve coordination problems by being blindly trusting and you certainly don’t do this by spreading “noble lies”. You do it by becoming trustworthy, i.e., not defecting against those who haven’t defected against you.
In fact all a “noble lie” will do is make it harder to determine who is or isn’t trustworthy, thus making it harder to punish defectors.
By using the verb “to defect” I am assuming you are familiar with the button-pressing tit-for-tat game-theory stuff. AFAIK one simple yet efficient algorithm is: reciprocate what the other player does, but when you are both stuck in a mutual non-cooperative loop, offer “forgiveness” by pressing the cooperate button once and see whether it is reciprocated, so that you can both enter a mutually cooperative loop. This “forgiving” press is clearly unearned trust!
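For what it’s worth, here is a minimal sketch of that “forgiving” variant of tit-for-tat. The payoff matrix is the standard prisoner’s dilemma one, and the noise and forgiveness rates are arbitrary choices for illustration; none of these numbers come from the discussion above.

```python
import random

# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def forgiving_tit_for_tat(their_last, forgiveness):
    """Copy the opponent's last move, but after a defection occasionally press
    'cooperate' anyway -- the unearned-trust press that can break a loop of
    mutual retaliation."""
    if their_last is None or their_last == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def average_payoff(rounds=1000, noise=0.05, forgiveness=0.1):
    """Two forgiving tit-for-tat players; 'noise' makes one of them defect by
    mistake now and then, which creates the retaliation loops that forgiveness
    can repair."""
    last_a = last_b = None
    total = 0
    for _ in range(rounds):
        a = forgiving_tit_for_tat(last_b, forgiveness)
        b = forgiving_tit_for_tat(last_a, forgiveness)
        if a == "C" and random.random() < noise:
            a = "D"                      # an occasional mistaken defection
        total += PAYOFFS[(a, b)] + PAYOFFS[(b, a)]
        last_a, last_b = a, b
    return total / (2 * rounds)          # average payoff per player per round

print("no forgiveness:  ", average_payoff(forgiveness=0.0))
print("some forgiveness:", average_payoff(forgiveness=0.1))
```

With forgiveness set to zero, a single mistaken defection locks the two players into an echo of retaliation; with a small forgiveness rate, the occasional unearned cooperative press repairs it, and the average payoff recovers toward full mutual cooperation. The output is stochastic, so exact numbers vary from run to run.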