Fundamental Uncertainty: Chapter 3 - Why don’t we agree on what’s right?

N.B. This is a chapter in a planned book about epistemology. Chapters are not necessarily released in order. If you read this, the most helpful comments would be on things you found confusing, things you felt were missing, threads that were hard to follow or seemed irrelevant, and otherwise mid to high level feedback about the content. When I publish I’ll have an editor help me clean up the text further.
You’re walking down the street and find a lost wallet. You open it and find the owner’s ID and $100. You have a few possible courses of action. You could return the wallet exactly as you found it. If you did, most people would say you did the right thing. If instead you kept the wallet and all the money in it, most people would say you did the wrong thing. But what if you returned the wallet and kept a little of the money, say $10, as a “finder’s fee”? Did you do a good thing or a bad thing?
Some might say you deserve the finder’s fee since the time it takes you to return the wallet is worth something. Further, at least you were willing to return most of the money. A real thief could have found it and kept all the money! The owner will be nearly as happy to get back the wallet with $90 as they would be to get back the wallet with $100, so they might not even notice. And if they do ask about the $10 you can lie and say that’s how you found the wallet to spare you both an awkward conversation.
Others will say that stealing is stealing, end of story. The owner can choose to offer you a reward when you return the wallet, but they don’t have an obligation to do that. And even if they ought to offer you a reward, it shouldn’t matter if they don’t. Good actions aren’t contingent on later rewards. Thus the only right thing to do is return the wallet intact. After all, wouldn’t you want someone to return your lost wallet without stealing any of the money?
Deciding what to do about a lost wallet may seem like a typical moral dilemma, but consider that in the last chapter we solved the problem of how words get their meanings. In that case, shouldn’t we all know what “good” and “bad” mean? And if we know what “good” and “bad” mean, shouldn’t we then be able to pick out which actions are good and which are bad and then only choose the good ones? How can there be disagreement about what is right and wrong?
Recall that we learn the meanings of words through our interactions with others when they show us what words mean to them. Maybe one day you share a toy on the playground and the teacher says you were “good” to share and rewards you with a smile. Another day you hit another kid on the playground, the teacher says you were “bad”, and punishes you by making you sit on the ground while the other kids play. Through many of these interactions with teachers, parents, other adults, and your fellow kids, you build up a notion of what “good” and “bad” mean, as well as learn that doing good is generally rewarded while doing bad is generally punished. So you try to do good things to get rewards and avoid bad things so you don’t get punished.
But your notion of what “good” and “bad” mean is ultimately your own, since it is shaped by your individual experiences. Although many people try to influence your experience so you end up with the same notion of “good” and “bad” as they have, there will inevitably be some errors in the transmission. For example, maybe your parents tried to teach you to hold the door open for others, except they thought it was okay to not hold the door open when you have a good reason to hurry. You might not have picked up on the exception, and so learned the rule “always hold the door open for others, no matter what”. So now, due to an error, you have a slightly different idea of what’s good behavior than the one your parents tried to teach you.
And it doesn’t even require error to get variance in values. Maybe your parents fell on hard times and could only feed the family by stealing food. They were raised with the idea that all stealing is bad so always felt guilty about stealing to eat. However, when they teach you to steal food to feed yourself, they insist it’s a good thing because they don’t want you to feel the same shame they do. So you grow up thinking stealing to eat is good. Now we have at least two different ideas in the world about the goodness of stealing: those who think all stealing is bad, even to feed yourself when there’s no other option, and those who think stealing is fine if the alternative is hunger.
Given these differences in values naturally arise, how can we come to agreement on what’s right? If we have a city where 99% of the people are Never Stealers—they think all stealing is bad—and 1% are Food Stealers—they think stealing food is okay when that’s the only way to eat—should the Food Stealers bend to the will of the majority? Or would the “right thing” be for the Never Stealers to have a bit more compassion for the hungry and add some nuance to their beliefs about stealing? Is there some means by which we can know what’s really right and wrong when people end up disagreeing?
Reaching Agreement
Let’s leave questions of morals and ethics aside for a moment to talk about how people come to agree at all.
Let’s suppose two people, Alice and Bob, are debating. Alice claims that all phoobs are red. Bob claims that all phoobs are blue. How might they come to some agreement about the color of phoobs?
Alice: I read in a book that all phoobs are red.
Bob: I saw a blue phoob with my own eyes! In fact, every phoob I’ve ever seen has been blue.
Alice: Hmm, interesting. I read that all phoobs are red in National Geographic. Are you saying the writer lied and the photographer doctored the images?
Bob: Oh, that’s strange. I’ve definitely only ever seen blue phoobs.
Alice: Wait! Where did you see these phoobs?
Bob: I saw them in a zoo.
Alice: I wonder if phoobs change color in captivity?
Bob: Yeah, or maybe there’s more than one species of phoob!
Alice: Okay, so taking our experiences together, I think we can say that all phoobs are either red or blue.
Bob: Yeah, I agree.
Carroll: Hey, you’re not going to believe it! I just saw a green phoob!
What happened here? Alice and Bob shared information with each other. Each one updated their beliefs about the world based on the information they learned. After sharing, they were able to come to agreement about phoob colors—at least until Carroll showed up with new information!
If Alice and Bob share information like this, should they always come to agreement? That is, with enough time and effort, could they always resolve their disagreements?
Turns out yes—at least if they are sufficiently “rational”.
What does it mean to be sufficiently “rational”, though? Most people would describe a person as rational if they make decisions and form beliefs without letting their emotions dominate. That’s not the kind of rational necessary to get people to always agree, though. It might help, sure, but it’s not enough to logically guarantee agreement. For that we’re going to need people who are Bayesian rationalists.
Bayesian rationality is a precise mathematical way of describing rationality. Bayesian rationalists—or just “Bayesians”—have precise mathematical beliefs. Each of their beliefs is a combination of a statement about the world, like “the sky is blue”, and a probability of how likely they think that statement is to be true, say 99.99%. They then update these beliefs based on their observations of the world using a mathematical rule known as Bayes’ Theorem, hence why we call them Bayesians.
There are a few things you should know about Bayesians. First, the way they use Bayes’ Theorem to update their beliefs is by running a calculation on two values. The first is the prior probability, the probability the Bayesian assigned to a statement before seeing the new information. The second is the likelihood of the evidence, which measures how probable what they observed would be if the statement were true. Bayes’ Theorem multiplies the prior probability and the likelihood together, then rescales by how probable the evidence was overall, to generate a posterior probability, which is what the Bayesian updates their belief to.
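For readers who want to see the rule itself, here it is in its standard form, for a statement $H$ and a piece of evidence $E$ (the numbers in the example that follows are invented purely for illustration):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

For example, suppose a Bayesian starts out 50% sure that all phoobs are red, thinks a magazine would run photos of red phoobs 90% of the time if phoobs really are red, and only 30% of the time if they are not. Seeing the photos then moves their belief to $\frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.3 \times 0.5} = 0.75$, or 75%.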
Second, Bayesian beliefs always have a probability between 0% and 100% but are never exactly 0% or 100%. Why? Because if they were ever 100% certain about one of their beliefs they’d be stuck with it forever, never able to change it based on new information. This is a straightforward consequence of how Bayes’ Theorem is calculated, but it also matches intuition: to be totally certain of the truth or falsity of a statement is to take that statement “on faith” or “by assumption”, and so to be unable or unwilling to consider alternatives. So if a Bayesian were ever 100% sure the sky is blue, they’d keep believing it even if they moved to Mars and the sky was clearly red. They’d see red and keep believing the sky is blue.
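To see why total certainty gets stuck, plug it into the rule above: if $P(H) = 1$, then $P(\text{not } H) = 0$, and no matter what evidence $E$ shows up,

$$P(H \mid E) = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \text{not } H) \cdot 0} = 1.$$

The posterior is 1 again, forever. (And if the Bayesian sees evidence they considered impossible, so that $P(E \mid H) = 0$, the formula divides zero by zero and the update rule breaks down entirely.)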
Third, Bayesians are optimal reasoners. That is, they never leave any evidence on the table; they always update their beliefs as much as possible based on what they observe. If you present evidence to a Bayesian that they’ve moved to Mars, they’ll believe they are on Mars with exactly the probability permitted by the evidence, no more and no less. This means you can’t trick Bayesians, or at least not in the normal sense. At most you can fool a Bayesian by carefully filtering what evidence they see, but even then they’ll have accounted for the probability that they’re being deceived and updated in light of it! Think of them like Sherlock Holmes but with all the messy opportunity for human error removed.
Combined, these facts mean that Bayesians’ beliefs are always in a state of fluctuating uncertainty, yet are the most accurate beliefs possible given a Bayesian’s priors and the evidence they’ve seen. And that means they can pull off some impressive feats of reasoning!
For instance, returning to the claim that two sufficiently rational people can always agree: if two people are Bayesian rationalists, there’s a theorem—Aumann’s Agreement Theorem—which proves that they will always agree…under special conditions. Those conditions are that they must have common prior beliefs—the things they believed before they encountered any of the evidence that supports their current beliefs—and they must share all the information they have with each other. If they meet those two conditions, then they will be mathematically forced to agree about everything!
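For the curious, here is a compressed paraphrase of the result (Aumann, 1976), not its full formal statement: if two Bayesians start from a common prior, and their current probabilities for some event $A$ are common knowledge between them—each knows the other’s number, knows that the other knows theirs, and so on—then those probabilities must be equal:

$$P(A \mid \text{my information}) = P(A \mid \text{your information}).$$

This is why the theorem is often summarized as “rational agents cannot agree to disagree”. Sharing all of your information with each other, as described above, is one way to make your probabilities common knowledge.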
That’s pretty cool, but humans disagree all the time. What gives? Well, humans aren’t Bayesian rationalists! Most of us don’t have precise probabilities assigned to all of our beliefs, and even if we did we wouldn’t succeed at always applying Bayes’ Theorem correctly to update them. Bayesians are more like a theoretical ideal we can compare ourselves against. We instead look more like Bayesians with a lot of error mixed in: we have vague ideas of how likely the things we believe are to be true and do a somewhat fuzzy job of updating those beliefs when we learn new information, relying heavily on heuristics and biases rather than careful calculation. With a lot of training we can get a little closer to being Bayesians, but we’ll always make mistakes, ensuring we’ll disagree at least some of the time.
We also don’t meet one of the other requirements of Aumann’s Agreement Theorem: we don’t have the same prior beliefs. This is likely intuitively true to you, but it’s worth spelling out why. For us all to have the same prior beliefs we’d need to all be born with the same priors. This seems unlikely, but for the sake of argument let’s suppose it’s true. As we collect evidence about the world we update our beliefs, but we don’t remember all the evidence. Even if we had photographic memories, childhood amnesia ensures that by the time we reach the age of 3 or 4 we’ve forgotten things that happened to us as babies. Thus by the time we’re young children we already have different prior beliefs, and we can’t share all our evidence with each other to align on the same priors because we’ve forgotten some of it. So when we meet and try to agree, we sometimes can’t: even if we share all the information each of us has now, we didn’t start from the same place, and so we may fail to reach agreement.
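To make the role of priors concrete, here is a small illustrative sketch in Python (the coin-flip setup, the function name, and all the numbers are invented for this example): two updaters see the exact same evidence and apply the exact same rule, but because they start from different priors, they end up with different conclusions.

```python
# A toy illustration (invented numbers): two Bayesians see the *same* evidence
# but start from different priors, so they end with different posteriors.

def posterior_biased(prior_biased, flips, p_biased=0.7, p_fair=0.5):
    """Return P(coin is biased toward heads | flips) via Bayes' Theorem."""
    likelihood_biased = 1.0  # P(flips | biased coin)
    likelihood_fair = 1.0    # P(flips | fair coin)
    for flip in flips:       # each flip is 'H' or 'T'
        likelihood_biased *= p_biased if flip == 'H' else 1 - p_biased
        likelihood_fair *= p_fair if flip == 'H' else 1 - p_fair
    numerator = likelihood_biased * prior_biased
    denominator = numerator + likelihood_fair * (1 - prior_biased)
    return numerator / denominator

flips = list("HHTHHHTH")  # the shared evidence: six heads, two tails

alice = posterior_biased(prior_biased=0.5, flips=flips)   # Alice starts undecided
bob = posterior_biased(prior_biased=0.01, flips=flips)    # Bob starts very skeptical

print(f"Alice's posterior that the coin is biased: {alice:.2f}")
print(f"Bob's posterior that the coin is biased:   {bob:.2f}")
# Same evidence, same update rule, different priors -> different conclusions.
```

Run it and Alice ends up fairly confident the coin is biased while Bob remains doubtful, even though both updated optimally on identical flips—a toy version of disagreeing on priors.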
So in theory we cannot all agree about everything, but in practice some people agree about some things. This is because our everyday agreement is fuzzy. Unlike Bayesians, we don’t get hung up on disagreements about things like whether to be precisely 99.5% or 99.6% sure the sky is blue. We just agree that the sky is blue and leave open the vague possibility that something crazy happens like waking up on Mars and seeing a red sky.
Given that we can reach fuzzy agreement about some things, can we reach fuzzy agreement about morals and ethics? Can we agree about what is right and wrong in practice even if we cannot in theory?
Disagreeing on Priors
If we want to see if we can reach some fuzzy agreement about what’s good and bad, we need to consider in more depth what it means when we disagree. Previously when we talked about disagreements about what words mean we did so in terms of error. This makes sense for many types of disagreements where there’s broad agreement about what the right meaning is. For example, if I think all orange-colored citrus fruits are oranges, I might accidentally serve a reddish variety of grapefruit to guests at my breakfast table. Their surprise when they take a bite of these “oranges” will quickly inform me of my mistake.
But other disagreements don’t look so much like errors. To continue the fruit example, maybe I do know the difference between oranges and grapefruits, but I happen to think grapefruits taste better than oranges, so I intentionally serve them. My guests disagree. There’s not really an error here, though, but rather a difference in preferences. It’s similar with disagreements about what clothes to wear, art to view, music to listen to, and so on: these are differences in individual preferences rather than errors. Sure, we can find the likes of busybodies and professional critics who make a career of telling others they have the wrong preferences, but the errors they see only exist from their point of view. If I happen to like music that’s derivative and simplistic because I have fond memories associated with it, then that’s my business and no one else’s—as long as I wear headphones!
But what’s the deal with good and bad? Are differences in morals and ethics a matter of error or preference? On the one hand, it seems to be a matter of error because our actions can have serious consequences for other people. If I’m a Food Stealer who thinks it’s okay to steal to eat and you’re a Never Stealer who thinks all stealing is wrong, you’ll be upset when your family has to skip a meal because I snuck into your house and stole your dinner for mine. This seems like a case where one of us is making an error: one of us is wrong about what is right, and we need to settle the ethical question of whether stealing food to eat when hungry is good or bad.
But isn’t this also kind of a difference in preference? Perhaps I and my fellow Food Stealers prefer to live in a world where the hungry get to eat even if that means others sometimes have their food stolen. Perhaps you and your fellow Never Stealers prefer a world where no one ever steals, and we can be safe in the knowledge that our food remains ours to do with as we please. So maybe this isn’t an error about what’s right and wrong, but a disagreement about a preference for the kind of society we’d each like to live in.
When we find a situation like this, where two interpretations of the same events seem reasonable, it’s worth asking if there’s a way both can be true at once. That is, is there a way for differences in morals to both look to us like errors and behave like preferences?
To find out, let’s first return to our friends the Bayesians. How do they think about morality? For them, any beliefs they have about what’s right and wrong are the same as any other beliefs they have, which is to say that those beliefs are statements with probabilities attached. So a Bayesian doesn’t make categorical statements like “it’s wrong to steal” the way most people do. They instead believe things like “I’m 95% certain that all stealing is wrong”.
So if I’m a Bayesian, what does it mean for me to say someone else is “in error”? Well, since I’m an optimal reasoner and my beliefs are already the best ones that can be reckoned given my prior beliefs and the evidence I’ve seen, it would mean that they don’t agree with my beliefs in some way. So if I’m 95% certain the statement “all stealing is wrong other than stealing to eat when hungry” is true and I meet another Bayesian who says that they are 95% certain that “all stealing is wrong” is true, then it looks to me like they’ve made a mistake since, as just noted, I already have the best possible beliefs. But because they’re also a Bayesian, they think the same thing in reverse: they have the best beliefs and I am in error.
If we have the same priors we might be able to come into agreement by sharing all the evidence we have that supports our beliefs. Since we’re Bayesians, Aumann’s Agreement Theorem applies, so we’ll be able to make the same updates on the same evidence and should come to believe the same things. But let’s suppose that we’re human-like with respect to our priors, which is to say that we don’t share the same priors. If we figure this out, we can agree to disagree on priors, which is to say we agree that our disagreement cannot be resolved because we didn’t start out with the same prior beliefs. This is analogous to the human situation of having different preferences, only it extends well beyond things we typically think of as preferences to questions of morals and ethics.
Returning to the world of humans, we’re not so different from Bayesians who disagree on priors. To wit, we have deeply held beliefs about what is right and wrong. So do other people. Those beliefs were influenced by everything that happened in our lives to construct our unique concept of what is right and wrong. When we disagree on moral questions it feels like others are in error because we did our best to come up with our beliefs, but so did they. Instead of one or both of us being in error, it’s reasonable to say that we have a fundamental disagreement about morals and ethics because we don’t share the same deeply held beliefs about good and bad. Neither one of us is necessarily correct in some absolute sense; we both have our own reasonable claim on the truth based on what we know.
This is a pretty wild idea, though, because it implies that people with vastly different beliefs from our own might be just as justified as us in their ideas about what’s right and wrong. This is true even if they believe something that, to us, seems utterly abhorrent. Rather than pushing you on any hot-button issues here—you can think about those for yourself—let’s reconsider the disagreement between the Food Stealers and the Never Stealers.
If you ask Food Stealers what they think about Never Stealers, they’d likely say that Never Stealers are cold and heartless. The Never Stealers, in the same way, think the Food Stealers are unrepentant thieves freeloading off the hard work of the Never Stealers. But is either really right? The Food Stealers grew up thinking it was more important to care for others in need than to greedily hoard food. The Never Stealers grew up thinking it was more important to respect each other’s property than to let beggars eat them out of house and home. They cannot agree because they believe fundamentally different things about the world. If they were Bayesians, they would be disagreeing on priors. But since they’re humans, we might instead say they have different moral foundations.
Different Moral Foundations
The idea of moral foundations, and that people might have different ones, comes from Jonathan Haidt in his book The Righteous Mind. He argues that humans have different fundamental beliefs about what is right and wrong. These fundamental beliefs are built out of a few moral “foundations” or core beliefs. A person’s moral beliefs can then be thought of as a kind of moral “personality”, with each person identifying more or less strongly with the different foundations.
He identifies six moral foundations. They are:
care/harm: concern for the pain and joy of others
fairness/cheating: the desire for everyone to be treated the same
loyalty/betrayal: placing the group above the individual
authority/subversion: deference to leaders and tradition
sanctity/degradation: generalized “disgust”; splits the world into pure and impure
liberty/oppression: the right to be free of the control of others
Under this theory, one person might believe fairness is more important than liberty and think that it’s good to give up some freedom in order to treat everyone equally. Another might believe just the opposite, seeing any limits on freedom as wrong no matter what the cost in terms of other moral foundations. Haidt and his fellow researchers use this theory to explain differences in beliefs about what is right and wrong between different cultures, religions, and even political parties. For example, based on survey data it seems that political conservatives place more value on loyalty, authority, and sanctity than political liberals do, while liberals place more value on care and fairness. Given this, it seems likely that most disagreements between conservatives and liberals are not actually due to disagreements about what specific policies will be best for society but due to disagreements about what even counts as best, which is to say disagreements about moral foundations.
Perhaps unsurprisingly, some people disagree with Haidt’s theory. Many of them think he’s right that humans have something like moral foundations or fundamental core beliefs about morality but wrong in what the specific moral foundations are. Others think his theory is irrelevant, because even if people have different moral foundations there’s still some fact of the matter about which moral foundations are best. This only further underscores how fundamentally uncertain we are about what things are right and wrong—that we can’t even agree on the theoretical framework in which to work out what things are good and bad. In this chapter we’ve not even begun to touch on the long history of philosophers and theologians trying to figure out what’s good and bad. That’s a topic one step deeper that we’ll return to in Chapter 8.
For now, whether Haidt’s theory is correct, has the right idea but the wrong details, or is true but irrelevant, it illustrates well the point we’ve been driving at in this chapter, which is that we can disagree at a deep, fundamental level about what we believe to be true. So deeply that we may not ever be able to come to complete agreement with others. When we look at people from political parties, religions, and cultures other than our own and find them acting in ways that seem immoral, they look back at us and think the same. It seems that we each have approximately equally good footing to justify our beliefs and so are stuck disagreeing.
And it’s not just other people we disagree with. We also disagree with ourselves all the time! For example, whether or not I think others should do the same, I think the right thing for me to do is to avoid sugary drinks. But most days I drink a Coke. What happened? Why didn’t I do what I believed was right? That’s the question we’ll explore in the next chapter.