Contra Alexander on the Virtue of Silence
At my local LW meetup, we recently discussed Scott Alexander’s post on the Virtue of Silence. For reasons I could not immediately discern, I intuitively disagreed with his argument. The following is my attempt to find out what exactly I disagreed with, and why.
The Virtue of Silence: Scott Alexander’s original example
Consider a patient who tells their doctor that they sent an innocent person to prison, and that the resulting stress may be relevant to their medical treatment. The doctor is, of course, bound by medical confidentiality, but also faces an ethical dilemma: Should they uphold medical confidentiality, or should they report this to the authorities, possibly freeing the wrongfully convicted person?
Leah Libresco argues that the doctor should not violate medical confidentiality, because “violating medical confidentiality creates an expectation that medical confidentiality will be violated in the future, thus dooming patients who are too afraid to talk about drug use or gay sex or other potentially embarrassing but important medical risk factors.”
Scott Alexander then raises the stakes: He proposes that a patient’s decision to tell their doctor about some medical risk factor is influenced not by whether a doctor violated medical confidentiality in the past, but by whether the patient has heard about such a violation. He rightly argues that if such a violation were published in a high-volume outlet such as the New York Times, the patient would be far more likely to hear about it than via, say, the proceedings of the resulting court case.
Alexander also argues that a New York Times article about a doctor merely considering a violation of medical confidentiality would have a similar effect on the patient as an article about an actual violation.
I don’t disagree with Alexander’s reasoning up to this point. I disagree, however, with the conclusion he then draws: That “whether the doctor actually keeps her promise or not in this particular case is of miniscule importance compared to the damage that the column has already done”, and that the New York Times ought not to have published the column containing the doctor’s ethical qualms.
Unbreakable Rules
To disentangle exactly where my disagreement lies, let us examine the assertion that a New York Times article about a doctor considering a violation of medical confidentiality would have a similar effect on the patient as an article about an actual violation. This implies that medical confidentiality is supposed to be, in the eyes of the naive patient, a rule so absolute that there is no room for interpretation and no room for permissibly breaking it—in short, a rule which knows no exceptions. If a patient’s trust in their doctor’s medical confidentiality is shaken by the doctor’s merely thinking about breaking the rule in the face of an ethical dilemma, the patient must have thought the rule so absolute that the doctor should not even have considered breaking it under any circumstance.
Conversely, had the patient assumed that medical confidentiality can permissibly be broken in some extreme cases—as most rules, even basic rights in most legal systems, can[1]—then a public discussion about whether this concrete ethical dilemma the doctor faces constitutes such an extreme case could not have changed the patient’s fundamental assumptions about medical confidentiality. It follows that the patient must have considered medical confidentiality a rule which admits no exceptions.
The doctor, on the other hand, believes that medical confidentiality is up to interpretation, at least when facing certain ethical dilemmas. They made this belief clear by writing a letter to the New York Times. Now consider the example used by Alexander, where the patient has sent an innocent person to prison. Before reading the New York Times, the patient operates under the assumption that medical confidentiality will not be broken, regardless of what they might tell their doctor: They will therefore tell their doctor about the person they sent to prison. The doctor, however, considers breaking medical confidentiality, a fact unknown to the patient. If the doctor decided to report the false accusation, the patient would have incriminated themselves and might themselves face prison. Does this not seem unfair to the patient?
Asymmetric Information
The root cause of this perceived unfairness is asymmetric information: The patient believes that they and the doctor are playing a game with a given set of rules (here, that medical confidentiality cannot be broken); in fact, the rules are different, and the patient doesn’t know them. Medical confidentiality may be broken for many reasons—in the prison example, because ethical considerations might require it. In a game between rational agents, asymmetric information leads to suboptimal outcomes.[2] Conversely, postulating asymmetric information as a requirement for optimal outcomes, as Alexander seems to do, denies the patient rationality. Does this not seem unfair to the patient?
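To make this concrete, here is a minimal sketch of the patient’s decision problem as a one-shot game, with entirely made-up payoffs and probabilities: disclosing improves treatment, but carries a cost if the doctor reports the patient. A patient who knows the true rules can weigh both options; a patient who believes the rule is unbreakable always discloses.

```python
# A minimal sketch (all payoffs and probabilities are hypothetical) of how
# asymmetric information about the rules hurts the patient. The patient
# decides whether to disclose a sensitive fact; disclosure improves
# treatment (+gain) but, with probability p_report, the doctor breaks
# confidentiality, costing the patient the report_cost.

def expected_payoff(disclose: bool, p_report: float,
                    treatment_gain: float = 2.0,
                    report_cost: float = 10.0) -> float:
    """Patient's expected payoff under the *true* rules of the game."""
    if not disclose:
        return 0.0
    return treatment_gain - p_report * report_cost

p_true = 0.3  # true probability that the doctor reports (hypothetical)

# An informed patient compares both actions under the true rules:
informed_choice = max([True, False], key=lambda d: expected_payoff(d, p_true))

# A naive patient believes confidentiality is absolute (p_report = 0),
# so they always disclose; but the payoff is realized under the true rules:
naive_payoff = expected_payoff(True, p_true)
informed_payoff = expected_payoff(informed_choice, p_true)

print(f"naive patient:    discloses, expected payoff {naive_payoff:+.1f}")
print(f"informed patient: discloses={informed_choice}, "
      f"expected payoff {informed_payoff:+.1f}")
# naive patient:    discloses, expected payoff -1.0
# informed patient: discloses=False, expected payoff +0.0
```

Under these hypothetical numbers, the naive patient’s choice is strictly worse, not because they are irrational, but because they are reasoning from the wrong rules.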
Examples
Denouncing public discussion of whether some situation constitutes an exception to a rule amounts to upholding the illusion of an unbreakable rule. In the example of the patient who sent the wrong person to prison, the adverse outcomes for the patient are less startling, because the reader sympathizes with the innocent prisoner. Other examples can be constructed where these adverse effects are much clearer.
In societies in which homosexuality is a capital offense, a doctor might choose to violate medical confidentiality and report to the authorities a patient who mentioned that they had gay sex. The naive belief in the illusion of an unbreakable rule may cost the patient their life. In this case, I would hope for the doctor to discuss their considerations of violating medical confidentiality in public—it would allow gay people to realistically assess the very real risk involved in mentioning their sex life to their doctor.
There are examples beyond medical confidentiality where illusions of unbreakable rules create adverse outcomes. Consider the rule, common to militaries across the world, that a soldier has to obey the commands of their superior officers. As with medical confidentiality, one may argue that upholding the illusion of this rule as unbreakable is necessary: After all, if soldiers did not obey orders on the battlefield, chaos would ensue, or worse, they might refuse to fight at all. However, there are exceptions to this rule: Soldiers are indeed required to disobey orders if the orders are unlawful (particularly if they violate human rights). Upholding the illusion that military orders must always be obeyed may well have led to war crimes.
Attacking this from the other side, I find it difficult to construct rules for which Silence on matters of interpretation is always Virtuous. Basic human rights come to mind—certainly, these should always unambiguously hold, and someone openly discussing whether it would be ethical to violate someone else’s human rights would immediately cast doubt on whether they actually do. But even the most inalienable human rights can conflict with one another, and situations can arise which cannot be resolved without violating someone’s human rights; consider the classical trolley dilemma, where a decision must be made which violates someone’s right to life either way. More salient questions arise when different human rights clash. Consider a doctor faced with ending the life of a terminally ill patient in assisted suicide, where the patient has explicitly declared their wish to die. Should the doctor not be permitted to publicly discuss whether they can make an exception to a rule (respect the patient’s right to life; Thou Shalt Not Kill) when it conflicts with a fundamental moral value (the patient’s right to self-determination)? Yes, it would make the right to life look less unbreakable. But the moral conflict exists, and it must be resolved by public debate.
Takeaways
If a rule can be broken under some circumstances, it must be permissible (and sometimes, as in the case of military command, even encouraged) to publicly discuss the circumstances under which it can be broken.
Forbidding discussions about such exceptions at best creates an unlevel playing field, where naive agents unwittingly make decisions under false assumptions about the rules. At worst, it creates oppressive power structures, where one small group of agents (the doctors) interprets the rules while the other, larger group of agents (the patients) does not even know that the rules are up to interpretation.
[1] During the COVID-19 pandemic, basic rights such as the freedom of movement were suspended. The freedom of movement is considered an inalienable human right by the United Nations; yet even inalienable human rights can be suspended if they conflict with other rights, such as the right to life.
[2] It is, for example, often mentioned as a possible reason for market failures in free-market systems.
Suppose I knew medical confidentiality was a thing, but really didn’t realize it stretched that far. I think there is an assumption that people will update down on seeing this discussion. And I don’t think that is true. It is a big update away from “medical confidentiality gets lip service but is broken all the time for any old reason”.
Several years ago I had a conversation with someone that helped them predict other people’s behavior much better thereafter: most people are not consequentialists, and are not trying to be; they mostly do what is customary in each situation within the relevant part of their culture or social group.
Your discussion in the post seems premised on the idea that people are trying to reason about consequences in specific cases at all, and I don’t think that’s usually true. Yes, very few rules are truly absolute, which is why most people balk at Kant’s discussion of telling a murderer where his friend is, or why the seal of the confessional in Catholicism and spousal privilege in US law are considered exceptional. But an illusion of absoluteness is a polite social fiction that helps most people trust that exceptions are at least sufficiently rare that the other party would be very hesitant to break them, and probably socially sanctioned even for validly breaking them. There’s a reason “no one likes a tattletale” has survived as a maxim, too.
That said, if I ever manage to live in a community where most people are even trying to take consequentialist arguments seriously, I’m likely to agree with the case you’ve laid out here. There’s a whole list of examples in my head that would signify that change to me, but I’m not holding my breath.
Your framing of the illusion of absolute rules as “polite social fictions” is quite brilliant; I think that’s what Scott Alexander probably wanted to convey. Such social fictions may well be required for people to trust in institutions, and strong institutions are generally credited as a core factor in social progress. Take, for example, the police—it is an extremely useful social fiction that “the police is your friend and helper”, as they say in German, even though they often aren’t, particularly not to marginalized social groups. Upholding this fiction ensures that most people respect the police, report crimes when they occur, and that physical attacks by non-criminals against the police remain comparatively rare in most countries. At the same time, I think it is incredibly dangerous to prohibit public discussion of police misconduct. Yes, it may destroy the social fiction of the well-meaning police—but shouldn’t people be made aware of instances of police misconduct, so that they can properly adjust their priors? Rationally speaking, doesn’t the police deserve to be treated with a degree of suspicion proportional to their probability of misconduct?
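As a toy illustration of what “properly adjusting priors” means here, with entirely invented numbers: a single credible public report about an encounter can and should shift one’s estimate substantially.

```python
# Toy Bayesian update (all numbers invented) for "properly adjusting priors"
# about police misconduct once reports are allowed to become public.

prior = 0.02              # prior: P(a given encounter involved misconduct)
p_report_given_yes = 0.5  # P(credible public report | misconduct occurred)
p_report_given_no = 0.01  # P(credible public report | no misconduct)

# Bayes' rule: P(misconduct | credible report)
evidence = p_report_given_yes * prior + p_report_given_no * (1 - prior)
posterior = p_report_given_yes * prior / evidence

print(f"prior:     {prior:.3f}")      # 0.020
print(f"posterior: {posterior:.3f}")  # ~0.505
```

Suppressing the reports does not change how often misconduct occurs; it only prevents this update from ever happening.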
Your point that most people aren’t consequentialists is probably right. But treating them as non-consequentialists, prohibiting discussions and purposefully upholding social fictions inevitably puts you on a slippery slope, where you’re incentivized to keep discussions under wraps because it might “upset the people”—a slope any rationally-minded policymaker should be aware of.
I think something that a lot of discussions here forget is that human modeling of humans (including themselves) is a leaky abstraction. However much we think we’re perfect consequentialists, and able to know when violating a rule is best overall, and able to know all the impacts enough to mitigate the downsides, we’re often wrong.
It doesn’t matter if the doctor gets caught specifically. It’s going to get out that sometimes doctors violate confidentiality. No matter how hard they try to keep it quiet, the violation had an impact (duh! that’s WHY they did it) and it will be low-key known. In fact, it is known, and nobody would be surprised at all by any of your examples.
Also, a doctor who violates confidentiality once has suffered a permanent loss in their own ability to uphold confidentiality in the future. And taken on a whole lot of stress and pain in keeping secrets, which they’re likely to confess to THEIR doctors or perhaps close friends.
Contributing to the dilemma is the wide variance in human capabilities and motivations. Most groups contain a significant population of idiots and, if not full psychopaths, at least self-centered jerks. And there’s a weird idea that laws should apply equally to everyone.
My solution—recognize that the rules and heuristics which allow cooperation in our societies are imperfect, but enforce them anyway. Gödel’s Theorem applies to law (in which I include social behavioral strictures): it CANNOT be complete and correct. Someone enlightened enough to break the law for good reason must either pay the costs of hiding it, or suffer the consequences of being caught in the violation. Don’t try to add more epicycles and exceptions until the rules are so convoluted that they’re not actually useful as rules for most people. Instead, take the hit that sometimes someone will be punished for a good act.
Note—this makes some acts even more heroic (remember, heroism is suffering for a sympathetic cause). If Stanislav Petrov (https://en.wikipedia.org/wiki/Stanislav_Petrov) had been executed rather than just shuffled around for his crime of saving humanity, he’d be a martyr rather than “just” a rationalist example of good consequentialism.
Good post, I like how you pinpoint information asymmetry as the critical core issue.
Scott Alexander would sacrifice information symmetry in order to maintain trust in the rule of medical confidentiality. You propose to sacrifice this trust in order to attain information symmetry. I think there is a way out of the dilemma by reframing the issue.
As I see it, openly discussing the act of informally violating a rule weakens it (regardless of whether it is actually violated). In contrast, discussing whether to formally change the rule itself maintains trust in the rule. A public discussion on whether the law that prescribes medical confidentiality needs to be amended with an exception for, e.g., criminal offenses does not damage trust in the law. People can still assume that, until the law is changed, it is upheld in its current form. This is in contrast to public knowledge of (potential) informal rule-breaking, which leads to a situation in which one cannot be sure to what extent a doctor’s compliance with the rule can be assumed.
So, I agree with Scott Alexander that the doctor should not write a public letter to a newspaper asking whether he should (informally) violate the rule. I also agree with you that information imbalance is bad. Instead, I would suggest that the doctor write a letter to discuss whether the confidentiality rules/laws require some well-defined (and transparent) exceptions. This would both keep the discussion public and uphold information symmetry.
One caveat: In a situation where informal violations of the rule do already occur frequently and this fact is not known to the general public, I fully agree with your position of making that information public (e.g. by whistleblowing) in order to restore information symmetry. This enables a debate on whether the informal violations are deemed good/bad and should be formalized/prohibited. Here, I prefer that to Scott Alexander’s position of silence.
I agree with your main point, and I think the solution to the original dilemma is that medical confidentiality should cover drug use and gay sex but not human rights violations.