I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage more deeply in the thread.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
>> What I’m primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that—you have a signaling equilibrium, as Wei Dai notes in his top-level comment. I believe that this is in many situations much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn’t both common knowledge of this and agreement on what is and isn’t just). The primary concern isn’t whether someone can expect to be on-net punished or rewarded, but how behaviors are changed.
We need people there with us who won’t judge us. Who won’t use information against us.
Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
>> Judge here means to react to information about someone, or their actions or thoughts, largely by updating one’s view of that person—so not being judged means not having to worry (as much, at least) about how things make you seem. The second sentence is a second claim, that we also need them not to use the information against us. I did not intend for the second to seem to be part of the first.
A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.
Implication (in the context of the overall argument): a general reduction in privacy wouldn’t lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.
>> That doesn’t follow at all, and I’m confused why you think that it does. I’m saying that when I try to design a norm system from scratch in order to be compatible with full non-contextual strong enforcement, I don’t see a way to do that. Not that things wouldn’t change—I’m sure they would.
There are also known dilemmas where any action taken would be a norm violation of a sacred value.
Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won’t adjust even when this is obvious.
>> The system of norms is messy, which is different from corrupt. Different norms conflict. Yes, the system is corrupt, but that’s not required for this to be a problem. A concrete example, chosen in the hope that it is not controversial: either turn away the expensive sick child patient, or risk bankrupting the hospital.
Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.
Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it’s better not to know that in concrete detail.
>> Consider the literal example of sausage being made. The central problem is not that people are afraid the sausage makers will strike back at them. The problem is that knowing reduces one’s ability to enjoy sausage. Alternatively, it might force one to stop enjoying sausage.
>> Another important dynamic is that we want to enforce a norm that X is bad and should be minimized, but sometimes X is necessary. So we’d rather not be reminded too much of the X that is necessary, in situations where we know X must occur, both to avoid weakening the norm against X elsewhere, and because we don’t want to penalize those doing X where it is necessary, as we instinctively would if we learned too much detail.
We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.
Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.
>> OK, this one’s just straight up correct if you remove the unjust regime part. Also, I am married with children.
But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.
Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
>> As I noted above, my model of norms is that they are, even at their best, messy ways of steering behavior, and even generally just norms will in some circumstances push towards incorrect action in ways the norm system would cause people to instinctively punish. In such cases it is sometimes correct to violate the norm system, even if it is as just a system as one could hope for. And yes, in some of those cases, it would be good to hide that this was done, to avoid weakening norms (including by allowing such cases to go unpunished, thus enabling otherwise stronger punishment).
If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
>> This is not primarily a statement about The Powers That Be or any particular bad guys. I think this is inherent in how people and politics operate, and what happens when one has many conflicting would-be sacred values. Of course, it is also a statement that when gangsters do go after you, it is important that they not know exactly what you have, and there is always worry about potential gangsters on many levels, whether or not they have won. Often the thing taking all your resources is not a bad guy—e.g. expensive medical treatments, or family members in need, and so on.
If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
>> Often on the margin more information is helpful. But complete information is highly dangerous. And in my experience, most systems in an interesting equilibrium where good things happen sustain that partly with fuzziness and uncertainty—the idea that obeying the spirit of the rules and working towards the goals and good things gets rewarded, other action gets punished, in uncertain ways. There need to be unknowns in the system. Competitions where every action by other agents is known are one-player games about optimization and exploitation.
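>> To make that incentive point concrete, here is a toy numerical sketch. It is purely illustrative and not from the original post: the payoff function, thresholds, and penalty are made-up numbers, chosen only to show how a fully known retaliation threshold gets exploited right up to the line, while an uncertain one does not.

```python
# Purely illustrative toy model: how much does an exploiter take when the
# retaliation threshold is known exactly, versus known only as a distribution?
import random

PENALTY = 10.0  # cost of being retaliated against (made-up number)

def payoff(take, threshold):
    """Exploiter keeps `take`, but eats the penalty if it exceeds the threshold."""
    return take - PENALTY if take > threshold else take

def best_take_known(threshold):
    # With full information, the best response is to take exactly the
    # largest amount that will not trigger retaliation.
    return threshold

def best_take_uncertain(possible_thresholds, candidates, trials=10_000):
    # With only a distribution over thresholds, pick the take with the
    # best expected payoff given the risk of retaliation.
    def expected(take):
        return sum(payoff(take, random.choice(possible_thresholds))
                   for _ in range(trials)) / trials
    return max(candidates, key=expected)

random.seed(0)
print(best_take_known(5.0))                 # 5.0: takes everything that is tolerated
print(best_take_uncertain([3.0, 5.0, 7.0],  # 3.0: uncertainty deters taking more
                          candidates=[1.0, 3.0, 5.0, 7.0]))
```

The point is only directional: under these made-up numbers, knowing the threshold means taking everything up to it, while uncertainty leaves a margin of safety.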
World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
>> Strategic ambiguity is necessary for the surveillance state so that people can’t do everything the state didn’t explicitly punish/forbid. It is necessary for those living in the state, because the risk of revolution, the we’re-not-going-to-take-it-anymore moment, helps keep such places relatively livable versus places where there is no such fear. It is important that you don’t know exactly what will cause the people to rise up, or you’ll treat them exactly as badly as you can without triggering that. And of course I was also talking explicitly about things like ‘if you cross that border we will be at war’ - there are times when you want to be 100% clear that there will be war (e.g. NATO) and others where you want to be 100% unclear (e.g. Taiwan).
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
>> I hope this cleared things up. And of course, you can disagree with many, most or even all my arguments and still not think we should radically reduce privacy. Radical changes don’t default to being a good idea if someone gives invalid arguments against them!