In a scapegoating environment, having privacy yourself is obviously pretty important. However, you seem to be making a stronger point, which is that privacy in general is good (e.g. we shouldn’t have things like blackmail and surveillance which generally reduce privacy, not just our own privacy). I’m going to respond assuming you are arguing in favor of the stronger point.
This post rests on several background assumptions about how the world works, which are worth making explicit. I think many of these are empirically true but are, importantly, not necessarily true, and not all of them are true.
We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
We need people there with us who won’t judge us. Who won’t use information against us.
Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.
Implication (in the context of the overall argument): a general reduction in privacy wouldn’t lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.
There are also known dilemmas where any action taken would be a norm violation of a sacred value.
Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won’t adjust even when this is obvious.
Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.
Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it’s better not to know that in concrete detail.
We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.
Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.
But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.
Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
> If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term.
To me this feels like Zvi is talking about some impersonal universal law of economics (whether such a law really exists or not, we may debate), and you are making it about people (“the bad guys”, “gangsters”) and their intentions, as if we could get a better outcome instead by simply replacing the government or something.
I see it as something similar to Moloch. If you have resources, it creates a temptation for others to try taking them. Nice people will resist the temptation… but in a prisoners’ dilemma with a sufficient number of players, sooner or later someone will choose to defect, and it only takes one such person for you to get hurt. You can defend against an attempt to steal your resources, but the defense also costs you some resources. And perhaps… in the hypothetical state of perfect information… the only stable equilibrium is when you spend so much on defense that there is almost nothing left to steal from you.
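(An aside to make the “sooner or later someone defects” step explicit; the toy model and numbers here are mine, not the comment’s. If each of $n$ potential takers independently defects with probability $p$ in a given period, then

$$\Pr[\text{at least one defection}] = 1 - (1-p)^n,$$

which goes to 1 as $n$ grows, even for tiny $p$. “It only takes one such person” stops being a tail risk once enough players can see your resources.)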
And there is nothing special about the “bad guys” other than the fact that, statistically, they exist. Actually, if the hypothesis is correct, then… in the hypothetical state of perfect information… the bad guys would themselves end up in the very same situation, having to spend almost all successfully stolen resources to defend themselves against theft by other bad guys.
To defend yourself from the ordinary thieves, you need police. The police need some money to be able to do their job. But what prevents them from abusing their power to take more from you? So you have the government to protect you from the police, but the government also needs money to do its job, and it is also tempted to take more. In a democratic government, politicians compete against each other… and the good guy who doesn’t want to take more of your money than he actually needs to do his job may be outcompeted by a bad guy who takes more of your resources and uses the surplus to defeat the good guy. Also, different countries expend resources on defending against each other. And you have corruption inside all organizations, including the government, the police, the army. The corruption costs resources, and so does fighting against it. It is a fractal of burning resources.
So… perhaps there is an economic law saying that this process continues until the available resources are exhausted (because otherwise, someone would be tempted to take some of the remaining resources, and then more resources would have to be spent to stop them). Unless there is some kind of “friction”, such as people not knowing exactly how much money you have, or how exactly you would react if pushed further (where exactly is your “now I have nothing to lose anymore” point, at which, instead of providing the requested resources, you start doing something undesired, even if doing so is likely to hurt you more); or when it becomes too difficult for the government to coordinate to take each available penny (because oversight and money extraction also have a cost). And making the situation more transparent reduces this “friction”.
In this model, the difference between the “good guy” and the “bad guy” becomes smaller than you might expect, simply because the good guy still needs (your) resources to fight against the bad guy, so he can’t leave you alone either.
I don’t think the 100% tax rate argument works, for several reasons:
1. 100% is not the short-run maximum extraction rate (cf. the “Laffer curve,” which is explicitly about the short term).
2. USGOVT is not really an agent here; some extractors taking all they can are subject to the top marginal tax rate and are reallocating to themselves using subtler mechanisms like monetary policy and financial regulation (and deregulation, cyclically), boondoggles, other regulatory capture...
3. If you count other extraction points such as credentialism + high college tuition + need-based financial aid (mostly involving loans), hospital bills, etc., the lifetime extraction rate may be a lot higher.
Good point, I updated towards the extraction rate being higher than I thought (will edit my comment). Rich people do end up existing but they’re rare and are often under additional constraints.
I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage much deeper in the thread.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
>> What I’m primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that—you have a signaling equilibrium, as Wei Dai notes in his top-level comment. I believe that in many situations this is much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn’t both common knowledge of this and agreement on what is and isn’t just). The primary concern isn’t whether someone can expect to be on-net punished or rewarded, but how behaviors are changed.
We need people there with us who won’t judge us. Who won’t use information against us.
Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
>> Judge here means to react to information about someone or their actions or thoughts largely by updating one’s view of that person—to not have to worry (as much, at least) about how things make you seem. The second sentence is a second claim, that we also need them not to use the information against us. I did not intend for the second to seem to be part of the first.
A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.
Implication (in the context of the overall argument): a general reduction in privacy wouldn’t lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.
>> That doesn’t follow at all, and I’m confused why you think that it does. I’m saying that when I try to design a norm system from scratch in order to be compatible with full non-contextual strong enforcement, I don’t see a way to do that. Not that things wouldn’t change—I’m sure they would.
There are also known dilemmas where any action taken would be a norm violation of a sacred value.
Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won’t adjust even when this is obvious.
>> The system of norms is messy, which is different than corrupt. Different norms conflict. Yes, the system is corrupt, but that’s not required for this to be a problem. Concrete example, chosen in the hope of being uncontroversial: either turn away the sick child whose treatment is expensive, or risk bankrupting the hospital.
Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.
Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it’s better not to know that in concrete detail.
>> Consider the literal example of sausage being made. The central problem is not that people are afraid the sausage makers will strike back at them. The problem is that knowing reduces one’s ability to enjoy sausage. Alternatively, it might force one to stop enjoying sausage.
>> Another important dynamic is that we want to enforce a norm that X is bad and should be minimized. But sometimes X is necessary. So we’d rather not be reminded too much of the X that is necessary, in situations where we know X must occur, both to avoid weakening the norm against X elsewhere, and because we don’t want to penalize those doing X where it is necessary, as we would instinctively do if we learned too much detail.
We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.
Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.
>> OK, this one’s just straight up correct if you remove the unjust regime part. Also, I am married with children.
But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.
Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
>> As I noted above, my model of norms is that they are, even at their best, messy ways of steering behavior, and generally just norms will in some circumstances push towards incorrect action in ways the norm system would cause people to instinctively punish. In such cases it is sometimes correct to violate the norm system, even if it is as just a system as one could hope for. And yes, in some of those cases, it would be good to hide that this was done, to avoid weakening norms (including by allowing such cases to go unpunished, thus enabling otherwise stronger punishment).
If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
>> This is not primarily a statement about The Powers That Be or any particular bad guys. I think this is inherent in how people and politics operate, and what happens when one has many conflicting would-be sacred values. Of course, it is also a statement that when gangsters do go after you, it is important that they not know, and there is always worry about potential gangsters on many levels whether or not they have won. Often the thing taking all your resources is not a bad guy—e.g. expensive medical treatments, or in-need family members, etc etc.
If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
>> Often on the margin more information is helpful. But complete information is highly dangerous. And in my experience, most systems in an interesting equilibrium where good things happen sustain that partly with fuzziness and uncertainty—the idea that obeying the spirit of the rules and working towards the goals and good things gets rewarded, other action gets punished, in uncertain ways. There need to be unknowns in the system. Competitions where every action by other agents is known are one-player games about optimization and exploitation.
World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
>> Strategic ambiguity is necessary for the surveillance state so that people can’t do everything the state didn’t explicitly punish/forbid. It is necessary for those living in the state, because the risk of revolution, the we’re-not-going-to-take-it-anymore moment, helps keep such places relatively livable versus places where there is no such fear. It is important that you don’t know exactly what will cause the people to rise up, or you’ll treat them as badly as you can get away with short of that. And of course I was also talking explicitly about things like ‘if you cross that border we will be at war’ - there are times when you want to be 100% clear that there will be war (e.g. NATO) and others where you want to be 100% unclear (e.g. Taiwan).
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
>> I hope this cleared things up. And of course, you can disagree with many, most or even all my arguments and still not think we should radically reduce privacy. Radical changes don’t default to being a good idea if someone gives invalid arguments against them!
I agree that privacy would be less necessary in a hypothetical world of angels. But I don’t find it convincing that removing privacy would bring about such a world, and arguments of this type (let’s discard a human right like property / free speech / privacy, and a world of angels will result) have a very poor track record.
That isn’t the same as arguing against privacy. If someone says “I think X because Y” and I say “Y is false for this reason” that isn’t (necessarily) arguing against X. People can have wrong reasons for correct beliefs.
It’s epistemically harmful to frame efforts towards increasing local validity as attempts to control the outcome of a discussion process; they’re good independent of whether they push one way or the other in expectation.
In other words, you’re treating arguments as soldiers here.
(Additionally, in the original comment, I was mostly not saying that Zvi’s arguments were unsound (although I did say that for a few), but that they reflected a certain background understanding of how the world works)
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
Maybe I’m misreading and you’re arguing that it will help us and enemies equally? But even that seems impossible. If Big Bad Wolf can run faster than Little Red Hood, mutual visibility ensures that Little Red Hood gets eaten.
OK, I can defend this claim, which seems different from the “less privacy means we get closer to a world of angels” claim; it’s about asymmetric advantages in conflict situations.
In the example you gave, more generally available information about people’s locations helps Big Bad Wolf more than Little Red Hood. If I’m strategically identifying with Big Bad Wolf then I want more information available, and if I’m strategically identifying with Little Red Hood then I want less information available. I haven’t seen a good argument that my strategic position is more like Little Red Hood’s than Big Bad Wolf’s (yes, the names here are producing moral connotations that I think are off).
So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) really often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren’t obvious for a while to the people cooperating with the original goal. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.
Of course, it also gives the predators more information. But I think it helps defense more than offense, since there are more non-predators to start with than predators, and non-predators are (presently) at a more severe information disadvantage than the predators are, with respect to this conflict.
Anyway, I’m not that confident in the overall judgment, but I currently think more available info about strategies is good in expectation with respect to conflict situations.
Yes, less privacy leads to more conformity. But I don’t think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity—ideologies and religions.
Only ones that don’t structurally depend on huge levels of hypocrisy. People can lie. It’s currently cheap and effective in a wide variety of circumstances. This does not make the lies true.
Conformity-based strategies only benefit from reductions in privacy, when they’re based on actual conformity. If they’re based on pretend/outer conformity, then they get exposed with less privacy.
Ah, gotcha. Yeah that makes sense, although it in turn depends a lot on what you think happens when lack-of-privacy forces the strategy to adapt.
(note: the following comment didn’t end up engaging with a strong version of the claim, and I ran out of time to think through other scenarios.)
If you have a workplace (with a low generativity strategy) in which people are supposed to work 8 hours, but they actually only work 2 (and goof off the rest of the time), and then suddenly everyone has access to exactly how much people work, I’d expect one of a few things to happen:
1. People actually start working harder
2. People actually end up getting 2 hour work days (and then go home)
3. People continue working for 2 hours and then goofing off (with or without maintaining some kind of plausible fiction – i.e. I could easily imagine that even with full information, people still maintain the polite fiction that people work 8 hours a day, and people only go to the efforts of directing attention to those who goof off when they are a political enemy. “Polite” society often seems to not just be about concealing information but actively choosing to look away)
4. People start finding things to do with their extra 6 hours that look enough like work (but are low effort / fun) that even though people could theoretically check on them and expose them, there’d still be enough plausible deniability that it’d require effort to expose them and punish them.
These options range in how good they are – hopefully you get 1 or 2 depending on how much more valuable the extra 6 hours are.
But none of them actually change the underlying fact that this business is pursuing a simple, collectivist strategy.
(this line of options doesn’t really interface with the original claim that simple collective strategies are easier under a privacy-less regime; I think I’d have to look at several plausible examples to build up a better model, and I ran out of time to write this comment before, um, returning to work. [hi habryka])
I think the main thing is I can’t think of many examples where it seems like the active-ingredient in the strategy is the conformity-that-would-be-ruined-by-information.
The most common sort of strategy I’m imagining is “we are a community that requires costly signals for group membership” (i.e. strict sexual norms, subscribing to and professing the latest dogma, giving to the poor), but costly signals are, well, costly, so there’s incentive for people to pretend to meet them without actually doing so.
If it became common knowledge that nobody or very few people were “really” doing the work, one thing that might happen is that the community’s bonds would weaken or disintegrate. But I think these sorts of social norms would mostly just adapt to the new environment, in one of a few ways:
1. Come up with new norms that are more complicated, such that it’s harder to check (even given perfect information) whether someone is meeting them. I think this is what often happened in academia. (See jokes about postmodernism, where people can review each other’s work, but the work is sort of deliberately inscrutable so it’s hard to see if it says anything meaningful.)
2. People just develop a norm of not checking in on each other (cooperating for the sake of preserving the fiction), and scrutiny is only actually deployed against political opponents.
(The latter one at least creates an interesting mutually assured destruction thing that probably makes people less willing to attack each other openly, but humans also just seem pretty good at taking social games into whatever domain seems most plausibly deniable)
I think you’re pointing in an important direction, but your phrasing sounds off to me.
(In particular, ‘scapegoating’ feels like a very different frame than the one I’d use here)
If I think out loud, especially about something I’m uncertain about, that other people have opinions on, a few things can happen to me:
Someone who overhears part of my thought process might think (correctly, even!) that my thought process reveals that I am not very smart. Therefore, they will be less likely to hire me. This is punishment, but it’s very much not “scapegoating” style punishment.
Someone who overhears my private thought process might (correctly, or incorrectly! either) come to think that I am smart, and be more likely to hire me. This can be just as dangerous. In a world where all information is public, I have to attend to how the process by which I act and think looks. I am incentivized to think in ways that are legibly good.
“Judgment” is dangerous to me (epistemically) even if the judgment is positive, because it incentivizes me against exploring paths that look bad, or are good for incomprehensible reasons.
This seems like a general argument that providing evidence without trying to control the conclusions others draw is bad because it leads to errors. It doesn’t seem to take into account the cost of reduced info flow or the possibility that the gatekeeper might also introduce errors. That’s before we even consider self-serving bias!
TLDR: I literally do not understand how to interpret your comment as NOT a general endorsement of fraud and implicit declaration of intent to engage in it.
My intent was not that it’s “bad”, just, if you do not attempt to control the conclusions of others, they will predictably form conclusions of particular types, and this will have effects. (It so happens that I think most people won’t like those effects, and therefore will attempt to control the conclusions of others.)
Ah, if you literally just mean it increases variance & risk, that’s true in the very short term. In context it sounded to me like a policy argument against doing so, but on reflection it’s easy to read you as meaning the more reasonable thing. Thank you for explaining.
Hmm. I think I meant something more like your second interpretation than your first interpretation but I think I actually meant a third thing and am not confident we aren’t still misunderstanding each other.
An intended implication (which comes with an if-then suggestion that was not an essential part of my original claim, but which I think is relevant) is:
If you value being able to think freely and have epistemologically sound thoughts, it is important to be able to think thoughts that you will neither be rewarded nor punished for… [edit: or be extremely confident that you have accounted for your biases towards reward gradients]. And the rewards are only somewhat less bad than the punishments.
A followup implication is that this is not possible to maintain humanity-wide if thought-privacy is removed (which legalizing blackmail would contribute somewhat towards). And that this isn’t just a fact about our current equilibria, it’s intrinsic to human biology.
It seems plausible (although I am quite skeptical) that a small group of humans might be able to construct an epistemically sound world that includes lack-of-intellectual-privacy, but they’d have to have correctly accounted for a wide variety of subtle errors.
[edit: all of this assumes you are running on human wetware. If you remove that as a constraint other things may be possible]
further update: I do think rewards are something like 10x less problematic than punishments, because humans are risk averse and fear punishment more than they desire reward. (“10x” is a stand-in for “whatever the psychological research says on how big the difference is between human response to rewards and punishments”)
[note: this subthread is far afield from the article—LW is about publication, not private thoughts (unless there’s a section I don’t know about where only specifically invited people can see things). And LW karma is far from the sanctions under discussion in the rest of the post.]
Have you considered things to reduce the asymmetric impact of up- and down-votes? Cap karma value at −5? Use downvotes as a divisor for upvotes (say, score is upvotes / (1 + 0.25 * downvotes)) rather than simple subtraction?
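(A minimal sketch of the two scoring rules being compared, just to make the proposal concrete; the 0.25 weight is the number from this comment, and reading “cap at −5” as a floor on how negative a score can go is my interpretation, not anything LW actually implements.)

```python
def score_subtraction(upvotes: int, downvotes: int, floor: int = -5) -> int:
    """Status-quo style: downvotes cancel upvotes one for one, with an optional floor."""
    return max(upvotes - downvotes, floor)


def score_divisor(upvotes: int, downvotes: int, weight: float = 0.25) -> float:
    """Proposed alternative: downvotes dampen upvotes instead of cancelling them."""
    return upvotes / (1 + weight * downvotes)


# For a post with 10 upvotes, extra downvotes sting much less under the divisor rule.
for downvotes in (0, 2, 8, 20):
    print(downvotes, score_subtraction(10, downvotes), round(score_divisor(10, downvotes), 1))
```

Either variant softens the punishment signal relative to plain subtraction; the trade-off is that heavily downvoted content never ends up looking very negative.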
We’ve thought about things in that space, although any of the ideas would be a fairly major change, and we haven’t come up with anything we feel good enough about to commit to.
(We have done some subtle things to avoid making downvotes feel worse than they need to, such as not including the explicit number of downvotes)
Do you think that thoughts are too incentivised or not incentivised enough on the margin, for the purpose of epistemically sound thinking? If they’re too incentivised, have you considered dampening LWs karma system? If they’re not incentivised enough, what makes you believe that legalising blackmail will worsen the epistemic quality of thoughts?
The LW karma obviously has its flaws, per Goodhart’s law. It is used anyway, because the alternative is having other problems, and for the moment this seems like a reasonable trade-off.
The punishment for “heresies” is actually very mild. As long as one posts respected content in general, posting a “heretical” comment every now and then does not ruin their karma. (Compare to people having their lives changed dramatically because of one tweet.) The punishment accumulates mostly for people whose only purpose here is to post “heresies”. Also, LW karma does not prevent anyone from posting “heresies” on a different website. Thus, people can keep positive LW karma even if their main topic is talking about how LW is fundamentally wrong, as long as they can avoid being annoying (for example by posting a hundred LW-critical posts on their personal website, posting a short summary with hyperlinks on LW, and afterwards using LW mostly to debate other topics).
Blackmail typically attacks you in real life, i.e. you can’t limit the scope of impact. If losing an online account on website X would be the worst possible outcome of one’s behavior on website X, life would be easy. (You would only need to keep your accounts on different websites separated from each other.) It was already mentioned somewhere in this debate that blackmail often uses the difference between norms in different communities, i.e. that your local-norm-following behavior in one context can be local-norm-breaking in another context. This is quite unlike LW karma.
I’d say thoughts aren’t incentivized enough on the margin, but:
1. A major bottleneck is how fine-tuned and useful the incentives are. (i.e. I’d want to make LW karma more closely track “reward good epistemic processes” before I made the signal stronger. I think it currently tracks that well enough that I prefer it over no-karma).
2. It’s important that people can still have private thoughts separate from the LW karma system. LW is where you come when you have thoughts that seem good enough to either contribute to the commons, or to get feedback on so you can improve your thought process… after having had time to mull things over privately without worrying about what anyone will think of you.
(But, I also think, on the margin, people should be much less scared about sharing their private thoughts than they currently are. Many people seem to be scared about sharing unfinished thoughts at all, and my actual model of what is “threatening” says that there’s a much narrower domain where you need to be worried in the current environment)
3. One conscious decision we made was not to display “number of downvotes” on a post (we tried it out privately for admins for a while). Instead we just included “total number of votes”. Explicitly knowing how much one’s post got downvoted felt much worse than having a vague sense of how good it was overall + a rough sense of how many people *may* have downvoted it. This created a stronger punishment signal than seemed actually appropriate.
(Separately, I am right now making arguments in terms that I’m fairly confident both of us value, but I also think there are reasons to want private thoughts that are more like “having a Raemon_healthy soul” than like being able to contribute usefully to the intellectual commons.)
(I noticed while writing this that the latter might be most of what a Benquo finds important for having a healthy soul, but unsure. In any case healthy souls are more complicated and I’m avoiding making claims about them for now)
If privacy in general is reduced, then they get to see others’ thoughts too [EDIT: this sentence isn’t critical, the rest works even if they can only see your thoughts]. If they’re acting justly, then they will take into account that others might modify their thoughts to look smarter, and make basically well-calibrated (if not always accurate) judgments about how smart different people are. (People who are trying can detect posers a lot of the time, even without mind-reading.) So, them having more information means they are more likely to make a correct judgment, hiring the smarter person (or, generally, whoever can do the job better). At worst, even if they are very bad at detecting posers, they can see everyone’s thoughts and choose to ignore them, making the judgment they would make without having this information. (But they were probably already vulnerable to posers; it’s just that seeing people’s thoughts doesn’t have to make them more vulnerable.)
If privacy in general is reduced, then they get to see others’ thoughts too.
This response seems mostly orthogonal to what I was worried about. It is quite plausible that most hiring decisions would become better in fully transparent (and also just?) world. But, fully-and-justly-transparent-world can still mean that fewer people think original or interesting thoughts because doing so is too risky.
And I might think this is bad, not only because fewer objectively-useful thoughts get thunk, but also because… it just kinda sucks and I don’t get to be myself?
(As well, a fully-transparent-and-just world might still be a more stressful world to live in, and/or involve more cognitive overhead because I need to model how others will think about me all the time. Hypothetically we could come to an equilibrium wherein we *don’t* put extra effort into signaling legibly good thought processes. This is plausible, but it is indeed a background assumption of mine that this is not possible to run on human wetware)
Regarding that sentence, I edited my comment at about the same time you posted this.
But, fully-and-justly-transparent-world can still mean that fewer people think original or interesting thoughts because doing so is too risky.
If someone taking a risk is good with respect to the social good, then the justice process should be able to see that they did that and reward them (or at least not punish them) for it, right? This gets easier the more information is available to the justice process.
So, much of my thread was responding to this sentence:
Implication: “judge” means to use information against someone.
The point being, you can have entirely positive judgment, and have it still produce distortions. All that has to be true is that some forms of thought are more legibly good and get more rewarded, for a fully transparent system to start producing warped incentives on what sort of thoughts get thought.
i.e. say I have four options of what to think about today:
1. some random innocuous status quo thought (neither gets me rewarded nor punished)
2. some weird thought that seems kind of dumb, which most of the time is evidence about being dumb, which occasionally pays off with something creative and neat. (I’m not sure what kind of world we’re stipulating here. In some “just” worlds, this sort of thought gets punished (because it’s usually dumb). In some “just” worlds it gets rewarded (because everyone has cooperated on some kind of long term strategy). In some “just” worlds it’s hit or miss because there’s a collection of people trying different strategies with their rewards.)
3. some heretical thought that seems actively dangerous, and only occasionally produces novel usefulness if I turn out to be real good at being contrarian.
4. a thought that is clearly, legibly good, almost certainly net positive, either by following well worn paths, or being “creatively out of the box” in a set of ways that are known to have pretty good returns.
Even in one of the possible-just-worlds, it seems like you’re going to incentivize the last one much more than the 2nd or 3rd.
This isn’t that different from the status quo – it’s a hard problem, in that VC funders have an easier time investing in people doing something that seems obviously good than in someone with a genuinely weird, new idea. But I think this would crank that problem up to 11, even if we stipulate a just-world.
...
Most importantly: the key implication I believe in, is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow. (And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing. For reasons related to Overconfident talking down, humble or hostile talking up)
Even in one of the possible-just-worlds, it seems like you’re going to incentivize the last one much more than the 2nd or 3rd.
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation. If you can’t implement a process that complicated, you can just stop punishing people for heresy, entirely ignoring their thoughts if necessary.
the key implication I believe in, is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow.
Average people don’t need to do it, someone needs to do it. The first target isn’t “make the whole world just”, it’s “make some local context just”. Actually, before that, it’s “produce common knowledge in some local context that the world is unjust but that justice is desirable”, which might actually be accomplished in this very thread, I’m not sure.
And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing.
Thanks for adding this information. I appreciate that you’re making these parts of your worldview clear.
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation.
This was most of what I meant to imply. I am mostly talking about rewards, not punishments.
I am claiming that rewards distort thoughts similarly to punishments, although somewhat more weakly because humans seem to respond more strongly to punishment than reward.
You’re continuing to miss the completely obvious point that a just process does no worse (in expectation) by having more information potentially available to it, which it can decide what to do with. Like, either you are missing really basic decision theory stuff covered in the Sequences or you are trolling.
(Agree that rewards affect thoughts too, and that these can cause distortions when done unjustly)
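(For reference, a minimal formal sketch of the “no worse with more information” claim being invoked, in my notation rather than anyone’s exact wording here: if a policy is allowed to condition on an observation $I$, then

$$\max_{\pi:\,\mathcal{I}\to\mathcal{A}} \mathbb{E}\left[U(\pi(I))\right] \;\ge\; \max_{a\in\mathcal{A}} \mathbb{E}\left[U(a)\right],$$

since the constant policies that ignore $I$ are a subset of the policies on the left. This is the unbounded, free-to-ignore-information version; whether bounded humans can cheaply “decide what to do with” the extra information is exactly what the rest of the thread goes on to dispute.)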
Your comments don’t seem to be acknowledging that, so from my perspective you seem to be describing an Impossible Utopia (capitalized because I intend to write a post that encapsulates the concept of Which Utopias Are Possible), and so it doesn’t seem very relevant.
(I recall claims on LessWrong that a decision process can do no worse with more information, but I don’t recall a compelling case that this was true for bounded human agents. Though I am interested if you have a post that responds to Zvi’s claims in the Choices are Bad series, and/or a post that articulates what exactly you mean by “just”, since it sounds like you’re using it as a jargon term that’s meant to encapsulate more information than I’m receiving right now).
I’ve periodically mentioned that my arguments are about “just worlds implemented on humans”. “Just worlds implemented on non-humans or augmented humans” might be quite different, and I think it’s worth talking about too.
But the topic here is legalizing blackmail in a human world. So it matters how this will be implemented on the median human, who are responsible for most actions.
Notice that in this conversation, where you and I are both smarter than average, it is not obvious to both of us what the correct answer is here, and we have spent some time arguing about it. When I imagine the average human town, or company, or community, attempting to implement a just world that includes blackmail and full transparency, I am imagining either a) lots more time being spent trying to figure out the right answer, or b) people getting wrong answers all the time.
The two posts you linked are not even a little relevant to the question of whether, in general, bounded agents do better or worse by having more information (Yes, choice paralysis might make some information about what choices you have costly, but more info also reduces choice paralysis by increasing certainty about how good the different options are, and overall the posts make no claim about the overall direction of info being good or bad for bounded agents). To avoid feeding the trolls, I’m going to stop responding here.
I’m not trolling. I have some probability on me being the confused one here. But given the downvote record above, it seems like the claims you’re making are at least less obvious than you think they are.
If you value those claims being treated as obvious-things-to-build-off-of by the LW commentariat, you may want to expand on the details or address confusions about them at some point.
But, I do think it is generally important for people to be able to tap out of conversations whenever the conversation is seeming low value, and seems reasonable for this thread to terminate.
I have some probability on me being the confused one here.
In conversations like this, both sides are confused, that is, they don’t understand the other’s point, so “who is the confused one” is already an incorrect framing. One of you may be factually correct, but that doesn’t really matter for making a conversation work; understanding each other is more relevant.
(In this particular case, I think both of you are correct and fail to see what the other means, but Jessica’s point is harder to follow and pattern-matches misleading things, hence the balance of votes.)
(I downvoted some of Jessica’s comments, mostly only in the cases where I thought she was not putting in a good faith effort to try to understand what her interlocutor is trying to say, like her comment upstream in the thread. Saying that talking to someone is equivalent to feeding trolls is rarely a good move, and seems particularly bad in situations where you are talking about highly subjective and fuzzy concepts. I upvoted all of her comments that actually made points without dismissing other people’s perspectives, so in my case, I don’t really think that the voting patterns are a result of her ideas being harder to follow, and more the result of me perceiving her to be violating certain conversational norms)
In conversations like this, both sides are confused,
Nod. I did actually consider a more accurate version of the comment that said something like “at least one of us is at least somewhat confused about something”, but by the time we got to this comment I was just trying to disengage while saying the things that seemed most important to wrap up with.
Nod. I did actually consider a more accurate version of the comment that said something like “at least one of us is at least somewhat confused about something” [...]
The clarification doesn’t address what I was talking about, or else disagrees with my point, so I don’t see how that can be characterised with a “Nod”. The confusion I refer to is about what the other means, with the question of whether anyone is correct about the world irrelevant. And this confusion is significant on both sides, otherwise a conversation doesn’t go off the rails in this way. Paying attention to truth is counterproductive when intended meaning is not yet established, and you seem to be talking about truth, while I was commenting about meaning.
Hmm. Well I am now somewhat confused what you mean. Say more? (My intention was for ‘at least one of us is confused’ to be casting a fairly broad net that included ‘confused about the world’, or ‘confused about what each other meant by our words’, or ‘confused… on some other level that I couldn’t predict easily.’)
(In particular, ‘scapegoating’ feels like a very different frame than the one I’d use here)
Having read Zvi’s post and my comment, do you think the norm-enforcement process is just, or even not very unjust? If not, what makes it not scapegoating?
I think scapegoating has a particular definition – blaming someone for something that they didn’t do because your social environment demands someone get blamed. And that this isn’t relevant to most of my concerns here. You can get unjustly punished for things that have nothing to do with scapegoating.
Good point. I think there is a lot of scapegoating (in the sense you mean here) but that’s a further claim than that it’s unjust punishment, and I don’t believe this strongly enough to argue it right now.
I found this pretty useful—Zvi’s definitely reflecting a particular, pretty negative view of society and strategy here. But I disagree with some of your inferences, and I think you’re somewhat exaggerating the level of gloom-and-doom implicit in the post.
>Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
No, this isn’t bare repetition. I agree with Raemon that “judge” here means something closer to one of its standard usages, “to make inferences about”. Though it also fits with the colloquial “deem unworthy for baring [understandable] flaws”, which is also a thing that would happen with blackmail and could be bad.
>Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
I can imagine a couple things going on here? One, if the world is a place where many more vulnerabilities are known, this incentivizes more people to specialize in exploiting those vulnerabilities. Two, as a flawed human there are probably some stressors against which you can’t credibly play the “won’t negotiate with terrorists” card.
>Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
I think the assumption is these are ~baseline humans we’re talking about, and most human brains can’t hold norms of sufficient sophistication to capture true ethical law, and are also biased in ways that will sometimes strain against reflectively-endorsed ethics (e.g. they’re prone to using constrained circles of moral concern rather than universality).
>Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100%)
This part of the post reminded me of (the SSC review of) Seeing Like a State, which makes a similar point; surveying and ‘rationalizing’ farmland, taking a census, etc. = legibility = taxability. “all of them” does seem like hyperbole here. I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.
I agree with Raemon that “judge” here means something closer to one of its standard usages, “to make inferences about”.
The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant “make inferences about” why would it be bad?
One, if the world is a place where many more vulnerabilities are known, this incentivizes more people to specialize in exploiting those vulnerabilities.
But it also helps in knowing who’s exploiting them! Why does it give more advantages to the “bad” side?
Two, as a flawed human there are probably some stressors against which you can’t credibly play the “won’t negotiate with terrorists” card.
Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won’t negotiate with them when they actually will, and less privacy predictably changes this opinion?
I think the assumption is these are ~baseline humans we’re talking about, and most human brains can’t hold norms of sufficient sophistication to capture true ethical law
Perhaps the optimal set of norms for these people is “there are no rules, do what you want”. If you can improve on that, then that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn’t necessary.
I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.
The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant “make inferences about” why would it be bad?
As Raemon says, knowing that others are making correct inferences about your behavior means you can’t relax. No, idk, watching soap operas, because that’s an indicator of being less likely to repay your loans, and your premia go up. There’s an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi’s explored in his previous posts, and that’s part of how I’m interpreting what he’s saying here.
But it also helps in knowing who’s exploiting them! Why does it give more advantages to the “bad” side?
Sure, but doesn’t it help me against them too?
You don’t want to spend your precious time on blackmailing random jerks, probably. So at best, now some of your income goes toward paying a white-hat blackmailer to fend off the black-hats. (Unclear what the market for that looks like. Also, black-hatters can afford to specialize in unblackmailability; it comes up much more often for them than the average person.) You’re right, though, that it’s possible to have an equilibrium where deterrence dominates and the black-hatting incentives are low, in which case maybe the white-hat fees are low and now you have a white-hat deterrent. So this isn’t strictly bad, though my instinct is that it’s bad in most plausible cases.
Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won’t negotiate with them when they actually will, and less privacy predictably changes this opinion?
That’s a fair point! A couple of counterpoints: I think risk-aversion of ‘terrorists’ helps. There’s also a point about second-order effects again; the easier it is to blackmail/extort/etc., the more people can afford to specialize in it and reap economies of scale.
Perhaps the optimal set of norms for these people is “there are no rules, do what you want”. If you can improve on that, than that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn’t necessary.
Eh, sure. My guess is that Zvi is making a statement about norms as they are likely to exist in human societies with some level of intuitive-similarity to our own. I think the useful question here is like “is it possible to instantiate norms s.t. norm-violations are ~all ethical-violations”. (we’re still discussing the value of less privacy/more blackmail, right?) No-rule or few-rule communities could work for this, but I expect it to be pretty hard to instantiate them at large scale. So sure, this does mean you could maybe build a small local community where blackmail is easy. That’s even kind of just what social groups are, as Zvi notes; places where you can share sensitive info because you won’t be judged much, nor attacked as a norm-violator. Having that work at super-Dunbar level seems tough.
As Raemon says, knowing that others are making correct inferences about your behavior means you can’t relax. No, idk, watching soap operas, because that’s an indicator of being less likely to repay your loans, and your premia go up.
This is really, really clearly false!
This assumes that, upon more facts being revealed, insurance companies will think I am less (not more) likely to repay my loans, by default (e.g. if I don’t change my TV viewing behavior).
More egregiously, this assumes that I have to keep putting effort into reducing my insurance premiums until I have no slack left, because these premiums really, really, really matter. (I don’t even spend that much on insurance premiums!)
If you meant this more generally, and insurance was just a bad example, why is the situation worse in terms of slack than it was before? (I already have the ability to spend leisure time on gaining more money, signalling, etc.)
It’s true the net effect is low to first order, but you’re neglecting second-order effects. If premia are important enough, people will feel compelled to Goodhart the proxies used for them until those proxies have less meaning.
Given the linked siderea post, maybe this is not very true for insurance in particular. I agree that wasn’t a great example.
Slack-wise, uh, choices are bad. really bad. Keep the sabbath. These are some intuitions I suspect are at play here. I’m not interested in a detailed argument hashing out whether we should believe that these outweigh other factors in practice across whatever range of scenarios, because it seems like it would take a lot of time/effort for me to actually build good models here, and opportunity costs are a thing. I just want to point out that these ideas seem relevant for correctly interpreting Zvi’s position.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust.
I don’t think that’s a necessary implication. In a world where people live in fear of being punished they will be able to act in a way to avoid unjust punishment. That world is still one where people suffer from living in fear.
Whence fear of unjust punishment if there is no unjust punishment? Hypothetically there could be (justified) fear of a counterfactual that never happens, but this isn’t a stable arrangement (in practice, some people will not work as hard to avoid the unjust punishment, and so will get punished)
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
To me this feels like Zvi is talking about some impersonal universal law of economics (whether such a law really exists or not, we may debate), and you are making it about people (“the bad guys”, “gangsters”) and their intentions, like we could get a better outcome instead by simply replacing the government or something.
I see it as something similar to Moloch. If you have resources, it creates a temptation for others to try taking them. Nice people will resist the temptation… but in a prisoners’ dilemma with a sufficient number of players, sooner or later someone will choose to defect, and it only takes one such person for you to get hurt. You can defend against an attempt to steal your resources, but the defense also costs you some resources. And perhaps… in the hypothetical state of perfect information… the only stable equilibrium is when you spend so much on defense that there is almost nothing left to steal from you.
And there is nothing special about the “bad guys” other than the fact that, statistically, they exist. Actually, if the hypothesis is correct, then… in the hypothetical state of perfect information… the bad guys would themselves end up in the very same situation, having to spend almost all successfully stolen resources to defend themselves against theft by other bad guys.
To defend yourself from ordinary thieves, you need police. The police need some money to be able to do their job. But what prevents them from abusing their power to take more from you? So you have the government to protect you from the police, but the government also needs money to do its job, and it is also tempted to take more. In a democratic government, politicians compete against each other… and the good guy who doesn’t want to take more of your money than he actually needs to do his job may be outcompeted by a bad guy who takes more of your resources and uses the surplus to defeat the good guy. Also, different countries expend resources on defending against each other. And you have corruption inside all organizations, including the government, the police, and the army. The corruption costs resources, and so does fighting against it. It is a fractal of burning resources.
So… perhaps there is an economic law saying that this process continues until the available resources are exhausted (because otherwise, someone would be tempted to take some of the remaining resources, and then more resources would have to be spent to stop them). Unless there is some kind of “friction”, such as people not knowing exactly how much money you have, or how exactly you would react if pushed further (where exactly your “now I have nothing to lose anymore” point is, when instead of providing the requested resources you start doing something undesired, even if doing so is likely to hurt you more); or when it becomes too difficult for the government to coordinate to take each available penny (because their oversight and money extraction also have a cost). And making the situation more transparent reduces this “friction”.
In this model, the difference between the “good guy” and the “bad guy” becomes smaller than you might expect, simply because the good guy still needs (your) resources to fight against the bad guy, so he can’t leave you alone either.
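(A toy numerical sketch of the “friction” point above, entirely my own construction with made-up numbers: if the amount others extract scales with how much of your resources they can see, then reducing friction, i.e. making more of your holdings visible, lowers how much anyone gets to keep in the long run.)

```python
# Toy model, not anything from the comment: each round the defender produces
# some income; extraction (theft, defense costs, rent-seeking) then eats a fixed
# fraction of whatever share of their holdings is visible.
# "friction" = the share of holdings that stays illegible or hidden.

def retained_resources(friction: float, rounds: int = 500,
                       income: float = 100.0, extraction_rate: float = 0.5) -> float:
    held = 0.0
    for _ in range(rounds):
        held += income                      # produce new resources
        visible = held * (1.0 - friction)   # transparency exposes this much
        held -= extraction_rate * visible   # visible resources attract extraction
    return held

for friction in (0.8, 0.5, 0.1, 0.0):
    print(f"friction={friction:.1f}: retained ≈ {retained_resources(friction):.0f}")
```

(The absolute numbers mean nothing; the point is only that the retained amount falls monotonically as friction falls, which is the mechanism being gestured at.)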
I don’t think the 100% tax rate argument works, for several reasons:
100% is not the short-run maximum extraction rate (Cf “Laffer Curve,” which is explicitly short-term).
USGOVT is not really an agent here; some extractors taking all they can are subject to the top marginal tax rate and reallocate to themselves using subtler mechanisms like monetary policy and financial regulation (and deregulation, cyclically), boondoggles, and other regulatory capture...
If you count other extraction points such as credentialism + high college tuition + need-based financial aid (mostly involving loans), hospital bills, and so on, the lifetime extraction rate may be a lot higher.
Good point, I updated towards the extraction rate being higher than I thought (will edit my comment). Rich people do end up existing but they’re rare and are often under additional constraints.
I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage deeper into the thread.
Implication: it’s bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)
>> What I’m primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that—you have a signaling equilibrium, as Wei Dai notes in his top-level comment. I believe that this in many situations is much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn’t both common knowledge of this and agreement on what is and isn’t just). The primary concern isn’t whether someone can expect to be on-net punished or rewarded, but how behaviors are changed.
Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
>> Judge here means to react to information about someone or their actions or thoughts largely by updating their view of the person—to not have to worry (as much, at least) about how things make you seem. The second sentence is a second claim, that we also need them not to use the information against us. I did not intend for the second to seem to be part of the first.
Implication (in the context of the overall argument): a general reduction in privacy wouldn’t lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.
>> That doesn’t follow at all, and I’m confused why you think that it does. I’m saying that when I try to design a norm system from scratch in order to be compatible with full non-contextual strong enforcement, I don’t see a way to do that. Not that things wouldn’t change—I’m sure they would.
Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won’t adjust even when this is obvious.
>> The system of norms is messy, which is different than corrupt. Different norms conflict. Yes, the system is corrupt, but that’s not required for this to be a problem. Concrete example, chosen to hopefully be not controversial: Either turn away the expensive sick child patient, or risk bankrupting the hospital.
Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it’s better not to know that in concrete detail.
>> Consider the literal example of sausage being made. The central problem is not that people are afraid the sausage makers will strike back at them. The problem is that knowing reduces one’s ability to enjoy sausage. Alternatively, it might force one to stop enjoying sausage.
>> Another important dynamic is that we want to enforce a norm that X is bad and should be minimized. But sometimes X is necessary. So we’d rather not be too reminded of the X that is necessary in some situations where we know X must occur, to avoid weakening the norm against X elsewhere, and because we don’t want to penalize those doing X where it is necessary as we would instinctively do if we learned too much detail.
Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.
>> OK, this one’s just straight up correct if you remove the unjust regime part. Also, I am married with children.
Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
>> As I noted above, my model of norms is that they are even at their best messy ways of steering behavior, and generally just norms will in some circumstances push towards incorrect action in ways the norm system would cause people to instinctively punish. In such cases it is sometimes correct to violate the norm system, even if it is as just a system as one could hope for. And yes, in some of those cases, it would be good to hide that this was done, to avoid weakening norms (including by allowing such cases not to be punished, thus enabling otherwise stronger punishment).
Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100% [EDIT: see Ben’s comment, the actual rate of extraction is higher than the marginal tax rate])
>> This is not primarily a statement about The Powers That Be or any particular bad guys. I think this is inherent in how people and politics operate, and what happens when one has many conflicting would-be sacred values. Of course, it is also a statement that when gangsters do go after you, it is important that they not know, and there is always worry about potential gangsters on many levels whether or not they have won. Often the thing taking all your resources is not a bad guy—e.g. expensive medical treatments, or in-need family members, etc etc.
Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
>> Often on the margin more information is helpful. But complete information is highly dangerous. And in my experience, most systems in an interesting equilibrium where good things happen sustain that partly with fuzziness and uncertainty—the idea that obeying the spirit of the rules and working towards the goals and good things gets rewarded, other action gets punished, in uncertain ways. There need to be unknowns in the system. Competitions where every action by other agents is known are one-player games about optimization and exploitation.
Implication (in context): strategic ambiguity isn’t just necessary for us given our circumstances, it’s necessary in general, even if we lived in a surveillance state. (Huh?)
>> Strategic ambiguity is necessary for the surveillance state so that people can’t do everything the state didn’t explicitly punish/forbid. It is necessary for those living in the state, because the risk of revolution, the we’re-not-going-to-take-it-anymore moment, helps keep such places relatively livable versus places where there is no such fear. It is important that you don’t know exactly what will cause the people to rise up, or you’ll treat them exactly as badly as you can without triggering that. And of course I was also talking explicitly about things like ‘if you cross that border we will be at war’ - there are times when you want to be 100% clear that there will be war (e.g. NATO) and others where you want to be 100% unclear (e.g. Taiwan).
To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn’t drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.
>> I hope this cleared things up. And of course, you can disagree with many, most or even all my arguments and still not think we should radically reduce privacy. Radical changes don’t default to being a good idea if someone gives invalid arguments against them!
I agree that privacy would be less necessary in a hypothetical world of angels. But I don’t find it convincing that removing privacy would bring about such a world, and arguments of this type (let’s discard a human right like property / free speech / privacy, and a world of angels will result) have a very poor track record.
Why do you think I’m arguing against privacy in my comment (the one you replied to)? I don’t think I’ve been taking a strong stance on it.
I think you have been. In every comment you try to cast doubt on justifications for privacy.
That isn’t the same as arguing against privacy. If someone says “I think X because Y” and I say “Y is false for this reason” that isn’t (necessarily) arguing against X. People can have wrong reasons for correct beliefs.
It’s epistemically harmful to frame efforts towards increasing local validity as attempts to control the outcome of a discussion process; they’re good independent of whether they push one way or the other in expectation.
In other words, you’re treating arguments as soldiers here.
(Additionally, in the original comment, I was mostly not saying that Zvi’s arguments were unsound (although I did say that for a few), but that they reflected a certain background understanding of how the world works)
Let’s get back to the world of angels problem. You do seem to be saying that removing privacy would get us closer to a world of angels. Why?
Where? (I actually think I am uncertain about this)
Maybe I’m misreading and you’re arguing that it will help us and enemies equally? But even that seems impossible. If Big Bad Wolf can run faster than Little Red Hood, mutual visibility ensures that Little Red Hood gets eaten.
OK, I can defend this claim, which seems different from the “less privacy means we get closer to a world of angels” claim; it’s about asymmetric advantages in conflict situations.
In the example you gave, more generally available information about people’s locations helps Big Bad Wolf more than Little Red Hood. If I’m strategically identifying with Big Bad Wolf then I want more information available, and if I’m strategically identifying with Little Red Hood then I want less information available. I haven’t seen a good argument that my strategic position is more like Little Red Hood’s than Big Bad Wolf’s (yes, the names here are producing moral connotations that I think are off).
So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) really often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren’t obvious to the people cooperative with the original goal for a while. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.
Of course, it also gives the predators more information. But I think it helps defense more than offense, since there are more non-predators to start with than predators, and non-predators are (presently) at a more severe information disadvantage than the predators are, with respect to this conflict.
Anyway, I’m not that confident in the overall judgment, but I currently think more available info about strategies is good in expectation with respect to conflict situations.
Yes, less privacy leads to more conformity. But I don’t think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity—ideologies and religions.
OK, you’re right that less privacy gives significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.
Only ones that don’t structurally depend on huge levels of hypocrisy. People can lie. It’s currently cheap and effective in a wide variety of circumstances. This does not make the lies true.
[edit: actually, I’m just generally confused about what the parent comment is claiming]
Conformity-based strategies only benefit from reductions in privacy, when they’re based on actual conformity. If they’re based on pretend/outer conformity, then they get exposed with less privacy.
Ah, gotcha. Yeah that makes sense, although it in turn depends a lot on what you think happens when lack-of-privacy forces the strategy to adapt.
(note: following comment didn’t end up engaging with a strong version of the claim, and I ran out of time to think through other scenarios.)
If you have a workplace (with a low generativity strategy) in which people are supposed to work 8 hours, but they actually only work 2 (and goof off the rest of the time), and then suddenly everyone has access to exactly how much people work, I’d expect one of a few things to happen:
1. People actually start working harder
2. People actually end up getting 2 hour work days (and then go home)
3. People continue working for 2 hours and then goofing off (with or without maintaining some kind of plausible fiction – i.e. I could easily imagine that even with full information, people still maintain the polite fiction that people work 8 hours a day, and people only go to the efforts of directing attention to those who goof off when they are a political enemy. “Polite” society often seems to not just be about concealing information but actively choosing to look away)
4. People start finding things to do with their extra 6 hours that look enough like work (but are low effort / fun) that even though people could theoretically check on them and expose them, there’d still be enough plausible deniability that it’d require effort to expose them and punish them.
These options range in how good they are – hopefully you get 1 or 2 depending on how much more valuable the extra 6 hours are.
But none of them actually change the underlying fact that this business is pursuing a simple, collectivist strategy.
(this line of options doesn’t really interface with the original claim that simple collective strategies are easier under a privacy-less regime, I think I’d have to look at several plausible examples to build up a better model and ran out of time to write this comment before, um, returning to work. [hi habryka])
I think the main thing is I can’t think of many examples where it seems like the active-ingredient in the strategy is the conformity-that-would-be-ruined-by-information.
The most common sort of strategy I’m imagining is “we are a community that requires costly signals for group membership” (i.e. strict sexual norms, subscribing to and professing the latest dogma, giving to the poor), but costly signals are, well, costly, so there’s incentive for people to pretend to meet them without actually doing so.
If it became common knowledge that nobody or very few people were “really” doing the work, one thing that might happen is that the community’s bonds would weaken or disintegrate. But I think these sorts of social norms would mostly just adapt to the new environment, in one of a few ways:
1. come up with new norms that are more complicated, such that it’s harder to check (even given perfect information) whether someone is meeting them. I think this is what often happened in academia. (See jokes about postmodernism, where people can review each other’s work, but the work is sort of deliberately inscrutable so it’s hard to see if it says anything meaningful)
2. people just develop a norm of not checking in on each other (cooperating for the sake of preserving the fiction), and scrutiny is only actually deployed against political opponents.
(The latter one at least creates an interesting mutually assured destruction thing that probably makes people less willing to attack each other openly, but humans also just seem pretty good at taking social games into whatever domain seems most plausibly deniable)
Only if you assume everyone loses an equal amount of privacy.
I think you’re pointing in an important direction, but your phrasing sounds off to me.
(In particular, ‘scapegoating’ feels like a very different frame than the one I’d use here)
If I think out loud, especially about something I’m uncertain about, that other people have opinions on, a few things can happen to me:
Someone who overhears part of my thought process might think (correctly, even!) that my thought process reveals that I am not very smart. Therefore, they will be less likely to hire me. This is punishment, but it’s very much not “scapegoating” style punishment.
Someone who overhears my private thought process might (correctly, or incorrectly! either) come to think that I am smart, and be more likely to hire me. This can be just as dangerous. In a world where all information is public, I have to attend to how the process by which I act and think looks. I am incentivized to think in ways that are legibly good.
“Judgment” is dangerous to me (epistemically) even if the judgment is positive, because it incentivizes me against exploring paths that look bad, or are good for incomprehensible reasons.
This seems like a general argument that providing evidence without trying to control the conclusions others draw is bad because it leads to errors. It doesn’t seem to take into account the cost of reduced info flow, or the possibility that the gatekeeper might also introduce errors. That’s before we even consider self-serving bias!
Related: http://benjaminrosshoffman.com/humility-argument-honesty/
TLDR: I literally do not understand how to interpret your comment as NOT a general endorsement of fraud and implicit declaration of intent to engage in it.
My intent was not that it’s “bad”, just that, if you do not attempt to control the conclusions of others, they will predictably form conclusions of particular types, and this will have effects. (It so happens that I think most people won’t like those effects, and therefore will attempt to control the conclusions of others.)
(I feel somewhat confused by the above comment, actually. Can you taboo “bad” and try saying it in different words?)
Ah, if you literally just mean it increases variance & risk, that’s true in the very short term. In context it sounded to me like a policy argument against doing so, but on reflection it’s easy to read you as meaning the more reasonable thing. Thank you for explaining.
Hmm. I think I meant something more like your second interpretation than your first interpretation but I think I actually meant a third thing and am not confident we aren’t still misunderstanding each other.
An intended implication (which comes with an if-then suggestion, which was not an essential part of my original claim but I think is relevant) is:
If you value being able to think freely and have epistemologically sound thoughts, it is important to be able to think thoughts that you will neither be rewarded nor punished for… [edit: or be extremely confident that you have accounted for your biases towards reward gradients]. And the rewards are only somewhat less bad than the punishments.
A followup implication is that this is not possible to maintain humanity-wide if thought-privacy is removed (which legalizing blackmail would contribute somewhat towards). And that this isn’t just a fact about our current equilibria, it’s intrinsic to human biology.
It seems plausible (although I am quite skeptical) that a small group of humans might be able to construct an epistemically sound world that includes lack-of-intellectual-privacy, but they’d have to have correctly accounted for a wide variety of subtle errors.
[edit: all of this assumes you are running on human wetware. If you remove that as a constraint other things may be possible]
further update: I do think rewards are something like 10x less problematic than punishments, because humans are risk averse and fear punishment more than they desire reward. (“10x” is a stand-in for “whatever the psychological research says on how big the difference is between human response to rewards and punishments”)
[note: this subthread is far afield from the article—LW is about publication, not private thoughts (unless there’s a section I don’t know about where only specifically invited people can see things). And LW karma is far from the sanctions under discussion in the rest of the post.]
Have you considered things to reduce the asymmetric impact of up- and down-votes? Cap karma value at −5? Use downvotes as a divisor for upvotes (say, score is upvotes / (1 + 0.25 * downvotes)) rather than simple subtraction?
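(To make these proposals concrete, here is a minimal sketch of the three scoring rules being floated, using the −5 floor and the 0.25 weight from the comment above; the function names and example vote counts are mine, and none of this is LessWrong’s actual karma code.)

```python
# Illustrative only: three ways of combining votes, per the suggestions above.

def score_subtraction(upvotes: int, downvotes: int) -> int:
    # Simple subtraction, roughly the status quo.
    return upvotes - downvotes

def score_floored(upvotes: int, downvotes: int, floor: int = -5) -> int:
    # One reading of the "cap karma value at -5" suggestion: downvotes
    # can drag a score down, but never below the floor.
    return max(upvotes - downvotes, floor)

def score_divisor(upvotes: int, downvotes: int, weight: float = 0.25) -> float:
    # Downvotes dilute the upvotes rather than directly negating them.
    return upvotes / (1 + weight * downvotes)

# 20 up / 10 down: subtraction gives 10, the divisor rule gives 20 / 3.5 ≈ 5.7.
# 2 up / 30 down: subtraction gives -28, the floored rule gives -5,
# and the divisor rule gives 2 / 8.5 ≈ 0.24 (downvotes sting much less).
print(score_subtraction(20, 10), score_floored(2, 30), round(score_divisor(20, 10), 1))
```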
We’ve thought about things in that space, although any of the ideas would be a fairly major change, and we haven’t come up with anything we feel good enough about to commit to.
(We have done some subtle things to avoid making downvotes feel worse than they need to, such as not including the explicit number of downvotes)
Do you think that thoughts are too incentivised or not incentivised enough on the margin, for the purpose of epistemically sound thinking? If they’re too incentivised, have you considered dampening LWs karma system? If they’re not incentivised enough, what makes you believe that legalising blackmail will worsen the epistemic quality of thoughts?
The LW karma system obviously has its flaws, per Goodhart’s law. It is used anyway, because the alternatives have other problems, and for the moment this seems like a reasonable trade-off.
The punishment for “heresies” is actually very mild. As long as one posts respected content in general, posting a “heretical” comment every now and then does not ruin their karma. (Compare to people having their lives changed dramatically because of one tweet.) The punishment accumulates mostly for people whose only purpose here is to post “heresies”. Also, LW karma does not prevent anyone from posting “heresies” on a different website. Thus, people can keep positive LW karma even if their main topic is talking about how LW is fundamentally wrong, as long as they can avoid being annoying (for example by posting a hundred LW-critical posts on their personal website, posting a short summary with hyperlinks on LW, and afterwards using LW mostly to debate other topics).
Blackmail typically attacks you in real life, i.e. you can’t limit the scope of impact. If losing an online account on a website X would be the worst possible outcome of one’s behavior at the website X, life would be easy. (You would only need to keep your accounts on different websites separated from each other.) It was already mentioned somewhere in this debate that blackmail often uses the difference between norms in different communities, i.e. that your local-norm-following behavior in one context can be local-norm-breaking in another context. This is quite unlike LW karma.
I’d say thoughts aren’t incentivized enough on the margin, but:
1. A major bottleneck is how fine-tuned and useful the incentives are. (i.e. I’d want to make LW karma more closely track “reward good epistemic processes” before I made the signal stronger. I think it currently tracks that well enough that I prefer it over no-karma).
2. It’s important that people can still have private thoughts separate from the LW karma system. LW is where you come when you have thoughts that seem good enough to either contribute to the commons, or to get feedback on so you can improve your thought process… after having had time to mull things over privately without worrying about what anyone will think of you.
(But, I also think, on the margin, people should be much less scared about sharing their private thoughts than they currently are. Many people seem to be scared about sharing unfinished thoughts at all, and my actual model of what is “threatening” says that there’s a much narrower domain where you need to be worried in the current environment)
3. One conscious decision we made was not to display “number of downvotes” on a post (we tried it out privately for admins for a while). Instead we just included “total number of votes”. Explicitly knowing how much one’s post got downvoted felt much worse than having a vague sense of how good it was overall + a rough sense of how many people *may* have downvoted it. This created a stronger punishment signal than seemed actually appropriate.
(Separately, I am right now making arguments in terms that I’m fairly confident both of us value, but I also think there are reasons to want private thoughts that are more like “having a Raemon_healthy soul” than like being able to contribute usefully to the intellectual commons.)
(I noticed while writing this that the latter might be most of what a Benquo finds important for having a healthy soul, but unsure. In any case healthy souls are more complicated and I’m avoiding making claims about them for now)
If privacy in general is reduced, then they get to see others’ thoughts too [EDIT: this sentence isn’t critical, the rest works even if they can only see your thoughts]. If they’re acting justly, then they will take into account that others might modify their thoughts to look smarter, and make basically well-calibrated (if not always accurate) judgments about how smart different people are. (People who are trying can detect posers a lot of the time, even without mind-reading.) So, them having more information means they are more likely to make a correct judgment, hiring the smarter person (or, generally, whoever can do the job better). At worst, even if they are very bad at detecting posers, they can see everyone’s thoughts and choose to ignore them, making the judgment they would make without having this information. (But they were probably already vulnerable to posers; it’s just that seeing people’s thoughts doesn’t have to make them more vulnerable.)
This response seems mostly orthogonal to what I was worried about. It is quite plausible that most hiring decisions would become better in fully transparent (and also just?) world. But, fully-and-justly-transparent-world can still mean that fewer people think original or interesting thoughts because doing so is too risky.
And I might think this is bad, not only because fewer objectively-useful thoughts get thunk, but also because… it just kinda sucks and I don’t get to be myself?
(As well as, fully-transparent-and-just-world might still be a more stressful world to live in, and/or involve more cognitive overhead because I need to model how others will think about me all the time. Hypothetically we could come to an equilibrium wherein we *don’t* put extra effort into signaling legibly good thought processes. This is plausible, but it is indeed a background assumption of mine that this is not possible to run on human wetware)
Regarding that sentence, I edited my comment at about the same time you posted this.
If someone taking a risk is good with respect to the social good, then the justice process should be able to see that they did that and reward them (or at least not punish them) for it, right? This gets easier the more information is available to the justice process.
So, much of my thread was responding to this sentence:
The point being, you can have entirely positive judgment, and have it still produce distortions. All that has to be true is that some forms of thought are more legibly good and get more rewarded, for a fully transparent system to start producing warped incentives on what sort of thoughts get thought.
i.e. say I have four options of what to think about today:
1. some random innocuous status quo thought (neither gets me rewarded nor punished)
2. some weird thought that seems kind of dumb, which most of the time is evidence about being dumb, which occasionally pays off with something creative and neat. (I’m not sure what kind of world we’re stipulating here. In some “just” worlds, this sort of thought gets punished (because it’s usually dumb). In some “just” worlds it gets rewarded (because everyone has cooperated on some kind of long term strategy). In some “just” worlds it’s hit or miss because there’s a collection of people trying different strategies with their rewards.)
3. some heretical thought that seems actively dangerous, and only occasionally produces novel usefulness if I turn out to be real good at being contrarian.
4. a thought that is clearly, legibly good, almost certainly net positive, either by following well worn paths, or being “creatively out of the box” in a set of ways that are known to have pretty good returns.
Even in one of the possible-just-worlds, it seems like you’re going to incentivize the last one much more than the 2nd or 3rd.
This isn’t that different from the status quo – it’s already a hard problem that VC funders have an easier time investing in people doing something that seems obviously good than in someone with a genuinely weird, new idea. But I think this would crank that problem up to 11, even if we stipulate a just-world.
...
Most importantly: the key implication I believe in, is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow. (And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing. For reasons related to Overconfident talking down, humble or hostile talking up)
This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation. If you can’t implement a process that complicated, you can just stop punishing people for heresy, entirely ignoring their thoughts if necessary.
Average people don’t need to do it, someone needs to do it. The first target isn’t “make the whole world just”, it’s “make some local context just”. Actually, before that, it’s “produce common knowledge in some local context that the world is unjust but that justice is desirable”, which might actually be accomplished in this very thread, I’m not sure.
Thanks for adding this information. I appreciate that you’re making these parts of your worldview clear.
This was most of what I meant to imply. I am mostly talking about rewards, not punishments.
I am claiming that rewards distort thoughts similarly to punishments, although somewhat more weakly because humans seem to respond more strongly to punishment than reward.
You’re continuing to miss the completely obvious point that a just process does no worse (in expectation) by having more information potentially available to it, which it can decide what to do with. Like, either you are missing really basic decision theory stuff covered in the Sequences or you are trolling.
(Agree that rewards affect thoughts too, and that these can cause distortions when done unjustly)
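(To pin down exactly what is being disputed here, the “no worse with more information” claim can be written as a simple inequality; this formalization is mine, not wording either commenter used. With extra information \(S\), action set \(A\), and utility \(U\):)

\[
\max_{\pi : S \to A} \mathbb{E}\big[U(\pi(S))\big] \;\ge\; \max_{a \in A} \mathbb{E}\big[U(a)\big]
\]

(The inequality holds because the constant policies \(\pi(s) \equiv a\), which ignore \(S\) entirely, are included on the left-hand side. The objection in the reply below is that actually computing, or even approximating, the left-hand maximum is itself costly for bounded agents, which the inequality does not price in.)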
Yes, I disagree with that point, and I feel like you’ve been missing the completely obvious point that bounded agents have limited capabilities.
Choices are costly.
Choices are really costly.
Your comments don’t seem to be acknowledging that, so from my perspective you seem to be describing an Impossible Utopia (capitalized because I intend to write a post that encapsulates the concept of Which Utopias Are Possible), and so it doesn’t seem very relevant.
(I recall claims on LessWrong that a decision process can do no worse with more information, but I don’t recall a compelling case that this was true for bounded human agents. Though I am interested if you have a post that responds to Zvi’s claims in the Choices are Bad series, and/or a post that articulates what exactly you mean by “just”, since it sounds like you’re using it as a jargon term that’s meant to encapsulate more information than I’m receiving right now).
I’ve periodically mentioned that my arguments are about “just worlds implemented on humans”. “Just worlds implemented on non-humans or augmented humans” might be quite different, and I think it’s worth talking about too.
But the topic here is legalizing blackmail in a human world. So it matters how this will be implemented on the median human, who are responsible for most actions.
Notice that in this conversation, where you and I are both smarter than average, it is not obvious to both of us what the correct answer is here, and we have spent some time arguing about it. When I imagine the average human town, or company, or community, attempting to implement a just world that includes blackmail and full transparency, I am imagining either a) lots more time being spent trying to figure out the right answer, or b) people getting wrong answers all the time.
The two posts you linked are not even a little relevant to the question of whether, in general, bounded agents do better or worse by having more information (Yes, choice paralysis might make some information about what choices you have costly, but more info also reduces choice paralysis by increasing certainty about how good the different options are, and overall the posts make no claim about the overall direction of info being good or bad for bounded agents). To avoid feeding the trolls, I’m going to stop responding here.
I’m not trolling. I have some probability on me being the confused one here. But given the downvote record above, it seems like the claims you’re making are at least less obvious than you think they are.
If you value those claims being treated as obvious-things-to-build-off-of by the LW commentariat, you may want to expand on the details or address confusions about them at some point.
But, I do think it is generally important for people to be able to tap out of conversations whenever the conversation is seeming low value, and seems reasonable for this thread to terminate.
In conversations like this, both sides are confused, that is, they don’t understand the other’s point, so “who is the confused one” is already an incorrect framing. One of you may be factually correct, but that doesn’t really matter for making a conversation work; understanding each other is more relevant.
(In this particular case, I think both of you are correct and fail to see what the other means, but Jessica’s point is harder to follow and pattern-matches misleading things, hence the balance of votes.)
(I downvoted some of Jessica’s comments, mostly only in the cases where I thought she was not putting in a good faith effort to try to understand what her interlocutor is trying to say, like her comment upstream in the thread. Saying that talking to someone is equivalent to feeding trolls is rarely a good move, and seems particularly bad in situations where you are talking about highly subjective and fuzzy concepts. I upvoted all of her comments that actually made points without dismissing other people’s perspectives, so in my case, I don’t really think that the voting patterns are a result of her ideas being harder to follow, and more the result of me perceiving her to be violating certain conversational norms)
Nod. I did actually consider a more accurate version of the comment that said something like “at least one of us is at least somewhat confused about something”, but by the time we got to this comment I was just trying to disengage while saying the things that seemed most important to wrap up with.
The clarification doesn’t address what I was talking about, or else disagrees with my point, so I don’t see how that can be characterised with a “Nod”. The confusion I refer to is about what the other means, with the question of whether anyone is correct about the world irrelevant. And this confusion is significant on both sides, otherwise a conversation doesn’t go off the rails in this way. Paying attention to truth is counterproductive when intended meaning is not yet established, and you seem to be talking about truth, while I was commenting about meaning.
Hmm. Well I am now somewhat confused what you mean. Say more? (My intention was for ‘at least one of us is confused’ to be casting a fairly broad net that included ‘confused about the world’, or ‘confused about what each other meant by our words’, or ‘confused… on some other level that I couldn’t predict easily.’)
Having read Zvi’s post and my comment, do you think the norm-enforcement process is just, or even not very unjust? If not, what makes it not scapegoating?
I think scapegoating has a particular definition – blaming someone for something that they didn’t do because your social environment demands someone get blamed. And that this isn’t relevant to most of my concerns here. You can get unjustly punished for things that have nothing to do with scapegoating.
Good point. I think there is a lot of scapegoating (in the sense you mean here) but that’s a further claim than that it’s unjust punishment, and I don’t believe this strongly enough to argue it right now.
I found this pretty useful—Zvi’s definitely reflecting a particular, pretty negative view of society and strategy here. But I disagree with some of your inferences, and I think you’re somewhat exaggerating the level of gloom-and-doom implicit in the post.
>Implication: “judge” means to use information against someone. Linguistic norms related to the word “judgment” are thoroughly corrupt enough that it’s worth ceding to these, linguistically, and using “judge” to mean (usually unjustly!) using information against people.
No, this isn’t bare repetition. I agree with Raemon that “judge” here means something closer to one of its standard usages, “to make inferences about”. Though it also fits with the colloquial “deem unworthy for baring [understandable] flaws”, which is also a thing that would happen with blackmail and could be bad.
>Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)
I can imagine a couple things going on here? One, if the world is a place where way more vulnerabilities are more widely known, this incentivizes more people to specialize in exploiting those vulnerabilities. Two, as a flawed human there are probably some stressors against which you can’t credibly play the “won’t negotiate with terrorists” card.
>Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)
I think the assumption is these are ~baseline humans we’re talking about, and most human brains can’t hold norms of sufficient sophistication to capture true ethical law, and are also biased in ways that will sometimes strain against reflectively-endorsed ethics (e.g. they’re prone to using constrained circles of moral concern rather than universality).
>Implication: the bad guys won; we have rule by gangsters, who aren’t concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn’t 100%)
This part of the post reminded me of (the SSC review of) Seeing Like a State, which makes a similar point; surveying and ‘rationalizing’ farmland, taking a census, etc. = legibility = taxability. “all of them” does seem like hyperbole here. I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.
The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant “make inferences about” why would it be bad?
But it also helps in knowing who’s exploiting them! Why does it give more advantages to the “bad” side?
Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won’t negotiate with them when they actually will, and less privacy predictably changes this opinion?
Perhaps the optimal set of norms for these people is “there are no rules, do what you want”. If you can improve on that, then that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn’t necessary.
Sure, but doesn’t it help me against them too?
As Raemon says, knowing that others are making correct inferences about your behavior means you can’t relax by, I don’t know, watching soap operas, because that’s an indicator of being less likely to repay your loans, and your premia go up. There’s an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi’s explored in his previous posts, and that’s part of how I’m interpreting what he’s saying here.
You don’t want to spend your precious time on blackmailing random jerks, probably. So at best, now some of your income goes toward paying a white-hat blackmailer to fend off the black-hats. (Unclear what the market for that looks like. Also, black-hatters can afford to specialize in unblackmailability; it comes up much more often for them than the average person.) You’re right, though, that it’s possible to have an equilibrium where deterrence dominates and the black-hatting incentives are low, in which case maybe the white-hat fees are low and now you have a white-hat deterrent. So this isn’t strictly bad, though my instinct is that it’s bad in most plausible cases.
That’s a fair point! A couple of counterpoints: I think risk-aversion of ‘terrorists’ helps. There’s also a point about second-order effects again; the easier it is to blackmail/extort/etc., the more people can afford to specialize in it and reap economies of scale.
Eh, sure. My guess is that Zvi is making a statement about norms as they are likely to exist in human societies with some level of intuitive-similarity to our own. I think the useful question here is like “is it possible to instantiate norms s.t. norm-violations are ~all ethical-violations”. (we’re still discussing the value of less privacy/more blackmail, right?) No-rule or few-rule communities could work for this, but I expect it to be pretty hard to instantiate them at large scale. So sure, this does mean you could maybe build a small local community where blackmail is easy. That’s even kind of just what social groups are, as Zvi notes; places where you can share sensitive info because you won’t be judged much, nor attacked as a norm-violator. Having that work at super-Dunbar level seems tough.
This is really, really clearly false!
This assumes that, upon more facts being revealed, insurance companies will think I am less (not more) likely to repay my loans, by default (e.g. if I don’t change my TV viewing behavior).
More egregiously, this assumes that I have to keep putting in effort into reducing my insurance premiums until I have no slack left, because these premiums really, really, really matter. (I don’t even spend that much on insurance premiums!)
If you meant this more generally, and insurance was just a bad example, why is the situation worse in terms of slack than it was before? (I already have the ability to spend leisure time on gaining more money, signalling, etc.)
Relevant: https://siderea.dreamwidth.org/1486739.html
It’s true the net effect is low to first order, but you’re neglecting second-order effects. If premia are important enough, people will feel compelled to Goodhart proxies used for them until those proxies have less meaning.
Given the linked siderea post, maybe this is not very true for insurance in particular. I agree that wasn’t a great example.
Slack-wise, uh, choices are bad. really bad. Keep the sabbath. These are some intuitions I suspect are at play here. I’m not interested in a detailed argument hashing out whether we should believe that these outweigh other factors in practice across whatever range of scenarios, because it seems like it would take a lot of time/effort for me to actually build good models here, and opportunity costs are a thing. I just want to point out that these ideas seem relevant for correctly interpreting Zvi’s position.
I don’t think that’s a necessary implication. In a world where people live in fear of being punished they will be able to act in a way to avoid unjust punishment. That world is still one where people suffer from living in fear.
Whence fear of unjust punishment if there is no unjust punishment? Hypothetically there could be (justified) fear of a counterfactual that never happens, but this isn’t a stable arrangement (in practice, some people will not work as hard to avoid the unjust punishment, and so will get punished)
Most people who have fear of heights don’t often fall in a way that hurts them.