Any position that requires group A to be unequal before the law, while group B gets the law’s full benefit, gives group A rational grounds to fight that position. The position thus has built into it a group that should oppose it, and by the golden rule, if group B were in group A’s shoes, they would oppose it too.
Given how hard it is to build any very large, operationally functional system at all, it is a lot to ask that it also withstand an entire group of people for whom stopping it is rational. Thus with racism, sexism, nativism, etc., a great deal of energy must be spent defending the ideology and policy.
This is one major strength of utilitarian ethics: the system you design should have as little built-in rational opposition as possible. The converse of “the greatest good for the greatest number” is “the minimum possible number of people with an automatic, morally-aligned drive to stop you.”
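To make the converse concrete, here’s a toy sketch (my own illustration; the utility numbers are invented, not anyone’s real data): count how many people a policy leaves worse off, since those are exactly the people with an automatic rational reason to fight it.

```python
# Toy illustration (invented numbers): count how many people a policy leaves
# worse off, i.e., how many acquire a rational drive to oppose it.

def rational_opponents(before, after):
    """Number of people made worse off by the change."""
    return sum(a < b for b, a in zip(before, after))

baseline = [5, 5, 5, 5, 5, 5]        # six people, all equal before the law

policy_unequal = [2, 2, 6, 6, 6, 6]  # group A (first two) loses legal standing
policy_general = [6, 6, 6, 6, 6, 6]  # everyone gains modestly

print(rational_opponents(baseline, policy_unequal))  # 2 built-in opponents
print(rational_opponents(baseline, policy_general))  # 0 built-in opponents
```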
I find that people on the internet treat logical fallacies like moves on a chessboard. Meanwhile, IRL, they’re more like guidelines suggesting you treat an argument more carefully. An example I often give is that in court we try to establish what type of person the witness is, because we believe so strongly that ad hominem considerations are a totally legitimate matter.
But Reddit or 4chan politics and religion is like, “I can reframe your argument into a form of [Fallacy number 13], check and mate!”
It’s obviously a total misunderstanding of what a logical fallacy even is. They treat fallacies like rules of logical inference, which they definitely are not (if they were, committing one would disprove what someone said; but outside of exotic circumstances, such a mistake would be trivial to spot anyway).
Thank you for this data point. I’m 6′1″ and 43 years old and still have these issues. I thought by now I would not need as much food, but the need is still there. I’m still rail thin, and I can easily eat two breakfasts and elevenses before a 1 pm lunch.
One thing I love is my Instant Pot. It can get me a porridge of maple syrup, buckwheat groats, sprouted brown rice, and nuts and dried fruit within 20 minutes, just by dumping in the ingredients. Yeah, it only keeps me full for 90 minutes or so, but I make enough to eat it again in 90 minutes. Later, for lunch, I can combine some more of it with a 12″ Subway sandwich or something.
It could be the classic issue of enemies misunderstanding each other/modeling each other very badly.
I think pre-invasion, Putin had a lot more effective options for bothering the US/NATO, causing them to slip, etc. For example, he could have kept moving troops around at his borders in ambiguous ways, or put a ton of nukes in Kaliningrad with big orange nuclear signs all over them, etc., etc. But he misread the situation.
The US does this too, and has in more wars than not (Vietnam, Afghanistan, Iraq, or any other place where “they’re going to throw down their weapons and welcome us as liberators”).
Truly, knowing the psychological models of the enemy is rare and non-trivial.
I’m thinking, based on what you have said, that there does have to be a clear WIIFM (what’s in it for me). So any entity covering its own ass (and only accidentally benefitting others, if at all) doesn’t qualify as good paternalism (I like your term “Extractive”). Likewise, morality that doesn’t create utility for the people subject to it won’t qualify. The latter is the basis for a lot of arguments against abortion bans: many people find abortion in some sense distasteful, but outright banning it creates more pain without enough offsetting utility. So I strongly predict that those bans will not stand the test of time.
Thus, can we start outlining the circumstances in which people are going to buy in? Within a nation, perhaps as long as things are going fairly well? Basically, then, paternalism always depends on something like the “mandate of heaven”: the kingdom is doing well and we’re all eating, so we don’t kill the leaders. Would this fit your reasoning (even broadly, concerning nuclear deterrence)?
Between nations, there would need to be enough of a sense of benefit to outweigh the downsides. This could partly depend on a network effect (where the more parties buy in, the greater the benefit for each party subject to the paternalism).
So, with AI, you need something beyond speculation that shows that governing or banning it has more utility for each player than not doing so, or prevents some vast cost from happening to individual players. I’m not sure such a case can be made, as we do not currently even know for sure if AGI is possible or what the impact will be.
Summary: Paternalism might depend on something like “this paternalism creates an environment with greater utility than you would have had otherwise.” If a party believes this, they’ll probably buy in. If it is indeed true that the paternalism creates greater utility (as with DUI laws and fewer drunk drivers killing everyone on the roads), that seems likely to help the buy-in process. That would be the opposite of what you called “Extractive” paternalism.
In cases where the outcome seems speculative, it is pretty hard to make a case for paternalism (which is probably why it broadly failed on climate change before the evidence of climate change became obvious). Can you think of any (non-religious) examples where paternalism gets buy-in on speculative matters?
There must be some method to do something, legitimately and in good-faith, for people’s own good.
I would like to see examples of when it works.
Deception is not always bad. I doubt many people would go so far as to say the DoD never needs to keep secrets, for example, even if there’s a sunset on how long they can be classified.
Authoritarian approaches are not always bad, either. I think many of us might like police interfering with people’s individual judgement about how well they can drive after X number of drinks. Weirdly enough, once sober, the individuals themselves might even approve of this (as compared to being responsible for killing a whole family while driving drunk).
(I am going for non-controversial examples off the top of my head).
So what about cases where something is legitimately for people’s own good and they accept it? In what cases does this work? I am not comfortable concluding that because no examples spring to mind, none exist. If we could meaningfully discuss cases where it works out, then we might be able to contrast that with when it does not.
Is it possible to build a convincing case for the majority that it is either acceptable or that it is not, in fact, paternalism?
Can you articulate your own reasoning and intuitions as to why it isn’t? That might address the reservations most people have.
Then a major topic the LessWrong community should focus on is how buy-in happens in paternalism. My first-blush thought is through education and consensus-building (like the Japanese approach to changes within a company), but my first-blush thought probably doesn’t matter. It is surely a non-trivial problem that will put the brakes on all these ideas if it is not addressed well.
Does anyone know some literature on generating consensus for paternalist policies and avoiding backlash?
The other (perhaps reasonable and legitimate) strategies would be secretive approaches or authoritarian approaches. Basically using either deception or force.
The problem I think this article is getting at is paternalism without buy-in.
On the topic of loss of credibility, I think focusing on nudity in general is also a credibility-losing problem. Midjourney will easily make very disturbing, gory, bloody images, but neither the Vitruvian man nor Botticelli’s Venus would be acceptable.
Corporate comfort with graphic violence, while blushing like a puritan over the most innocuous, healthy, normal nudity or sexuality, is very weird. Also, few people think for even a moment that any of it is anything other than CYOA on the company’s part. And some may suspect disingenuous double standards, like “Yeah, I guess those guys are looking at really sick stuff all afternoon on their backend version,” or “I guess only the C-suite gets to deepfake the election in Zimbabwe.” This would be a logical offshoot of the feeling that “the only purpose of the censorship is CYOA for the company.”
In summary: Paternalism has to be done very, very carefully, and with some amount of buy-in, or it burns credibility and good-will very quickly. I doubt that is a very controversial presupposition here, and it is my basic underlying thought on most of this. Eventually, in many cases, paternalism without buy-in yields outright hostility toward a policy or organization and (as our OP is pointing out) the blast radius can get wide.
I liked it. Made me consider a bit more.
First take: Tangentially, does this point to an answer to the question of what bureaucrats are trying to maximize (as sometimes addressed on LessWrong)? Maybe they are trying to minimize operational hitches within their own small realm.
Duly noted. What about the subtopic title? I’ll see if I can change it to normal sentence case and bold.
You are making too many assumptions about my values and desires. I don’t care for religion, and I think people can gain a lot more social status by bypassing or rendering irrelevant the social systems around them.
Paying all the dues would be like “work to rule” in a factory, the well-known protest tactic of adhering to every policy as a method for bringing an operation to a standstill.
Many people who go far didn’t pay all their dues. Your life isn’t long enough. Maybe do some pragmatic signaling, but there’s no need to actually do everything that seems to be demanded.
The story is from the 1990s. The character is actually my dad. It was a mid-sized actuarial firm. He started by writing a whole new program to do the function he needed the spaghetti-code-laden crap to do. Then he added features here and there until he had made a whole new program that was documented, easier to read, and functioning well. After a while, he passed it to the other actuaries, and his work became the new software. But he never did use the old software.
I guess things are different now. As the person above also said, it’s impossible to ignore the super-system that a small system is embedded in. Additionally, I think part of his reasoning for outright refusing to use the old software was that he wasn’t comfortable being unable to audit it, given that he was signing off on yearly reports for the firm.
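Incidentally, what he did resembles what people now call incremental replacement (the “strangler fig” pattern). A minimal sketch of the idea, with hypothetical names throughout (nothing here is from the firm’s actual code):

```python
# Hypothetical sketch of incremental replacement: route each function to the
# rewritten, documented version once it exists and has been audited, falling
# back to the legacy system for everything not yet rewritten.

legacy = {"annual_report": lambda data: "opaque legacy result"}
rewritten = {}  # grows feature by feature as new code is written and audited

def register(name, fn):
    """Mark one feature as rewritten, documented, and auditable."""
    rewritten[name] = fn

def run(name, data):
    # Prefer the new, auditable code path; otherwise use the old software.
    fn = rewritten.get(name, legacy[name])
    return fn(data)

register("annual_report", lambda data: "documented, auditable result")
print(run("annual_report", {}))  # -> documented, auditable result
```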
Is it this, or does it simply appear to be the case because someone older is likely to be deeply embedded?
My dad doesn’t think Windows is better than Linux or Mac. He sees me with OpenSuse and openly derides Windows all the time, but he figures he doesn’t want to learn a whole new system. He’s past EOL on Win7 at this point, but he is so embedded in it, down to Excel for his accounting (he was an actuary, on Excel from roughly the ’80s through the 2000s).
Also, I have not argued that every new way is good. Some older techs are extremely good. (Top-of-head example: no one who has ever used film and worked in a darkroom would say working in Photoshop could ever fully replace that experience. Another example: I hate turning on my computer to do anything with music. The screen/mouse/keyboard interface does nothing nice for my creativity. And oh my goddess, how cool the whole thing can sound and come together on a four-track!)
A reason behind bad systems, and moral implications of seeing this reason
One of the most basic general sales scripts is this: After a purchase has been made, say “Great. Today only and for people who have already bought from us, we have 25% off our XXX, if you just check catalogue page 19.”
Whether they buy or not, you follow with, “We also have 25% off our XXY, if you have a look here.”
And on and on.
The script is simply to not go away and to keep asking for more sales until the buyer breaks social decorum by being literally rude and saying some version of “Stop. I am done. This conversation is over.”
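As a loop, the script looks something like this (an illustrative sketch; the offers and responses are placeholders from the example above, not a real sales system):

```python
# Sketch of the script's control flow: the exit condition is an explicit,
# decorum-breaking refusal, not a polite decline.

OFFERS = ["XXX (catalogue page 19)", "XXY"]  # ...and on and on

def run_script(buyer_response):
    """Keep pitching until the buyer explicitly ends the conversation."""
    for offer in OFFERS:
        reply = buyer_response(f"Today only: 25% off our {offer}. Interested?")
        if reply == "stop":  # only an outright "stop" breaks the loop
            break
        # Whether they buy or not, move on to the next upsell.

# A buyer who merely declines politely never terminates the script early.
run_script(lambda pitch: "no thanks")
```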
Have any members here or other third party entities performed physical deep audits of these facilities?
It’s an extremely attractive business proposition for a grand scam: by the time you’re found out, the situation will surely be murky, the victims will all be dead, and very likely so will you.
Remember when Caterpillar got scammed in China? The company was publicly traded, there was due diligence from two of the big five consulting firms, etc. Still, CAT paid some 600 million dollars for a company (ERA Mining Machinery, known as China’s biggest manufacturer of coal-mining equipment) whose facilities and equipment didn’t exist.
I know Arizona isn’t China, but the setup circumstances in this industry sure seem ripe for a grift, don’t they?
I’m thinking about number 24: “As the overall maze level rises, mazes gain a competitive advantage over non-mazes.”
Why is this?
Do you only mean this in the sense that in a mazey environment, mazes grow (like a fungus or a virus)? I am trying to think my objection through clearly, but it seems to me that mazes should have some inefficiencies and organizational failure modes that would make them less competitive on a level playing field.
Is it that even a single maze will tend to be so politically oriented and capable (as politics is almost definitionally maze-like) that it will have an advantage over everything else? Is the root problem politics? And taking that a step further, is the root problem of politics one of effectively signaling in a communication-constricted (i.e. too big to clearly suss out all communications) environment?
If that is the case, then an organizational culture and individuals dedicated to hacking signals (politicking) would dominate. It seems this would be a characteristic of people who are fully sold-out to the maze—their resumes would tend to tick every box, their credentials would seem flawless, and their memos would read as perfect professionalism, right?
Put another way, I wager maze-people are usually really good at audit trails. Their narratives, credentials, etc. will all seem to ‘add up’ in a way that interfaces well with mazes.
This would make them dominant in some sense, and it would make organizations easy for them to infiltrate.
But I’m still caught on number 24, in the sense that organizations that have become mazes should have competitive disadvantages. For one thing, they’re being operated by people whose actions might be completely opposed, or at least tangential, to the organization’s interests.
I did not say engineer something so that no one wants to destroy it. Just that if you have actually reached towards the greatest good for the greatest number, then the fewest should want to destroy it.
Or have I misunderstood you?
My argument goes something along the lines of the tautological argument that (I think) Mill (but maybe Bentham) made about utilitarianism (paraphrasing heavily): “People who object to utilitarianism on the grounds that it will end up as some kind of calculated dystopia, where we trade off a few people’s happiness for the many, actually prove the principle of utilitarianism in that very objection. Such a system would be anti-utilitarian. No one likes that. Therefore it is not utilitarianism at all.”