DM me anything
I’m nerveWrangler on discord
In a well-founded marriage, spouses don’t try to induce internal conflict within their partner (e.g. shaming or guilting them) to win fights.
So I would expect that giving others a list of true information which connotes their relevant wrongness in some way on some topic (and may thereby induce guilt, especially when the problem is explicitly stated) is not well-founded, according to you. Under well-founded environments, those with the advantage of an existing unchallenged multi-prejudiced ideology would never be held accountable for their mistakes, because all conscientious objectors can just be made into annoying squares.
Even worse, you contend that the opposite is to be “coherent” “like North Korea” “because everyone listens to the same person”. So in your option model there’s just no position corresponding to being virtuously willing to contend with guilt as a fair emergent consequence of hearing carefully considered and selected information.
Strong downvoted for not just saying what you’re really thinking to the person you have a criticism about which is almost definitely wrong.
Still I guess there should be a word for being mean to one or a few guys in particular against one’s stated principles without an objectively justifying explanation. I would like it to be something else. Especially because your example does not involve predictable scapegoat targeting to match the way that this phenomenon happens in real life.
well, there are positive-sum games. also, it may turn out that acquiring power is more complicated, in an almost fundamentally benevolent way, than grabbing an object from someone else and pulling in hard with your arms; people don’t like ceding power to individuals who seem myopically selfish.
This exchange reveals a pervasive mechanism: pseudo-principality—the selective application of principles based solely on whether they advance one’s concealed interests while maintaining a facade of consistent ethical behavior.
While your analysis may fairly apply to the example you have constructed, in practice it is important to be strategic about intrinsic value; people do not often have the framing of intrinsic-value strategicism readily in mind to keep their behavior explicitly consistent with, and all briefly specified principles which are not about being strategic about intrinsic value will tend to lead a person away from it, since they can be followed myopically.
So while it may be extremely unpleasant not to understand the behavior of an inconsistent advice-applier, especially one who chooses outcomes which contain any amount of unpleasantness (even if it's the option where the unpleasantness is most mitigated), inconsistency may be the only option for someone who wants to be good rather than predictably bad and doesn't have the framing of intrinsic-value strategicism available to them.
I agree that it is better to have principles-enough with some sad exceptions than to be a predictable nihilist.
Bolding mine. Not that you'd be definitely myopic or definitely self-privileging upon close examination, but there is a lot of buck-passing ("passing the buck") going on here.
I propose we adopt a more neutral and accurate term: Principle Consistency Challenge (PCC). This reframes the act as a legitimate inquiry into whether stated principles hold up across relevant situations.
… When someone earnestly asks, “Why does this principle apply here but not there?”, that question deserves respect and engagement, not ridicule.
I love that; thank you.
Context Inflation: Excessive appeals to “unique circumstances” to justify inconsistency, especially when those circumstances conveniently align with self-interest.
If your model failed to account for vast sections of reality, then it failed to account for them; that is simply sufficient cause for update, and not sufficient cause for incurring a reputation of not really meaning the good-when-universal features of one’s given advice that one really meant. I agree, though, that one should not remain wrong in light of definite exceptions, and changing one’s model may be seen as humbling.
This pervasiveness raises a provocative question: If economists try to estimate the percentage of counterfeit currency in circulation, what percentage of publicly stated principles are functionally “counterfeit”—applied selectively for gain?
Most principles should be applied selectively ‘for gain’ when comparing their total application across a multiverse of conceivable conditions, instead of given all the say in the outcomes all the time. Beauty should make room for Freedom, and if any person is deprived too severely of the former then maybe the latter should even make room for the former too, in some intelligently implemented way and not in just any way.
While I take no position on the general accuracy or contextual robustness of the post’s thesis, I find that its topics and analogies inspire better development of my own questions. The post may not be good advice, but it is good conversation. In particular I really like the attempt to explicitly analyze possible explanations of processes of consciousness emerging from physical formal systems instead of just remarking on the mysteriousness of such a thing ostensibly having happened.
girl pretty
personal desire to be worthy of being an example vindicating the hope that good guys can ‘get the girl’; giving up on one means nothing will ever stay and doom is eternal
What are the other describable or possible-though-indescribable hypotheses? If it’s intuitive that there are no other hypotheses to start from — if the explanations have been reduced to some small number of all imaginable possibilities — that’s a non-nothing sort of evidence which ought to be contended with at the very least, rather than scoffed at with ‘you didn’t see an epistemic polylemma, therefore there’s no evidence that there was one’.
this is clearly polemical satire and not true, and I cannot readily infer in you a spirit of curiosity about the subject of the satire, so I will downvote, in an attempt to assist the spirit of LessWrong; nothing personal & I hope you fare well on this site generally :)
Ignoring everything underneath the title, this advice makes more convenient what people wanted to do anyway, changing nothing about the typical quality of the implementation; not the cruel extent nor the unjust kind of it. “Oh, not even rationalists will object? Excellent.”
It would fall harshest on those who are most small, most libertarian, and most habitually argumentative, and not on dogmatic censors, nor coercive aesthetic isolationists, nor speech duressors.
FFM is great except for two things:
1. it purports to measure conscientiousness, and it measures that by how much money you make and how often you work, which have nothing to do with acting according to your conscience in a world where the highest-relevance acts are speech-acts
2. its ‘agreeableness’ metric is supposed to be about niceness and harmony-making, but people who reveal their disagreements for the sake of resolving them get sorted as argumentative and thus low in agreeableness, and there’s no separate metric for niceness in the model, so as a worldview FFM basically says you’re either dogmatically conformist or a jerk
Which is actually a lot of things wrong with it, considering that’s two of its five factors.
It may be strategic about intrinsic value for a small group of people to suffer to implement highly demanding altruistic lifestyles of their own authentic diligence, but for everyone to operate at the extremes of altruism would make everything suck, which is something morality would advise against. Morality is demanding, but it can’t be demanding to an extent that comes out wasteful of intrinsic value in the end. Well, that’s my working hypothesis at least.
You are free to choose between A and B if your choice will determine the outcome.
Right, but there’s a lot of conflation between what people should think I am and what they do unfairly think I am. The latter is, to be fair, a real thing, but it’s a real thing which the thing people should think I am is trapped inside of. To the extent that it causes problems which get blamed, by nonconsensual association, on the thing people should think I am, it is parasitic, and the thing people should think I am is a victim.
Upvoted for the final sentence of your post; thank you so much.
Whoever argues that “MLK is a criminal” with the intent of instilling the negative connotation of the term is unlikely to apply the same standard everywhere.
This is an indictment of the human species, if this purported “unlikelihood” is true. Maybe you should not underestimate the likelihood that your interlocutors have a serious, deep resentment of unlawful behavior, however alien this might be to you. Maybe part of their fundamental self-narrative includes the unforgivable harms consistently caused to them by crimes which were superficially dismissed as mild by others. They may think “If this is a mild (read: non-central) crime, I don’t want to know what the serious (read: central) ones are.” Maybe they feel they have no choice but to become a total “I’ll end it forever if it’s the last thing I do”-level enemy of criminality in all its forms, as a precaution.
If humanity is willing to coexist with anything, well, imagine the worst possible thing. Imagine something worse than that. Worse in ways you didn’t even realize things could be worse by. Recursive worseness. Explosive worseness via combination of worseness-multipliers. Worseness-multipliers that might seem like normally good things if they weren’t being used by your imagination for the explicit purpose of making things worse. (Like hope, for example.) That is a thing which counts as a member of the “anything” set which humanity would be willing to coexist with, in the world where humanity would coexist with anything.
Unconditional coexistence is not safe for humans. To refuse coexistence with something that is evil in letter and spirit, on the outside and on the inside, you must have a clear sense of that thing no matter what are the stereotypes — the consensus reality — about its symbolic representation.
I liked this post on a personal level, because I like seeing how people can, with extremely fine subtlety, trick themselves into thinking the world is cooler than it is. But I had to downvote, because that is not what LessWrong is for; or at least, to the extent that self-deceiving memes are shared here, it’s supposed to be explicitly intentional. “Instructions For Tricking Yourself Into Feeling That The World Is Cooler” is a thing you could plausibly post and explain, such that your beliefs about which tricks actually work pay rent in anticipated experiences.
My objection about specific contents of this post: you cannot make good things more plausible-about-reality by writing stories where realistic events happen plus good unrealistic events happen; the unrealistic events do not gain plausibility-about-reality by association-through-fiction.
Some clarifications about my objection, and some questions to help you hold your ground if you should and if you can: I don’t take for granted that this observation is necessarily mutually exclusive with what you have written, but the observation is ostensibly mutually exclusive; the relation of ‘subjectively-unresolved ostensible mutual exclusivity’ between your post and my observation is what we might call ‘tension’. Can you explain how the intended spirit of your post survives my objection? What do you think is the right way to resolve the tension between our world models?
One option for resolving the tension is to fix your world-model by removing this meme from it because you realize my model about reality, which does not contain your meme, is more consistent with what is noticeable about reality. Another option is to explain how I’ve misinterpreted the differences between what your argument should have been (which could be considered close enough to what you articulated), versus the worse version that it actually sounded like, followed by explaining that what your argument was close to is more important than how it sounded to me even if I heard right. This latter option could be considered ‘rescuing the spirit of the post from the letter of it’.
(Sidenote: I will concede to you the merit that having to explain the trick makes it less subtle, and might make it work less for people who care about their beliefs paying rent in anticipated experiences. This is not fun, and I think there should be a place where you can post specifically rationalism-informed tricks like that; maybe a forum called FunTricks. Arguably this would boost epistemic security for the people who do care about beliefs paying rent in anticipated experiences, as content posted to FunTricks would serve as puzzles for experienced Bayescrafters to learn more about the nature of self-deception from. The irrationalists can get lost in a fun hall of mirrors, and the Bayescrafters can improve their epistemic security; it would be win-win.
FunTricks posters could rate posts by how subtle the trick was; whether they noticed the mistake. Subtlevote vs “Erm, wait”-vote)
Imagine that your meme is importantly inconsistent with what is noticeable about reality. After all my criticisms, what merits about your post, do you think, are still true? I am interested in this! I do not want to deny your post any credit that is due to it, even if I tentatively must downvote it because that credit is outweighed by the fact that it can mislead people about how cool reality is, which is something LessWrongers care about!
It is, on principle, possible that I am in the wrong; that your model is better due to the presence of your meme(s). That would be great if it were demonstrated, because I would have the privilege of learning more from you than what you would learn from me, which is a serious kind of ‘winning’ in debates! I am especially excited about opportunities for viewquakes!
Finally, thank you for posting on LessWrong! Thank you for engaging with philosophy and the memetic evolutionary process! Every interaction can make us wiser if we have the courage to admit error, forgive error, and persist, in the course of memetic negotiation! If you post memes (idea-genes) on LessWrong, please make those memes pay rent in anticipated experiences; those are the memes we do want here! :)
I don’t agree that focusing on extrinsic value is less myopic than focusing on intrinsic value. This world is full of false promises, self-delusion, rationalization of reckless commitment, complexity of value, bad incentives/cybernetics, and the fallaciousness of planning. My impression is that the conscientious sort of people who think so much about utility have overconfidence in the world’s structural friendliness and are way more screwed than the so-called “myopic” value-focused individuals.
It’s objectively not good enough to be good to a boring degree. The world is full of bullying, we should stand up to it, and to stand up effectively against bullying is rarely boring.
Objective general morality exists, it doesn’t have to exist for the sake of anything outside itself, and you should share control over the world with objective general morality if not outright obey it; whichever is better after fully accounting for the human hunger for whimsy. The protection of whimsy is objectively a fragment of objective goodness.
All the narrative proofs that the world should not flow in accordance with good intentions are just hints about how to refine one’s conception of Good Itself so that it does not lead to outcomes that are, surprise surprise, actually bad.
“Always remember that it is impossible to speak in such a way that you cannot be misunderstood: there will always be some who misunderstand you.”
― Karl Popper
A person can rationalize the existence of causal pathways where people end up not understanding things that you think are literally impossible to misunderstand, and then very convincingly pretend that that was the causal pathway which led them to where they are,
and there is also the possibility that someone will follow such a causal pathway towards actually sincerely misunderstanding you and you will falsely accuse them of pretending to misunderstand.
This is wonderful; feels much more friendly, practical, and conducive to ideal speech situations. If someone tries to attack me for a wrong probability, I can respond “I’m just talking but with additional clarity; no one is perfect.”
I am under the impression that here at LessWrong, everyone knows we have standards about what makes good, highly-upvotable top-level content. Currently I would not approve of a version of myself who would conform to those standards I perceive, but I can be persuaded otherwise, including by methods such as improving my familiarity with the real standards.
Addendum: I am not the type of guy who does homework. I am not the type of guy who pretends to have solved epistemology when they haven’t. I am the type of guy who exchanges considerations and honestly tries to solve epistemology, and follows up with “but I’m not really sure; what do you guys think?” That is not highly-upvotable content in these parts’a town.
Mhm, and in practice no one who accuses others of guilt-tripping actually cares about that distinction; if someone is being made to look bad, they basically never wonder whether it’s deserved. I’m not objecting to the ‘guilt-tripping’ framing for no reason; it’s a thought-terminating cliché in 99.99% of cases where it’s used.
[reading what I actually wrote here] … And anyways ‘inducing guilt’ is what the most relevant informing-act looks like; if you’re doing something wrong then you don’t necessarily change it without attending to the exact details which would induce guilt. I never even said anything about ‘guilting’; OP explicitly discouraged a correct thing to do without even mentioning ‘guilting’.