We can only go a step at a time. The other recent post about politics in Discussion was rife with obvious mind-kill, and I’m seeing this thread filling up with it too. I’d advocate downvoting obvious mind-kill, but it’s probably not very obvious at all and would just result in mind-killed people voting politically without giving the slightest measure of useful feedback. I’m really at a loss for how to get over the mind-kill of politics and the closely paired autocontrarian mind-kill of “politics is the mind-killer” other than just telling people to shut the fuck up, stop reading comments, stop voting, go lie down, and shut the fuck up.
So because you already have the tool, nobody else needs to be told about it? I feel like I’m strawmanning here, but I’m not sure what your point is if not, “I didn’t need to read this.”
Do you have an actual complaint here, or are you disagreeing for the sake of disagreeing?
Because it sounds a damn lot like you’re upset about something but know better than to say what you actually think, so you’re opting to make sophomoric objections instead.
I don’t really care how special you think you are.
See, that’s the kind of stance I can appreciate. Straight to the point without any wasted energy. That’s not the majority response LessWrong gives, though. If people really wanted me to post about this as the upvotes on the posts urging me to post about this would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?
...Or is the average voter simply not cognizant enough to realize this...?
Worst effect of having sub-zero karma? Having to wait ten minutes between comments.
Wow, what an interesting perspective. Never heard that before.
Not sure if sarcasm or...
Why are you convinced I haven’t posted them explicitly? Or otherwise tested the reactions of LessWrongers to my ideas? Are you under the impression that they were going to be recognized as worth thinking about and that they would be brought to your personal attention?
Let’s say I actually possess ideas with future light cones on the order of strong AI. Do you earnestly expect me to honestly send that signal and bring a ton of attention to myself? In a world of fools that want nothing more than to believe in divinity? (Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including, “Works in mysterious ways that we can’t hope to fathom.”)
I have every reason not to share my thoughts and every reason to play coy and try to get LessWrong thinking for itself. I’m getting pretty damn close to jumping ship and watching the aftermath here as it is.
These ideas are trivial. When I say “accessible,” I mean in terms of the people educated in the world of the past who systematically had their ideas shut down. Anyone who has been able to control their education from an early age is a member of the Singularity already; their genius—the genius that each person possesses—has simply yet to fully shatter the stale ideas of a generation or two of fools who thought they knew much about anything. You really don’t need to waste your time trying to demonstrate the immense quality of this old-world content to old-world rationalists.
I apologize that this will come across as an extraordinary claim, but I’ve already grown up in the Singularity and derived 99% of the compelling content of LessWrong—sequences and Yudkowsky’s thoughts included—by the age of 20. I’m gonna get downvoted to hell saying this, but really I’m just letting you know this so you don’t get confused by how amazing unrestricted human curiosity is. Basically, I’m only saying this because I want to see your reaction in ten years.
Certainly; I wouldn’t expect it to.
Hah. I like and appreciate the clarity of options here. I’ll attempt to explain.
A lot about social situations is something we’re directly told: “Elbows off the table. Close your mouth when you chew. Burping is rude; others will become offended.” Other norms are more biologically inherent; murder isn’t likely to make you popular at a party. (At least not the positive kind of popularity...) What we’re discussing here lies somewhere between these two extremes. We’ll consider aversion to murderers to be the least biased, being more a rational reaction than anything else, and we’ll consider asserted matters of “manners” to be maximally biased, having next to nothing to do with rationality and everything to do with believing whatever you’re told.
It’s a fuzzy subject without fully understanding psychology, but for the most part these decisions about social interaction are made consciously. In the presence of a biased individual, whatever the reason and whatever the cause, challenging them on their strong opinions is liable to start an argument. There are productive arguments and unproductive arguments alike, but if the dinner table is terribly quiet already and an argument breaks out between two of its members, everyone else has the option of “politely” letting the argument run its course, or of intervening to stop this silly discussion that everyone has heard time and time again and is tired of hearing. Knowing all too well how these kinds of things start, proceed, and stop, the most polite thing you can do to not disrupt the pleasant atmosphere that everyone is pleased with is simply not to indulge the argument. Find another time, another place. Do it in private. Do whatever. Just not now, at the dinner table, while everyone’s trying to have a peaceful meal.
There’s an intense meme among rationalists that whenever two rational agents disagree, they must perform a grand battle. This is just not true. There are many, many opportunities in human interaction to address the same disagreement. What you find is that people never work up the courage to do it, because of how “awkward” it would be, or any other number of excuses. “What if s/he rejects me? I’ll be devastated!” Intelligent agents are something to be afraid of, especially when their reactions control your feelings.
The courtesy isn’t so much for the opiner as it is for everyone else present. It is a result of bias, but not on the part of the people signaling silence; they’re just trying to keep things pleasant and peaceful for everyone.
Of course my description here could be wrong, but it’s not. The easy way to determine this is to ask each person in turn why they chose to be silent. Pretty much all of them are going to recite some subset of my assessment. Some people may have acquired that manner from being instructed to hold it, while others derived it from experience. The former case can be identified by naive confusion: “Mommy, why didn’t anyone tell him he was being racist?” You’ll understand when you’re older, because people periodically fail to recognize the usefulness of civility. You’ll see it eventually, possibly coming from the people who were surrounded by mannerly people to the degree that they never acquired the experience that got everyone else to adopt that manner. Even if the manner makes sense rationally, it could still be the result of bias, but it’s hard to convince a child of complex things like that, so the bias doesn’t play a role beyond that person eventually finding that the things they were told as a child, and distinctly remember never understanding while growing up, did actually make sense in reality.
You can’t fault the child for being ignorant, but you can fault them for not recognizing the truth of mother’s words when the situation comes up that’s supposed to show them why the wisdom was correct. If they don’t learn it from experience like everyone else does, something went wrong. Possibly they overcompensated when they rejected Christianity and decided it was a total fluke that their parents were competent enough to take care of a child. All those things that didn’t have to do with Christianity? Nope. Biased by Christianity. Out the window they go, along with the bathwater. When grandma says something racist and everyone goes silent, that is not tacit approval; that is polite disapproval. Failing to recognize something so obvious is going to be the result of some manner of cognitive bias, whether it’s a victim mindset, white knighting on Tumblr’s behalf, an extreme bias against Christianity, or something else. Whatever it is that leads you to a position contradicting the wisdom handed down and independently verified by generation after generation of highly intelligent agents capable of abstract reasoning, it is something that contradicts rationality.
Our ancestors didn’t derive quantum mechanics, no. That doesn’t make them unintelligent by any stretch of the imagination. When it came to interacting with other intelligent agents, we had intense pressure on us to optimize, and we did. Only now are we formally recognizing the principles that underlie deep generational wisdom.
So to answer concisely:
With the caveat that “treating silence as a way of expressing that the opiner deserves courtesy” is the result of bias, but that the bias originates in the opiner, not in the analyzer of the silence, and that we’re speaking strictly about the analysis of silence in modern social settings...
Do you in fact believe that?
Yes.
Can you provide any justification for believing it?
I can cite a pretty large chunk of the history of civilized humanity, yes.
The confusion arises from your not realizing that decision theory is embedded more deeply in our psychology than the conscious mind—primitive decision theory (everything we’ve formally derived about decision theory up to this point) is embedded in our evolutionary psychology. There’s a ton more nuance to human interaction than social justice’s founding premise of, “Words hurt!!! (What are sticks and stones?)”
More or less, yeah. The totaled deltas weren’t of the necessary order of magnitude in my approximation. It’s not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple of times before.
That’s odd and catches me completely off guard. I wouldn’t expect someone who seems to be deeply inside the hive both to cognize my stance as well as you have and to judge that my heretofore unstated arguments might be worth hearing. Your submission history reflects what I assume: that you are on the outer edges of the hive despite an apparently deep investment.
With the forewarning that my ideas may well be hard to rid yourself of and that you might lack the communication skills to adequately convey the ideas to your peers, are you willing to accept the consequences of being rejected by the immune system? You’re risking becoming a “carrier” of the ideas here.
I’d need an expansion on “bias” to discuss this with any useful accuracy. Is ignorance a state of “bias” in the presence of abundant information to the contrary of the naive reasoning from ignorance? Please let me know if my stance becomes clearer when you mentally disambiguate “bias.”
I iterated my entire comment history to find the source of an immediate −15 spike in karma; couldn’t find anything. My main hypothesis was moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I’m unsure if the retroactive quality of it is immediate or on a timer but I don’t see any reason it wouldn’t be immediate. Feel free to test on me, I think the voting has stabilized.
Everything being polite and rational is informational; the point is to demonstrate that those qualities are not evidence of the hive mind quality. Something else is, which I clearly identify. Incidentally, though I didn’t realize it at the time, I wasn’t actually advocating dismantling it, or claiming that it was a bad thing to have at all.
I mean, it’s not like none of us ever goes beyond the walls of LessWrong.
That’s the perception that LessWrong would benefit from correcting; it is as if LessWrongers never go outside the walls of LessWrong. Obviously you physically do, but there are strict procedures and social processes in place that prevent planting outside seeds in the fertile soil within the walls. When you come inside the walls, you quarantine yourself to only those ideas which LessWrong already accepts as discussable. The article you link is three years old; what has happened in that time? If it was so well received, where are the results? There is learning happening that is advancing human rationality far more qualitatively than LessWrong will publicly acknowledge. It’s in a stalemate with itself over accomplishing its own mission statement: a deadlock of ideas enforced by a self-reinforcing social dynamic against ideas that are too far outside the very narrow norm.
Insofar as LessWrong is a hive mind, that same mind is effectively afraid of thinking, and is doing everything it can to avoid it.
I see. I’ll have to look into it some time.
Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.
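(A minimal sketch of the mechanic as hypothesized above, purely to make the “retroactive” reading concrete. The penalty amount, threshold value, and function names are my own assumptions for illustration; nothing here is taken from the actual LessWrong codebase.)

```python
# Hypothetical model of the retroactive karma penalty described above.
# REPLY_PENALTY and DISPLAY_THRESHOLD are assumed values, not real site constants.
REPLY_PENALTY = 5
DISPLAY_THRESHOLD = -3

def apply_retroactive_penalty(parent_score, repliers, karma):
    """If the parent comment sits below the display threshold,
    charge every user who has replied beneath it."""
    if parent_score < DISPLAY_THRESHOLD:
        for user in repliers:
            karma[user] = karma.get(user, 0) - REPLY_PENALTY
    return karma

# Example: the parent drops to -4 after a downvote; both repliers lose 5 karma.
print(apply_retroactive_penalty(-4, ["alice", "bob"], {"alice": 120, "bob": 40}))
# -> {'alice': 115, 'bob': 35}
```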
As a baseline, I need a program that will give me more information than simply being slightly more aware of my actions does. I want something that will give me surprising information I wouldn’t have noticed otherwise. This is necessarily non-trivial, especially given my knack for metacognition.
A habit I find my mind practicing incredibly often is simulation of the worst-case scenario. Obviously the worst-case scenario for any human interaction is that they will become murderously enraged and do everything in their power to destroy you. This is generally safe to dismiss as nonsense/completely paranoid. After numerous iterations of this, you start ignoring the unrealistic worst-possible scenarios (which often make so little sense there is nothing you could do to react to them) and get down to the realistic worst-case scenario. Oftentimes in my youth this meant thinking about the reaction to my saying exactly what I felt and thought. The reactions I predicted were inaccurate to the point of caricature, but I often found that, even in the worst-case scenario that made half sense, there was still a path forward. It wasn’t the end of the world or some irreversible horror that would scar me forever; it was just an event where emotions got heated. That’s generally it. There’s little way to create a lasting problem without planning to create such a thing.
Obviously this doesn’t apply to supernatural actions on your part (creating strong AI is, in many ways, a supernatural scenario), but since those lie outside the realm of common logic, you have to handle them specially. Interestingly, when I was realistic about it, people didn’t react too badly in the scenarios where I imagined suddenly performing some intensely supernatural feat like telekinesis. Sure, it’s surprising, and they’ll want you to help them move, but there’s nothing they can really do if you insist you want to keep it a secret. They pretty much have to respect your right to self-determination. Of course they could always go supervillain on you like in the comics, but that’s not a terribly realistic worst-case scenario even if it were strictly possible.
Of course it sounds like meaningless fiction at that point, but it serves to illustrate just how bad the worst case scenario is; I’ve found it is very hard to pretend the worst case is immensely terrible when you think about it realistically.
I’ve noticed that I crystallize discrete and effective sentences like that a lot in response to talking to others. Something about the unique way they need things phrased for them to understand well results in some compelling crystallized wisdoms that I simply would not have figured out nearly as precisely if I hadn’t explained my thoughts to them.
A lot can come across differently when you’re trapped behind an inescapable cognitive bias.
ETA: I should probably be more clear about the main implication I intend here: Convincing yourself that you are the victim all the time isn’t going to improve your situation in any way. I could make an argument that even the sympathy one might get out of such a method of thinking/acting is negatively useful, but that might be pressing the matter unfairly.
Thank you. I no longer suspect you of being mind-killed by “politics is the mind-killer.” Retracted.
Maybe I’m being too hasty trying to pinpoint people being mind-killed here, but it’s hard to ignore that it’s happening. I think I probably need to take my own advice right about now if I’m trying to justify my jumping to conclusions with statements like, “It’s hard to ignore that it’s happening.”
I was planning to make a top-level comment here to the effect of, “INB4 obvious mind-kill,” but I think I just realized why the thoughts that thought that up were flawed at a basic level. Still, I think someone should point out that the comments here are barely touching the content of this article, which is odd for LessWrong.