So far as I can tell, the common line that bear spray is more effective than firearms is based on an atrociously bad reading of the (limited) science, which is disavowed by the author of the studies. In short, successfully spraying a bear is more effective at driving off curious bears than merely having a firearm is at stopping charging bears, but when you’re comparing apples to apples, firearms are much more effective.
Here’s a pretty good overview: https://www.outsideonline.com/2401248/does-bear-spray-work. I haven’t put a ton of work into verifying what he’s claiming here, but it does match with the other data I’ve seen and I haven’t seen anyone be nearly as careful and reach the opposite conclusion.
jimmy
I’m the person JenniferRM mentioned. I’m also a physics guy, and I got into studying/practicing hypnosis around 2010/2011. I kinda moved on from “hypnosis” and drifted up the abstraction ladder, but I’m still working on similar things and on tying them together.
Anyway, here are my thoughts.
Suppose I really want her to be spinning clockwise in my mind. What might I do?
What worked for me is to focus on the foot alone and ignore the broader context so that I had a “clean slate” without “confirmatory experience” blocking my desired conclusion. When looking at the foot alone I experience it as oscillating rather than rotating (which I guess it technically is), and from there I can “release” it into whichever spin I intend by just kinda imagining that this is what’s going on.
On the one hand, shifting intuitive models is surprisingly hard! You can’t necessarily just want to have a particular intuitive model, and voluntarily make that happen.
I actually disagree with this. It certainly seems hard, but the difficulty is largely illusory and pretty much disappears once you stop trying to walk through the wall and notice the front door.
The problem is that “wanting to have a particular model” isn’t the thing that matters. You can want to have a particular model all you want, and you can even think the model is true all you want, but you’re still talking about the statement itself not about the reality to which the statement refers. Even if you convince someone that their fear is irrational and they’d be better off not being scared, you’ve still only convinced them that their fear is irrational and they’d be better off not being scared. If you want to convince them that they are safe—and therefore change their fear response itself—then you need to convince them that they’re safe. It’s the difference between looking at yourself from the third person and judging whether your beliefs are correct or not, vs looking at the world from the first person and seeing what is there. If you want to change the third person perspective, then you can look at which models are desirable and why. If you want to change the first person models themselves, you have to look to the world and see what’s there.
This doesn’t really work with the spinning dancer because “Which way is the dancer spinning?” doesn’t have an answer, but this is an artificial issue which doesn’t exist in the real world. You still have to figure out “Is this safe enough to be worth doing?” and that’s not always trivial, but the problem of “How do I change this irrational fear?” (for example) is. The answer is “By attending to the question of whether it is actually safe”.
I don’t deny that there’s “skill” to it, but most of the skill IME is a meta skill of knowing what to even aim for rather than aiming well. Once you start attending to “Is it safe enough?”, then when the answer is actually obvious the intuitive models just change. I can give a whole bunch of examples of this if you want, where people were stuck unable to change their responses and the problem just melts away with this redirection. Even stuff that you’d think would be resistant to change like physical pain can change essentially instantly. I’ve had it take as little as a single word.
Again we see that the subject is made to feel that his body is out of control, and becomes subject to a high-status person. Some hypnotists sit you down, ask you to stare upwards into their eyes and suggest that your eyelids are wanting to close—which works because looking upwards is tiring, and because staring up into a high-status person’s eyes makes you feel inferior.
This isn’t exactly wrong, but I want to push back on the implication that this is the central or most important thing here.
The central thing, IMO, is a willingness to try on another person’s worldview even though it clashes with your own. It doesn’t require “inferiority”/”high status”/”control” except in the extremely minimal sense that they might know something important that you don’t, and that seeing it for yourself might change your behavior. That alone will get you inhibition of all the normal stuff and an automatic (albeit tentative) acceptance of worldview-dissonant perspectives (e.g. name amnesia). It helps if the person has reason to respect and trust you which is kinda like “high status”, but not really because it can just as easily happen with people on equal social standing in neutral contexts.
Similarly, hypnosis has very little to do with sleep, and eye fatigue/closure is not the important part of eye contact. The important part of eye contact is that it’s incredibly communicative. You can convey with eye contact things which you can’t convey with words. “I see you”. “Seeing you doesn’t cause conflict in me”. “I see you seeing me see you” and so on, to name a few. All the things you need to communicate to show someone that your perspective is safe and worthy of experiencing are best communicated with the eyes. And perhaps equally important, it’s a bid for attention, made by holding your own.
So far, this isn’t a trance; I’m just describing a common social dynamic. Specifically, if I’m not in a hypnotic trance, the sequence of thoughts in the above might look like a three-step process:
[...]
i.e., in my intuitive model, first, the hypnotist exercises his free will with the intention of me standing; second, I (my homunculus) exercise my own free will with the intention of standing; and third, I actually stand. In this conceptualization, it’s my own free will / vitalistic force / wanting (§3.3.4) that causes me to stand. So this is not a trance.

It’s important to note that while this self-reflective narrative is indeed different in the way you describe, the underlying truth often is not. In the hypnosis literature this is known as “cold control theory”, because it’s the same control without the usual Higher Order Thoughts (HOT).
In “common social dynamics” we explain it as “I chose to”, but what is actually happening a lot of the time is the speaker is exercising their free will through your body, and you’re not objecting because it matches your narrative. The steps aren’t actually in series, and you didn’t choose to do it so much as you chose to not decline to do it.
These “higher order thoughts” do change some things, but turn out to be relatively unimportant and the better hypnotists usually don’t bother too much with them and instead just address the object level. This is also why you get hypnotists writing books subtitled “there’s no such thing as hypnosis” and stuff like that.
The short version is: If I have a tune in my head, then I’m very unlikely to simultaneously recall a memory of a different tune. Likewise, if I’m angry right now, then I’m less likely to recall past memories where I felt happy and forgiving, and vice-versa.
As far as I can tell, there are several different things going on with amnesia. I agree that this is one of them, and I’m not sure if I’ve seen anyone else notice this, so it’s cool to see someone point it out.
The “null hypothesis”, though, any time it comes to hypnosis is that it’s all just response to suggestion. You “know” that being hypnotized involves amnesia, and you believe you’re hypnotized, so you experience what you expect. There’s an academic hypnosis researcher I talk to sometimes who doesn’t even believe “hypnotic trance” is real in any fundamental sense and thinks that all the signs of trance are the result of suggestion.
I don’t believe suggestion is all that’s going on, but it really is sufficient for amnesia. The answer to Yudkowsky’s old question of “Do we believe everything we’re told?” is indeed “Yes”—if we don’t preemptively push it away or actively remember to unbelieve later. Back when I was working this stuff out I did a fun experiment where I’d come up with an excuse to get people to not preemptively reject what I was about to say, then I’d suggest amnesia for the conversation and that they’d laugh when I scratched my nose, and then I’d distract them so that the suggestion could take effect before they had a chance to unbelieve it. The excuse was something like “I know this is ridiculous so I don’t expect you to believe it, but hear me out and let me know if you understand”—which is tricky because they think the fact that we “agreed” that they won’t believe it means they actually aren’t believing it when they say “I understand”, even though the full statement is “I understand [that I will laugh when you scratch your nose and have no idea why]”. They still had awareness that this belief was wrong and would therefore act to stop themselves from acting on it, which is why the unexpected distraction was necessary to get their mind off of it long enough for it to work.
If someone’s only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick.
Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners’ abuse? Does it even help the social circle around them?
If that’s their only option, and the hostility in your telepathy is antisocial, then yes. In some cases though, people do have other options and their self-deception is offensive, so hostile telepathy is pro-social.
For example, it would probably help those mothers if the men knew to anticipate punishment for not acknowledging their abuse of their partners. I bet at least one of those abusive husbands/boyfriends will give his side of the story that’s a bit more favorable than “I’m a bad guy, lol”, and that it will start to fall apart when pressed. In those cases, he’ll have to choose between admitting wrongdoing or playing dumb, and people often do their best to play really dumb. The self-deception there is a ploy to steal someone else’s second box, so fuck that guy.
I think the right response is to ignore the “self” part of the deception and treat it like any other deception. If it’s okay to lie to the Nazis about hiding Jews, then it’s okay to deceive yourself into believing it too. If we’re going to make it against the law to lie under oath, then making it legal so long as they lie to themselves too is only going to increase the antisocial deception.
The reason I trust research in physics in general is that it doesn’t end with publishing a paper. It often ends with building machines that depend on that research being right.
We don’t just “trust the science” that light is a wave; we use microwave ovens at home.
Well said. I’m gonna have to steal that.
Therefore, in a world where we all do power poses all the time, and if you forget to do them, you will predictably fail the exam...
...well, actually that could just be a placebo effect.
Yeah, “Can I fail my exam?” is a bad test, because when the test is “can I fail” then it’s easy for the theory to be “wrong in the right way”. GPS is a good test of GR because you just can’t do it without a better understanding of spacetime, so it has to at least get something right even if it’s not the full picture. When you actually use the resulting technology in your day-to-day life and get results you couldn’t have gotten before, then it almost doesn’t matter what the scientific literature says, because “I would feel sorry for the good Lord. The theory is correct.”
There are psychological equivalents of this, which rest on doing things that are simply beyond the abilities of people who lack this understanding. The “NLP fast phobia cure” is a perfect example of this, and I can provide citations if anyone is interested. I really get a kick out of the predictable arguments between those who “trust the science” but don’t understand it, and those who actually do it on a regular basis.
(Something like seeing a black cat on your way to exam, freaking out about it, and failing to pay full attention to the exam.) Damn!
This reminds me of an amusing anecdote.
I had a weird experience once where I got my ankle sprained pretty bad and found myself simultaneously indignantly deciding that my ankle wasn’t going to swell and also thinking I was crazy for feeling like swelling was a thing I could control—and it didn’t swell. I told my friend about this experience, and while she was skeptical and thought it sounded crazy, she tried it anyway and her next several injuries didn’t swell.
Eventually she casually mentioned to someone “Nah, my broken thumb isn’t going to swell because I decided not to”, and the person she was talking to responded as if she had said something else, because his brain just couldn’t register what she actually said as a real possibility. She then got all self-conscious about it and was kinda unintentionally gaslit into feeling like she was crazy for thinking she could do that, and her thumb swelled up.
I had to call her and remind her “No, you don’t give up and expect it to swell because it ‘sounds crazy’, you intend for it to not swell anyway and find out whether it is something you can control or not”. The swelling went back down most of the way after that, though not to the same degree as in the previous cases where the injury never swelled in the first place.
Can you come up with a better way of doing Psychology research?
Yes. More emphasis on concrete useful results, less emphasis on trying to find simple correlations in complex situations.
For example, “Do power poses work?”. They did studies like this one where they tell people to hold a pose for five minutes while preparing for a fake job interview, and then found that the pretend employers pretended to hire them more often in the “power pose” condition. Even assuming there’s a real effect where those students from that university actually impress those judges more when they pose powerfully ahead of time… does that really imply that power posing will help other people get real jobs and keep them past the first day?
That’s like studying “Are car brakes really necessary?” by setting up a short track and seeing if the people who run the “red light” progress towards their destination quicker. Contrast that with studying the cars and driving behaviors that win races, coming up with your theories, and testing them by trying to actually win races. You’ll find out very quickly if your “brakes aren’t needed” hypothesis is a scientific breakthrough or foolishly naive.
Instead of studying “Does CBT work?”, study the results of individual therapists, see if you can figure out what the more successful ones are doing differently than the less successful ones, and see if you can use what you learn to increase the effectiveness of your own therapy or the therapy of your students. If the answer turns out to be “The successful therapists all power pose pre-session, then perform textbook CBT” and that allows you to make better therapists, great. If it’s something else, then you get to focus on the things that actually show up in the data.
The results should speak for themselves. If they don’t, and you aren’t keeping in very close contact with real world results, then it’s super easy to go astray with internal feedback loops because the loop that matters isn’t closed.
Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. [...] Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.
This assumes the median researchers can’t recognize who the competent researchers are, or otherwise don’t look to them as thought leaders.
I’m not arguing that this isn’t often the case, just that it isn’t always the case. In engineering, if you’re more competent than everyone else, you can make cooler shit. If you’re a median engineer trying to figure out which memes to take on and spread, you’re going to be drawn to the work of the more competent engineers because it is visibly and obviously better.
In fields where distinguishing between bad research and good research has to be done by knowing how to do good research, rather than “does it fly or does it crash”, then the problem you describe is much more difficult to avoid. I argue that the difference between the fields which replicate and those which don’t is as much about the legibility of the end product as it is about the quality of the median researcher.
There’s no norm saying you can’t be ignorant of stats and read, or even post about things not requiring an understanding of stats, but there’s still a critical mass of people who understand the topic well enough to enforce norms against actively contributing with that illiteracy. (E.g. how do you expect it to go over if someone makes a post claiming that p=0.05 means there’s a 95% chance that the hypothesis is true?)
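To make that parenthetical concrete, here’s a minimal sketch (my own illustration, not from the thread; the 10% base rate and 80% power are made-up inputs) of why p < 0.05 doesn’t mean “95% chance the hypothesis is true”: the posterior depends on the base rate of true hypotheses and on statistical power.

```python
def frac_true_given_significant(base_rate, power, alpha=0.05):
    """Fraction of 'significant' results whose hypothesis is actually true."""
    true_hits = base_rate * power          # true hypotheses that reach p < alpha
    false_hits = (1 - base_rate) * alpha   # false hypotheses that reach p < alpha
    return true_hits / (true_hits + false_hits)

# If only 10% of tested hypotheses are true and studies have 80% power,
# a significant result is right far less often than 95% of the time:
print(round(frac_true_given_significant(0.10, 0.80), 2))  # 0.64
```

The point of the toy numbers is just that the “95%” reading ignores both terms in the denominator.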
Taking it a step further, I’d say my household “has norms which basically require everyone to speak English”, but that doesn’t mean the little one is quite there yet or that we’re gonna boot her for not already meeting the bar. It just means that she has to work hard to learn how to talk if she wants to be part of what’s going on.
LessWrong feels like that to me, in that I would feel comfortable posting about things which require statistical literacy to understand, knowing that engagement which fails to meet that bar will be downvoted, rather than my post getting downvoted for expecting to find a statistically literate audience here.
I think this is correct as a conditional statement, but I don’t think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.
It’s not “attempting to price some externalities where many are difficult to price is generally bad”; it’s “attempting to price some externalities when the difficult-to-price externalities are on the other side is bad”. Sometimes the difficulty of pricing them means it’s hard to know which side they primarily lie on, but not necessarily.
The direction of legible/illegible externalities might be uncorrelated on average, but that doesn’t mean that ignoring the bigger piece of the pie isn’t costly. If I offer “I’ll pay you twenty dollars, and then make up some rumors about you which may or may not be true and may greatly help or greatly harm your social standing”, you don’t think “Well, the difficult part to price is a wash, but twenty dollars is twenty dollars.”
you can just directly pay the person who stops the shooting,
You still need a body.
Sure, you can give people like Elisjsha Dicken a bunch of money, but that’s because he actually blasted someone. If we want to pay him $1M per life he saved, though, how much do we pay him? We can’t simply go to the morgue and count how many people aren’t there. We have to start making assumptions, modeling the system, and paying out based on our best guesses of what might have happened in what we think to be the relevant hypothetical. Which could totally work here, to be clear, but it’s still a potentially imperfect attempt to price the illegible, and it’s not a coincidence that this was left out of the initial analysis I’m responding to.
But what about the guy who stopped a shooting before it began, simply by walking around looking like the kind of guy who would stop a spree killer before he accomplished much? What about the good role models in the potential shooter’s life who led him onto the right track and stopped a shooting before it was ever planned? This could be ten times as important and you wouldn’t even know without a lot of very careful analysis. And even then you could be mistaken, and good luck creating enough of a consensus on your program to pay out what you believe to be the appropriate amount to the right people who have no concrete evidence to stand on. It’s just not gonna work.
I don’t agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example.
Sure, there’ll be a lot of new consumer products and other legible stuff, but how are you estimating the amount of illegible stuff and determining it to be smaller? That’s the stuff that by definition is harder to recognize, so you can’t just say “all of the stuff I recognize is legible, therefore legible >> illegible”.
For example, what’s the probability that AI changes the outcome of future elections and political trajectory, is it a good or bad change, and what is the dollar value of that compared to the dollar value of ChatGPT?
I think my main point would be that Coase’s theorem is great for profitable actions with externalities, but doesn’t really work for punishment/elimination of non-monetary-incented actions where the cost is very hard to calculate.
This brings up another important point which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without even accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.
As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out. It’s really hard to measure the number of murders which didn’t happen because the guns you sold deterred the attacks. And if we accept the pro 2A arguments that the real advantage of an armed populace is that it prevents tyranny, that’s even harder to put a real number on.
I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it’s even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.
The frustrating thing about the discussion about the origins is that people seldom show recognition of the priorities here, and all get lost in the weeds.
You can get n layers deep into the details, and if the bottom is at n+1 you’re fucked. To give an example I see people talking about with this debate, “The lab was working on doing gain of function to coronaviruses just like this!” sounds pretty damning but “actually the grant was denied, do you think they’d be working on it in secret after they were denied funding?” completely reverses it. Then after the debate, “Actually, labs frequently write grant proposals for work they’ve already done, and frequently are years behind in publishing” reverses it again. Even if there’s an odd number of remaining counters, the debate doesn’t demonstrate it. If you’re not really really careful about this stuff, it’s very easy to get lost and not realize where you’ve overextended on shaky ground.
Scott talks about how Saar is much more careful about these “out of model” possibilities and feels ripped off because his opponent wasn’t, but at least judging from Scott’s summary it doesn’t appear he really hammered on what the issue is here and how to address it.
Elsewhere in the comments here Saar is criticized for failing to fact check the dead cat thing, and I think that’s a good example of the issue here. It’s not that any individual thing is too difficult to fact check, it’s that when all the evidence is pointing in one direction (so far as you can tell) then you don’t really have a reason to fact check every little thing that makes total sense so of course you’re likely to not do it. If someone argues that clay bricks weigh less than an ounce, you’re going to weigh the first brick you see to prove them wrong, and you’re not going to break it open to confirm that it’s not secretly filled with something other than clay. And if it turns out it is, that doesn’t actually matter because your belief didn’t hinge on this particular brick being clay in the first place.
If it turns out that a lot of your predictions turn out to be based on false presuppositions, this might be an issue. If it turns out the trend you based your perspective on just isn’t there, then yeah that’s a problem. But if that’s not actually the evidence that formed your beliefs, and they’re just tentative predictions that aren’t required by your belief under question, then it means much less. Doubly so if we’re at “there exists a seemingly compelling counterargument” and not “we’ve gotten to the bottom of this, and there are no more seemingly compelling counter-counterarguments”.
So Saar didn’t check if the grant was actually approved. And Peter didn’t check if labs sometimes do the work before writing grant proposals. Or they did, and it didn’t come through in the debate. And Saar missed the cat thing. Peter did better on this game of “whack-a-mole” of arguments than Saar did, and more than I expected, but what is it worth? Truth certainly makes this easier, but so does preparation and debate skill, so I’m not really sure how much to update here.
What I want to see, more than “who can paint an excessively detailed story that doesn’t really matter and have it stand up to surface-level scrutiny better”, is people focusing on the actual cruxes underlying their views. Forget the myriad implications n steps down the road which we don’t have the ability to fully map out and verify; what are the first few things we can actually know, and what can we learn from those by themselves? If we’re talking about a controversial “relationship guru”, postpone discussions of whether clips were “taken out of context” and what context might be necessary until we settle whether this person is on their first marriage or fifth. If we’re wondering if a suspect is guilty of murder, don’t even bother looking into the credibility of the witness until you’ve settled the question of whether the DNA matches.

If there appears to be a novel coronavirus outbreak right outside a lab studying novel coronaviruses, is that actually the case? Do we even need to look at anything else, and can looking at anything else even change the answer?
To exaggerate the point to highlight the issue, if there were unambiguously a million wet markets that are all equivalent, and one lab, and the outbreak were to happen right between the lab and the nearest wet market, you’re done. It doesn’t matter how much you think the virus “doesn’t look engineered” because you can’t get to a million to one that way. Even if you somehow manage to make what you think is a 1000:1 case, a) even if your analysis is sound it still came from the lab, b) either your analysis there or the million to one starting premise is flawed. And if we’re looking for a flaw in our analyses, it’s going to be a lot easier to find flaws in something relatively concrete like “there are a million wet markets just like this one” than whatever is going into arguing that it “looks natural”.
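The arithmetic in this exaggerated scenario can be written out in odds form (my own sketch; the million-to-one and 1000:1 figures are the paragraph’s hypotheticals, not real estimates):

```python
# Odds-form Bayes for the exaggerated scenario: one lab, a million
# equivalent wet markets, outbreak right next to the lab. All numbers
# are the paragraph's hypotheticals, not real estimates.

location_odds_for_lab = 1_000_000    # outbreak at the one lab vs. any of 1e6 markets
genetics_odds_against_lab = 1_000    # assumed strength of "doesn't look engineered"

# Evidence multiplies: even a 1000:1 genetics argument can't overcome
# million-to-one location evidence.
posterior_odds_for_lab = location_odds_for_lab / genetics_odds_against_lab
print(posterior_odds_for_lab)  # 1000.0, i.e. still 1000:1 in favor of the lab
```

This is just the observation that likelihood ratios combine multiplicatively, so a 10^3 argument cannot cancel a 10^6 one.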
So I really wish they’d sit down and hammer out the most significant and easiest to verify bits first. How many equally risky wet markets are there? How many labs? What is the quantitative strength of the 30,000 foot view “It looks like an outbreak of chocolatey goodness in Hershey Pennsylvania”? What does it actually take to have arguments that contain leaks to this degree, and can we realistically demonstrate that here?
The difference between what I strive for (and would advocate) and “epistemic learned helplessness” is that it’s not helpless. I do trust myself to figure out the answers to these kinds of things when I need to—or at least, to be able to come to a perspective that is worth contending with.
The solution I’m pointing at is simply humility. If you pretend that you know things you don’t know, you’re setting yourself up for failure. If you don’t wanna say “I dunno, maybe” and can’t say “Definitely not, and here’s why” (or “That’s irrelevant and here’s why” or “Probably not, and here’s why I suspect this despite not having dived into the details”), then you were committing arrogance by getting into a “debate” in the first place.
Easier said than done, of course.
I think “subject specific knowledge is helpful in distinguishing between bullshit and non-bullshit claims.” is pretty clear on its own, and if you want to add an example it’d be sufficient to do something simple and vague like “If someone cites scientific studies you haven’t had time to read, it can sound like they’ve actually done their research. Except sometimes when you do this you’ll find that the study doesn’t actually support their claim”.
“How to formulate a rebuttal” sounds like a very different thing, depending on what your social goals are with the rebuttal.
I think I’m starting to realize the dilemma I’m in.
Yeah, you’re kinda stuck between “That’s too obvious of a problem for me to fall into!” and “I don’t see a problem here! I don’t believe you!”. I’d personally err on the side of the obvious, while highlighting why the examples I’m picking are so obvious.
I could bring out the factual evidence and analyze it if you like, but I don’t think that was your intention
Yeah, I think that’d require a pretty big conversation and I already agree with the point you’re trying to use it to make.
I did get feedback warning that the Ramaswamy example was quite distracting (my beta reader recommended flat-eartherism or anti-vaxxing instead). In hindsight one of those may have been a better choice, but I’m not too familiar with geology or medicine, so I didn’t think I could do the proper rebuttal justice.
My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with “My counterarguments were bullshit, did you catch it?”. After going back and skimming a bit, it’s still not clear to me that they’re not.
The uninformed judge cannot tell him from someone with a genuine understanding of geopolitics.
The thing is, this applies to you as well. Looking at this bit, for example:
What about Ukraine? Ukrainians have died in the hundreds of thousands to defend their country. Civil society has mobilized for a total war. Zelensky retains overwhelming popular support, and by and large the populace is committed to a long war.
Is this the picture of a people about to give up? I think not.
This sure sounds like something a bullshit debater would say. Hundreds of thousands of people dying doesn’t really mean a country isn’t about to give up. Maybe it’s the reason they are about to give up; there’s always a line, and who’s to say it isn’t in the hundreds of thousands? Zelensky having popular support does seem to support your point, and I could go check primary sources on that, but even if I did, your point about “selecting the right facts and omitting others” still stands, and there’s no easy way to find out if you’re full of shit here or not.
So it’s kinda weird to see it presented as if we’re supposed to take your arguments at face value… in a piece purportedly teaching us to defend against the dark art of bullshit. It’s not clear to me how this section helps even if we do take it at face value. Okay, so Ramaswamy said something you disagree with, and you might even be right that his thoughts don’t hold up to scrutiny? But even if so, that doesn’t mean he’s “using dark arts” any more than that he just doesn’t think things through well enough to get to the right answer, and I don’t see what that teaches us about how to avoid BS besides “Don’t trust Ramaswamy”.
To be clear, this isn’t at all “your post sucks, feel bad”. It’s partly genuine curiosity about where you were trying to go with that part, and mostly that you seem to genuinely appreciate feedback.
My own answer to “how to defend against bullshit” is to notice when I don’t know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In order to determine who to take how seriously, I track how much people are able to engage with other worldviews, and which worldviews hold up and don’t require avoidance techniques in order to preserve the worldview.
The frequency explanation doesn’t really work, because men do sometimes get excess compliments and it doesn’t actually become annoying; it’s just background. Also, when women give men the kind of compliments that men tend to give women, it can be quite unwanted even when infrequent.
The common thing, which you both gesture at, is whether it’s genuinely a compliment or simply a bid for sexual attention, borne out of neediness. The validation given by a compliment is of questionable legitimacy when paired with some sort of tug for reciprocation, and it’s simply much easier to have this kind of social interaction when sexual desire is off the table the way it is between same sex groups of presumably straight individuals.
For example, say you’re a man who has gotten into working out and you’re visiting your friend whom you haven’t seen in a while. If your friend goes wide-eyed, saying “Wow, you look good. Have you been working out?” and starts feeling your muscles, that’s a compliment because it’s not too hard for your friend to pull off “no homo”. He’s not trying to get in your pants. If that friend’s new girlfriend were to do the exact same thing, she’d have to pull off “no hetero” for it to not get awkward, and while that’s doable it’s definitely significantly harder. If she’s been wanting an open relationship and he hasn’t, it gets that much harder to take it as “just a compliment”, and this doesn’t have to be a recurring issue in order for it to be quite uncomfortable to receive that compliment. As a result, unless their relationship is unusually secure she’s less likely to compliment you than he is—and when she does she’s going to be a lot more restrained than he can be.
The question, to me, is to what extent people are trying to “be sexy for their homies” because society has a semi-intentional way of doing division of labor to allow formation of social hierarchies without having to go directly through the mess of sexual desires, and to what extent people are simply using their homies as a proxy for what the opposite sex is into and getting things wrong because they’re projecting a bit. The latter seems sufficient and a priori expected, but maybe it leads into the former.
I want there to be a way to trade action for knowledge- to credibly claim I won’t get upset or tell anyone if a lizardman admits their secret to me- but obviously the lizardman wouldn’t know that I could be trusted to keep to that,
The thing people are generally trying to avoid, when hiding their socially disapproved-of traits, isn’t so much “People are going to see me for what I am”, but that they won’t.
Imagine you and your wife are into BDSM, and it’s a completely healthy and consensual thing—at least, so far as you see. Then imagine your aunt says “You can tell me if you’re one of those BDSM perverts. I won’t tell anybody, nor will I get upset if you’re that degenerate”. You’re still probably not going to be inclined to tell her, because even if she’s telling the truth about what she won’t do, she’s still telling you that she’s already written the bottom line that BDSM folks are “degenerate perverts”. She’s still going to see you differently, and she’s still shown that her stance gives her no room for understanding what you do or why, so her input—hostile or not—cannot be of use.
In contrast, imagine your other aunt tells you about how her friend’s relationship benefitted a lot from BDSM dynamics which match your own quite well, and then mentions that they stopped doing it because of a more subtle issue that was causing problems they hadn’t recognized. Imagine your aunt goes on to say “This is why I’ve always been opposed to BDSM. It can be so much fun, and healthy and massively beneficial in the short term, but the longer term hidden risks just aren’t worth it”. That aunt sounds worth talking to, even if she might give pushback that the other aunt promised not to. It would be empathetic pushback, coming from a place of actually understanding what you do and why you do it. Instead of feeling written off and misunderstood, you feel seen and heard—warts and all. And that kind of “I got your back, and I care who you are even if you’re not perfect” response is the kind of response you want to get from someone you open up to.
So for lizardmen, you’d probably want to start by understanding why they wouldn’t be so inclined to show their true faces to most people. You’d want to be someone who can say “Oh yeah, I get that. If I were you I’d be doing the same thing” for whatever you think their motivation might be, even if you are going to push back on their plans to exterminate humanity or whatever. And you might want to consider whether “lizardmen” really captures what’s going on or if it’s functioning in the way “pervert” does for your hypothetical aunt.
I get that “humans are screwed up” is a Sequences take, that you’re not really sure how to carve up the different parts of your mind, etc. What I’m pointing at here is substantive, not merely semantic.
The dissociation of saying “humans are messed up”/”my brain is messed up” feels different than saying “I am messed up”. The latter is speaking from a perspective that is associated with the problem and has the responsibility to fix it from the first person. This perspective shift is absolutely crucial, and trying to solve your problems “from the outside” gets people very very caught up in additional meta level problems and unable to touch the object level problem. This is a huge topic.
I had as strong an aversion to homework as anyone, including homework which I knew to be important. It’s not a matter of “finding a situation where you notice part of your mind attempting to write the bottom line first”, but of noticing why that part of your mind will try to write the bottom line first, and relating to yourself in a way that eliminates the motivation to do so in the first place. I don’t have situations where part of my mind attempts to write the bottom line first… that I’m aware of, at least. There are things that I’m attached to, which is what causes the “bottom line first” issues and which is still an obstacle to be overcome in itself, but the motivation to write the bottom line first can be completely obsoleted by stopping and giving more attention to the possibility that you’ve been trying to undervalue something that you can sense is critically important. This mental move shifts all of your “my brain is being irrational” problems into “I don’t know what to do on the object level”/”I don’t know why this is so important to me” problems, which are still problems, but they are much nicer because they highlight rather than obscure the path to solution.
“I want some kind of language to distinguish the truth seeking part from the biased part”. I don’t think such a distinction exists in any meaningful sense.
In my model, there’s a part of your brain that recognizes that something is important (e.g. social time), and a part of your brain that recognizes that something else is important (e.g. doing homework), and that neither are “truth seeking” or “biased”, but simply tugging you towards a particular goal. Then there’s a part of your brain which feels tugged in both directions and has to mediate and try to form this incoherent mess into something resembling useful behavior.
This latter part wants to get out of the conflict, and there are many strategies to do this. This is another big topic, but one way to get out of the conflict is to simply give in to the more salient side and shut out the less salient side. This strategy has obvious and serious problems, so making an explicit decision to use this strategy itself can cause conflict between the desire “I want to not deal with this discomfort” and “I want to not drive my life into the ground by ignoring things that might be important”.
One way to attempt to resolve that conflict is to decide “Okay, I’ll ‘be rational’, ‘use logic and evidence and reason’, and then satisfy the side which is more logical and shut out the side that is ‘irrational and wrong’”. This has clear advantages over the “be a slave to impulses” strategy, but it has its own serious issues. One is that the side that you judge to be “irrational” isn’t always the side that’s easier to shut out, so attempting to do so can be unsuccessful at the actual goal of “get out of this uncomfortable conflict”.
A more successful strategy for resolving conflicts like these is to shut out the easy-to-shut-out side, and then use “logic and reason” to justify it if possible, so that the “I don’t want to run my life into the ground by making bad decisions” part is satisfied too. The issue with this one comes up when part of you notices that the bottom line is getting written first and that the pull isn’t towards truth—but so long as you fail to notice, this strategy actually does quite well, so every time the algorithm you describe as “logical and reasoned” drifts in this direction it gets rewarded and you end up sliding down this path. That’s why you get this repeating pattern of “Dammit, my brain was writing the bottom line again. I shall keep myself from doing that next time!”.
It’s simply not the case that you have a “truth seeking part” and a “biased part”. You contain a multitude of desires, and strategies for achieving these desires and mediating conflicts between them. The strategies you employ, which call for shutting out desires that retain power over you unless they can come up with sufficient justification, require you to come up with justifications and find them sufficient in order to get what you want. So that’s what you’re motivated to do, and that’s what you tend to do.
Then you notice that this strategy has problems, but so long as you’re working within this strategy, adding the extra desire of “but don’t fool myself here!” simply becomes another desire that can be rationalized away if you succeed in coming up with a justification that you’re willing to deem sufficient (“Nah, I’m not fooling myself this time! These reasons are sound!”, “Shit, I did it again, didn’t I. Wow, these biases sure can be sneaky!”).
The framing itself is what creates the problems. By the time you are labeling one part “truth seeking” and one part “biased, and therefore important to not listen to”, you are writing the bottom line. And if your bottom line includes “there is a problem with how my brain is working”, then that’s gonna be in your bottom line.
The alternative is to not purport to know which side is “truth seeking” and which side is “biased”, and simply look, until you see the resolution.
1) You keep saying “My brain”, which distances you from it. You say “Human minds are screwed up”, but what are you if not a human mind? Why not say “I am screwed up”? Notice how that one feels different and weightier? Almost like there’s something you could do about it, and a motivation to do it?
2) Why does homework seem so unfun to you? Why do you feel tempted to put off homework and socialize? Have you put much thought into figuring out if “your brain” might be right about something here? In my experience, most homework is indeed a waste of time, some homework very much is not, and even that very worthwhile homework can be put off until the last minute with zero downside. I decided I’d stop putting it off to the last minute once it actually became a problem, and that day just never came. In hindsight, I think “my brain” was just right about things.
How sure are you that you’d have noticed if this applies to you as well?
3) “If your brain was likely to succeed in deceiving you”.
You say this as if you are an innocent victim, yet I don’t think you’d fall for any of these arguments if you didn’t want to be deceived. And who can blame you? Some asshole won’t let you have fun unless you believe that homework isn’t worthwhile, so of course you want to believe it’s not worth doing.
Your “trick” works because it takes off the pressure to believe the lies. You don’t need to dissociate from the rest of your mental processes to do this, and you don’t have to make known bad decisions in order to do this. You simply need to give yourself permission to do what you want, even when you aren’t yet convinced that it’s right.
Give yourself that permission, and there’s no distortionary pressure so you can be upfront about how important you think doing your homework tonight really is. And if you decide that you’d rather not put it off, you’re allowed to choose that too. As a general rule, rationality is improved by removing blocks to looking at reality, not adding more blocks to compensate for other blocks.
It’s not that “human minds are messed up” in some sort of fundamental architectural way and there’s nothing you can do about it, it’s that human minds take work to organize, people don’t fully recognize this or how to do it, and until that work you’re going to be full of contradictions.
As an update, the 3rd thing I tried also failed. Now I ran out of things to try.
I wouldn’t be discouraged. There are a lot of ways to do “the same thing” differently, and I wouldn’t expect a first-try success. In particular, I’d expect you to need a lot more time letting yourself “run free”—at least “in sim”—and using that to figure out what exactly it is that you want and how to actually get it without screwing anything else up. Like, “Okay, if I get that, then what?”/”What’s so great about that?” and drilling down on that felt sense until something shifts.
Sure took me a while, at least. And I wouldn’t claim to be “finished”.
The problem is that anything that is non-sexual love seems to be corrupted by sexual love, in a way that makes the non-sexual part worse. E.g. imagine you have a female friend that you like to talk to because she is a good interlocutor. [...] I expect that if you would now start to have sex with that female friend your mind would get corrupted by sexual desire. E.g. instead of thinking about what to discuss in the next meeting, a sexual fantasy would pop into your head.
How sure are you that this is actually a problem? Is it the hypothetical female friend that has an issue with just focusing on sex as much as you’d be tempted to, or is it a you thing? The former can definitely complicate things, but if it’s the latter I’d be inclined to just run with it and see what happens. It’s a lot harder to get distracted by the possibility of having sex immediately after having it.
My current strategy is to just not think anything sexual anymore, and be sensitive to any negative emotions that arise. I then plan to use my version of IDC on them to figure out what the subagents that generate the emotions want. So far it seems that to some extent realizing this corruption dynamic has cooled down the sexual part of my mind a bit. But attempt 3 only failed yesterday so this cooling effect might only be temporary.
Yeah, that’s the inhibitory side of the equation. Kinda like fasting for a while and realizing that it’s not necessary/helpful/appropriate to panic about being hungry, and chilling out for a bit.
But if you don’t eat sooner or later or make an earnest effort to obtain sufficient food, it might not stay so easy to continue to set the hunger aside.
I feel like I have figured out a lot of stuff about this general topic in the last month. Probably more than in the rest of my life so far.
:) good.
I also realize now that this just solves the problem that I have had with romance all along. That is the reason why I did not like how my mind behaved. My mind normally just starts to love somebody immediately, overwriting all of the other aspects of the relationship. This is exactly not what I want love to be.
This does sound like premature/overattachment. I bet watching what happens to the other aspects of the relationship puts a damper on that impulse.
The ideal version of this is getting maximally close in a relationship via some context, and only once you get maximally close in that context do you extend the context. And then again you optimize for getting as close as possible in the new extended context, before extending the context again. And you add things to the context sorted such that you add the less impactful stuff first. Adding the component of love to the context should be very late in this chain. [...] I want love to be the thing that follows after everything else is maximally good. And I want the same to be true for other attributes. E.g. before feeling friendly with somebody, you should like them as much as possible, and get as close to them as possible, without that friendliness feeling there.
This sounds pretty idealized. “Should” is a red flag word here, as it covers over what “is”, the reasons things are the way they are, and why you want things to be another way instead. In context, “maximally” is one too, because “maximally” on any dimension rarely matches “optimally”—so whence this motivation, and what is being avoided?
That’s not to say that it’s wrong or misguided as ideals often have important value, but the real world tends to be messy and bring surprises.
Good, I’m glad my comments had the effect I was aiming for.
It’s an interesting and fun project for sure. A few notes...
* I wouldn’t expect to get it all figured out quickly, but rather for things to change shape over the course of years. Pieces can change quickly of course, but there’s a lot to figure out and sometimes you need to find yourself in the right experience to have the perspective to see what comes next.
* I’d also caution against putting the cart too far ahead of the horse, even if you have pretty good justification. “Extension of non-sexual love” sounds right, but there’s also so much weird and unexpected stuff that’s hard to foresee in sufficient detail that your perspective on what this entails likely isn’t complete.
* Freedom to explore is freedom to learn, but also freedom to fail—like removing training wheels from a bike so that you can engage with the process of balancing, but also risk falling. Managing this tradeoff can be tricky, especially when the cost of failure gets high.
* “Allocating specific periods of time to run free” reminds me of how I’ve been approaching my daughter’s developing appetite. Monday through Saturday she has to eat what we make her so that she gets good nutrition and builds familiarity with good foods, and on Sundays she’s free to learn exactly how much ice cream is *too* much and otherwise eat whatever she wants. I’m not entirely sure what to think yet and the arbitrariness of it bothers my sense of aesthetics a bit, but I’m relatively happy with how it’s going so far and I’m not really sure how to do it any less arbitrarily in context.
This is only true if you can’t figure out how to handle disagreements.
It will often be better to have wrong beliefs if it keeps you from acting on the even wronger belief that you must argue with everyone who disagrees. It’s better yet to believe the truth on both fronts, and simply prioritize getting along when it is more important to get along.
It’s more fundamental than that. The way you pick up a glass of water is by predicting that you will pick up a glass of water, and acting so as to minimize that prediction error. Motivated cognition is how we make things true, and we can’t get rid of it except by ceasing to act on the environment—and therefore ceasing to exist.
Motivated cognition causes no epistemic problem so long as we can realize our predictions. The tricky part comes when we struggle to fit the world to our beliefs. In these cases, there’s an apparent tension between “believing the truth” and “working towards what we want”. This is where all that sports stuff of “you have to believe you can win!” comes from, and the tendency to lose motivation once we realize we’re not going to succeed.
If we try to predict that we will win the contest despite being down 6-0 and clearly less competent, we will either have to engage in willful delusion, pretending we’re not less competent (which makes it harder to navigate reality, because we’re using a false map and can’t act so as to minimize the consequences of our flaws), or else we will just fail to predict success altogether and be unable to even try.
If instead, we don’t predict anything about whether we will win or lose, and instead predict that we will play to the absolute best of our abilities, then we can find out whether we win or lose, and give ourselves room to be pleasantly surprised.
The solution isn’t to “believe the truth” because the truth has not been set yet. The solution is to pay attention to our anticipated prediction errors, and shift to finer grain modeling when the expected error justifies the cost of thinking harder.
If you stop predicting “I am a highly intelligent individual, so I’m not wrong!”, then you get to find out if you’re a highly intelligent individual, as well as all of the things that may provide evidence in that direction (i.e. being wrong about things). This much is a subset of the solution I offer.
The next part is a bit trickier because of the question of what “cultivate enjoying being wrong” means, and how exactly you go about making sure you enjoy a fundamentally bad and unpleasant thing (not saying this is impossible, my two little girls are excited to get their flu shots today).
One way to attempt this is to predict “I am the kind of person who enjoys being wrong, because that means I get to learn [which puts me above the monkeys that can’t even do this]”, which is an improvement. If you do that, then you get to learn more things you’re wrong about… except when you’re wrong about how much you enjoy being wrong—which is certainly going to become a thing, when it matters to you most.
On top of that, the fact that it feels like “giving up” something and that it gets easier when you remember the grading curve suggests more vulnerabilities to motivated thinking, because there’s still a potential truth being avoided (“I’m dumb on the scale that matters”) and because switching to a model which yields strictly better results feels like losing something.