Open thread, Apr. 18 - Apr. 24, 2016
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Until LessWrong 2.0 comes out, this is how I’ve been staying in touch with the Rationalist Diaspora. It took about an hour to set up and I can now see almost everything in the one place.
I’ve been using an RSS reader (I use feedly) to collate RSS feeds from these lists,
Rationalist Blogs,
https://wiki.lesswrong.com/wiki/List_of_Blogs
https://www.reddit.com/r/RationalistDiaspora/
Effective Altruist Blogs,
http://www.stafforini.com/blog/effective-altruism-blogs/
Rationalist Tumblrs,
http://yxoque.tumblr.com/RationalistMasterlist
And using this Twitter-to-RSS tool for these LessWrong Twitter accounts,
http://lesswrong.com/lw/d92/less_wrong_on_twitter/
This system is unsatisfying in a number of ways, the two most obvious to me being 1) I don’t know of any way to integrate the Rationalists on Facebook into this system, and 2) upvotes from places that use them, like LW or r/rational, aren’t displayed. Nevertheless, it is still much simpler for me to be notified of new material. If anyone has suggestions for improvements, or wants to share how they follow the Diaspora, that’d be most welcome.
Trying to think of what’s not on this list:
The EA forum sometimes has insightful posts, mostly EA news
Givewell and Open Philanthropy Project blogs
http://nickbostrom.com
/r/slatestarcodex, /r/LessWrong, /r/HPMoR, /r/smartgiving, /r/effectivealtruism, /r/rational
Thing of Things (Ozy’s blog)
Topher’s blog URL is now http://topherhallquist.wordpress.com
http://thefutureprimaeval.net is a group blog by a few ex-LWers I believe
You could probably dig up more by looking through the blogrolls of the blogs you’ve already identified. For example, Scott Aaronson considers himself part of the rationalist blogosphere and is listed on the SlateStarCodex sidebar.
Andrew Critch’s blog is great
PredictionBook, Omnilibrium
Of course there are lots of Facebook groups (especially EA-related Facebook groups) and Facebook personalities, notably Eliezer
Somewhere I got the impression that HBD Chick and Sarah Perry of Ribbonfarm were LWers at some point. There’s also the “post-rationalist” community which includes sites like Melting Asphalt.
Scott’s community map
Many of these update infrequently, making it bothersome to check all of them. I’ll bet it wouldn’t be very hard to create a single site that lets you see what’s new across the entire diaspora (including LW) by combining all these RSS feeds into something like http://lesswrong.com/r/all/recentposts/ Could be a fun webdev project. Register a domain for it and put it on your resume.
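A minimal sketch of that aggregator idea, assuming the feeds have already been fetched as RSS XML strings (the function names are illustrative, and only Python’s standard library is used):

```python
# Merge items from several RSS feeds into one reverse-chronological list.
# Fetching is out of scope here; we parse already-downloaded RSS XML.
from email.utils import parsedate_to_datetime
import xml.etree.ElementTree as ET

def parse_items(rss_xml, source):
    """Extract item dicts (date, source, title, link) from one RSS document."""
    items = []
    for item in ET.fromstring(rss_xml).iter("item"):
        items.append({
            "date": parsedate_to_datetime(item.findtext("pubDate")),
            "source": source,
            "title": item.findtext("title"),
            "link": item.findtext("link"),
        })
    return items

def merge_feeds(feeds):
    """feeds: dict mapping source name -> RSS XML string. Newest items first."""
    merged = []
    for source, xml_text in feeds.items():
        merged.extend(parse_items(xml_text, source))
    return sorted(merged, key=lambda it: it["date"], reverse=True)
```

A real version would poll the feed URLs on a schedule and render the merged list as a single page; the merge step itself is this simple.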
She was/is. Her (now dead) blog, The View From Hell, is on the lesswrong wiki list of blogs. She has another blog, at https://theviewfromhellyes.wordpress.com which updates, albeit at a glacial pace.
This seems very useful. Thank you for posting it.
Out of all of the blogs, which ones do you prioritize in reading first? It seems like there are far too many to always read all of them.
If you just want blogs (i.e. no Twitter/Tumblr), the following are blogs I personally like that post frequently, in rough order of how useful/insightful I have found them:
mindingourway.com
slatestarcodex.com
srconstantin.wordpress.com
thingofthings.wordpress.com
malcolmocean.com
agentyduck.blogspot.com
https://blog.jaibot.com/
lukemuehlhauser.com
meteuphoric.wordpress.com
There are a few that post very infrequently but are very good when they do:
relentlessdawn.wordpress.com
http://www.theunforgivingminute.co/
https://alexvermeer.com/blog/
I think most update pretty infrequently, which makes RSS a good solution.
Not enough karma to post anywhere else, so I guess I’ll post this here. This is from a few days ago.
I’m currently a psychology undergrad, and I was talking to a fellow student who had some odd symptoms.
I took out my notepad and jotted a few things down. “I don’t necessarily lose consciousness, but when I’m going about my day, I suddenly find myself in a different place from where I had intended to go. Sort of like going into a sleep-walking state during the day, then snapping out of it a few moments later. For example, if I’m walking somewhere like the kitchen, my brain seemingly shuts off and I find myself in the bathroom, almost as if I had teleported there. I’m not sure if it’s some kind of sudden memory loss. It’s like a split-second loss of consciousness, like someone else is controlling me for a few seconds without me realizing it.”
This happens to him apparently 2-3 times a day. He tells me he doesn’t suffer from bad memory or amnesia. When I told him it’s common to suddenly forget what you were doing (e.g., overfilling a glass while talking to someone), he told me he knew what that was and stated that it wasn’t similar to what he was experiencing. He said that it feels like suddenly (within a few seconds) involuntarily finding yourself in the wrong place. I asked him things like whether he can see during this state, and he told me that he doesn’t know. He further explained that it feels like he doesn’t exist at that point in time, that his body has been hijacked and someone else is controlling him while he is in a non-conscious, non-existing state.
I asked him later what he looks like to outside observers, and whether they have remarked on these symptoms, and he told me that they sometimes ask him “Why did you just walk to that door and back again?” after class had finished, or “Why did you just walk around the table and spin around?” during lunch.
I couldn’t find much info on this online, so I’m making this post to ask if anyone else has had this kind of experience, or if they have heard of or know more about people who have. Thanks.
Video demonstration of my findings. https://www.youtube.com/watch?v=gVzt2lQsg4E&feature=youtu.be
Possibly similar to absence seizures or complex partial seizures. This person should really be checking with a neurologist rather than a psychology undergrad.
Well he wasn’t “Checking in” with me. I was just talking to him and taking notes.
The best-fitting symptoms are those of automatism, a kind of epileptic seizure where the person loses consciousness and starts to perform automatic behaviour.
As with anything associated with epilepsy, he should treat the symptoms very seriously and promptly consult a neurologist.
From a hypnosis perspective, I would say that he falls into a somnambulistic trance state. However, I don’t know of people regularly falling into such a state 2-3 times a day for no reason.
How well does he sleep? It might be interesting to get a Smartwatch with sensors to see whether his vital signs change in those episodes.
I’ll ask him how he sleeps when I talk to him next time; I’ll also mention the possibility of trying a Smartwatch.
To note, I have had people tell me it’s a fugue state. I don’t think so, mainly because a fugue state usually lasts anywhere from an hour to a day. Maybe this might be a short fugue state, but I doubt it.
I think I remember in the Sequences a dude is talking about how he’s on the phone and someone is talking about chest pains and the EMTs won’t take them. He’s like “huh, that’s odd”, and comes up with reasons. Turns out it’s just a fake.
Seems like that’s what’s going on here. The odds that someone has this incredibly bizarre illness, yet has managed to function to the point that he’s a student (never got puppetted off the road, or into traffic, etc.), yet confides in you, and his problems don’t match up to stuff that happens to lots of people (No one else goes crazy like this, it’s weird enough that it’d be a known thing)… OR, a human is lying.
Here, I think.
[EDITED to add:] This case doesn’t seem so unbelievable to me, though—but I may be underestimating its implausibility.
update: asked someone else; they say likely epilepsy.
Worth considering: https://en.wikipedia.org/wiki/Tourette_syndrome, for an uncontrollable compulsion to do a thing.
I imagine a condition causing these symptoms would be tested by putting the person under observation for a day or a few days. It might be possible to get them to attend a sleep lab; if the events happen while they are asleep too, then that might be a key to diagnosing it as easily and quickly as possible. Otherwise, does the person know of anything that causes the events more or less often?
Also, the general rule for diagnosing/treating conditions: is it negatively impacting their life? If not, then leave it alone.
Get further testing done if:
1. they are curious and really want to know,
2. they are in danger, i.e. find themselves unable to drive a car safely due to strange events,
3. their life is impacted greatly (i.e. they cannot drive a car).
If it’s not doing anything bad, leave it.
If it’s an early symptom of a condition, it might help them to be diagnosed; I don’t know why, but it could also be something like schizophrenia.
Welcome.
As for your friend, I would very tentatively go with some sort of epilepsy.
Thanks Nancy. Some of the symptoms are similar to states caused by epilepsy, but I’m not sure if they are seizures or not. It’s not like he stops and his eyes roll back. According to him, it only lasts around 5-20 seconds and seems pretty normal to observers. I guess the best way he could explain it was that someone just controls him completely for a few seconds, and to him it feels like a snap of the fingers. He goes to brush his teeth; looking in the mirror, he lifts up the toothbrush, SNAP, he is no longer facing the mirror. He then resumes brushing his teeth, facing back at the mirror.
Strongly second the advice to have him go to a psychiatrist or neurologist. The type of seizure you are thinking of is a grand mal seizure which is not the only kind. This sounds like a very typical partial seizure to me.
On the recommendation of someone who may or may not wish to identify themselves publicly, I’m posting the contents of a private message I sent to them with regard to Gleb_Tsipursky. They believe the ambiguity about what I was doing and why may be causing some people consternation/cause for alarm, and contributing to some overall negative feelings on Less Wrong, as my reasons for behaving the way I was weren’t terribly transparent to some people.
“His pinging of my emotional immune system is, I’m pretty sure, a false alarm. I have no reason to disbelieve him when he professes to, effectively, be emulating a sociopath, particularly in light of how bad he is at it.
Most of the point of that lengthy ‘attack’ was in the exaggeration, as I wanted -Gleb- to know how he’s taken, and he didn’t react to my more subtle attempts to nudge his behavior; the purpose wasn’t hostility for the sake of hostility, it was to try to get him to modify some of his behavior while giving him a redemptive path (not redeemed by me, but redeemed by his behavior; my role is that of the villain, providing an obstacle which requires him to overcome personal faults and which plays conveniently to his preexisting strengths), which worked to a limited extent, although he seems quite inventive about new ways to break social norms.
Honestly I think he is just an incredibly socially clueless person who wants to be liked by a community he respects, and everything he’s doing is to prove his worthiness to Less Wrong. Unfortunately his… tactics aren’t particularly well-received, being painfully transparent.
The public attacks encourage him to modify his behavior while giving him a sympathetic position from which to recover his reputation. He’s done a decent job of being graceful about it, and has modified his behaviors somewhat, which helps his reputation along, but then turns around and transgresses again in some other fashion. Sigh. (They do serve a secondary purpose, in the doubtful case he -is- a predator, of making people more cognizant of his behavior patterns.)”
I don’t like how people are talking about Gleb here, where everybody involved knows that he reads it, without much respect for that. I understand that it is necessary for this community to sort this out, and in a way this community is good at using reflection and a neutral point of view, but I’m still not too happy with how it is done. I wish somebody would say:
As for the tension that Gleb brings (and actually some other newbies, including me too): I think this is a natural process for a community that is developing after some initial hype. People taking the seed elsewhere; the origin not having the same close-knit focus anymore. I’m OK with this and I think adjusting to it and making the best out of it is better than fighting (which has its own questionable trade-offs). So Gleb is just one example. I have seen this very process in the c2 forum almost exactly the same way (there I also arrived after the hype; c2 is defunct now; make of it what you want...).
I like Gleb’s intention, and I partly recognize myself in it. How do you expect somebody with such a skill-set to act and learn? I also tried things. I mean, there is a whole CFAR topic about it: CoZE. I hope nobody expects that CoZE always comes with pleasant, socially adequate, and successful results. I made quite a few blunders, not that different from the ones tried by Gleb that some people here feel uncomfortable with. But he does. And he learns and adjusts. Fast. Maybe too fast, because that creates incomplete adjustments that probably add to the uneasy impression he makes. But who knows how non-LW people in his circle perceive him? Who knows what feedback he gets or doesn’t get?
I welcome Gleb and I hope he continues to improve because I see lots of potential. Maybe more impact than many other people here. Make the best out of it.
Gunnar, I read it like you see some similarities between you and Gleb, but from my point of view, you two are quite dissimilar. You often write about the topic you feel most experienced in (parenting), your advice seems good, and you fit the local culture well (after a few initial blunders). Gleb’s writings seem very cargo-cultish, he constantly does weird things, and his employees posting here only make it more weird.
Essentially, your posts are valued for their content, while I am afraid that Gleb is merely tolerated here because we still hope that maybe his activities outside of LW will be useful somehow.
What would make me improve my opinion on Gleb?
a) If someone coming here from Intentional Insights would actually fit in our culture and post useful stuff. Then I would say: “Okay, Gleb’s personal style rubs me the wrong way, but now I see that’s only a superficial thing and he actually helps to spread the kind of rationality we value here. There are many paths to the same goal.”
b) If Gleb himself would change.
At this moment I simply don’t see any evidence that what Gleb does is useful. I derive no personal pleasure from reading his articles; and I see no data that it actually helps anyone outside of LW. (I am not saying that everyone here must do super useful things, but someone who tries to become a public face of rationalist movement should.)
I agree with Gunnar that it’s not polite to talk about someone in the third person to their face. I wasn’t sure how to handle that part of it, so I’m glad Gunnar has brought it up.
Why do we need all this drama? Why can’t the people who don’t like Gleb just downvote or ignore him?
I suspect people feel he gives a negative impression of LessWrong, and he will not go away with downvotes. The trouble is that it’s not just that we disagree with him; often he is behaving in ways that are not even wrong. If he were doing something wrong, it would be simple to say, “that is wrong; instead of doing wrong you should do right, like X”. Because he is uncannily off, we can’t even help.
First, he puts HuffPo-style posts onto LW which are pretty nauseating.
Second, he hires people—virtual assistants from the third world—to get LW accounts and praise him. They mostly post inanities like this for example. There are what, about five of them at the moment?
Third, he wants to be the face of rationality for the unwashed masses. In the unlikely event that this comes to pass, it… would not be optimal :-/
Otherwise I continue to think that he is in dire need of a clue and that he is the clearest example of cargo cult behaviour that I have seen in a while.
Because he occasionally (when he’s targeting intelligent people rather than stupid people, to put it bluntly) does some good work. I don’t want to downvote or ignore him, I want him to be the person he should be.
I’m the person who encouraged OrphanWilde to post that PM. I’m quite grateful that he did because this has recast the situation.
I don’t know about anyone else, but my impression of Gleb is that he’s annoying but mostly harmless. Mostly harmless because that early project of trying to promote rationality by turning it into an applause light was definitely a bad idea.
The PM has caused me to do some updating which I hope I will generalize. I started out with an assumption that there might be something wrong with Gleb for attracting that sort of animus, and something wrong with me for not seeing what was wrong with Gleb.
I think OrphanWilde’s approach has made LW seem like a place where people can be attacked for unclear reasons, and as moderator, I probably should have moved much earlier to discourage this.
It literally never occurred to me that the animus was (or had shifted to) something strategic, and it wasn’t trolling exactly but it was still not an accurate presentation of OrphanWilde’s beliefs.
I haven’t seen a clear explanation from anyone (though I may have missed some comments) of what they think Gleb misunderstands.
Here’s the comment which OW’s PM was an answer to. I don’t have a copy of what I PM’d to OW, but it was tactful. It took me a while to get from “What the fucking fuck?” to “Thank you for the information”. I believe that you can’t force a mind. Shaming people isn’t a reliable way of getting the behavior you want from them. Neither is anything else, but a light touch has fewer side effects.
I think a fine line needs to be walked when addressing Gleb, if only because he evidently has media visibility skills that could be useful for the community if he were less misguided.
>Honestly I think he is just an incredibly socially clueless person who wants to be liked by a community he respects, and everything he’s doing is to prove his worthiness to Less Wrong. Unfortunately his… tactics aren’t particularly well-received, being painfully transparent.
> encourage him to modify his behavior
would encourage him to continue to modify further.
Perhaps he is lacking some fundamental understanding that, if it were given to him, would let us trust him to no longer transgress and instead improve. I haven’t the faintest idea what it is, or how to find it. Nor do I have much energy left (after already trying many times) for trying down this pathway (or any other) on his behalf.
I think he doesn’t understand average people. And I think that he thinks that he does.
And then he uses people’s mental faults to try to improve their minds, which is… wrong? I think that’s the issue, the thing he is “uncannily off” about. He’s trying to fool people into not being fools.
People don’t like being made into fools, so insofar as he succeeds, he turns them off from rationality, rather than turning them onto it. And for a community that already struggles with an aura of cultishness, it’s exactly the wrong kind of approach.
I don’t think he’s doing anything like that. It’s true that he tailors his content to the lowest common denominator (because he views this as the only feasible way of reaching that crowd), but surely people with no “mental faults” can benefit from it just as much as anyone else.
My reaction to Gleb’s “emulating a sociopath” is essentially this. :(
This is a strategy that I’ve only seen working in comic books.
At least this should show the danger of taking cues about interpersonal behavior from fiction.
It works all the time. It relies on cognitive dissonance—if you’re so nasty that other people want to defend somebody, they think more positively of that person than they would if they didn’t.
It looks like an awfully long chain of inference to me. One weak point was that other people were attacking Gleb—it wasn’t you as a lone attacker.
Note that (and this may be a flaw in my character) I was left wondering if you were right rather than wanting to defend Gleb.
I think his empathy deficits are more autistic than psychopathic.
I recently decided to cut back my time on Facebook from several logins per day to once or twice a week. I used the following lifehack to tweak my own behavior:
I asked my girlfriend to come up with a random number X. Then, with her help, I changed my FB password to X+Y (string concat), where Y is a short pseudo-password I know. So now when I want to log into FB, I need to ask for her help. The trivial inconvenience prevents me from doing multiple daily “impulse” logins.
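A minimal sketch of the split-password scheme described above (the function names are hypothetical; `secrets` is Python’s standard module for cryptographically strong randomness):

```python
# The full password is X + Y: X is a random string only the helper keeps,
# Y is a short piece you memorize. Neither half alone unlocks the account,
# so logging in requires asking the helper for X.
import secrets

def make_split_password(known_suffix, random_len=16):
    """Return (helper_part, full_password). Give helper_part to a friend."""
    helper_part = secrets.token_urlsafe(random_len)
    return helper_part, helper_part + known_suffix

def reassemble(helper_part, known_suffix):
    """Recreate the full password when the helper supplies their half."""
    return helper_part + known_suffix
```

Set the account password to `full_password` once, then discard your copy of `helper_part`; the trivial inconvenience of asking for it is the whole point.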
Just an idea: generate a random long password, store it on a USB memory stick, then put the memory stick in your cellar. That would be an inconvenience that doesn’t involve another person (so you can also use it when your girlfriend is away). Just remember that after logging into FB, you first return the memory stick to the cellar, and only then start reading.
Yeah, or use the old trick of changing /etc/hosts; this may be enough of a minor inconvenience.
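For reference, the /etc/hosts trick just points the site’s hostnames at your own machine so the browser can’t reach it (the entries below are illustrative; deleting the lines restores access, which supplies the minor inconvenience):

```
# /etc/hosts: block Facebook by resolving its hostnames to localhost
127.0.0.1   facebook.com
127.0.0.1   www.facebook.com
```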
The LessWrong Facebook group has a post “How many Less Wrongers are natural late-sleepers?” and lots of people, including myself, are. Why are so many of us natural late-sleepers? Speculation welcomed.
As of the last survey, 87% of LWers were male and 75% were 31 or younger, and I’ve read that men tend to be more owlish than women, and young people more owlish than old people.
Such an open question on FB is bound to show selection effects. Better to do a straight poll:
I prefer to go to sleep [pollid:1134]
I prefer to wake up [pollid:1135]
I have to go to sleep [pollid:1136]
I have to wake [pollid:1137]
For comparison: http://www.frontiersin.org/files/Articles/93218/fneur-05-00081-HTML/image_m/fneur-05-00081-g001.jpg
Because they spend their nights reading LW?
Yeah. I am curious what kind of sleeper I would be in a world without the internet, but I guess I will never find out.
I’m old enough to know. Even without computers, I still prefer to stay up until dawn.
Lots of late-night computer screen time? That blue light is messing with sleep cycles. I used to think I was a night owl myself, but I can adjust my sleep schedule at will if I just mind the lighting. These days I wake up at 2-4am.
Curiously all males in my family used to be late sleepers when young but effortlessly switched to rising early when their careers kicked off. They didn’t have computers back then so maybe it was their social lives keeping them up late.
This past week gave me an example of my bipolar disorder in action.
A TV company announced they were open to story proposals. After a few weeks without ideas, I managed to come up with a story that sounded interesting to me. I spent the better part of a weekend at home writing the beginning of a plot outline, and felt extremely excited.
Then the week started and normal life resumed, and after the commute back home I didn’t feel like writing anything. A few days later I deleted the folder I had created. I no longer saw any potential in it.
Part of the reason I did it was that I estimated I wouldn’t make the submission deadline, but part of the reason I can’t make the deadline is that I had already promised to prepare a lecture for the local atheist group next month.
Then a disturbing idea came to me. Why am I sacrificing big projects for the small ones? My dreams will come to nothing if I keep standing in my own way like this.
Now I want to know what to do with this revelation.
Three thoughts.
First, evaluate your broader activities, and see where you have the right balance of big projects and small ones. If you’re unsatisfied with the balance, do more big projects.
Second, explore collaborations with others. You can often go further together with others, and it would help address the bipolar swings through others providing more stability during down swings. Of course, make sure the “other” in this case is pretty stable.
Third, create a trigger action plan of noticing when you’re about to sacrifice projects. Stop and evaluate whether you’re doing the right move for your long-term goals.
If you just deleted it it might still be in your trash can, ready to be brought back.
I always delete permanently. But every detail is still in my head.
I also delete permanently, but I back up with CrashPlan, so anything on my computer for a few days would be backed up automatically.
I’ve recovered small things occasionally using it, and on one occasion even a 1TB hard drive that failed.
Oh well, that’s rough. Still, at least since you remember it you can get it back up and on track again next weekend, if you want to.
Maybe because big projects mean big risk since 1) lots of effort might lead nowhere and 2) you don’t have time to do other small risk small reward projects. Maybe your level of risk aversion fluctuates?
After the survey I’ve become confused about what it means for HBD to be false. Should any difference between two separated populations be completely environmental? I believe it’s an antiprediction to think it’s not. I would bet that the “genetic potential” for any complex trait will be slightly different on average between different populations even if we are talking about two neighboring cities. Even if they started out as copies of each other just a few generations ago. I also believe that the differences are small and are mostly irrelevant to any real world problem. If it’s HBD, how does a person argue that it’s false? And how does someone argue that believing it makes someone a bad person?
This looks to me like you have a fairly good view of what it would look like for HBD to be false. That is, there would be no meaningful biological diversity among humans, and the idea of people with different levels of intelligence would be as outlandish as the idea of people with different number of eyes. I mean, we don’t expect that to vary between two neighboring cities, and even though there are some people who have something other than five fingers per hand, that’s also something we would expect to only barely vary between cities, and so on.
I don’t think the survey asks that question. Just like it doesn’t ask whether feminism or social justice are false. Those are cultural movements that can’t simply be understood by boiling issues down to one sentence.
The HBD crowd doesn’t.
The question was: “115. How would you describe your opinion of the idea of “human biodiversity”, as you understand the term? No Wiki page available, but essentially it is the belief that there are important genetic differences between human populations and that therefore ideas generally considered racist, such as different races having different average intelligence or personality traits, are in fact scientifically justified”.
No matter how we cluster people into races, unless it’s some kind of good randomization procedure, I think the probability of their average traits being exactly equal is really small. I’m much less sure it’s scientifically justified to say so, as I don’t know much about the state of research there.
I maybe kind of missed the word “important” there. Still...
Scientific debates are never about whether two groups are “exactly equal”. The notion that the question is about whether something is “exactly equal” ignores the core of what the debate is about.
Important is indeed an important word in the sentence.
When trying to understand writing, don’t go for the strawman. Try to understand what could be meant. What differences in beliefs is the question about? It’s not a hard question if you look at it with a genuine interest in understanding it.
I try to, but here I could be overcompensating from sometimes “going too deep” with questions like that. If the question was “Do you believe interpopulational genetic differences in mental abilities or character traits are large enough to be a factor in policy making”, I’d answer “No” and maybe even “Hell no, for a multinational population”. But that seems like a very different question.
I don’t think “ignoring the context” is well described as going deep. Part of critical reading is to think about why someone writes what they write, instead of just trying to focus on the literal meaning of the words. It’s rather only engaging with the surface.
It’s ignoring the context that can be described as not going deep enough. My other usual algorithm, “if the question seems easy, look for a deeper meaning”, is not without its faults either. Btw, what actually is the context of a single question that asks me to describe my opinion of something as I understand the term?
Alright, I got it, I fail critical reading forever. Yet. Growth mindset. What was the real meaning?
People in our society differ in how they think about genetic differences. There are people who think that race matters a great deal, and others who think it doesn’t matter. It’s useful to have a metric that distinguishes those people.
If you have that metric, you can ask interesting questions, such as whether people who are well calibrated are more likely to score high on it. It’s also interesting whether the metric changes from year to year.
That means the question tries to point at a property that people disagree about. In this case, it’s whether genetic differences are important. The question doesn’t define “important”, but there are various right-wing people, such as neoreactionaries and red-pill folks, who identify with the term “human biodiversity”. The question doesn’t try to ask for a specific well-defined belief, but points to that cluster of beliefs. It’s the same way the feminism question doesn’t point to a well-defined belief. You don’t need a well-defined belief to get valuable information from a poll.
The question made it into the survey because I complained about the usage of tribal labels such as liberal/conservative, where people have to pick one choice, as a way to measure political beliefs. I argued that focusing on agreement on issues is more meaningful and provides better data.
What about people who think that neither of these positions is defensible? Well, I suppose you wouldn’t go wrong by calling them metacontrarians.
That’s why you don’t ask for a “yes”/”no” answer.
Thank you for the explanation.
Sorry, I’m still not getting it. Doesn’t matter.
There is the unstated but implied “differences which are significant and have important consequences in real life”.
Sure, I would still bet they’re going to be statistically significant if we get millions of people into the dataset. They may also have some important consequences in real life (a higher resistance to a specific disease may be important for a person with some usually small probability; a population of a million that is more resistant to the disease than it could be is about a million times as important). It just shouldn’t influence policies much. Though it can make a difference in healthcare… well, no. It actually can influence some policies and economic results for countries with different populations. Lactose tolerance may have effects on agriculture and the export structure, especially long term. The question singles out intelligence and personality traits for no apparent reason but their controversy, their being hard to measure, and their being on the spiritual side of dualism. And probably their being more involved in our ideas of human worthiness than height is.
I am not sure what you are arguing. The fact that there are important genetic differences between populations at the medical level is uncontroversial. The controversial issue is whether these differences, as some people put it, “stop at the neck”.
Nope, there are apparent reasons. Intelligence of the populations is massively, hugely important, much more so than lactose intolerance or propensity for exotic diseases. See e.g. this or this.
Not really arguing anything. I’m asking if there is a rational non-meta reason to believe they do “stop at the neck” even if we throw away all the IQ/nations data.
Thanks for the reason I’ve missed. Are personal traits as important?
Of course there are. The standard argument is that the history of human evolution suggests that increased intelligence and favorable personality traits were strongly selected for, and traits which are strongly selected tend to reach fixation rather quickly.
But then the difference in intelligence would be almost completely shared + nonshared environment. And twin studies suggest it’s highly heritable. It also seems to be a polygenic trait, so there can be quite a lot of new mutations there that haven’t yet reached fixation even if it’s strongly selected for.
Not to my knowledge.
That is a more controversial subject. They are clearly less important than intelligence, but things like time preference (what kind of trade-offs do you make between a smaller reward now and a bigger reward later) or, say, propensity for violence got to be at least somewhat important.
I don’t know too much about HBD, but I would guess that the most important trait for them is intelligence. And maybe aggressiveness, impulse control, ability to cooperate with non-relatives, and that kind of necessary-for-civilization thing. (You can ignore other traits, such as eye color or lactose tolerance; they don’t make a big difference in modern society. So you’ll buy a different box of milk, big deal.)
Mathematically speaking, if you could measure something to million decimal places, it is very unlikely that the averages for different populations would be exactly the same. But in real life, a difference of 1 IQ point does not make a huge difference. So the question is whether the differences are large enough to matter in real life.
Humanity split from our common origin about 10000 years ago. It seems like enough time to make significant changes; for example, a mere 1 IQ point per century could accumulate to a difference of dozens of IQ points between distant populations. On the other hand, humans were already shaped by evolution for millennia before they split, so maybe most possibilities of cheaply gaining yet another IQ point were already exhausted before the split. I don’t feel certain enough to make a hypothesis either way.
So we should solve this question empirically, and then we get into problems—the old research was unreliable, and the new one is not done for political reasons. So I still feel like the answer could go either way.
This question is solved empirically. If you look at the data it’s really obvious. There is NO serious research which claims that all populations have essentially the same IQ.
You’re off by an order of magnitude or so.
Oops, indeed.
“citation needed”
Here is a quick-n-easy example, or if you want details they are here.
Here are a couple of books
And here is a long, detailed post with a lot of numbers, graphs, and references.
Am I reading the linked example correctly, that Asian-Americans’ IQ keeps growing during the last decade, and everyone else’s IQ in USA keeps dropping?
I mean, if you assume that the differences in SAT scores between races reflect their differences in IQ, it seems reasonable to assume that the differences in SAT scores between now and ten years ago reflect the differences in IQ between now and ten years ago. (Either that, or the SAT also reflects something else beyond IQ.)
Several things here.
First, look at the magnitudes. The difference between Asians and blacks is about 380 points in 2015. Compared to that number the declines (-6 to −28) are very minor.
Second, while I don’t have data at hand, I strongly suspect that the Asian population of SAT takers changed during the last decade. In particular, the upper class of China got wealthy enough and “international” enough to start sending their kids to US universities (which usually involves taking the SAT) and that’s besides increased immigration from China in general.
Third, the SAT is a proxy for IQ and it’s not a stable test. It’s being tweaked and adjusted constantly. Among other things, the SAT is normalized so that a score of 500 corresponds to about the 50th percentile of test-takers. If you have an influx of smarter-than-average kids, they will not only push their subgroup scores up, they will also push everyone else’s scores down.
The SAT scores from different years are somewhat comparable (because they are normalized), but not fully comparable because the tests from these different years are literally different. That’s not a problem for comparing the performance of subgroups in any given year, though.
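The renormalization effect described above can be illustrated with a toy model. This is only a sketch of percentile-based scaling, not the College Board’s actual equating procedure; the function names and the linear 200–800 mapping are assumptions made for illustration:

```python
def percentile_rank(scores, x):
    # Fraction of test-takers scoring below x (ties count half).
    below = sum(s < x for s in scores)
    ties = sum(s == x for s in scores)
    return (below + 0.5 * ties) / len(scores)

def scaled_score(scores, x):
    # Toy mapping: 0th percentile -> 200, 50th -> 500, 100th -> 800.
    return 200 + 600 * percentile_rank(scores, x)
```

In this toy model, a raw score of 60 among peers scoring [40, 50, 60, 70, 80] sits at the 50th percentile and maps to 500; after an influx of five 90-scorers, the same raw 60 drops to the 25th percentile and maps to 350, even though nothing about that student changed.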
Some time ago there was a post (or a comment?) arguing against the tendency of not directly answering the question one poses. Say, for example: “Can you recommend me a book about virtue ethics?” being answered mostly by “Here’s why virtue ethics is wrong” (this is a fictional example; it was not in the original post).
The discussion hit me deeply, because since then I’ve noticed myself doing the same thing a lot, and I’ve tried to correct it with mixed success.
I’ve come to realize that there’s a symmetric ‘bias’ (it’s not really a cognitive bias, more like a cognitive quirk): answering questions too literally. For example:
Can you recommend me a book about rationality?
The one I liked the most is Rationality: From AI to Zombies.
[some time passes]
Hey, you know about the book you recommended me? I’ve found a blog that contains almost exactly the same contents: it’s called LessWrong!
Yes, I know; I’ve been reading it for five years.
Why didn’t you say that?
Because you asked me for a book
[grumbles]
Strike to the heart of the question, not to defeat it. I still plan to write it; finding time appears to be difficult.
Looks like Andrea Rossi’s E-Cat cold fusion scam is finally reaching its end-phase. Some previous LW discussion here, here and here.
Yeah, what a hoot it has been watching this whole debacle slowly unfold! Someone should really write a long retrospective on the E-Cat controversy as a case-study in applying rationality to assess claims.
My priors about Andrea Rossi’s claims were informed by things such as:
He has been convicted of fraud before. (Strongly negative factor)
The idea of this type of cold fusion has been deemed by most scientists to be far-fetched. (Weakly negative factor. Nobody has claimed that physics is a solved domain, and I’m always open to new ideas...)
From there, I updated on the following evidence:
Rossi received apparent lukewarm endorsement from several professional scientists. (Weakly positive factor. Still didn’t mean a whole lot.)
Rossi dragged his feet on doing a clear, transparent, independently-conducted calorimetric test of his device—something that many people were willing to do for him, and which is not rocket science to perform. (Strongly negative factor—strongly pattern-matches with a fraudster).
Rossi claimed to have received independent contracts for licensing his device. First Defkalion in Greece, then Industrial Heat. Rossi also made various claims about NASA and Texas Instruments being involved. When investigated, the claims about the reputable organizations being involved turned out to be exaggerations, and the other partners were either of unknown reputation (Defkalion) that quickly disappeared, or had close ties to Rossi himself. Still no independent validation. (Strongly negative factor).
And now we arrive at the point where even Industrial Heat is breaking ties with Rossi. What a fun show!
I’ve been preparing for coding interviews, and I realized that the skill had gotten “rusty” from disuse. A specific example is coding a binary search, which is a little nontrivial because you have to think carefully to avoid off-by-one errors.
When people talk about old skills, they talk about them in two ways: some skills you can supposedly never forget, like riding a bike. Others can get rusty, so you need to keep brushing them up over and over again.
Neither of these seems actually true. I think it’s more like the exponential forgetting curve we have for (verbal) memory. The neurons for the skill still exist but you can’t access them after a while, and when you re-learn the skill you quickly get back to the same level as before. If you keep reinforcing it from time to time, say according to a spaced-repetition schedule, the skill becomes permanent. (I’ve made the exponential analogy because it would be cool if motor memory and verbal memory had similar mechanisms, but it’s just a model that I’m familiar with.)
Has anyone heard of something like this in the psych literature?
What are your experiences with skills like these that you don’t use as often? Have you made a skill “permanent” through repeated practice?
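The exponential model mentioned above can be written down explicitly. This is a minimal sketch under stated assumptions: recall probability decays as exp(-t/S), and each successful review multiplies the stability S by a constant factor. The `growth` value is a made-up illustration, not a number from the psych literature:

```python
import math

def retention(days_since_review, stability):
    """Exponential forgetting model: recall probability decays with time.
    `stability` (in days) is the time constant; higher = slower forgetting."""
    return math.exp(-days_since_review / stability)

def stability_after_reviews(n_reviews, base=1.0, growth=2.0):
    """Toy spaced-repetition assumption: each successful review
    multiplies the stability by a constant growth factor."""
    return base * growth ** n_reviews
```

Under these assumptions, a handful of well-spaced reviews makes retention a month later dramatically higher than a single exposure, which is the sense in which the skill becomes “permanent”.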
Half-open ranges are your friend :-)
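To make the advice concrete, here is a minimal sketch of binary search written in the half-open-range style (the function name is illustrative); the invariant makes the off-by-one cases easy to check:

```python
def lower_bound(a, target):
    """Index of the first element >= target in sorted list `a`.

    Uses a half-open range [lo, hi). Invariant: a[:lo] < target
    and a[hi:] >= target, so when lo == hi we are done and no
    +1/-1 adjustments are needed at the boundaries.
    """
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1  # everything up to mid is < target
        else:
            hi = mid      # a[mid] >= target, keep it in range
    return lo
```

The empty list and the target-past-the-end cases fall out of the same invariant with no special handling, which is the main payoff of the half-open convention.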
Related to Disguised Queries:
Concept Creep: Psychology’s Expanding Concepts of Harm and Pathology by Nick Haslam
Report about the paper in The Atlantic.
Reminds me of professional opinions that avoiding trigger warnings exacerbates pathology whereas exposure diminishes it. Haslam is from my university, too, and I’ve chatted to him! I tend to think in black and white, and catastrophise when I think of the past. Maybe I wasn’t actually ‘abused’... just overly sensitive :P I mean, there was good, and there was bad.
It’s interesting, given that we frequently see complaints that LW should reuse more concepts from the outside instead of making up its own concepts.
Concept creep automatically comes with reusing concepts.
I’ve been reading a lot of Robin Hanson lately and I’m curious at how other people parse his statements about status. Hanson often says something along the lines of: “X isn’t about what you thought. X is about status.”
I’ve been parsing this as: “You were incorrect in your prior understanding of what components make up X. Somewhere between 20% and 99% of X is actually made up of status. This has important consequences.”
Does this match up to how you parse his statements?
edit
To clarify: I don’t usually think anything is just about one thing. I think there are a list of motivations towards taking an action for the first person who does it and that one motivation is often stronger than the others. Additionally, new motivations are created or disappear as an action continues over time for the original person. For people who come later, I suspect factors of copying successful patterns (also for a variety of reasons including status matching) as well as the original possible reasons for the first person. This all makes a more complicated pattern and generational system than just pointing and yelling “Status!” (which I hope isn’t the singular message people get from Hanson).
Nope. I parse them in terms of incentives. When Hanson says “X isn’t about what you thought. X is about status”, I understand it as “People are primarily motivated to do X not because of X’s intrinsic rewards, but because doing X will give them status points”.
Status, as far as I can tell, has a motte-and-bailey problem.
The bailey is that status is a complex, technically defined concept, specifically involving primate hierarchies, extremely sensitive to context.
The motte is that status is exactly what it sounds like—just your standing in the eyes of other people.
(Or am I using the terms backwards? Let me know.)
You could say that I actually brush my teeth in the morning because I would lose status if I didn’t due to having visibly stained teeth and bad breath, etc., and my reasons really don’t have anything to do with preventing cavities and avoiding dental suffering. This is somewhere between facile and banal, depending on how you’re reading “status”.
At best, Hanson’s work forces you to consider the social context of certain actions and policies.
I’ve found this Caplan post to be a pretty good example of what the world looks like in the status lens. It’s not arguing that all motivations “really” boil down to status, but that when it comes to tradeoffs between status and something else, people almost always pick status (or weight it very highly).
This makes me wonder whether lots of people who are socially awkward or learning about socialization (read: many LWers) need not only social training but conformity coaches.
One should learn to walk before they try to run. Conformity is a signal of average social skills. Non-conformity usually means low social skills; and sometimes is a costly signal of high social skills. Unless you really know what you are doing, it’s the former.
Generally, if your social skills are really bad (and they could be worse than you imagine), imitating average people is an improvement, and is probably a better idea than any smart idea you invented yourself (assuming that your current bad situation already is a result of years of following your own ideas about how to behave).
If you want to be a socially successful person, aim to be a diplomat, not a clown, because it’s better to be a mediocre diplomat than a mediocre clown.
Of course, everything is a potential trade-off. Sometimes you have a good reason to do something differently than most people, maybe even differently than everyone else. But choose your battles wisely. Be weird strategically, not habitually.
It is probably wise to see your non-conformity impulses as a part of your self-sabotaging mechanism.
Er, no, I don’t think so.
Conformity is a function that requires a rather important argument: conformity to what? When you see non-conformity, the usual case is that you see someone from a different tribe and that tells you nothing about this person’s social skills.
Occasionally there is a different situation: someone is trying to conform and failing. Now that is actually a sign of low social skills. But that isn’t quite non-conformity, that’s failing at conformity.
Well, us nerds are famous for lacking social skills. We may imagine ourselves to be a parallel (superior) tribe to the rest of the society, but the fact is that we are usually unable to cooperate even with each other. So let’s continue making fun of the sour grapes of conformity.
That’s a popular meme. I’m not sure how well it matches reality.
Sure, socially incompetent nerds exist. But socially incompetent yobs and rednecks exist, too, and might well outnumber the nerds. The meme is sticky for a couple of reasons: (1) it cuts nerds down to size (“He might be much smarter than me, but he couldn’t pick up a girl if his life depended on it”); and (2) it has a nice reversion of skills (“Brainiac, but clueless”).
Besides, there is an important distinction between people who want to but can’t and people who just don’t want to—a distinction that’s not made here.
I am not sure what that shows or proves. Cooperation is not the holy grail of human social behaviour.
Mutant and proud :-P
Don’t believe stereotypes you see in media, and don’t use a static model of “social skills”. Being nerdy went mainstream 30 years ago, and there’s probably a similar social success rate among nerds as among other groups.
“just another tribe” is pretty accurate IMO.
I don’t care about media. My model is based on interaction with nerds in fandom and in Mensa, and the simplified generalization is: too busy engaging in signalling competitions, which undermines their ability to cooperate and win anything other than mostly imaginary debate points.
The few cases I have seen nerds achieving something impressive, it was usually because some half-nerdy/half-mainstream person organized the project (using the skills they gained elsewhere) and did not participate in the pissing contests.
The cooperation mostly fails because everyone is too busy proving they are better than everyone else in the group. Trying to explain why that might be a mistake runs into exactly the same problem, only one level higher (i.e. it becomes a competition of writing the best snarky comment on why people who cooperate are idiots).
That’s not terribly representative (Mensa, in particular, is known to be quite dysfunctional). Here is a field report describing successful nerds:
OK, how did we get here...
And now you post this:
I feel like it actually supports my opinion. The field report describes successful nerds who are good at the social game. And what I say is, essentially, “let’s do it (or at least let’s not have a norm of going in the opposite direction)”.
I’m confused.
This is a descriptive statement about reality. I happened to disagree with it.
This is a prescriptive statement about… I’m actually not sure what. Let’s be successful? Sure, let’s. But it has nothing to do with non-conformity.
Note that in esr’s example the nerds are successful and good at social games. But they are not successful because they’re good at social games.
Let’s be successful through cooperation, which conformity is an ingredient of.
For people to cooperate, they have to agree on the project they cooperate on, and also agree on the general strategy to accomplish it. With perfect Bayesian reasoners, the agreement would be achieved via Aumann’s theorem. With humans, a certain dose of conformity is required to overcome the differences in opinion that remain after people have already updated on each other’s opinions.
If you can’t do this last step, you get Mensa. Nothing ever gets done, because everyone has a different opinion, and everyone feels it would be low-status to accept someone else’s solution when it is obviously imperfect (therefore it wouldn’t be accepted on the basis of pure logic).
As an example, a few years ago, when I had much more free time, I was active in two societies: Mensa, and a local Esperanto group.
In the Esperanto group, as a team of five or ten people, we succeeded in publishing a new textbook, a multimedia CD (containing books, songs, and computer programs in E-o) and later a larger DVD edition (with added E-o courses and an offline version of the E-o Wikipedia), and created a website containing a wiki and a forum; all this within two years. (Later I decided that E-o wasn’t my high priority anymore, so I quit the team. As far as I know, the remaining members now use their skills for some commercial projects related to learning languages other than E-o, plus organize international E-o meetups.)
During the same time in Mensa… generally, whenever I suggested anything, it was almost certainly rejected; and even when by some miracle people finally agreed about something, when we looked at the details, the same pattern repeated on the lower level. It was a fractal of nitpicking. In the end, nothing got done. We managed to agree that we ought to change our web forum, because it had no moderation and was dominated by a few prolific crackpots (who weren’t even our members). But over two years we were unable to agree on which software solution to use, and what specific rules the new forum should have.
I spent about the same amount of energy in both groups, and the difference in outcome was staggering. This is how I learned that productivity is a two-place word: how productive I am is a function of both my personal traits and the traits of the environment I am trying to work in. When you have people who second-guess everything but contribute nothing, the output is close to zero. When you have people who can go along with your crazy experiments, some of those experiments succeed, and a few of them will be really impressive. (But going along with something that you have a different opinion about, that’s conformity.)
When programming, you have the option to do the whole thing yourself, and then you don’t need to cooperate with anyone. But even that applies to specific kinds of projects, where you can become an expert at every relevant aspect. (For example, if you make a computer game, it is unlikely for the same person to be great at coding and graphics and music and level design and balancing multiplayer.) But when you look at the real world, you have basically two options: either cooperate with non-nerds, or find nerds who are able to cooperate.
My first thought is that it’s easier to get things done in an Esperanto group because the goal—spread Esperanto—is more obvious than what a Mensa group should do, but perhaps I’m underestimating how much is obvious for a Mensa group to do.
I was a member of Mensa for a while, but was underwhelmed by the intellectual quality. I know a couple of very smart people who are or were in Mensa, but they weren’t local to me. I’ve been told that there’s a lot of variation between local groups.
There’s a pattern I saw in local Mensa publications that I now have filed under people trying to appear intelligent. The article starts with a bunch of definitions that don’t look obviously awful, but which somehow lead to a preferred conclusion.
Quoting Wikipedia, the mission of Mensa is:
to identify and to foster human intelligence for the benefit of humanity;
to encourage research into the nature, characteristics, and uses of intelligence; and
to provide a stimulating intellectual and social environment for its members.
Assuming that the research part is better left for professional researchers, the average member can contribute by finding more high-IQ people and creating a network for them. But I’d say that Mensa fails at this too. Although this may be country-specific; I would say that British Mensa does much better in this aspect.
When I think about the goal of finding and connecting high-IQ people, my first idea is to create a website like Reddit, where only certified high-IQ people could register. Regardless of whether they are Mensa members or not. One website for the whole world; of course different subreddits could use different languages. Well, this already proved quite controversial.
It seems obvious to me to use one multilingual website for the whole world, instead of every national Mensa having to create and maintain their own software solution. First, it saves a lot of work. Second, I just don’t see any reason why people should be divided by countries, especially if we talk about members of a world-wide organization. Sure, there is the language barrier; but that can be solved by creating a multilingual website and specifying which subreddits use which language; there is no need to maintain a separate codebase for each country. This specifically applies to small countries, or not-so-small countries with few Mensa members, such as Slovakian Mensa with 200 members, which would save a lot of work by joining an existing solution, and would also gain access to a larger network.
To achieve this goal, it is not even necessary to get agreement of all local groups in advance. Just develop the website for one group, but already make it multilingual, already provide options for having multiple moderator teams (representatives of multiple local groups), etc. Then start using the website for one group. Then offer other groups the options to join you, one by one.
The argument for why the website should be open to high-IQ people who are not Mensa members is more tricky, but essentially, it’s about the value of the network. It’s the same reason why phone companies allow you to call people who are not their customers: doing this increases the value for the customers. Sure, there is the free-rider problem: what if everyone uses the website, but no one wants to pay for Mensa membership? But I would assume that Mensa also provides some other services to its members. (Alternatively, the paying members could have some privileges on the website.) So why is it even necessary to involve Mensa in the whole project? Because if you want to have a high-IQ website, someone has to test those people, and Mensa is already doing this. The only change is that now they would also create web accounts for people who passed their tests, even if they don’t want to become paying members.
I was willing to write the whole code myself (yeah, back then, when I had a lot of free time). The old forum was falling apart and no one else volunteered to fix the problem, so I wasn’t really competing against an alternative. Yet somehow… :(
For some reasons that I didn’t understand clearly, the members of Slovakian Mensa (about 200 members total, maybe 15 of them active online) objected both against having non-paying members on the website and against international cooperation. So they literally wanted to have a web forum for 15 people. I am sorry, but for that number of people, anything other than a mailing list or an out-of-the-box solution (I recommended phpBB) is a waste of resources. Unfortunately, they had some objection against every existing solution. Sigh. (Meanwhile, for the Esperanto group I installed MediaWiki + phpBB + some hand-coded specific functionality, and everyone was happy.)
I’m rambling… the essence is that the Mensa members I know seem like they don’t give a fuck about Mensa’s official goals. Either that, or they are completely irrational at trying to achieve them.
Yep, that’s Mensa in a nutshell. People who have the IQ, but don’t know how to use it for anything other than signalling. Actually, even the signalling becomes unimpressive once you recognize the pattern.
Esperanto fans are also quite obsessed with signalling, but there is a subgroup that gets things done. Maybe all groups are like that, that the people who get shit done are but a small minority, only somewhere the minority is large enough to actually become a group within the group.
My impression is that the point of Mensa is to provide smart people who would otherwise be isolated with opportunities to interact with other smart people. Now:
These days, the internet makes this much less of a problem than before, so Mensa has less value, so people who might otherwise have joined will be less inclined to do so.
There’s always been a tendency for smart people to congregate in places with a high density of other smart people, not only for social reasons but also because that’s where good smart-people jobs tend to be.
So entrance to Mensa has to be easy enough that you get a reasonable number of potential Mensa members even in places where most of the smart people have gone elsewhere.
And then the people in a given place who want to join Mensa will tend to be the ones who haven’t found other things to do (there or elsewhere) that put them in contact with other smart people. And who don’t form satisfactory (to them) relationships with other not-so-smart people.
So Mensa seems likely to be selecting for the following combination of attributes:
Intelligent
… but not too intelligent
Not especially social
Not especially ambitious
Not a lot of specific strong intellectual interests
Now, of course not everyone there will fit that pattern, for all kinds of reasons. And some people who do fit that pattern may be interesting fun people capable of getting things done. But it doesn’t seem like it should be a big surprise if a lot of them aren’t.
For statistical reasons, there are many more people with IQ 130 than with IQ 150 (or whatever the LW average is). So an organization of “IQ 130 or more” will turn out to be “IQ 130, and only rarely more”.
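The statistical point can be made concrete. Assuming the conventional model of IQ as normally distributed with mean 100 and SD 15 (a modeling convention, and real distributions deviate in the tails), the fraction of the population above a cutoff is:

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population scoring above `iq`.
    Uses the complementary error function for the normal tail."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

With these assumptions, about 2.3% of people are above IQ 130 but only about 0.04% are above IQ 150, so the first group outnumbers the second roughly fifty to one.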
I’d say that the less ambitious members are more visible in Mensa, because they don’t have alternatives. For example, one member of our local Mensa got currently into parliament. That seems ambitious enough to me. But he doesn’t spend nearly as much time in Mensa as the others.
Again, I’d blame this on visibility. When a person with strong intellectual interests comes to a Mensa meetup, they are likely to be alone with that one specific interest. So they end up talking about something else, just to be able to join the group. Unfortunately, instead of educating each other, this results in the lowest common denominator. Seems like Mensa would benefit from having less debating and more lectures.
I think that you may overestimate the ability of people born with high IQ to find their place in society. There was research done by Terman a century ago, which I am too lazy to google now, essentially concluding that the fate of high-IQ people often depends on whether they come from an environment they can fit in, or whether they are alone in their environment.
The people coming from high-IQ families or studying at high-IQ school usually behave like you describe. They follow the strategies of smarter people around them, and those strategies work for them too.
Then you have high-IQ people who happen to live in an environment where the high IQ is rare, where they have no models to copy, and where people around them have really wrong ideas about how high IQ is supposed to work. Those people are fucked, unless they have a lot of luck. Helping these people should in my opinion be the #1 priority of Mensa, because that is where Mensa can do most good.
I think opinions on this subject are very much coloured by the personal experience. Generalisation is risky.
That’s my whole fucking point that some high-IQ people find it easy, and some high-IQ people find it difficult… and according to Terman’s research it depends a lot on the environment where they grew up.
For example, if your parents are quantum physicists, and your friends are quantum physicists, and you have high IQ, it is usually not a problem to make a career as a quantum physicist, because all you need is to copy what others do.
On the other hand, if you happen to be a high-IQ child born somewhere in a ghetto, and all you know about yourself is that you are “weird” even within that ghetto (and no one ever suspects that the reason for the weirdness may be a random mutation that gave you IQ 150), and you even have problems in school because you are not that good in the majority language, and the teachers give you bad grades anyway because they are racist… such people usually don’t create and test the hypothesis “oh, it’s probably because I am actually a genius; I should start studying quantum physics”.
My own impression is that when people talk about intelligence, there is usually a lot of middle/upper-class snobbery. They mistake education for intelligence, or more precisely, various certificates and signals of education. A rich child may be borderline retarded, but will study at an expensive university and will think of themselves as a genius. A child from a working-class family may not study at university (because their parents discourage them, because they don’t see the point), and may never realize they are actually highly intelligent, simply because their culture does not support this hypothesis.
Sure, but that looks like a fully-general argument, true for pretty much any subset of the population.
I think you’re underestimating the advantages of general intelligence. But in any case, here are two propositions:
High IQ does not guarantee life success and, in particular, social success.
People with high IQ find it easier to achieve life success and, in particular, social success than people with low IQ
They seem to be non-controversial to me. You’re pushing the first one, but do you disagree with the second one?
Depends on the environment (I feel like I keep repeating myself).
High IQ gives you a boost to your skills, including social skills.
Depending on the environment, high IQ can also make you less compatible with everyone around you, thus making you play the social game on a higher level. Instead of “be social with people like you” that most people play, you play “be social with people unlike you”. Maybe you spend the formative years of your life without having an opportunity to significantly interact with people like you.
If the first point applies to you, and the second one does not, great for you!
If the first point applies to you, and the second one does too… now that’s a question of whether the boost in skills is enough to overcome the challenge of not getting the opportunity to play social games in the easy mode during your formative years.
To make it easier to imagine, consider this scenario:
You have two children with IQ 100. You let one child grow up among other children with IQ 100. You let the other child grow up exclusively among children with IQ 50 during the first 18 years of their life. Then the second child is allowed to freely choose their own environment.
Would you expect both of these children to have the same social skills, just because they both have the same IQ?
(In my analogy, the meaningful task for the “Mensa” of this alternative world would be to find the children of the second type and connect them with other people with IQ 100.)
Yes, because I can’t see the point of this repetition. In the trivial sense, everything depends on the environment. In the context of this discussion, my point is that IQ is a strong factor that will be able to overcome some (but not all) environmental barriers.
Yes, sure, so? People “unlike you” are the majority of the society, so if you learned to play “be social with people unlike you” this is actually the right skill to have. I am assuming lack of other problems like autism.
You’re getting it in reverse. The social games should have been easier because you’re smarter than people around you. And again, the skills of social games with non-smart people are what you need.
I think you’re confusing self-perception issues (confidence, Maslowian self-actualisation, etc.) and nerdiness with getting ahead in life. People who are highly successful are often abnormally smart, even if they aren’t nerds. Not all smart people are nerds.
Let’s change it a bit. You have two children with IQ 150. One goes to a special boarding school for kids with the same IQ of 150. The other one goes to a normal school with the normal kids of normal (100) IQ. After they both get out of school, which one will be better positioned to deal with real life and real society?
I think that “spending most of your youth mostly among people who have IQ 50 points less than you” is a very specific problem. Which happens to some high-IQ people. And doesn’t happen to some others.
Humans are a social species. We learn the culture. High IQ is not magic. People learn a lot by copying people around them. You cannot effectively learn to deal with everyday problems by e.g. reading a book about Feynman.
Doesn’t make the task of learning to interact with them easier. The usual scenario for average people is: 1. learn to interact with your parents; 2. learn to interact with your peers; 3. learn to interact with weird people. Skipping step 2 makes step 3 harder, because we cannot use the natural “what would I do in their situation” heuristic.
I assume these are two situations causing the same problem for two different reasons. Simply said, you can have problems understanding other people either because your detection mechanism is broken, or because they think and behave differently from how you would think and behave in the same situation.
Of course, for some people it can be both.
As long as you don’t desire to do things or discuss topics that are beyond their reach. Like, never.
Need for what? Winning a pissing contest? Or feeling like a member of the tribe?
As if I ever denied that. I am talking about “P(successful | smart)”, you reply with “P(smart | successful)”.
Many people in “real life and real society” actually live in a bubble. When you e.g. study computer science, and then work as a programmer, you are usually not surrounded by people with IQ 100 most of the day. People even often choose life partners with similar IQ. For some reason this is considered natural for adults, but a horrible heresy when talking about children.
Assuming they both get into the same university, etc., I would bet on the child from the boarding school. But of course other factors can change that; for example if the child from the boarding school chooses a university where the average IQ is 100, and also loses all contacts with their former classmates, that can have a bad impact.
“Smart” and “nerd” are different things, overlapping but not the same. Note that it’s not smart to try to deal with everyday problems by reading books about Feynman.
Sure, but you’re stuck with them anyway. It’s not like you have an option to move to some version of Galt’s Gulch where only the IQ elite are admitted.
For life. To be able to find friends, dates, jobs, business opportunities, allies, enemies. To be able to deal with whatever shit life throws at you. Yes, you may not be able to get the warm feeling of belonging, but no one promised you that. Go read Ecclesiastes: “For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow.”
Getting tired of this thread, but I randomly found this link:
Um, this is getting complicated :-)
First, terminology. By “conformity” I mean matching the social expectations. If your tribe expects people like you to dance naked under a full moon, you dance naked under a full moon. If your tribe expects people like you to catch and burn witches, you catch and burn witches. That’s conformity.
As an aside, conformity is NOT “having social skills” and non-conformity is NOT “lacking social skills”. These are rather different things.
Second, success. Speaking crudely, there are people who Get Shit Done and there are people who don’t. People who don’t, as you point out, talk and critique and nitpick and delay and form committees and find reasons why that’s impossible, etc. etc. (see Mensa).
Basically whether you successfully Get Shit Done depends on your executive function and on the incentives. For a given person the executive function is fairly stable and the incentives, of course, vary a lot in each situation. Notably, people who Get Shit Done are often more non-conformist because they can afford to. They are valuable to their group/organisation/tribe and that gives them the freedom to ignore (within limits) the social expectations. Those who are not as capable are more conformist because they are less valuable, more fungible, and so more in need of maintaining high social approval of themselves.
Third, cooperation. I don’t think that cooperation is a function of conformity, outside of extreme cases. Or, rather, let me put it this way: people who Get Shit Done tend to cooperate with those who can provide what they need. They don’t care much about social expectations because they care about Getting Shit Done and the conformity is rather peripheral to that. On the other hand people who play social power games do care about conformity because conformity is a major dimension in these social power games.
I suggest that ability to Get Shit Done often depends on getting other people to help do your shit, and that depends on the attitude of those other people, and in some social contexts that attitude will be more positive if you are more willing to conform.
If you are effective enough in other ways, then indeed that may outweigh the effects of nonconformity. But the effects will still be there, and other things being equal collaborative Getting Shit Done will work better for people who aren’t too aggressively nonconforming.
To put it differently, in terms of your last paragraph: sometimes Getting Shit Done requires playing social power games, and sometimes success in social power games requires conformity.
Yeah, it depends on the specific thing you want to do. Sometimes one person is enough for the whole project. Sometimes one person is enough for making the most simple version of the project; and if others are impressed and joined, they can make the project greater, but their participation is optional. But sometimes you need a team to create even the smallest version of the project (either because the project is too large in scope, or because it requires many different specializations).
Not being able to cooperate limits a person to one-person projects; and only those where they have all the necessary skills.
Not recognizing what the rules are, or not understanding why they exist, could be more easily confused with non-conformity, while recognizing the rules but failing to apply them looks more plainly like incompetence. Volitional non-conformity requires understanding the rules and the ability to apply them, and it’s not entirely obvious what constitutes understanding in this highly subjective matter. The option of opting in or out of acquiring the skills needed for conformity complicates things further.
I think most people need non-conformity coaches.
Could you expand on this? Is this just an idea you generally hold to be true or are there specific areas you think people should conform far less in (most especially the LW crowd)?
It is an idea I generally hold true.
“When people are free to do as they please, they usually imitate each other”—Eric Hoffer
A herd of Dollys is a sorry thing to behold.
Of course this is all IMHO (I like weirdness and dislike vanilla).
I have a similar aesthetic. What areas of weirdness are present in the people you like the most?
I don’t know if there’s a general answer to that. It depends, mostly on the person in question. The same thing in one person might be exciting and in another person—creepy.
As to areas, I’m usually more interested in the insides of someone’s head than in the outsides of his/her skin.
He likes to use this as a catchphrase, but the actual content of his statements is more like: “Here’s how status most likely affects X, and here’s some puzzling facts about X that are easily explained once we involve status.” Of course the importance of status dynamics may vary quite a bit depending on what X is and perhaps other things, so your question doesn’t really have a single answer.
It’s about mental models. It says that the standard mental model isn’t good at explaining reality. On the other hand the status model is better at explaining reality and therefore a better model to use. It’s not the claim that the status model is perfect at predicting. Models don’t need to be perfect at predicting to be useful.
In general Hanson tries to focus on expressing concepts clearly and arguing for them, instead of making them complex by introducing all sorts of caveats.
I think this is closest to what I thought Hanson was trying to say, and it was close to what I was hoping people were interpreting his writing as saying. It wasn’t clear from some comments I’ve read how other people were interpreting his statements, so I thought it was worth checking.
I think it reads better if you say “about signalling” rather than “about status”. The relationship to actual status evaluations is murky and complicated. The motivations to affiliate with high-status groups and ideas are much more straightforward.
Personally, I tend to parse them as “Look how cynical and worldly-wise I am, how able I am to see through people’s pretences to their ugly true motivations. Aren’t I clever and edgy?”.
I am aware that this is not very charitable of me.
In more charitable mood, I interpret these statements roughly as Lumifer does: Hanson is making claims about why (deep enough down) people do what they do.
I don’t think that’s the best non charitable version.
More accurate: “Hanson profits from memes that are associated with him spreading. That’s his job as a public intellectual. Therefore he does everything to make them spread and win. He optimizes for winning.”
That’s exactly how Hanson sounds to me, and why I tend to read his blog less often now.
Overcoming Bias is not about overcoming bias.
Both of those could be true: if “deep down” people have motivations like that, it may be that deep down Hanson has that kind of motivation for making such observations.
This is an example of why I’m curious about everyone else’s parsing. I bet Robin Hanson does talk about status in the pursuit of status, however I bet he also enjoys going around examining social phenomena in terms of status, and that he is quite often on to something. These aren’t mutually exclusive. People may have an original reason for doing something, but they may have multiple reasons that develop over time, and their most strongly motivating reason can change.
Amazon famously found that 100ms faster page generation increased sales by 1%. It seems like this mechanism should also work in the other direction: people who want to use Facebook less often might benefit from increasing Facebook’s page-loading time by 1 second.
Is there any existing program that can do this? The program could also be configured to automatically raise the lag if you spent more than 15 minutes on Facebook.
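I don’t know of a ready-made program either, but the escalation rule described above (a base lag, rising after 15 minutes of use) is easy to sketch. Everything here is made up for illustration — the function name, the thresholds, the escalation rate — and the delay itself would need to live in a browser extension or local proxy; this just shows the decision logic.

```python
# Sketch of the lag-escalation logic: always add one second, then
# escalate once the user has spent more than 15 minutes on the site.
# Thresholds and names are illustrative assumptions, not a real tool.

def added_lag_seconds(minutes_today: float) -> float:
    """Return how many seconds of artificial delay to add to each page load."""
    base_lag = 1.0  # the flat 1-second penalty suggested above
    if minutes_today <= 15:
        return base_lag
    # Past 15 minutes, add one extra second per additional 5 minutes of use.
    extra = (minutes_today - 15) / 5
    return base_lag + extra
```

A browser extension would call something like this with the day’s accumulated usage and sleep for the returned duration before rendering.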
There’s a Chrome extension called Crackbook that does exactly this.
My Firefox often slows down horribly when opening Facebook, and I don’t see myself procrastinating less. It makes me angry when it happens, but I stay and wait until the page is loaded. (Or go drink some water while it is loading, etc.)
Could it perhaps be a difference between random slowing down and predictable slowing down?
Do you have an accurate view of how much you are using facebook? Do you have hard data, and know how it changes?
Has anyone developed a quantitative theory of personal finance in the following sense?
Most money advice falls back on rules of thumb; I’m looking for an approach that’s made-up numbers all the way down.
The main idea would be to express utility as a function of financial quantities; an obvious candidate would be utility per unit time equals the log of money spent per unit time, making sure to count things like imputed rent on owned property as spending. Once you have that, there’s an exact answer to the optimal risk/reward tradeoff in investments, how much to save/borrow, etc.
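As a minimal sketch of how a log utility function pins down the risk/reward tradeoff exactly (the bet parameters `p` and `b` below are made-up numbers, not anyone’s actual method): for a bet paying `b`-to-1 with win probability `p`, maximizing expected log wealth recovers the well-known Kelly fraction in closed form, and a brute-force search agrees with it.

```python
# For a bet that pays b-to-1 with win probability p, find the fraction
# of wealth that maximizes expected log wealth, and compare it with the
# analytic Kelly fraction p - (1-p)/b. Illustrative numbers only.
import math

def expected_log_growth(f: float, p: float, b: float) -> float:
    """Expected log of wealth after staking fraction f of it at odds b."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def best_fraction(p: float, b: float, steps: int = 10000) -> float:
    """Grid search over stake fractions in [0, 1)."""
    return max((i / steps for i in range(steps)),
               key=lambda f: expected_log_growth(f, p, b))

p, b = 0.6, 1.0          # 60% chance to double the stake, else lose it
kelly = p - (1 - p) / b  # analytic optimum: 0.2
print(best_fraction(p, b))  # ≈ 0.2, matching the Kelly fraction
```

The same numeric machinery extends to savings-rate or portfolio questions once utility is written as a function of financial quantities, which is exactly the “made-up numbers all the way down” approach.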
You may want to rephrase that :-)
No, I don’t think so. For example, let’s say your utility = log(wealth). That’s a monotonic transformation, so if you want to maximize utility you just maximize your wealth. That doesn’t answer the question of what the appropriate risk/reward trade-off is, because you haven’t even started talking about risk yet. And if you just want to maximize expected wealth you are open to being Pascal-mugged.
Maximizing expected log(wealth) is very different from maximizing expected wealth. A log utility function is much more risk averse.
The Wikipedia article on VNM Utility Theory explains the relationship between the utility function and risk aversion (in the Consequences section).
Yes, you are right. However even a log utility function does not let you escape a Pascal mugging (you just need bigger numbers).
Risk aversion (in reality) does not boil down to a concave utility function. So the OP’s claim that a well-defined utility function will fully determine the optimal risk-reward tradeoff is still false.
See, e.g., this paper: there are theorems saying that if your utility function is concave enough to make you turn down a bet where you win $110 or lose $100 with equal probability, it must also be concave enough to make you turn down a bet where you win a trillion dollars or lose $1k with equal probability...
...at any wealth level, which should be surprising. If Bill Gates thinks that gamble is an expected utility loss, we predict he’ll be opposed to basically any gamble, but why would we believe the premise that Bill Gates thinks that gamble is an expected utility loss?
The “concave utility function” theory of risk aversion predicts that, all else being equal, richer people will be less risk-averse about any given sum of money. And I would in fact expect Bill Gates to accept positive-dollar-expectation bets of size ~$100 without a moment’s thought.
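To put some made-up numbers on this (log utility is just one concrete concave choice, and the wealth figures are assumptions for illustration): the same 50/50 win-$110/lose-$100 bet is an expected-utility loss at a $1k bankroll but a gain at Gates-scale wealth, because log utility is nearly linear when the stakes are tiny relative to total wealth.

```python
# Expected change in log utility from a 50/50 bet of +$110 / -$100,
# evaluated at two (made-up) wealth levels. Concave utility predicts
# rejection when poor and indifference-to-acceptance when very rich.
import math

def eu_change(wealth: float, win: float = 110.0, lose: float = 100.0) -> float:
    """Expected log-utility change from a 50/50 win/lose gamble."""
    return (0.5 * math.log(wealth + win)
            + 0.5 * math.log(wealth - lose)
            - math.log(wealth))

print(eu_change(1_000) > 0)             # False: the $1k bankroll rejects
print(eu_change(50_000_000_000) > 0)    # True: the billionaire accepts
```

This is exactly the “all else being equal, richer people are less risk-averse about a given sum” prediction, and also why the Rabin-style calibration above is surprising: it needs the small bet to be rejected at every wealth level, which log utility does not do.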
Why would maximizing expectation on a concave utility function lead to losing your shirt? It seems like any course of action that predictably leads to losing your shirt is self-evidently not maximizing expected concave-utility-function, unless it’s a Pascal mugging type scenario. I don’t think there are credible Pascal muggings in the world of personal finance, and if there are I’d be willing to accept an ad hoc axiom that we limit our theory to more conventional investments.
Now, I’ll admit it’s possible we should have a loss averse utility function, but we can do that without abandoning the mathematical approach—just add a time derivative of wealth, or something.
Because you’re ignoring risk.
The expectation is a central measure of a distribution. If that’s the only thing you look at, you have no idea about the width of your distribution. How long and thick is that left tail which is curling around preparing to bite you in the ass? Um, you don’t know.
Is that a critique of expected utility maximization in general, or are you saying that concave functions of wealth aren’t risk-averse enough?
It is an observation that expected utility maximization does not include risk management for free, just because it’s “utility”.
I’m still not sure which line you’re taking on this: A) Disputing the VNM formulation of rational behavior that a rational agent should maximize expected utility (https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem), or B) Disputing that we can write down an approximate utility function accurate enough to sufficiently capture our risk preferences.
Both.
VNM doesn’t offer any “formulation of rational behavior”. VNM says that a function with a particular set of properties must exist and relies on assumptions that do not necessarily hold in real life.
I also don’t think that a utility function that can condense the risk preferences into a single scalar is likely to be accurate enough for practical purposes.
Can you by chance pin down your disagreement to a particular axiom? You’re modus tollensing where I expected you would modus ponens.
You are looking at the wrong meta level.
When I say “VNM doesn’t offer any formulation of rational behavior” I’m not disagreeing with any particular axiom. It’s like I’m saying that an orange is not an apple and you respond by asking me what kind of apples I dislike.
Which (possibly all) of the VNM axioms do you think are not appropriate as part of a formulation of rational behavior?
I think the Peano natural numbers are a reasonable model for the number of steins I own (with the possible exception that if my steins fill up the universe, a successor number of steins might not exist). But I don’t think the Peano axioms are a good model for how much beer I drink. It is not the case that all quantities of beer can be expressed as successors to 0 beer, so beer does not follow the axiom of induction.
I think ZFC axioms are a poor model of impressionist paintings. For example, it is not the case that for every impressionist paintings x and y, there exists an impressionist painting that contains both x and y. Therefore impressionist paintings violate the axiom of pairing.
I don’t think that rational behaviour as understood on LW (basically, instrumental rationality) has anything to do with VNM axioms. In particular, I do not think that the VNM model is an adequate model of human decision-making once you go beyond toy examples.
By “risk aversion in realty,” do you mean “the descriptive thing that people actually do when it comes to risk,” or “the prescriptive thing that people should do when it comes to risk”?
Because, sure, it looks like most people do some sort of prospect theory reasoning where they don’t use probabilities correctly / have a strong reliance on cached answers and avoiding planning. (This is one of the reasons to think loss aversion is helpful, for example; if you get a windfall you don’t need to replan things, but if you suffer a loss you may have to replan things.) But it’s not at all obvious that they’re making the right call.
Both. I primarily have in mind risk management in finance where what people actually do is much more than compensate for the curve of the utility function; and where people should do what they are doing or they will lose their shirts pretty quickly.
The OP is interested in the prescriptive mode so the simple answer is that dealing with the risk-return tradeoff solely on the basis of the concavity of the utility function is inadequate (see finance which has to and does deal with risk all day long).
Inspired by the recent post of Kaj Sotala, I ask myself: is any kind of meta-goal a convergent instrumental goal? If so, isn’t any kind of meta-meta-goal convergent instrumental for the meta-goal?
Does this create a Zeno effect of AI motivation, trying to achieve more and more meta goals?
Comment spammer and some spam.
Banned.
Thanks!
You can try soliciting people’s relative weighting of items on the PERMA model to learn lots more about them.
Large fluctuations in a person’s interpersonal behavior across situations and over time are thought to be associated with poor personal and interpersonal outcomes.
As a new, as-yet unindoctrinated researcher, I feel it is my duty to further investigate my lecturer’s recent comments, which suggest there are publishing biases against plainly reporting the results of her tobacco research when they go against mainstream opinion.
I tried to publish a joke nomination for a university leadership position with a meme, but the emoticons didn’t print properly so now it looks odd
Some of my room mate’s wisdom:
Life is not a linear journey. You may be going around in circles. Don’t try to reach a destination.
1. Videos are higher fidelity and more persuasive than text. 2. Universities are a control mechanism.
For both partners, empathy and kindness were the most consistent predictors of relationship satisfaction... For couples in which the male partner has a neurotic personality, the non-neurotic female must avoid criticism, convey empathy and resist becoming emotionally flooded.
She doesn’t think I’m long-term material, but that’s just an assumption. All her previous assumptions about good relationship material, her intuitions, have been wrong! Can’t wait to tell her this!
I’m just your problem—Marceline
RSD’s ‘power of extreme purpose’ motivational video. I love the music!
I find meaning in my relationships. I don’t know if I find ‘meaning’ elsewhere. How about you?
I suffer from grandiosity... any tips to overcome it other than CBT?
I find that the biological/scientific plausibility of a thing at a lower level of abstraction makes people believe it is more likely... disproportionately to anything else. I think this is a failure of appropriate reductionism and hypothesis generation.
Wikipedia
What could they mean by ‘rationalize’ here?
Today I pitched my research to a classroom with the other people in the class. We had to convince a hypothetical Bill Gates to give us funding. I tried to structure my pitch around scale, neglectedness and tractability, but I felt less compelling than others who pitched using regular persuasive devices.
You’re on the aisle seat on a tram. Should you move over, and promote/normalise a permissive culture, or stay in place to promote a permission-seeking culture?
I find that four hours of happy personality (I call him Aaron) in a row flips me from my unhappy personality (I call him Carlos) to my serious/rage/serenity personality (I call him Eric). This should make more sense once I give a backstory to my multiple personalities. I’m still trying to figure it out myself. I feel the high expectations of a family friend’s family (the ‘c’ family) helped destabilize my self-esteem, and then it all went downhill from there. I just want to live the simple life of a barista!
No rhythm assisted poetry (rap) this week. Chris brown—sex wins the award for best dirty talk. And, Andrew Do’s music.
Interpreting Linkin Park’s Numb and other ‘that generation’ LP songs as if it is self-referential (‘you’ relates to yourself, as well as ‘me’ relating to yourself) it takes on a whole new flavour. Same with ‘Hell of a night’ by Schoolboy Q. In contrast: anything by Lil b. Not really insightful, but check out the: ‘Feeling Good (Instrumental Cover) Muse Album: cover by night86pl’ on youtube.
I thought up a joke:
Q) What’s the difference between a sociopath and psychopath?
A) A sociopath takes a moment to explain the difference
I did a huge write-up about evidence-based relationships that somehow disappeared. I’m so disappointed!
I had a dream about my psychologist last night... she was telling me off and stuff... and looked different. I was hanging at some kind of hide-out house that was run by a secret student club... and the night before I had a dream about something called ‘mr. hippo’... and a few days before I dreamed I had x with my mom, haha. She says I can’t take care of her because I don’t do much housework, therefore I’m not a good long-term partner, which she says she wants.
Kimspirations. This is some of the funniest stuff I’ve seen in months. Lots of repeat viewing value.
How convincing a pitch happens to be depends a lot on the standards of the person who hears the pitch.
Secret to (real world) pitching success. Name the profits; Name the cost (what you are asking for); then name the “how”, who, why etc.
Top down, investors want to know $ first.
YMMV
I like this template. I’d like to see a guided example to help people make pitches. All it would take is one person to come with an example. That person could be me, and they could be doing it right now in order to help the community and clarify this example.
I’ve been thinking of how to build on the #LessWrongMoreNice idea while taking some criticisms into account, and thought about something that seems to only have upsides—namely, creating a Gratitude thread. It would make the overall tone of LW more positive, without impeding the honest criticism. There’s a lot of research showing that expressing gratitude improves mental and physical health. Thoughts?
I appreciate your efforts, but I need to point out that I am strongly opposed to this sort of idea, and anything else that explicitly turns LW into group therapy.
Introducing niceness into discussions on LW is not the same as changing the topic of discussions away from where it belongs, which is epistemic and instrumental rationality.
The discussion of how to use expressing gratitude to improve mental health, and how effective it is for LW folk, definitely belongs to LW. However creating an object-level gratitude thread feels like a step in the wrong direction (or at least, it seems sensible to keep it as a sub-thread in the open thread).
#LessWrongMoreNice
Alright, fair enough. Chalk this up as a failed experiment, no replication necessary :-)
do it.
Posted
Oy, the gratitude thread is deteriorating into discussions of Intentional Insights. Perhaps I shouldn’t be the one to post it next time, as some people have strong negative reactions to the outreach I do.
My main goal is not to get karma/brownie points for the success of this project, but to have the project succeed and help make LW a nicer place.
So what are thoughts about whether I should be the one to post it next time?
Definitely not. It looked seriously awkward to me.
What are the special rules involved that are mentioned in the thread? Are they the same as the Happiness Thread?
Pretty much