EDIT: I think this comment was overly harsh; I’m leaving it below for reference. The harsh tone partly came from being slightly burnt out from feeling like many people in EA were viewing me as their potential Ender Wiggin, and internalizing it.[1]
The people who suggest schemes like the ones I’m criticizing are all great people who are genuinely trying to help, and likely are helping.
Sometimes being a child in the machine can be hard though, and while I think I was ~mature and emotionally robust enough to take the world on my shoulders, many others (including adults) aren’t.
An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
[...]
Genetic engineering, focused-training-from-a-young-age, or other extreme “talent development” setups.
Please stop being a fucking coward speculating on the internet about how child soldiers could solve your problems for you. Ender’s Game is fiction; it would not work in reality, and that isn’t even considering the negative effects on the kids. You aren’t smart enough for galaxy-brained plans like this to cause anything other than disaster.
In general, rationalists need to get over their fetish for innate intelligence and actually do something instead of making excuses all day. I’ve mingled with good alignment researchers; they aren’t supergeniuses, but they did actually try.
(This whole comment applies to Rationalists generally, not just the OP.)
I should clarify this mostly wasn’t stuff the Atlas program contributed to. Most of the damage was done by my personality + heroic responsibility in rat fiction + dark arts of rationality + the Death with Dignity post. Nor did Atlas staff do much to mitigate this; seeing myself as one of the best they could find was most of it, cementing the deep “no one will save you or those you love” feeling.
I… didn’t mention Ender’s Game or military-setups-for-children. I’m sorry for not making that clearer and will fix it in the main post. Also, I am trying to do something instead of solely complaining (I’ve written more object-level posts and applied for technical-research grants for alignment).
There’s also the other part that, actually, innate intelligence is real and important and should be acknowledged and (when possible) enhanced and extended, but also not used as a cudgel against others. I honestly think that most of the bad examples “in” the rationality community are on (unfortunately-)adjacent communities like TheMotte and sometimes HackerNews, not LessWrong/EA Forum proper.
Sorry, I was criticizing a pattern I see in the community rather than you specifically.
However, basically everyone I know who takes innate intelligence as “real and important” is dumber for it. It is very liable to mode collapse into fixed mindsets, and I’ve seen this (imo) happen a lot in the rat community.
(When trying to criticize a vibe / communicate a feeling, it’s more easily done with extreme language; serializing loses information. Sorry.)
To the extent that this is actually true, I suspect it comes down to underrating luck as a factor, which I could definitely see as a big problem, and to not understanding that general innate intelligence isn’t that widely distributed (such that even selecting pretty hard for it will at best get you an order of magnitude better than average, and that only for a supergenius and ridiculous outlier; real-life attempts get you maybe 2-3x the median human, and that’s being generous).
In essence, I think general innate intelligence is real and it matters, but compared to luck and other non-intelligence factors it’s essentially a drop in the ocean, and rationalists overrate it a lot.
I disagree quite a bit with the pattern of “there’s this true thing, but everyone around me is rounding it off to something dumb and bad, so I’m just gonna shout that the original thing is not-true, in hopes people will stop rounding-it-off”.
Like, it doesn’t even sound like you think the “real and important” part is false? Maybe you’d disagree, which would obviously be the crux there, but if this describes you, keep reading:
I don’t think it’s remotely intractable to, say, write a LessWrong post that actually convinces lots of the community to actually change their mind/extrapolation/rounding-off of an idea. Yudkowsky did it (as a knowledge popularizer) by decoupling “rationality” from “cold” and “naive”. Heck, part of my point was that SSC Scott has written multiple posts doing the exact thing for the “intelligence” topic at hand!
I get that there are people in the community, probably a lot, who are overly worried about their own IQ. So… we should have a norm of “just boringly send people links to posts about [the topic at hand] that we think are true”! And if someone wrote or dug up a good post about [why not to be racist/dickish/TheMotte about innate intelligence], we should link the right people to that, too.
In four words: “Just send people links.”
I agree with the meta-point that extreme language is sometimes necessary (the paradigmatic example imho being Chomsky’s “justified authority” example of a parent yelling at their kid to get out of the road, assuming they yell and/or swear during it); good on you for making that decision explicit here.
I upvoted this to get it out of the negative, but also marked it as unnecessarily combative. I think a lot of the vitriol is deserved by the situation as a whole but not OP in particular.
Vitriol isn’t useful. Most of what they were saying was obviously mindkilled bullshit (accusation of cowardice, “fetish”, “making excuses”). I encourage Ulisse to try to articulate their position again when they’re in less of a flaming asshole mood.
I wasn’t in a flaming asshole mood, it was a deliberate choice. I think being mean is necessary to accurately communicate vibes & feelings here, I could serialize stuff as “I’m feeling XYZ and think this makes people feel ABC” but this level of serialization won’t activate people’s mirror neurons & have them actually internalize anything.
Unsure if this worked, it definitely increased controversy & engagement but that wasn’t my goal. The goal was to shock one or two people out of bad patterns.
I think there’s probably something to the theory driving this, but there are two problems:
It seems half-baked, or half-operationalized. Like, “If I get them angry at my comment, then they’ll really feel the anger that [person] feels when hearing about IQ!”. No, that makes most people ignore you or dig in their heels. If I were using “mirror neurons, empathy, something...” to write a comment, it’d be like a POV story of being told “you’re inherently inferior!” for the 100th time today. It’d probably be about as memetically-fit, more helpful, and even more fun to write!
Related story, not as central: I used to, and still sometimes do, have a mental bias of “the angrier someone is while saying something, the more of The Truth it must have in it”. The object-level problems with that should be pretty obvious, but the meta-level problem is that different angry people still disagree with each other. I think there is a sort of person on LessWrong who might try steelmanning your view. But… you don’t give them much to go off of, not even linking to relevant posts against the idea that innate intelligence is real and important.
LessWrong as a whole is a place where we ought to have, IMHO, norms that make it okay to be honest here. You shouldn’t start a LessWrong comment by putting on your social-engineer hat and saying “Hmmm, what levers should I pull to get the sheep to feel me?”. And, as noted in (1), this precise example probably didn’t work, and shouldn’t be the kind of thing that works on LessWrong.
[Less central: In general, I think that paying attention to vibes is considerate and good for lots of circumstances, but that truth-seeking requires decoupling, and that LessWrong should at-its-core be about truth-seeking. If I changed my mind on this within about a week, I would probably change the latter belief, but not the former.]
I admire your honesty (plain intention-stating in these contexts is rare!), and hope this feedback helps you and/or others persuade better.
(I also have angrier vibes I could shout at you, but they’re pretty predictable given what I’m arguing for, and basically boil down to ”
To be fair here, part of the problem is more that innate intelligence does exist, but it follows a normal distribution, not a power-law distribution, so massive differences in innate intelligence can’t be the dominant factor for success.
IQ follows a normal distribution because we construct the scale that way: the scores are normalized by definition. Task performance tends to vary by large factors, resembling something closer to a log-normal or exponential distribution, suggesting intelligence is indeed heavy-tailed.
I’d love to see a top-level post laying this out; it seems like it’s been a crux in a few recent discussions.
It sure has come up frequently enough that I’ve been thinking about writing this post. I hope I’ll get around to it, but I would also greatly appreciate anyone else familiar with the literature here writing something.
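A minimal sketch of the “normal by construction” point, where the log-normal shape of raw performance is an arbitrary assumption purely for illustration: rank-based norming to mean 100 / SD 15 produces a normal-looking score scale regardless of how heavy-tailed the underlying variable is.

```python
# Sketch: draw a heavy-tailed "raw performance" variable, then apply the
# rank-based inverse-normal transform used when norming test scores
# (mean 100, SD 15). The normed scale comes out normal-looking even though
# the underlying variable is log-normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
performance = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

ranks = stats.rankdata(performance)
quantiles = (ranks - 0.5) / len(performance)
iq_like = 100 + 15 * stats.norm.ppf(quantiles)

print(f"raw performance: skew = {stats.skew(performance):5.2f}, "
      f"max/median = {performance.max() / np.median(performance):.0f}x")
print(f"normed score:    skew = {stats.skew(iq_like):5.2f}, "
      f"max = {iq_like.max():.0f}")
```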
A crux here is that I think there are reasons beyond defining it to be normal that the normal distribution prevails. The biggest is that I generally model the contributions to human intelligence as additive, not as an AND function, and in particular as independent: one gene for intelligence can do its work without requiring any other genes. Summing many small independent contributions is basically the construction of the normal distribution (the central limit theorem), which explains why it’s useful to model intelligence as normally distributed.
As for the result that task performance is heavy-tailed, another consistent story is that people mostly get lucky and then construct a post-hoc story about how their innate intelligence or sheer willpower made them successful. This matters, since I suspect it’s the most accurate story given the divergence between us being normal and the world being extreme.
A lot of genes have multiplicative effects instead of additive effects. E.g. vegetable size is surprisingly log-normally distributed, not normally distributed, so I don’t think you should have a huge prior on normal here. See also one of my favorite papers of all time “Log-Normal Distributions Across The Sciences”.
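A minimal simulation sketch of the additive-vs-multiplicative contrast (the gene count and effect sizes here are arbitrary assumptions for illustration): summing many small independent effects gives a roughly normal trait, while multiplying the same effects gives a roughly log-normal, heavy-tailed one.

```python
# Sketch: additive vs. multiplicative combination of many small, independent
# per-"gene" effects. The sum is approximately normal (central limit theorem);
# the product is approximately log-normal and noticeably right-skewed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_genes = 100_000, 200
effects = rng.uniform(0.9, 1.1, size=(n_people, n_genes))  # arbitrary small factors

additive = effects.sum(axis=1)
multiplicative = effects.prod(axis=1)

for name, trait in [("additive", additive), ("multiplicative", multiplicative)]:
    print(f"{name:>14}: skew = {stats.skew(trait):5.2f}, "
          f"max/median = {trait.max() / np.median(trait):.2f}x")
```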
In retrospect, I’ve come to agree more with this since we last debated: I now think genetic effects are log-normally distributed, and you were directionally correct here. (Though I still think there’s a significant chance that people mostly get lucky and then construct a post-hoc story about how their innate intelligence or sheer willpower made them successful, and this matters, because I do think the world in general is way more extreme than human genetics/traits.)
Thanks to @tailcalled for convincing me I was wrong here:
https://www.lesswrong.com/posts/yJEf2TpPJstfScSnt/ldsl-1-performance-optimization-as-a-metaphor-for-life#CSCLkNhzzc5hqYM3n