There is intense censorship of some facts about human traits and biology. Of the variance in intelligence and economic productivity, the percent attributable to genetic factors is >0%. But almost nobody prestigious or semi-prestigious—nor anyone close to it—can speak of those facts without social shaming. You’d probably be shamed before you even got to the question of phenotypic causation—speaking as if the g factor exists would often suffice. (Even though the g factor is an unusually solid empirical finding; in fact I can hardly think of a more reliable one from the social sciences.)
But with all the high-functioning and prestigious people filtered out, the topic is then heavily influenced by people who have something wrong with them. Such as having an axe to grind with a racial group. Or people who like acting juvenile. Or a third group that’s a bit too autistic to easily relate to the socially accepted narratives. I’ll give you a hint: the first 2 groups rarely know enough to frame the question in a meaningful way, such as “variance attributable to genes”, and instead often ask “whether it’s genetic”, which is a meaningless framing.
The situation is like an epistemic drug prohibition: the empirical insights aren’t going anywhere, but nobody high-functioning or good can be the vendor. The remaining vendors include a disproportionate number of really awful people.
I should’ve first learned about the Wilson effect on IQ from a liberal professor. Instead I first heard it mentioned by some guy with an axe to grind with other groups. I should’ve been conditioned with prosocial memes that don’t pretend humans are exempt from the same forces that shape dogs and guppies. Instead I got memes predicting any gaps would trend toward 0 given better controls for environment—which hasn’t been the trend for many years: the measured magnitudes have stayed similar despite increasingly sophisticated controls, and many proposed interventions have failed to replicate. The epistemics of this whole situation are egregiously dysfunctional.
I haven’t read her book, but I know Kathryn Paige Harden is making an attempt. So hats off to her.
this connects to the social model of disability; too many people treat iq differences as evidence that people’s value differs, which is in fact a lot of the problem in the first place—the idea that intelligence is a person’s value as a soul. Intelligence does increase people’s ability to create externalized value, but everyone has a base, human value that is nearly independent of iq. we’ll eventually figure out how to calculate the moral value of a human, and I expect it to turn out to have something to do with how much memory they’ve collected, something to do with counterfactual selfhood behind a veil of ignorance, something to do with possible future life trajectories given appropriate tools. what we need is to end the entire concept of relative ability by making everyone maximally capable. As far as I’m concerned, anyone being less than the hard superintelligence form of themselves is an illness; the ai safety question fundamentally is the question of how to cure it without making it worse!
being less than the hard superintelligence form of themselves is an illness
I’m not sure abstracting away the path there is correct. Getting a fast-forwarded, ASI-assisted uplift instead of walking the path personally in some proper way might mean losing a lot of value. In that case being less capable than an ASI is childhood, not illness. But an aligned ASI would inform you if this is the case, so it’s not a practical concern.
I confess I don’t know what it means to talk about a person’s value as a soul. I am very much in that third group I mentioned.
On an end to relative ability: is this outcome something you give any significant probability to? And if there existed some convenient way to make long-term bets on such things, what sorts of bets would you be willing to make?
I’d make excessively huge bets that it’s not gonna happen so people can bet against me. It’s not gonna be easy, and we might not succeed; it’s possible inequality of access to fuel and living space will still be severe arbitrarily long after humanity are deep space extropians. But I think we can at the very least ensure that everyone is at the maximum capability per watt that they’d like to be. Give it 400 years before you give up on the idea.
I’d say a soul is a self-seeking shape of a body, ish. The agentic self-target an organism heals towards.