This connects to the social model of disability. Too many people take IQ differences as evidence that people's value differs, and that idea, that intelligence is a person's value as a soul, is in fact a lot of the problem in the first place. Intelligence does increase people's ability to create externalized value, but everyone has a base human value that is almost completely independent of IQ. We'll eventually figure out how to calculate the moral value of a human, and I expect it to turn out to have something to do with how much memory they've collected, something to do with counterfactual selfhood behind a veil of ignorance, and something to do with possible future life trajectories given appropriate tools. What we need is to end the entire concept of relative ability by making everyone maximally capable. As far as I'm concerned, anyone being less than the hard superintelligence form of themselves is an illness; the AI safety question is, fundamentally, the question of how to cure it without making it worse!
being less than the hard superintelligence form of themselves is an illness
I'm not sure abstracting away the path there is correct. Getting a fast-forwarded, ASI-assisted uplifting instead of walking the path personally in some proper way might mean losing a lot of value. In that case, being less capable than an ASI is childhood, not illness. But an aligned ASI would inform you if this were the case, so it's not a practical concern.
I confess I don’t know what it means to talk about a person’s value as a soul. I am very much in that third group I mentioned.
On an end to relative ability: is this outcome something you give any significant probability to? And if there existed some convenient way to make long-term bets on such things, what sorts of bets would you be willing to make?
I'd make excessively huge bets that it's not gonna happen, so that people can bet against me. It's not gonna be easy, and we might not succeed; it's possible that inequality of access to fuel and living space will still be severe arbitrarily long after humanity are deep-space extropians. But I think we can at the very least ensure that everyone is at the maximum capability per watt that they'd like to be. Give it 400 years before you give up on the idea.
I'd say a soul is a self-seeking shape of a body, ish. The agentic self-target that an organism heals towards.