(Just acknowledging that my response is kinda disorganized. Take it or leave it, feel free to ask followups.)
Most easy interventions work on a generational scale. There are pretty easy big wins like eliminating lead poisoning (and, IDK, feeding everyone, basic medicine, internet access, less cannibalistic schooling) which we should absolutely do, regardless of any X-risk concerns. But for X-risk concerns, generational is pretty slow.
This is both in terms of increasing general intelligence, and also in terms of specific capabilities. Even if you bop an adult on the head and make zer +2SDs smarter, ze still would have to spend a bunch of time and effort to train up on some new field that’s needed for the next approach to further increasing intelligence. That’s not a generational scale exactly, maybe more like 10 years, but still.
We’re leaking survival probability mass to an AI intelligence explosion year by year. I think we have something like 0-2 or 0-3 generations before dying to AGI.
To be clear, I’m assuming that when you say “we don’t need 7SDs”, you mean “we don’t need to find an approach that could give 7SDs”. (Though to be clear, I agree with that in a literal sense, because you can start with someone who’s already +3SDs or whatever.) A problem with this is that approaches don’t necessarily stack or scale just because they can give a +2SD boost to a random person. If you take a starving person and feed zer well, ze’ll be able to think better, for sure. Maybe even +2SDs? I really don’t know, but it sounds plausible. But you can’t then feed zer much more and get another +2SDs—maybe you can get like +0.5SD with some clever fine-tuning or something. And you can’t then also get a big boost from good sleep, because you probably already increased zer sleep quality by a lot; you’d be double counting. Most (though not all!) people in, say, the US, probably can’t get very meaningfully less lead poisoned.
Further, these interventions would I think generally tend to bring people up to some fixed “healthy Gaussian distribution”, rather than shift the whole healthy distribution’s mean upward. In other words, the easy interventions that move the global average are more like “make things look like the developed world”. Again, that’s obviously good to do morally and practically, but in terms of X-risk specifically, it doesn’t help that much. Drawing 3x as many samples from the same distribution (the healthy distribution) barely increases the maximum intelligence. Much more important (for X-risk) is to shift the distribution you’re drawing from. Stronger interventions that aren’t generational, such as prosthetic connectivity or adult brain gene editing, would tend to come with much more personal risk, so they’re not so scalable—and I don’t think in terms of trying to get vast numbers of people to do something, but rather just in terms of making it possible for people to do something if they really want to.
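The “3x samples barely moves the max” claim can be sanity-checked with the standard asymptotic approximation for Gaussian order statistics: the expected maximum of n iid N(0,1) draws grows like sqrt(2 ln n). A minimal sketch (the population figures are illustrative, not load-bearing):

```python
import math

# Asymptotic approximation: E[max of n iid N(0,1)] ~ sqrt(2 ln n).
# Good enough for order-of-magnitude comparisons like this one.
def approx_expected_max_sd(n: int) -> float:
    return math.sqrt(2 * math.log(n))

# Illustrative: today's healthy draws vs. 3x as many healthy draws.
base = approx_expected_max_sd(8_000_000_000)      # ~6.75 SD
tripled = approx_expected_max_sd(24_000_000_000)  # ~6.91 SD
print(round(tripled - base, 2))                   # ~0.16 SD gained
```

Tripling the number of samples buys roughly a sixth of a standard deviation at the top, which is why shifting the distribution matters so much more than sampling it harder.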
So what this implies is that either:
- your approach can scale up (maybe with more clever technology, but still riding on the same basic principle), or
- you’re so capable that you can keep coming up with different effective, feasible approaches that stack.
So I think it matters to look for approaches that can scale up to large intelligence gains.
To put things in perspective, there are lots of people who say that {nootropics, note taking, meditation, TMS, blood flow optimization, …} give them a +1SD boost or something on thinking ability. And yet, if they’re so smart, why ain’t they exploding?
All that said, I take your point. It does make it seem slightly more appealing to work on e.g. prosthetic connectivity, because that’s a non-generational intervention and could plausibly be scaled up by putting more effort into it, given an initial boost.
I think brain editing is maybe somewhat less scalable, though I’m not confident (plausibly it’s more scalable; it might depend for example on the floor of necessary damage from editing rounds, and might depend on a ceiling of how much you can get given that you’ve passed developmental windows). Support for thinking (i.e. mental / social / computer tech) seems like it ought to be scalable, but again, why ain’t you exploding? (Or in other words, we already see the sort of explosion that gets you; improving on it would take some major uncommon insights, or an alternative approach.) Massive neural transplantation might be scalable, but is very icky. If master regulator signaling molecules worked shockingly well, my wild guess is that they would be a little scalable (by finding more of them), but not much (there probably isn’t that much room to overclock neurons? IDK); they’d be somewhat all-or-nothing, I guess?
You’re correct that the average IQ could be increased in various ways, and that increasing the minimum IQ of the population wouldn’t help us here. I was imagining shifting the entire normal distribution two SDs to the right, so that those who are already +4-5SDs would become +6-7SDs.
As far as I’m concerned, the progress of humanity stands on the shoulders of giants, and the bottom 99.999% aren’t making much of a difference.
The threshold for recursive self-improvement in humans, if one exists, is quite high. Perhaps if somebody like von Neumann lived today it would be possible. By the way, most of the people who look into nootropics, meditation, and other such things do so because they’re not functional, so in a way it’s a bit like asking “Why are there so many sick people in hospitals if it’s a place for recovery?” Though you could make the argument that geniuses would be doing these things if they worked.
My score on IQ tests has increased about 15 points since I was 18, but it’s hard to say if I succeeded in increasing my intelligence or if it’s just a result of improving my mental health and actually putting a bit of effort into my life. I still think that very high levels of concentration and effort can force the brain to reconstruct itself, but that this process is so unpleasant that people stop doing it once they’re good enough. (For instance, most people can’t read all that fast, despite reading texts for thousands of hours. But if they spend just a few weeks practicing, they can improve their reading speed by a lot, which kind of shows how improvement stops once you stop applying pressure.)
By the way, I don’t know much about neurons. It could be that +4-5SD people are much harder to improve, since the ratio of better states to worse states is much lower.
I was imagining shifting the entire normal distribution two SDs to the right,
Right, but those interventions are harder (shifting the right tail further right is especially hard).
Also, shifting the distribution is just way different numerically from being able to make anyone who wants it +7SD. If you shift +1SD, you go from essentially 0 people at +7SD to ~8 people.
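Those tail counts can be checked directly from the normal survival function (a sketch; the 8-billion population figure is the obvious assumption):

```python
import math

def tail_count(threshold_sd: float, population: float = 8e9) -> float:
    # Expected number of people above `threshold_sd` on a standard
    # normal distribution, out of `population` independent draws.
    p = 0.5 * math.erfc(threshold_sd / math.sqrt(2))
    return population * p

print(tail_count(7))  # ~0.01: +7SD is essentially unpopulated today
print(tail_count(6))  # ~7.9: after a +1SD shift, +7SD maps to the old +6SD tail
```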
(And note that the shift is, in some ways, more unequal compared to “anyone who wants, for the price of a new car, can reach the effective ceiling”.)
A right shift by 2SDs would make people like Hawking, Einstein, Tesla, etc. about 100 times more common, and make it likely that a few people 1-2SDs above them would appear soon. I think this is sufficient, but I don’t know enough about human intelligence to guarantee it.
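As a sanity check on the “about 100 times more common” figure: for a normal distribution, the multiplier from a +2SD shift depends strongly on where the genius threshold sits (and where figures like these sit on the scale is itself a guess). A sketch:

```python
import math

def tail(t: float) -> float:
    # P(Z > t) for Z ~ N(0,1).
    return 0.5 * math.erfc(t / math.sqrt(2))

def rarity_multiplier(threshold_sd: float, shift: float = 2.0) -> float:
    # How many times more common the tail above `threshold_sd` becomes
    # after shifting the whole distribution right by `shift` SDs.
    return tail(threshold_sd - shift) / tail(threshold_sd)

for t in (3, 4, 5):
    print(t, round(rarity_multiplier(t)))  # grows fast with the threshold
```

The multiplier is roughly 118 at a +3SD threshold, ~718 at +4SD, and ~4700 at +5SD, so the “100x” figure fits a +3SD threshold and understates the effect for anything rarer.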
I think it depends on how the SD is increased. If you “merely” create a 150-IQ person with a 20-item working memory, or with an 8SD processing speed, this may not be enough to understand the problem and to solve it. Of course, you can substitute with verbal intelligence, which I think a lot of mathematicians do. I can’t rotate 5D objects in my head, but I can write equations on paper which can rotate 5D objects and get the right answer. I think this is how mathematics is progressing past what we can intuitively understand. Of course, if your non-verbal intelligence can keep up, you’re much better off, since you can combine any insights from any area of life and get something new out of it.
Right, I agree with that.