Thanks for writing this amazing overview!
Some comments:
I think different people might imagine quite different intelligence levels when they read "+7std thinkoompf".
E.g. I think that from around +6.3std the heavy tail gets even stronger, because such people can bootstrap themselves into extremely good mental software. (My rough guess is that Tsvi::+7std = me::+6.5std, though I'd guess many readers would need to correct in the other direction, i.e. they might imagine +7std as less impressive than Tsvi does.)
I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi's timelines), so the "repeatedly" criterion seems a bit off to me. (Though maybe my model of Tsvi is bad and me::Tsvi::+7std > Tsvi::+7std.) (Also, I would not expect them to want to build aligned superintelligence directly, but rather to find some way to transition civilization onto a path towards dath ilan.)
How many standard deviations you can add seems to me to depend extremely heavily on what level you're starting from.
E.g. adult augmentation giving 2-3std for the average person seems plausible, but for the few +6std people on earth it might just give +0.2std or +0.3std, which, to be clear, I think is incredibly worthwhile. (Again, I think small-std improvements starting from +6std make a much bigger difference than most people think.)
However, for mental software and computer software I think it's sort of the other way around: extremely smart individuals might find ways to significantly leverage their capability through those, even though the methods won't work for non-supergeniuses. (And it seems possible to me that those methods would then have AI capability externalities, i.e. be highly dual-use.)
I think that from around +6.3std the heavy tail gets even stronger, because such people can bootstrap themselves into extremely good mental software.
I agree something like this happens, I just don’t think it’s that strong of an effect.
I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi's timelines), so the "repeatedly" criterion seems a bit off to me.
A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don’t scale (hard to share with other people).
Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
It doesn’t look to me like we’re even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
There’s a kind of power that comes from having many geniuses—think Manhattan project.
for the few +6std people on earth it might just give +0.2std or +0.3std,
Not sure what you’re referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.
it’s sort of the other way around: extremely smart individuals might find ways to significantly leverage their capability
Plausibly, though I don’t know of strong evidence for this. For example, my impression is that modern proof assistants still aren’t in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth—but I could imagine this being created soon. Do you have other evidence in mind?
There’s a kind of power that comes from having many geniuses—think Manhattan project.
Basically agree, but I think alignment is the kind of problem where one supergenius might matter more. E.g. Einstein basically managed to find general relativity something like 3 times faster than the rest of physics would have. I don’t think a Manhattan project would’ve helped there, because even after Einstein published GR only relatively few people understood it (if I’m informed correctly), and I don’t think they could’ve made progress in the same way Einstein did; they would’ve needed more experimental evidence.
Plausible to me that there are other potentially pivotal problems that have something of this character, but idk.
Do you have other evidence in mind?
Well, not very legible evidence, and I could be wrong, but some of my thoughts on mental software:
It seems plausible to me that someone with +6.3std would be able to do some bootstrapping loop very roughly like:
find a better ontology for modelling what is happening in my mind.
train to relatively-effortlessly model my thoughts in the new better ontology that compresses observations more and thus lets me notice a bit more of what’s happening in my mind (and notice pieces where the ontology doesn’t seem to fit well).
repeat.
The “relatively-effortlessly model well what is happening in my mind” part might help significantly for getting much faster and richer feedback loops for learning thinking skills.
When you have a good model of what happened in your mind to produce some output, you can better see which parts were useless and which were important, see what you want your cognitive algorithms to look like, and plan how to train yourself to shape them that way.
When you master this kind of review-and-improvement really well, you might be able to apply the skill to itself and bootstrap your review process.
It’s generally hard to predict what someone smarter might figure out so I wouldn’t be confident it’s not possible.
I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just… so far I’m either not understanding, or else you’re completely making up some big transition between 6 and 6.5?
Yeah, I sorta am. I feel like that’s what I see from eyeballing the largest supergeniuses (in particular Einstein and Eliezer), but idk, it’s very little data and maybe I’m wrong.
My guess would be that you’re seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 → 6.5 transition. See my other comment.
I think you’re massively overestimating Eliezer Yudkowsky’s intelligence. I would guess it’s somewhere between +2 and +3 SD.
Seems way underestimated. While I don’t think he’s at “the largest supergeniuses” level either, even +3 SD implies just top 1 in ~700, i.e. millions of Eliezer-level people worldwide. I’ve been part of more selectively filtered groups, talent-wise (e.g. national scholarships awarded on academic merit), and I’ve never met anyone like him.
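(As a quick check of that arithmetic, here is a minimal sketch assuming a Gaussian trait distribution, which is exactly what the heavy-tail discussion above calls into question, and a world population of roughly 8 billion:)

```python
import math

def tail(z):
    """P(Z > z) for a standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

world_population = 8e9  # rough figure
for z in (3, 6, 7):
    p = tail(z)
    print(f"+{z} SD: 1 in {1 / p:.3g}, ~{p * world_population:.3g} people alive today")
```

(Under a strict Gaussian this gives roughly 1 in 741 at +3 SD, i.e. about ten million people alive today, around eight people at +6 SD, and essentially nobody at +7 SD; "+7std" people only exist to the extent the real distribution is heavier-tailed than Gaussian, or to the extent augmentation creates them.)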
But are you sure the way in which he is unique among people you’ve met is mostly about intelligence rather than intelligence along with other traits?
Wait, are you saying it’s illegible, or just bad? I mean are you saying that you’ve done something impressive and attribute that to doing this—or that you believe someone else has done so—but you can’t share why you think so?
Maybe “bad” would be a better word. Idk, I feel like I have a different way of thinking about such intelligence-explosion-dynamics stuff that most people don’t have (though Eliezer does), and I can’t really describe it all that well. I think it makes sensible predictions, but yeah, idk, I’d stay sceptical given that I’m not that great at saying why I believe what I believe there.
No, I don’t know of anyone who did that.
It’s sorta what I’ve been aiming for since very recently. I don’t particularly expect a high chance of success, but I’m also not quite +6.3std, I think (though I’m only 21, and the worlds where it might succeed are the ones where I continue getting smarter for some time). Maybe I’m wrong, but I’d be pretty surprised if something like that wouldn’t work for someone with +7std.
I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower...
I guess I’m not sure we’re disagreeing about much here, except that
I don’t know why you’re putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be “transitions”; I just expect there to be enough such things that you wouldn’t see some major transition at one point. I do think there’s an important difference between 5.5 SD and 7.5 SD, which is that now you’ve created a human who’s probably smarter than any human who’s ever lived, so you’ve gone from 0 to 1 on some difficult thoughts; but I don’t think that’s special about this range, it would happen at any range.
I think that adding more 6 SD or 7 SD people is really important, but maybe you don’t as much? Not sure what you think.
First, to be clear, I’m always talking about thinkoompf: not just what’s measured by IQ tests, but also sanity and even drive.
Idk, I’m not at all sure about that, but it seems to me like Nate and Eliezer might be a decent chunk more competent than all the other people I’m aware of. So maybe for the current era (by which I mostly mean “after the sequences were published”) it’s like one person (Nate) per decade-or-a-bit-more who becomes really competent, which is very roughly +6std. (EDIT: Retracted because the evidence is too shaky. It still seems to me like the heavy tail of intelligence gets very far very quickly, though.)
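(As a sanity check of just the rarity-to-SD conversion, not of the retracted judgment about particular people: under a Gaussian, “about one such person per decade of births” does land near +6std. The birth-rate figure below is approximate.)

```python
import math

births_per_decade = 1.35e9                   # ~135 million births per year, rough figure
p_6sd = 0.5 * math.erfc(6 / math.sqrt(2))    # P(Z > 6) under a Gaussian, ~1e-9
print(births_per_decade * p_6sd)             # ~1.3 such people born per decade
```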
Like, I’d guess that before the sequences, and without the strong motivator of needing to save humanity, the transition might rather have been at +6.4std to +6.8std. Idk. Though to be clear, I don’t really expect something like “yeah, maybe from +6.3std it enters a faster improvement curve which then doesn’t change that much”, but more like the curve just getting steeper and steeper very fast without there being a visible kink.
I feel like if we now created someone with +6.3std the person would already become smarter than any person who ever lived because there are certain advantages of being born now which would help a lot for getting up to speed (e.g. the sequences, the Internet).
adult augmentation giving 2-3std for the average person seems plausible, but for the few +6std people on earth it might just give +0.2std or +0.3std, which, to be clear, I think is incredibly worthwhile.
Such high diminishing returns in g based on genes seem quite implausible to me, but I’d be happy if you can point to evidence to the contrary. If it works well for people with average intelligence, I’d expect it to work at worst half as well at +6sd.
Idk, I’d be intuitively surprised if adult augmentation could get someone from +6 to +7. I feel like going from +0 to +3 is a big difference, and going from +6 to +6.3 is almost as big a difference too. But idk, maybe not. Maybe it’s also partially that I think intelligence augmentation interventions get harder once you get to higher intelligence levels. Where previously there were easy improvement possibilities, later you might need more entangled groups of genes to all be good, and it’s harder to tune those. And it’s hard to get very good data on which genes working together actually result in very high intelligence, because we don’t have that many very smart people.
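(To make the quantitative disagreement here more concrete, below is a toy calculation under assumptions made up purely for illustration: a fully additive trait with 10,000 equal-effect diploid loci at 50% allele frequency, and no epistasis, dominance, or developmental constraints. In that idealized picture a +6std person still has nearly as many unfavorable variants left to edit as an average person, so per-edit gains barely diminish; the question in this exchange is how badly epistasis, i.e. “entangled groups of genes”, and editing practicalities break that picture.)

```python
import math

# Toy assumptions (made up for illustration): purely additive trait,
# M diploid loci, equal effect sizes, allele frequency p, no epistasis.
M = 10_000
p = 0.5
count_sd = math.sqrt(2 * M * p * (1 - p))   # SD of the count of trait-positive alleles
beta = 1 / count_sd                          # per-allele effect, in trait-SD units

def editable_alleles(z):
    """Expected number of trait-negative (i.e. editable) alleles for someone at +z SD."""
    positive = 2 * M * p + z * count_sd
    return 2 * M - positive

def gain_from_edits(z, n_edits):
    """Trait gain in SD from flipping n_edits negative alleles (capped by what's left)."""
    n = min(n_edits, editable_alleles(z))
    return n * beta

for z in (0, 3, 6):
    print(f"+{z} SD: ~{editable_alleles(z):.0f} editable alleles, "
          f"200 edits give about {gain_from_edits(z, 200):.2f} SD")
```

(Real editing would presumably target the largest-effect known variants first and run into exactly the entanglement issues mentioned above, so the truth plausibly sits somewhere between “no diminishing returns” and the strong diminishing returns suggested earlier.)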
Is there a reason to expect that transition to happen at exactly the tail end of the distribution of modern human intelligence? There don’t seem, as far as I’m aware, to have been any similar transitions in the evolution of modern humans from our chimp-like ancestors. If you look at proxies, like stone tools from Homo habilis to modern humans, you see very slow improvements that slowly, but exponentially, accelerate in the rate of development.
I suspect that most of that improvement, once cultural transmission took off at all, happened because of the ways in which cultural/technological advancements feed into each other (in part due to economic gains meaning higher populations with better networks, which means accelerated discovery, which means more economic gains and higher, better-connected populations), and that is hard to disentangle from actual intelligence improvements. So I suppose it’s still possible that you could have this exponential progress in technology feeding on itself while actual intelligence is hitting a transition to a regime of diminishing returns, and it would be hard to see the latter in the record.
Another decent proxy for intelligence is brain size, though. If intelligence wasn’t actually improving, the investment in larger brains just wouldn’t pay off evolutionarily, so I expect that when we see brain size increases in the fossil record we are also seeing intelligence increasing at at least a similar rate. Are there transitions in the fossil record from fast to slow changes in brain size in our lineage? That wouldn’t demonstrate diminishing returns to intelligence (it could be diminishing returns in the usefulness of intelligence relative to its metabolic costs, which is different from particular gene changes just not impacting intelligence as much as in the past), but it would at least be consistent with it.
Anyway, I’m not entirely sure where to look for evidence of the transition you seem to expect. If such transitions were common in the past, it would increase my credence in one in the near future. But a priori it seems unlikely to me that there is such a transition at exactly the tail of the modern human intelligence distribution.
I mostly expect you start getting more and more into sub-critical intelligence explosion dynamics the further you exceed +6std. (E.g. see the second half of this other comment I wrote.) I also expect very smart people will be able to better set up computer-augmented note-organizing systems, or maybe code narrow aligned AIs that might help them with their tasks (in a way that’s a lot more useful than current LLMs but hard for other people to use). But idk.
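(One way to make “sub-critical intelligence explosion dynamics” concrete, as an illustrative sketch rather than anything the commenters committed to: suppose each unit of self-improvement enables r further units. For r < 1 the total converges, the sub-critical regime, and it blows up smoothly as r approaches 1, which fits “steeper and steeper without a visible kink”. The mapping from baseline thinkoompf to r below is entirely made up.)

```python
def total_bootstrapped_gain(r, g0=0.1, rounds=10_000):
    """Cumulative gain when each gained unit enables r further units (geometric series).

    For r < 1 this approaches g0 / (1 - r) ("sub-critical"); for r >= 1 it diverges.
    """
    total, g = 0.0, g0
    for _ in range(rounds):
        total += g
        g *= r
    return total

# Entirely made-up mapping from baseline thinkoompf to the return factor r:
for z, r in [(5.0, 0.3), (6.0, 0.6), (6.3, 0.8), (7.0, 0.95)]:
    print(f"baseline +{z}std, r = {r}: total bootstrapped gain is about "
          f"{total_bootstrapped_gain(r):.2f} std")
```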
I’m not sure how big the difference between +6 and +6.3std actually is. I also might’ve conflated the actual-competence scale with the genetic-potential scale. On the scale I used, drive / “how hard one is trying” also plays a big role.
I actually mostly expect this from seeing that intelligence is pretty heavy-tailed. E.g. alignment research capability seems incredibly heavy-tailed to me, though it might be hard to judge the differences in capability there if you’re not already one of the relatively few people who are good at alignment research. Another example is how Einstein managed to find general relativity, where the rest of the world combined wouldn’t have been able to do it that way without more experimental evidence. I do not know why this is the case. It is (very?) surprising to me. Einstein didn’t even work on understanding and optimizing his mind. But yeah, that’s my guess.