I think that from around +6.3std the heavy tail becomes a lot stronger still, because those people can bootstrap themselves extremely good mental software.
I agree something like this happens, I just don’t think it’s that strong of an effect.
I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi timelines), so the “repeatedly” criterion seems a bit off to me.
A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don’t scale (hard to share with other people).
Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
It doesn’t look to me like we’re even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
There’s a kind of power that comes from having many geniuses—think Manhattan project.
for the few +6std people on earth it might just give +0.2std or +0.3std,
Not sure what you’re referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.
it’s sorta vice versa that extremely smart individuals might find ways to significantly leverage their capability
Plausibly, though I don’t know of strong evidence for this. For example, my impression is that modern proof assistants still aren’t in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth—but I could imagine this being created soon. Do you have other evidence in mind?
There’s a kind of power that comes from having many geniuses—think Manhattan project.
Basically agree, but I think alignment is the kind of problem where one supergenius might matter more. E.g. for general relativity, Einstein basically managed to find it something like 3 times faster than the rest of physics would’ve. I don’t think a Manhattan project would’ve helped there, because even after Einstein published GR only relatively few people understood it (if I’m informed correctly), and I don’t think they could’ve made progress in the same way Einstein did but would’ve needed more experimental evidence.
Plausible to me that there are other potentially pivotal problems that have something of this character, but idk.
Do you have other evidence in mind?
Well not very legible evidence, and I could be wrong, but some of my thoughts on mental software:
It seems plausible to me that someone with +6.3std would be able to do some bootstrapping loop very roughly like:
Find a better ontology for modelling what is happening in my mind.
Train to relatively effortlessly model my thoughts in the new, better ontology, which compresses observations more and thus lets me notice a bit more of what’s happening in my mind (and notice pieces where the ontology doesn’t seem to fit well).
Repeat.
The “relatively-effortlessly model well what is happening in my mind” part might help significantly for getting much faster and richer feedback loops for learning thinking skills.
When you have a good model of what happened in your mind to produce some output, you can better see which parts were useless and which were important, see what you want your cognitive algorithms to look like, and plan how to train yourself to shape them that way.
When you master this kind of review-and-improvement really well, you might be able to apply the skill to itself and bootstrap your review process.
It’s generally hard to predict what someone smarter might figure out so I wouldn’t be confident it’s not possible.
I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just… so far I’m either not understanding, or else you’re completely making up some big transition between 6 and 6.5?
Yeah I sorta am. I feel like that’s what I see from eyeballing the largest supergeniuses (in particular Einstein and Eliezer), but idk, it’s very little data and maybe I’m wrong.
My guess would be that you’re seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 → 6.5 transition. See my other comment.
I think you’re massively overestimating Eliezer Yudkowsky’s intelligence. I would guess it’s somewhere between +2 and +3 SD.
Seems way underestimated. While I don’t think he’s at “the largest supergeniuses” level either, even +3 SD implies just top 1 in ~700, i.e. millions of Eliezer-level people worldwide. I’ve been part of more quantitatively selected groups, talent-wise (e.g. for national scholarships awarded on academic merit), and I’ve never met anyone like him.
But are you sure the way in which he is unique among the people you’ve met is mostly about intelligence, rather than intelligence along with other traits?
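As a quick sanity check on the “+3 SD implies top 1 in ~700, i.e. millions of people worldwide” figure just above, here’s a minimal sketch of the normal-tail arithmetic. The ~8 billion world-population figure is my assumption, and it treats the trait as exactly normally distributed:

```python
from math import erfc, sqrt

def upper_tail(sd: float) -> float:
    """Fraction of a normal distribution lying above +sd standard deviations."""
    return 0.5 * erfc(sd / sqrt(2))

WORLD_POP = 8e9  # assumed rough current world population

frac = upper_tail(3.0)
print(f"+3 SD upper tail: 1 in ~{1 / frac:.0f}")       # ~1 in 741
print(f"people above +3 SD: ~{WORLD_POP * frac:.2e}")  # ~1.1e7, i.e. millions
```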
Wait are you saying it’s illegible, or just bad? I mean are you saying that you’ve done something impressive and attribute that to doing this—or that you believe someone else has done so—but you can’t share why you think so?
Maybe bad would be a better word. Idk, I feel like I have a different way of thinking about such intelligence-explosion-dynamics stuff that most people don’t have (though Eliezer does), and I can’t really describe it all that well. I think it makes sensible predictions, but yeah, idk, I’d stay sceptical given that I’m not that great at saying why I believe what I believe there.
No I don’t know of anyone who did that.
It’s sorta what I’ve been aiming for since very recently, and I don’t expect a particularly high chance of success, but I’m also not quite +6.3std I think (though I’m only 21, and the worlds where it might succeed are the ones where I continue getting smarter for some time). Maybe I’m wrong, but I’d be pretty surprised if sth like that wouldn’t work for someone with +7std.
I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower...
I guess I’m not sure we’re disagreeing about much here, except that
I don’t know why you’re putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be “transitions”; I just expect there to be enough such things that you wouldn’t see some major transition at one point. I do think there’s an important difference between 5.5 SD and 7.5 SD, which is that now you’ve created a human who’s probably smarter than any human who’s ever lived, so you’ve gone from 0 to 1 on some difficult thoughts; but I don’t think that’s special about this range, it would happen at any range.
I think that adding more 6 SD or 7 SD people is really important, but you maybe don’t as much? Not sure what you think.
First tbc, I’m always talking about thinkoompf, not just what’s measured by IQ tests but also sanity and even drive.
Idk, I’m not at all sure about that, but it seems to me like Nate and Eliezer might be a decent chunk more competent than all the other people I’m aware of. So maybe for the current era (by which I mostly mean “after the sequences were published”) it’s like 1 person (Nate) per decade-or-a-bit-more who becomes really competent, which is very roughly +6std (see the rough tail arithmetic sketched below). (EDIT: Retracted because evidence too shaky. It still seems to me like the heavy tail of intelligence gets very far very quickly though.)
Like, I’d guess before the sequences, and without the strong motivator of needing to save humanity, the transition might rather have been at +6.4std to +6.8std. Idk. Though tbc I don’t really expect it to be like “yeah maybe from 6.3std it enters a faster improvement curve which is then not changing that much” but more like the curve just getting steeper and steeper very fast without there being a visible kink.
I feel like if we now created someone with +6.3std the person would already become smarter than any person who ever lived because there are certain advantages of being born now which would help a lot for getting up to speed (e.g. the sequences, the Internet).
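For the “1 person per decade-or-a-bit-more, which is very roughly +6std” figure above, the same tail arithmetic gives a ballpark in that range. This is a rough sketch only: the ~130 million births per year figure is my assumption, and it treats the trait as exactly normal, which the heavy-tail point above suggests it may not be:

```python
from math import erfc, sqrt

def upper_tail(sd: float) -> float:
    """Fraction of a normal distribution lying above +sd standard deviations."""
    return 0.5 * erfc(sd / sqrt(2))

BIRTHS_PER_YEAR = 130e6  # assumed rough global birth cohort

frac = upper_tail(6.0)             # ~1e-9
per_year = BIRTHS_PER_YEAR * frac  # expected +6 SD births per year, ~0.13
print(f"expected +6 SD births per year: {per_year:.2f}")
print(f"i.e. roughly one every ~{1 / per_year:.0f} years")  # ~8 years
```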