I disagree. The hypothesis that it is literally impossible to build FAI (but not AGI) without intelligence amplification first is merely the most extreme version of the hypothesis that intelligence amplification accelerates FAI relative to AGI, and I don’t see why it would be more plausible than less extreme versions.
I guess I would ask: Considering that there are probably a great many discernible levels of intelligence above that of our own species, and that we were not especially designed to build FAI, do you have reasons to think that the problem's difficulty and human intelligence both fall within what seems to me to be the narrow range necessary for success?
To expand: I agree that we can imagine these hypotheses on a continuum. I think I misunderstood what you were saying, so I no longer stand behind what I said about the language, but I do have something to say about why the most extreme hypothesis might be more plausible than it seems at first glance. If you just imagine this continuum of hypotheses, then you might apply a sort of principle of indifference and conclude that the critical level for FAI being far above biological human intelligence shouldn't be any more plausible than the many other, lower critical levels that are possible, which I take to be your argument. But if we instead imagine all possible pairs of FAI problem difficulty and intelligence across some fully comprehensive intelligence scale, and apply a sort of principle of indifference to that, then it looks like a rather fortunate coincidence that human intelligence would turn out to be sufficient to build FAI. (Pairs where actual human intelligence far exceeds the critical level are ruled out empirically.) So trying to evaluate plausibility in this way depends heavily on how we frame the issue.
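To make that contrast concrete, here is a minimal sketch with made-up numbers; the scale size, the margin, and the rule for which pairs count as "ruled out empirically" are all my own illustrative assumptions, not anything established in this discussion:

```python
# Toy model of the "pairs" framing, with made-up numbers.
# Assumptions (illustrative only): an intelligence scale with N
# discernible levels; "far exceeds" means exceeding the critical level
# by more than a margin M, and such pairs are the ones ruled out
# empirically (since FAI hasn't been built).

N = 100  # hypothetical number of discernible intelligence levels
M = 5    # hypothetical margin for "far exceeds"

# All (critical level, our level) pairs on the scale.
pairs = [(critical, ours)
         for critical in range(1, N + 1)
         for ours in range(1, N + 1)]

# Rule out pairs where our intelligence far exceeds the critical level.
consistent = [(c, h) for (c, h) in pairs if h <= c + M]

# Under indifference over the remaining pairs, how often does our
# intelligence suffice to build FAI (h >= c)?
sufficient = [(c, h) for (c, h) in consistent if h >= c]
print(len(sufficient) / len(consistent))  # roughly 0.1 with these numbers
```

With these toy numbers, only about a tenth of the pairs that survive the empirical cut are ones where our intelligence suffices, which is the sense in which success at the human level looks like a fortunate coincidence under the second framing but not under the first, which spreads indifference over critical levels rather than pairs.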
Even if intelligence doesn’t help at all for advancing FAI relative to AGI except via rationality, it still seems pretty unlikely that intelligence amplification would hurt, even if it doesn’t lead to improvements in rationality. It’s not like intelligence amplification would decrease rationality.
Well, for me it depends on the scope of your statement. If all goes well, then it seems like it couldn't make you less rational and could only make you more intelligent (and maybe more rational thereby). But if we assume a wider scope than that, then I'm inclined to bring up safety considerations about WBE (for example, maybe it's not our rationality that is primarily keeping us safe right now, but our lack of capability; and other things), although I don't think I should press the point here, since what you're doing is exploratory and I'm not trying to argue that you shouldn't explore this.
If you can run an emulation at much faster than human speed, then you don’t have a hardware overhang.
Good point; I didn’t think about this in enough detail.
Considering that there are probably a great many discernible levels of intelligence above that of our own species, and that we were not especially designed to build FAI, do you have reasons to think that the problem's difficulty and human intelligence both fall within what seems to me to be the narrow range necessary for success?
Humans have already invented so many astounding things that it seems likely that for most things that are possible to build in theory, the only thing preventing humans from building them is insufficient time and attention.