It seems possible that you can increase someone’s raw capability without making them reflective enough to get all of the important answers right. This would mean that rationality is not just a bonus.
Even if intelligence doesn’t help at all for advancing FAI relative to AGI except via rationality, it still seems pretty unlikely that intelligence amplification would hurt, even if it doesn’t lead to improvements in rationality. It’s not like intelligence amplification would decrease rationality.
Also, I think the language of ‘critical levels’ is probably better than the language of ‘acceleration’ in this context. … A large part of why I consider IA-first an alternative worth thinking about is not that I think it’s likely to differentially affect technological development in the obvious way, but that we may be below some critical threshold of intelligence necessary to build FAI, in which case IA-first would be preferable to AI-first because AI-first would almost certainly fail. This is not a sure thing either, and I think it also warrants investigation.
I disagree. The hypothesis that it is literally impossible to build FAI (but not AGI) without intelligence amplification first is merely the most extreme version of the hypothesis that intelligence amplification accelerates FAI relative to AGI, and I don’t see why it would be more plausible than less extreme versions.
If there is no hardware overhang, and WBE begins as a monopoly in the way that nuclear weapons began, then the monopolist may have to choose between uploading a lone human, psychological effects be damned, and delaying the use of an IA technique that is actually available to them. It seems that you could allow the lone emulation to interact with biological humans, and perhaps even ‘pause’ itself so that it experiences a natural amount of subjective time during social interaction, but if you lean on this too much for the sake of maintaining the emulation’s mental health, then you sacrifice the gains in subjective time.
If you can run an emulation at much faster than human speed, then you don’t have a hardware overhang. The hardware to run 10 emulations at 1⁄10 the speed should cost about the same amount. If you really have a hardware overhang, then the emulations are running slower than humans, which also decreases how dangerous they could be. Alternatively, it’s possible that no one bothers running an emulation until they can do so at approximately human speed, at which point the emulation would be able to socialize with biological humans.
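For concreteness, here is a minimal sketch of the arithmetic behind this reply, assuming that hardware cost scales roughly with (number of emulations) × (speed-up factor); the 10× figure, the ‘social fraction’, and the cost model itself are illustrative assumptions, not anything stated in the exchange.

```python
# Toy model: hardware cost is treated as roughly proportional to total
# subjective time delivered per unit of wall-clock time, i.e.
# (number of emulations) * (speed-up factor). All numbers below are
# illustrative assumptions, not estimates from the discussion.

def relative_hardware_cost(num_emulations: int, speedup: float) -> float:
    """Cost in arbitrary units, relative to one emulation at human speed."""
    return num_emulations * speedup

print(relative_hardware_cost(1, 1.0))    # 1.0  -> one emulation at human speed
print(relative_hardware_cost(1, 10.0))   # 10.0 -> one emulation at 10x speed
print(relative_hardware_cost(10, 0.1))   # 1.0  -> ten emulations at 1/10 speed

def effective_speedup(speedup: float, social_fraction: float) -> float:
    """Average speed-up when the emulation drops to real time (1x) for some
    fraction of wall-clock time to socialize with biological humans."""
    return (1 - social_fraction) * speedup + social_fraction * 1.0

# A 10x emulation that spends half its wall-clock time socializing at 1x
# averages only 5.5x, i.e. the sacrificed gains in subjective time.
print(effective_speedup(10.0, 0.5))  # 5.5
```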
I disagree. The hypothesis that it is literally impossible to build FAI (but not AGI) without intelligence amplification first is merely the most extreme version of the hypothesis that intelligence amplification accelerates FAI relative to AGI, and I don’t see why it would be more plausible than less extreme versions.
I guess I would ask: Considering that there are probably a great many discernible levels of intelligence above that of our own species, and that we were not especially designed to build FAI, do you have reasons to think that the problem’s difficulty and human intelligence fall within what seems to me to be the narrow range necessary for success?
To expand: I agree that we can imagine these hypotheses on a continuum. I think I misunderstood what you were saying, so I no longer stand behind what I said about the language, but I do have something to say about why we might consider the most extreme hypothesis more plausible than it seems at first glance. If you just imagine this continuum of hypotheses, then you might apply a sort of principle of indifference and conclude that a critical level for FAI far above biological human intelligence is no more plausible than any of the many possible lower critical levels, which I take to be your argument. But if we instead imagine all possible pairs of FAI problem difficulty and intelligence across some fully comprehensive intelligence scale, and apply a sort of principle of indifference to that instead, then it would actually be a rather fortunate coincidence that human intelligence was sufficient to build FAI. (Pairs where actual human intelligence far exceeds the critical level are ruled out empirically.) So I think trying to evaluate plausibility in this way depends heavily on how we frame the issue.
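To make the ‘pairs’ framing concrete, here is a toy calculation; the 100-level scale, the independence of the two coordinates, and the margin used to model ‘human intelligence does not far exceed the critical level’ are assumptions introduced purely for illustration, not anything from this exchange. Under those assumptions most of the conditional probability mass lands on ‘the critical level is above human intelligence’, and the number moves a lot if the scale or the margin is changed, which is the framing-dependence point above.

```python
# Toy version of the "(FAI problem difficulty, intelligence) pairs" framing.
# The 100-level scale, the independence of the two coordinates, and the
# margin of 5 are illustrative assumptions, not claims from the discussion.
from itertools import product

N = 100      # discernible levels on a "fully comprehensive" scale (assumption)
MARGIN = 5   # "human intelligence does not far exceed the critical level"
             # is modeled here as h <= d + MARGIN (assumption)

# Uniform prior over all (difficulty d, human intelligence h) pairs, then
# condition on the empirical observation that h does not far exceed d.
kept = [(d, h) for d, h in product(range(N), repeat=2) if h <= d + MARGIN]
p_sufficient = sum(h >= d for d, h in kept) / len(kept)
print(f"P(h suffices | h not far above critical level) = {p_sufficient:.2f}")
# ~0.11 under these assumptions: most of the conditional mass sits on
# "the critical level is above human intelligence".

# Without the empirical conditioning, indifference over pairs gives roughly
# one half, closer to the "don't privilege the extreme hypothesis" intuition.
p_any = sum(h >= d for d, h in product(range(N), repeat=2)) / N ** 2
print(f"P(h suffices), unconditioned = {p_any:.2f}")
```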
Even if intelligence doesn’t help at all for advancing FAI relative to AGI except via rationality, it still seems pretty unlikely that intelligence amplification would hurt, even if it doesn’t lead to improvements in rationality. It’s not like intelligence amplification would decrease rationality.
Well, for me it depends on the scope of your statement. If all goes well, then it seems like intelligence amplification couldn’t make you less rational and could only make you more intelligent (and maybe more rational thereby). But if we assume a wider scope than this, then I’m inclined to bring up safety considerations about WBE (for example, maybe it’s not our rationality that is primarily keeping us safe right now, but our lack of capability, among other things), although I won’t press that here, because what you’re doing is exploratory and I’m not trying to argue that you shouldn’t explore this.
If you can run an emulation at much faster than human speed, then you don’t have a hardware overhang.
Good point; I didn’t think about this in enough detail.
Considering that there are probably a great many discernible levels of intelligence above that of our own species, and that we were not especially designed to build FAI, do you have reasons to think that the problem’s difficulty and human intelligence fall within what seems to me to be the narrow range necessary for success?
Humans have already invented so many astounding things that it seems likely that, for most things it is possible to build in theory, the only thing preventing humans from building them is insufficient time and attention.