It seems to me that embryo selection could be one way to increase the intelligence of the first generation of uploads without (initially) producing a great many defective minds of the sort that you mention in the article. WBE may take several decades to achieve, the embryos will take some time to grow into adult humans anyway, and it seems to me that many of the defective minds in your scenario wouldn’t be created, because we’re selecting for genes that go on to produce minds, as opposed to selecting directly for mind designs. (Please do correct me if you were only talking about selecting for genes as opposed to mind designs.)
It’s worth noting that embryo selection seems to me a much less extreme version of intelligence amplification than what you have suggested, and even with embryo selection it seems that we run into some old questions about how IQ and rationality are related. As argued elsewhere, it may be that finding ways to amplify intelligence without understanding this relation between intelligence and rationality could actually increase risk, as opposed to mitigating it.
As a side note, something I’ve always wondered about is how unusually long periods of subjective time and potential relative social isolation would affect the mental health of uploads of modern humans.
Yes, embryo selection and other non-WBE intelligence amplification techniques would be useful in ways similar to applying evolutionary algorithms to emulations. I’d expect non-WBE intelligence amplification to typically carry much lower risks but also have smaller effect sizes, and to be useful independently of the eventual arrival of WBE technology.
I’m fairly confident that intelligence enhancement would be good for our chances of future survival. I’m not convinced by the case for fast economic growth increasing risk much, and FAI is probably a more IQ-intensive problem than AGI is, so intelligence enhancement would likely accelerate FAI more than AGI even if it doesn’t result in increased rationality as well (although it would be a bigger plus if it did).
As a side note, something I’ve always wondered about is how unusually long periods of subjective time and potential relative social isolation would affect the mental health of uploads of modern humans.
I doubt it would be a problem. Forager bands tended to be small, and if hardware to run uploads on is not the limiting factor to first creating them, then it will be feasible to run small groups of uploads together as soon as it is feasible to run a single upload.
I’m fairly confident that intelligence enhancement would be good for our chances of future survival. I’m not convinced by the case for fast economic growth increasing risk much, and FAI is probably a more IQ-intensive problem than AGI is, so intelligence enhancement would likely accelerate FAI more than AGI even if it doesn’t result in increased rationality as well (although it would be a bigger plus if it did).
I realize that Luke considered economic growth a crucial consideration, but I was really relying on Keith Stanovich’s proposed distinction between intelligence and rationality. It seems possible that you can increase someone’s raw capability without making them reflective enough to get all of the important answers right. This would mean that rationality is not just a bonus. On the other hand, these things might go hand in hand. It seems worth investigating to me and relevant to comparing ‘AI-first’ and ‘IA-first’ risk mitigation strategies.
Also, I think the language of ‘critical levels’ is probably better than the language of ‘acceleration’ in this context. It seems safe to assume that FAI is a more difficult problem than AGI at this point, but I don’t think it follows from that alone that IA will accelerate FAI more than it accelerates AGI. That depends on many more facts, of which problem difficulty is just one. I have no problem with ceteris paribus clauses, but it’s not clear what we’re holding equal here. The identity and size of the party in control of the IA technology intuitively seems to me like the biggest consideration besides problem difficulty. A large part of why I consider IA-first an alternative worth thinking about is not because I think it’s likely to differentially affect technological development in the obvious way, but because we may be below some critical threshold of intelligence necessary to build FAI and thus IA-first would be preferable to AI-first because AI-first would almost certainly fail. This also is not a sure thing and I think also warrants investigation.
I doubt it would be a problem. Forager bands tended to be small, and if hardware to run uploads on is not the limiting factor to first creating them, then it will be feasible to run small groups of uploads together as soon as it is feasible to run a single upload.
Forgive me if I’m starting to ramble, but something I find interesting about this is that, unless you have other reasons to reject the relevance of this point, you seem to have also implied that, if there is no hardware overhang, and WBE begins as a monopoly in the way that nuclear weapons began, then the monopolist may have to choose between uploading a lone human, psychological effects be damned, and delaying the use of an IA technique that is actually available to them. It seems that you could allow the lone emulation to interact with biological humans, and perhaps even ‘pause’ itself so that it experiences a natural amount of subjective time during social interaction, but if you abuse this too much for the sake of maintaining the emulation’s mental health, then you sacrifice the gains in subjective time. Sacrificing subjective time is perhaps not so bad as it might seem, because speed intelligence can be useful for other reasons, some of which you outlined in the article. Nonetheless, this seems like a related problem where you have to ask yourself what you would actually do with an AI that is only good for proving theorems. There often seems to be a negative correlation between safety and usefulness. Still, I don’t know what I would do if I could choose between uploading exactly one extraordinary human right now and not doing so. My default is to not do it and subsequently think very hard, because that’s the reversible decision, but that can’t be done forever.
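A minimal back-of-the-envelope sketch of that trade-off, assuming (hypothetically) a 100x emulation speedup and that some fraction of the emulation’s subjective time must run at real-time speed to keep social interaction natural; both values are illustrative placeholders, not anything from the article:

```python
# Back-of-the-envelope sketch (illustrative assumptions, not from the article):
# if a fraction of the emulation's *subjective* time must run at 1x speed for
# social interaction, the effective speedup saturates at 1/fraction no matter
# how fast the hardware is.

def effective_speedup(fast_speed: float, social_fraction: float) -> float:
    """Subjective time per unit of wall-clock time when `social_fraction` of
    subjective time runs at 1x and the rest runs at `fast_speed`."""
    return 1.0 / (social_fraction + (1.0 - social_fraction) / fast_speed)

for fraction in (0.0, 0.1, 0.3, 0.5):
    print(fraction, round(effective_speedup(100.0, fraction), 1))
# -> 0.0 100.0, 0.1 9.2, 0.3 3.3, 0.5 2.0
```

Under these assumptions, spending even half of one’s subjective time socializing in real time caps the gain near 2x, regardless of how fast the hardware runs.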
It seems possible that you can increase someone’s raw capability without making them reflective enough to get all of the important answers right. This would mean that rationality is not just a bonus.
Even if intelligence doesn’t help at all for advancing FAI relative to AGI except via rationality, it still seems pretty unlikely that intelligence amplification would hurt, even if it doesn’t lead to improvements in rationality. It’s not like intelligence amplification would decrease rationality.
Also, I think the language of ‘critical levels’ is probably better than the language of ‘acceleration’ in this context. … A large part of why I consider IA-first an alternative worth thinking about is not because I think it’s likely to differentially affect technological development in the obvious way, but because we may be below some critical threshold of intelligence necessary to build FAI and thus IA-first would be preferable to AI-first because AI-first would almost certainly fail. This also is not a sure thing and I think also warrants investigation.
I disagree. The hypothesis that it is literally impossible to build FAI (but not AGI) without intelligence amplification first is merely the most extreme version of the hypothesis that intelligence amplification accelerates FAI relative to AGI, and I don’t see why it would be more plausible than less extreme versions.
if there is no hardware overhang, and WBE begins as a monopoly in the way that nuclear weapons began, then the monopolist may have to choose between uploading a lone human, psychological effects be damned, and delaying the use of an IA technique that is actually available to them. It seems that you could allow the lone emulation to interact with biological humans, and perhaps even ‘pause’ itself so that it experiences a natural amount of subjective time during social interaction, but if you abuse this too much for the sake of maintaining the emulation’s mental health, then you sacrifice the gains in subjective time.
If you can run an emulation at much faster than human speed, then you don’t have a hardware overhang. The hardware to run 10 emulations at 1⁄10 the speed should cost about the same amount. If you really have a hardware overhang, then the emulations are running slower than humans, which also decreases how dangerous they could be. Alternatively, it’s possible that no one bothers running an emulation until they can do so at approximately human speed, at which point the emulation would be able to socialize with biological humans.
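A minimal sketch of the arithmetic behind that point, assuming (as the comment implicitly does) that emulation compute scales roughly linearly with speed; the budget and per-emulation cost figures are placeholders:

```python
# Minimal sketch (placeholder numbers): under a fixed hardware budget, the
# number of emulations you can run trades off linearly against their speed,
# so 10 emulations at 1/10 speed cost about the same as 1 at full speed.

def emulations_supported(budget: float, cost_per_realtime_em: float, speed: float) -> float:
    """How many emulations a fixed compute budget supports at a given speed
    multiple, assuming compute cost scales linearly with emulation speed."""
    return budget / (cost_per_realtime_em * speed)

BUDGET = 1e18              # hypothetical available compute (FLOP/s)
COST_REALTIME = 1e17       # hypothetical compute to run one emulation in real time

print(emulations_supported(BUDGET, COST_REALTIME, speed=1.0))   # 10.0 emulations at human speed
print(emulations_supported(BUDGET, COST_REALTIME, speed=0.1))   # 100.0 emulations at 1/10 speed
print(emulations_supported(BUDGET, COST_REALTIME, speed=10.0))  # 1.0 emulation at 10x speed
```

The point being illustrated: any hardware surplus large enough to run one emulation much faster than real time could instead run a small social group of emulations at roughly human speed.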
I disagree. The hypothesis that it is literally impossible to build FAI (but not AGI) without intelligence amplification first is merely the most extreme version of the hypothesis that intelligence amplification accelerates FAI relative to AGI, and I don’t see why it would be more plausible than less extreme versions.
I guess I would ask: Considering that there are probably a great many discernible levels of intelligence above that of our own species, and that we were not especially designed to build FAI, do you have reasons to think that the problem difficulty and human intelligence are within what seems to me to be a narrow range necessary for success?
To expand, I agree that we can imagine these hypotheses on a continuum. I think I misunderstood what you were saying, so I no longer stand behind what I said about the language, but I do have something to say about why we might consider the most extreme hypothesis more plausible than it seems at first glance. If you just imagine this continuum of hypotheses, then you might apply a sort of principle of indifference and conclude that the critical level for FAI being far above biological human intelligence is no more plausible than any of the many lower critical levels that are possible, as I think you are arguing. But if we instead imagine all possible pairs of FAI problem difficulty and human intelligence across some fully comprehensive intelligence scale, and apply a sort of principle of indifference to that, then it seems like it would actually be a rather fortunate coincidence that human intelligence was sufficient to build FAI. (Pairs where actual human intelligence far exceeds the critical level are ruled out empirically.) So I think trying to evaluate plausibility in this way depends heavily on how we frame the issue.
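A toy simulation of that framing point, under assumptions that are entirely illustrative (a uniform “comprehensive intelligence scale” from 0 to 100 and an arbitrary “far exceeds” margin of 10); it only shows how the indifference-over-pairs framing shifts the answer, not what the real numbers are:

```python
# Toy illustration (all scale choices are arbitrary): put an indifference prior
# over (human intelligence, FAI critical level) pairs, discard pairs where humans
# far exceed the critical level (ruled out empirically), and see how often human
# intelligence still clears it.

import random

random.seed(0)

SCALE = 100.0        # arbitrary "fully comprehensive intelligence scale"
FAR_EXCEEDS = 10.0   # arbitrary margin counted as "far exceeds the critical level"
TRIALS = 1_000_000

sufficient = insufficient = 0
for _ in range(TRIALS):
    human = random.uniform(0.0, SCALE)      # actual human intelligence
    critical = random.uniform(0.0, SCALE)   # level needed to build FAI
    if human - critical > FAR_EXCEEDS:
        continue                            # ruled out empirically
    if human >= critical:
        sufficient += 1                     # humans (barely) clear the critical level
    else:
        insufficient += 1

print(sufficient / (sufficient + insufficient))  # ~0.16 with these arbitrary choices
```

Which way the comparison goes clearly depends on how the hypothesis space is carved up, which is the commenter’s point about framing.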
Even if intelligence doesn’t help at all for advancing FAI relative to AGI except via rationality, it still seems pretty unlikely that intelligence amplification would hurt, even if it doesn’t lead to improvements in rationality. It’s not like intelligence amplification would decrease rationality.
Well, for me it depends on the scope of your statement. If all goes well, then it seems like it couldn’t make you less rational and could only make you more intelligent (and maybe more rational thereby). But if we assume a wider scope than this, then I’m inclined to bring up safety considerations about WBE (for example, maybe it’s not our rationality that is primarily keeping us safe right now, but our lack of capability; and other things), although I don’t think I should press that here, because what you’re doing is exploratory and I’m not trying to argue that you shouldn’t explore this.
If you can run an emulation at much faster than human speed, then you don’t have a hardware overhang.
Good point; I didn’t think about this in enough detail.
Considering that there are probably a great many discernible levels of intelligence above that of our own species, and that we were not especially designed to build FAI, do you have reasons to think that the problem difficulty and human intelligence are within what seems to me to be a narrow range necessary for success?
Humans have already invented so many astounding things that it seems likely that for most things that are possible to build in theory, the only thing preventing humans from building them is insufficient time and attention.