A few random thoughts on ems… (None of this is normative; I’m just listing things I could imagine being done, not saying they’re good or bad ideas.)
After the creation of the first em...
A lot seems to hinge on whether em technology:
can be contained within research institutes that have strong ethical & safety guidelines, good information security and an interest in FAI research, or
can’t be contained and gets released into the economic jungle
Bear in mind the general difficulties of information security, as well as the AI box experiment, which may apply to augmented ems.
Ems can hack themselves (or each other):
Brain-computer interfacing seems much easier if you’re an em
Create a bunch of copies of yourself, hook them up with fast brain-to-brain interfaces, and with a bit of training you have a supermind
Grow more neocortex; simulate non-Euclidean space and grow into configurations which are way larger than what would fit into your original head
Research ways to run on cheaper hardware while maintaining subjective identity or economic productivity
Can restore from backups if something goes wrong
With those kinds of possibilities, it seems that ems could drift quite a long way into mind design space quite quickly, to the point where it might make more sense to view them as AGIs than as ems.
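To make that concrete, here is a purely illustrative Python toy (the class and its fields are hypothetical stand-ins, nothing like a real WBE data format): once a mind is just data, the copying, backup and restore operations listed above become ordinary, cheap software operations.

```python
import copy

# Purely illustrative toy: an em's state is just data, so "copy", "backup",
# and "restore" are ordinary software operations. EmState and its fields are
# hypothetical stand-ins, not a real WBE format.

class EmState:
    def __init__(self, name, synapses):
        self.name = name
        self.synapses = synapses              # stand-in for the full brain state

    def snapshot(self):
        return copy.deepcopy(self)            # backup before a risky self-modification

    def fork(self, n):
        return [copy.deepcopy(self) for _ in range(n)]   # spin up n copies

em = EmState("alice", synapses={"a->b": 0.7})
backup = em.snapshot()

em.synapses["a->b"] = float("nan")            # a self-hack goes wrong...
em = backup                                   # ...so restore from the backup
print(em.synapses)                            # {'a->b': 0.7}

team = em.fork(4)                             # raw material for a "supermind"
print(len(team), "copies ready to be linked")
```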
In the released-into-the-jungle scenario, the ems that succeed may be the ones that abandon their original human values in favor of whatever’s successful in an economic or Darwinian sense. There is also a risk of a single mind pattern (or a small number of them) coming to dominate.
Someone has to look after the bioconservatives, i.e. the people who don’t want their minds uploaded.
Em technology can allow enormous amounts of undetectable suffering to be created.
Ems → enormous economic growth → new unforeseeable technologies → x-risks aplenty.
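A bit of toy arithmetic behind that chain (the numbers are made-up assumptions, purely to show the mechanism): copyable minds that run faster than biological brains can compress an enormous amount of subjective research effort into a short stretch of calendar time.

```python
# Toy arithmetic, with made-up numbers, for the growth -> new-technology link:
# copyable, fast-running ems compress research effort into calendar time.

calendar_years = 1
speedup = 1_000          # assumed em speed relative to a biological brain
researchers = 1_000_000  # assumed number of em researcher copies

subjective_years = calendar_years * speedup * researchers
print(f"{subjective_years:,} subjective researcher-years per calendar year")
# -> 1,000,000,000 subjective researcher-years per calendar year
```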
I can’t think of a scenario where ems would make AGI creation less likely to happen (except if they trigger some (other) global catastrophe first). Maybe “we already have ems, so why bother researching AI?” That seems weak: ems are hardware-expensive, and anyway people will research AGI just because they can.
Robin Hanson’s vision of em-world seems to include old-fashioned humans surviving by owning capital. I’m not sure this is feasible (requires property rights to be respected and value not to be destroyed by inflation).
Before the creation of the first em...
The appearance of a serious WBE project might trigger public outcry, laws, etc., for better or for worse. Once people understand what an em is, they are likely to be creeped out by it. Such laws might be general enough to retard all AI research; AI/AGI/em projects could be driven underground (but maybe only when there’s a strong business case for them); and libertarians will become more interested in AI research.
WBE research is likely to feed into AGI research. AGI researchers will be very interested in information about how the human brain works. (There’s even the possibility of a WBE vs AGI “arms race”).
Why would you simulate the geometry? Just simulate the connections.
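A minimal sketch of that point (toy structure only, not a real WBE format): if the emulation is stored as a weighted connectivity graph rather than a spatial simulation, “growing more neocortex” is just adding nodes and edges, and no geometry, Euclidean or otherwise, limits how large it can get.

```python
# Minimal sketch of "just simulate the connections": store the emulation as a
# weighted connectivity graph with no spatial coordinates at all.
# (Toy structure for illustration, not a real WBE format.)

from collections import defaultdict

synapses = defaultdict(dict)      # synapses[pre][post] = connection weight

def connect(pre, post, weight):
    synapses[pre][post] = weight

connect("n1", "n2", 0.8)          # original "head-sized" network
connect("n2", "n3", -0.3)

# "Growing more neocortex" is just adding nodes and edges; nothing spatial
# constrains how big the graph can get.
for i in range(4, 1004):
    connect(f"n{i-1}", f"n{i}", 0.5)

print(sum(len(post) for post in synapses.values()), "synapses simulated")
```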
From a utilitarian point of view, once you can create a VR, why would you want to create an FAI? On the other hand, if you create an FAI, wouldn’t you still want to create WBE and VR? If you had WBE and VR, would you want to create more existential risks by making an AGI? The motive would have to be non-utilitarian. “Because they can” seems about right.
Also, about the bioconservatives objecting and libertarians becoming more interested: a seastead would be the perfect retreat for WBE researchers.