The first uploads may be selected for high risk-tolerance.
The obvious solution would be to use cryopreserved brains. Perhaps this would be necessary anyway, because of all the moral and legal problems with slicing up a living person’s brain to take SEM images and map the connectome. This suggests that an extremely effective EA cause would be to hand out copies of Bostrom’s Superintelligence at cryonics conventions.
It’s not clear whether the cryonics community would be more or less horrified by defective spurs than the average person, though. Perhaps EAs could request to be revived early, at increased risk of information-theoretic death, if digital uploading is attempted and self-modifying AI is a risk. Perhaps the ideal would be a steady stream of FAI-concerned volunteers at the front of the line, so that the first successes are likely to be cautious about such things. Ideally, we wouldn’t upload anyone unconcerned with FAI until we had an FAI in place, but that may not be possible if there is a coordination problem between several groups across the planet. A race to the bottom seems like a risk, if Moloch has his say.
A possible (but probably smaller) source of positive selection is that, currently, people who are enthusiastic about uploading their brains correlate strongly with people who are concerned about AI safety.
I ordinarily wouldn’t make such a minor nitpick, but this might be an important distinction, so I’ll make an exception: People who worry about FAI are likely to also be enthusiastic about uploading, but I’m not sure the average person who is enthusiastic about uploading is worried about FAI. For most people, “AI safety” means self-driving cars that don’t hit people.
People who worry about FAI are likely to also be enthusiastic about uploading, but I’m not sure if the average person who is enthusiastic about uploading is worried about FAI.
Right, that’s why I said it would probably be a smaller source of selection, but the correlation is still strong, and goes in the preferred direction.
Ah, understood. We’re on the same page, then.