I have encountered the argument that safe brain uploads are as hard as friendly AI. In particular, this is offered as justification for focusing on the development of FAI rather than spending energy trying to make sure WBE (or an alternative based on a stronger understanding of the brain) comes first. I don’t yet understand or believe these arguments.
I don’t believe the literal “as hard as” claim, nor do I have the impression that such a strong claim is common.
I have not seen a careful discussion of these issues anywhere, although I suspect plenty have occurred. My question is: why would I support the SIAI instead of directing my money towards the technology needed to better understand and emulate the human brain?
The latter is usually pursued without safety in mind at all; supporting that seems like a bad idea. As far as I know, only SIAI and the Future of Humanity Institute have studied safe uploading, and no one is both doing actual neuroscience and taking safety into account.