What if we uploaded a person’s brain to a computer and ran 10,000 copies of it, and/or ran them very quickly?
Seems as-aligned-as-an-AGI-can-get (?)
The best argument against this I’ve heard is that technology isn’t built in a vacuum: before you have the technology to fully upload people’s brains, you probably have the technology to almost upload them and fill in the gap yourself, creating neuromorphic AI that has all the same alignment problems as any other AI.
Even so, I’m not convinced this is definitively true: if you can upload an entire brain at 80% of the necessary quality, “filling in” the last 20% does not strike me as an easy problem, and it might be easier to improve the fidelity of the upload than to engineer a fix for the gap.
Well, not as aligned as the best case: humans often screw things up for themselves and each other, and emulated humans might just do that, but faster. (Wei Dai might call this “human safety problems.”)
But it would probably still be good.
Unfortunately, I don’t think this informs strategy much, because as far as I can tell scanning brains is a significantly harder technical problem than building de novo AI.
I think the mere fact that it isn’t obvious ems will come before de novo AI is sufficient reason to worry about the case where they don’t. Possibly while directing more capabilities development toward creating ems (whatever that would look like)?
Also, would ems actually be powerful and capable enough to reliably stop a world-destroying non-em AGI, or an em about to make some world-destroying mistake because of its human-derived flaws? Or would we need to arm them with additional tools that fall under the umbrella of AGI safety anyway?
The only reason we care about AI Safety is that we believe the consequences are potentially existential. If they weren’t, there would be no need for safety.