What happens afterwards, I don’t know. A perfect upload is trivially aligned. I wouldn’t be that worried about random errors. (Brain damage, mutations, and drugs don’t usually make evil geniuses.) But the existence of uploading doesn’t stop alignment from being a problem. It may hand a decisive strategic advantage to someone, which could be a good thing if that someone happens to be worried about alignment.
Going from a big collection of random BMI data to uploads is hardish. There is no obvious, easily optimized metric, and it would depend on the particular BMI. I think it’s fairly likely something else happens first. Like, say, someone cracking a data-efficient learning algorithm. Or self-replicating nanotech. Or something.
An upload (an exact imitation of a human) is the most straightforward way of securing time for alignment research, except it’s not plausible in our world for uploads to be developed before AGIs. The plausible similar thing is more capable language/multimodal models, steeped in human culture, whose alignment guarantees look very dubious, at least a priori. And an upload probably needs to be value-laden to be efficient enough to give an advantage, while remaining exact in morally relevant ways, though there’s a glimmer of hope that generalization can capture this without a need to explicitly set up a fixpoint through extrapolated values. Doing the same with Tool AIs or something similar is only slightly less speculative than directly developing aligned AGIs without that miracle, so the advantage of an upload is massive.
Assuming, of course, that the first upload (or sufficiently humanlike model) is developed by someone actually trying to do this.