“And if the first uploaded person is not sufficiently rational, they will rapidly become Unfriendly AI”
“Will” is far too strong. Becoming UFAI requires, at minimum, that an upload be given sufficient ability to self-modify (or be sufficiently modified from outside), and that intelligence amplification (IA) up to superintelligence be not only tractable on uploads (likely but not guaranteed) but, if this is going to be the first upload, easy enough that lots more uploads don’t get made first. Digital intelligences are not intrinsically, automatically hard-takeoff risks, though it sounds like you’re modeling them that way. (Not to mention that, up to a point, insufficient rationality would make an upload less likely to ever successfully increase its intelligence.)
(That said, there are lots of risks and horrible scenarios involving uploads that don’t require strong superintelligence, just subjective speedup or copiability.)