I think I’d rather have an uploaded crow brain, its computational power and memory substantially increased, go FOOM than an arbitrary powerful optimization process; just because a neuromorphic AI wouldn’t have values that are precisely human doesn’t mean it would be totally devoid of value from our point of view.
I expect it would; even a human whose brain was meddled with to make it more intelligent would probably be a very bad idea, unless this modified human builds a modified-human-Friendly AI (in which case some value drift would probably be a fair price for protection from existential risk) or, even better, elicits a useful FAI theory Oracle-AI-style. The crucial question here is the character of the FOOM: how much of the initial values would be retained.