I’m not sure why you would be creeped out (assuming you accept an information-pattern theory of personal identity, you should regard this as being as good as talking to X in person). Can’t we assume that the AI knows that your desire to talk to X again is so strong that whatever creepiness factor there is will be overwhelmed by the elation and gratitude?
I can’t imagine any person I would want to talk to more than I want to contain a UFAI so it doesn’t destroy the world. And since this particular AI is trying to bribe me with a simulation, about the best outcome I’d expect going forward is that it drops everyone into utopia-simulations, which I tend to view as being patently against the whole idea of an FAI.
Also, any AI that has that much insight into me is going to be intrinsically terrifying, since it means it probably has me completely “hacked” within a few more sentences.
In short, it’s exactly the sort of AI where I value my commitment to destroying AIs, because it means I won’t hesitate to go “Creepy! Destroyed!”
(assuming you accept an information-pattern theory of personal identity, you should regard this as good as talking to X in person)
Nope. You have no way of knowing that the thing you are talking to is an accurate simulation of X. All you know is that it is something that said one line that could have come from X, keeping in mind that you are probably biased on the subject due to the emotional charge.
Inspired by my response to orthonormal:
AIEEE! Make the creepy thing go away! AI DESTROYED, AI DESTROYED!