Can you give examples of what you think humans’ capability to rewire another’s values is?
As for what justifies the assumption? Nothing. I’m not asking because I think AIs won’t have it; I’m asking so we can identify where the real problem lies. That is, I’m curious whether the real problem with bad AI behavior is entirely specific to advances in biological technology that eventual AIs will have access to but we don’t have today. If we can conclude that this is the case, it might help us understand how to tackle the problem. Another way to frame the question: take such an AI robot and drop it into today’s society. Will it start behaving badly immediately, or will it have to develop technology we don’t have today before it can behave badly?
Can you give examples of what you think humans’ capability to rewire another’s values is?
As plenty of religious figures have shown over the years, this capability is virtually unlimited. An AI would just have to start a new religion, or take over an existing one and adapt it to its liking.
And yet, as time goes on, civilization is progressing toward more secular values. It will be interesting to see where we are by the time strong AI is possible, especially since we will undoubtedly be changing ourselves to improve our own capabilities. As I said in one of the other comments, I think assuming that humanity as a whole can be persuaded to adopt unsavory values, even through religion, is too negative a view of humanity. Humanity’s history with religion is also filled with defiance, and an AI that values human well-being will not be pleased with the outraged reaction it provokes as it tries to gain followers through persuasive means.