The question is whether AIs can have a fixed UF: specifically, whether they can both self-modify and maintain their goals. If they can't, there is no point in loading them with human values upfront (since they won't stick to them anyway), and the problem of corrigibility becomes one of getting them to go in the direction we want, not of getting them to budge at all.
Which is not to say that goal-unstable AIs will be safe, but they do present different problems and require different solutions, which could do with being looked at some time.
In the face of instability, you can rescue the idea of the utility function by feeding in an agent's entire history, but rescuing the UF is not what is important; what matters is stability versus instability. I am still against the use of the phrase "utility function", because when people read it, they think "time-independent utility function", which is, I think, why there is so little consideration of unstable AI.
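To make that rescue concrete, here is a minimal sketch in notation of my own choosing (the symbols S, A, and the subscripts are illustrative assumptions, not taken from any standard formalism). A time-independent utility function maps world-states to reals:

$$U : S \to \mathbb{R}$$

The history-based rescue instead maps whole trajectories of states and actions to reals:

$$U_{\text{hist}} : (S \times A)^{*} \to \mathbb{R}, \qquad U_{\text{hist}}(s_0, a_0, s_1, a_1, \ldots, s_t)$$

Under the second definition, any sequence of apparent goal changes can be re-described as a single fixed function over histories, so "the agent has a fixed UF" becomes trivially true and tells you nothing about whether its goals are stable. That is why stability, not the UF formalism, is the load-bearing concept here.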