What is the difference between taking what you-on-reflection says as the definition of an agent’s preference, and running a program that just performs whatever actions you-on-reflection tells it to perform, without the indirection of going through preference?
On reflection, there probably is not much difference.
Well, there is a huge difference, it’s just not in whether the decisions of you-on-reflection get processed by some decision theory or repeated without change. The setup of you-on-reflection can be thought of as an algorithm, and the decisions or declared preferences are the results of its computation. Computation of an abstract algorithm doesn’t automatically get to affect the real world, since it may fail to actually get carried out, so it has to be channeled by a process that takes place there. And for the purpose of channeling your decisions, a program that just runs your algorithm is no good: it won’t survive AI x-risks (from other AIs, assuming the risks are not resolved), and so won’t get to channel your decisions. On the other hand, a program that runs a sufficiently sane decision theory might be able to survive (including by destroying everything else potentially dangerous to its survival) and eventually get around to computing your decision and affecting the world with it.
When discussing the idea of a program implementing what you-on-reflection would do, I think we had different ideas in mind. What I meant was that every action the AI takes would be its best approximation of what you-on-reflection would want. This doesn’t sound dangerous to me. I think that approval-based AI and iterated amplification with HCH would be two ways of approximating the output of you-on-reflection, and I don’t think they’re unworkably dangerous.
If the AI is instead allowed to take arbitrarily many unaligned actions before taking the actions you’d recommend, then you are right that that would be very dangerous. I think this was the idea you had in mind, but feel free to correct me.
If we did misunderstand each other, I apologize. If not, is there something I’m missing? I would think that a program that faithfully outputs some approximation of “what I’d want on reflection” for every action it takes would not perform devastatingly badly. I-on-reflection wouldn’t want the world destroyed, so I don’t think it would take actions that destroy it.