I am not an expert, but as I remember it, the claim was that “any system that follows certain axioms can be modeled as maximizing some utility function”. The axioms require that there are no circular preferences — if someone prefers A to B, B to C, and C to A, there is no utility function with u(A) > u(B) > u(C) > u(A) — and that if the system ranks A > B > C, it can also decide consistently between lotteries, e.g. between a 100% chance of B and a gamble with a 50% chance of A and a 50% chance of C.
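A minimal sketch of the first axiom (all names here are hypothetical, just for illustration): a strict preference relation containing a cycle cannot be represented by any real-valued utility function, because u(A) > u(B) > u(C) > u(A) is impossible.

```python
# Hypothetical illustration: circular preferences admit no utility function.
prefers = [("A", "B"), ("B", "C"), ("C", "A")]  # A > B, B > C, C > A

def has_cycle(prefs):
    """Return True if the strict preference relation contains a cycle."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, []).append(worse)
    def visit(node, seen):
        if node in seen:
            return True
        return any(visit(nxt, seen | {node}) for nxt in graph.get(node, []))
    return any(visit(start, set()) for start in graph)

print(has_cycle(prefers))  # True -> no consistent utility assignment exists
```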
I am not sure how this works when the system is allowed to take the current time into account, for example when it may prefer A to B on Monday but B to A on Tuesday. I suppose that in such a situation any system can be trivially modeled by a utility function that at each moment assigns utility 1 to whatever the system actually did at that moment, and utility 0 to everything else.
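A toy sketch of that trivial construction (names hypothetical): define u_t(x) = 1 if the system actually chose x at time t, else 0; then any behavior whatsoever maximizes this function.

```python
# Hypothetical sketch: every behavior maximizes the trivial utility function.
observed = {"Monday": "A", "Tuesday": "B"}  # what the system actually did

def trivial_utility(time, option):
    return 1 if observed.get(time) == option else 0

# The observed choice is always the utility-maximizing one:
for t, chosen in observed.items():
    assert chosen == max(["A", "B"], key=lambda x: trivial_utility(t, x))
```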
Corrigibility is incompatible with assigning utility to everything in advance. A system that has preferences about the future will also have a preference for not having its utility function changed. (For the same reason people prefer not to be brainwashed, or not to take addictive drugs, even if after being brainwashed they are happy about it, and after getting addicted they do want more drugs.)
A corrigible system would be like: “I prefer A to B at this moment, but if humans decide to fix me and make me prefer B to A, then I will prefer B to A.” In other words, either it doesn’t have values for u(A) and u(B), or it doesn’t always act according to those values. A consistent system that currently prefers A to B would prefer not to be fixed.
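A toy contrast of the two behaviors (all class and method names here are hypothetical, a sketch rather than anyone’s actual proposal): a consistent maximizer evaluates a proposed fix using its current utility function and refuses it; a corrigible agent defers and adopts the new one.

```python
class ConsistentMaximizer:
    def __init__(self, u):
        self.u = u  # fixed mapping: option -> utility value

    def choose(self, options):
        return max(options, key=self.u.get)

    def accept_fix(self, new_u):
        # Being fixed leads to futures it currently ranks lower,
        # so by its own lights the fix is bad: refuse.
        return False

class CorrigibleAgent(ConsistentMaximizer):
    def accept_fix(self, new_u):
        # Defer to humans: adopt the new preferences even though the
        # current ones rank the old choice higher.
        self.u = new_u
        return True

agent = CorrigibleAgent({"A": 1.0, "B": 0.0})
print(agent.choose(["A", "B"]))         # "A" -- prefers A at this moment
agent.accept_fix({"A": 0.0, "B": 1.0})  # humans fix it
print(agent.choose(["A", "B"]))         # "B" -- now prefers B
```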
I think John’s first bullet point was referring to an argument you can find in https://www.lesswrong.com/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-entail-goal-directed-behavior and related posts.