I was hoping somebody would come up with more schools… I think I could interpret the techniques of school 3 as a particular way to implement school 2's prescription of `make some edits before you input it into the reasoning engine', but maybe school 3 differs from school 2 in how it would describe its solution direction.
There is definitely also a school 4 (or maybe you would say this is the same one as school 3) which considers it an obvious truth that when you run simulations or start up a sandbox, you can supply any starting world state that you like, and there is nothing strange or paradoxical about this. Specifically, if you are an agent considering a choice between taking actions A, B, and C as the next action, you can run different simulations to extrapolate the results of each. If a self-aware agent inside the simulation for action B computes that an optimal agent would have taken action A at the point in time where its simulation started, this agent cannot conclude there is a contradiction: such a conclusion would rest on a category error. (See my answer in this post for a longer discussion of the topic.)
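To make the school 4 picture concrete, here is a minimal sketch (mine, not from the post being discussed) of an outer agent that seeds one simulation per candidate action from an arbitrarily chosen starting state. The world dynamics, the scoring rule, and the inner agent's "optimal action" computation are all hypothetical stand-ins; the only point illustrated is that the inner agent concluding "an optimal agent would have taken A" inside the B-simulation creates no contradiction, because the starting state and the forced first action are simply inputs the outer agent chose.

```python
import copy

def step(world_state, action):
    """Toy world dynamics: the chosen action is appended to the history."""
    new_state = copy.deepcopy(world_state)
    new_state["history"].append(action)
    return new_state

def inner_agent_optimal_action(world_state, candidate_actions, score):
    """What a self-aware agent *inside* the simulation computes as the
    action an optimal agent would have taken from the starting state."""
    return max(candidate_actions, key=lambda a: score(step(world_state, a)))

def simulate(start_state, first_action, score):
    """Roll the world forward from an arbitrary supplied starting state,
    forcing `first_action` as the first move regardless of what any
    inner agent believes the optimal first move would have been."""
    state = step(start_state, first_action)
    # The inner agent may conclude that an optimal agent would have picked
    # a *different* first action. That is a claim about a counterfactual
    # agent, not about the simulation it lives in, so nothing breaks.
    inner_view = inner_agent_optimal_action(start_state, ["A", "B", "C"], score)
    return score(state), inner_view

def choose(start_state, candidate_actions, score):
    """The outer agent: one simulation per candidate action, pick the best."""
    results = {a: simulate(start_state, a, score) for a in candidate_actions}
    best = max(results, key=lambda a: results[a][0])
    return best, results

if __name__ == "__main__":
    # Toy scoring rule under which action A happens to look best.
    score = lambda s: {"A": 3, "B": 2, "C": 1}[s["history"][-1]]
    best, results = choose({"history": []}, ["A", "B", "C"], score)
    for action, (value, inner_view) in results.items():
        print(f"simulated {action}: value={value}, "
              f"inner agent says optimal first action was {inner_view}")
    print("outer agent picks:", best)
```

Running this, the B- and C-simulations each contain an inner agent reporting that A was the optimal first action, while the simulation itself proceeds from B or C; the outer agent just reads off the scores and picks its action.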