Humans have easy-to-extract preferences over possible “wiser versions of ourselves.” That is, you can give me a menu of slightly modified versions of myself, and I can try to figure out which of those best capture my real values (or figure out what kind of process should be used to pick which of those best capture my real values, etc.). Those wiser versions of ourselves can in turn have preferences over even wiser/smarter versions of ourselves, and we can hope that the process might go on ad infinitum.
This seems like a pretty bold claim to me. We might be tempted to construe our ordinary decision-making process as doing this (I come up with what wiser-me would do in the next instant, and then do it), but to me this seems to misunderstand how decisions happen: it mistakes the abstractions of “decision” and “preferences” for the actual process that results in the world ending up in a causally subsequent state, which I might later look back on and reify as my having made some decision. Since I’m suspicious that something like this is going on even when the inferential distance is very short, I’m even more suspicious when the inferential distance is longer, as you seem to be proposing.
I’m not sure whether I’m arguing against your claim that the situations are not symmetrical, but I do think this particular reasoning for why they are not symmetrical is likely flawed, because it seems to assume that humans are fundamentally different from E. coli in a way that they are not.
(There are of course many differences between the two, just not ones that seem relevant to this line of argument.)