I think UDT manages to sidestep this question. Would you agree? (To be more explicit, UDT manages to make decisions without having to explicitly determine whether something in the world is a “causal implementation” of itself. It just makes logical deductions about the world from statements like “S outputs X” where S is a code string that is its own source code, and that seems to be enough.)
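Here’s a toy sketch to make that concrete (just an illustration, not a formal statement of UDT; the deduction from “S outputs X” is crudely stood in for by running a toy world program on each candidate output, and the world program and payoffs are made up for the example):

```python
# Toy sketch only: the "logical deduction from 'S outputs X'" step is
# approximated by evaluating a toy world program on each candidate output
# and picking the output whose consequences score best.

def world(agent_output):
    """Toy world program: pays 10 if the agent outputs "A", else 1."""
    return 10 if agent_output == "A" else 1

def udt_decision(candidate_outputs, utility_of_world):
    """For each candidate X, consider the consequence of "S outputs X"
    (here: just the world's payoff) and return the best X."""
    return max(candidate_outputs, key=utility_of_world)

print(udt_decision(["A", "B"], world))  # prints "A"
```

Note that nothing in this loop asks where the agent is instantiated inside the world; it only reasons about what the world looks like under each hypothesis about its own output.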
But unfortunately I can’t see how to similarly sidestep the problem of consciousness, if we humans are to make use of UDT in a formal way. The problem is that we don’t have access to our own source code, so we can’t write down S directly. All we have is access to our subjective sensations and memories, and it seems we need a theory of consciousness to tell us how to write down the description of an object (or a class of objects) given its subjective sensations and memories.
The situation with UDT is mysterious.
A UDT agent is a sort of ethereal thing, a class of logically-equivalent algorithms (up to rewriting and such) that can never believe it “sees” one universe—only the equivalence class of universes that gave it equivalent sensory inputs up to now. Okay, I can agree that it’s meaningless to ask “where” you are in the universe. But it doesn’t seem meaningless to ask you for your beliefs about your future sensory input #11, given sensory inputs #1-#10. Unfortunately, it’s hard to see how you can define such credences—the naive idea is to count different instantiations of the algorithm within the world program, but we just threw away our concept of what counts as an “instance”.
The equivalence class of algorithms is wider than one might think. For example, if (by way of some tricky mathematical fact) the algorithm’s output is in fact independent of the value of one of its inputs, say input #11, then the algorithm cannot “perceive” that input. In other words, you cannot register any sensation that doesn’t end up affecting your actions in the future. Weird, huh.
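A toy illustration of that last point (nothing formal, just to make it vivid): if the output is mathematically independent of input #11, then any two histories differing only at input #11 produce identical behavior, so that sensation never registers downstream.

```python
# An agent whose output ignores input #11 behaves identically on any two
# histories that differ only at input #11, so the difference is invisible
# in everything the agent does afterward.

def agent(inputs):
    """Output depends on every input except index 10, i.e. input #11."""
    return sum(x for i, x in enumerate(inputs) if i != 10) % 2

history_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]  # input #11 = 0
history_b = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1]  # input #11 = 1

assert agent(history_a) == agent(history_b)  # behaviorally indistinguishable
```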
You all may be interested in some recent (since 1990 or so) work in theoretical computer science dealing roughly with “what is observationally equivalent with what”. Google for strings including the keywords “bisimulation”, “process algebra”, and “observational equivalence”. Or maybe not—it is unclear to me what you think the problem really is.
UDT sidesteps that question as well, because while it makes decisions, it never needs to compute things like “beliefs about your future sensory input #11, given sensory inputs #1-#10”. I would say that a UDT agent doesn’t have such beliefs.
Not quite sure what this part has to do with what I wrote. If you still think it’s relevant, can you explain how?
Yes, it seems most of my comment was irrelevant, and even the original question was so weird that I can no longer make sense of it. Sorry.
Your answers have shown me that my original comment was wrong: the question of “algorithmicness” is uninteresting unless we imagine that algorithms can have “subjective experience”, which brings us back to consciousness again. Oh well, another line of attack goes dead.
A UDT agent is a program (axioms), not an algorithm (a theory). The way in which something is specified matters to the way it decides how to behave. If you are only talking about behavior, and not the underlying decision-making, then you can abstract away the details of how that behavior is generated, but in doing so you presuppose that condition.
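A toy way to see the distinction (just an illustration, not part of any UDT formalism): two programs with identical input-output behavior, i.e. the same “theory”, can still be different specifications, and anything that inspects the source, the way a UDT agent quotes S, can tell them apart.

```python
# Same behavior (same "theory"), different programs (different "axioms").

def fact_recursive(n):
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

assert all(fact_recursive(n) == fact_iterative(n) for n in range(10))      # same behavior
assert fact_recursive.__code__.co_code != fact_iterative.__code__.co_code  # different source
```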