Assuming the subject doesn’t want to get his head chopped off, you’re no longer asking “what does decision theory say you should do”; you’re asking “what does decision theory say you should do, given that certain kinds of analysis for working out which decision is best are not allowed”. Such a question may provide an incentive for the person sitting in front of a homicidal computer, but it doesn’t really illuminate decision theory much.
Also, the human can’t avoid getting his head chopped off by saying “I’ll just not make any decisions that trigger the halting problem”: trying to determine whether a line of reasoning will trigger the halting problem would itself trigger the halting problem. You can’t think of this as “either the human answers in a split second, or he knows he’s doing something that won’t produce an answer”.
(Of course, the human could say “I’ll just not make any decisions that are even close to the halting problem”, and avoid triggering the halting problem by also avoiding a big halo of other analyses around it. If he does that, my first objection applies even more strongly.)
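To make that circularity concrete, here is a minimal Python sketch of the standard diagonalization argument. The names (would_halt, contrarian) are hypothetical, and would_halt is only assumed to exist for the sake of argument; the construction shows why no such screening check can work in general.

```python
# Hypothetical checker, assumed to exist for the sake of argument:
# would_halt(reasoning, arg) is supposed to return True exactly when
# running reasoning(arg) would eventually finish.
def would_halt(reasoning, arg):
    raise NotImplementedError  # no total implementation can exist

def contrarian(reasoning):
    # Do the opposite of whatever the checker predicts about us.
    if would_halt(reasoning, reasoning):
        while True:       # checker says "halts", so loop forever
            pass
    else:
        return "done"     # checker says "loops", so halt immediately

# contrarian(contrarian) halts exactly when would_halt says it doesn't,
# so any would_halt you write is wrong on at least this input.
```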
I don’t know about that. The study of making decisions under significant constraints (e.g. time) looks very useful to me.
You’re the one who brought computational constraints into the problem, not me. In the abstract setting, a decision-theoretically optimal agent would have to be able to solve the halting problem.
If we start to consider real-world constraints, such as being unable to solve the halting problem, then real-world constraints like having a limit of five seconds to make a decision are totally reasonable as well.
As for how to avoid getting your head chopped off, it’s pretty easy; just press a button within five seconds.
What? Being unable to solve the halting problem is a theoretical constraint, not a real-world constraint.
If it’s purely theoretical then why can’t I have a hypercomputer? What’s wrong with simply solving the halting problem by using an oracle, or by running a Turing machine for infinitely many steps before I make my decision?
If I can’t have infinite time, then I might as well have 5 seconds.
You’re asking the same question three times.
Anyway, a halting oracle can determine whether an ordinary Turing machine program halts. It can’t determine whether it itself halts.
Any attempt to use an oracle can lead to X predicting Y, who in turn tries to predict X using an oracle. That can be equivalent to the oracle trying to determine whether it itself halts.
As for “if I can’t have infinite time, I might as well have 5 seconds”: that’s of course true, but it just means that both finite time and 5 seconds are bad.
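A rough Python sketch of that regress, with purely hypothetical agent names: if X decides by simulating Y, and Y decides by simulating X, neither deliberation ever bottoms out, which is exactly the self-referential question an embedded oracle would be asked to settle.

```python
# Hypothetical mutual predictors: each agent decides by simulating the other.
def decide_X():
    y_action = decide_Y()              # X best-responds to Y's predicted move
    return "press" if y_action == "wait" else "wait"

def decide_Y():
    x_action = decide_X()              # Y, symmetrically, simulates X
    return "wait" if x_action == "press" else "press"

try:
    decide_X()
except RecursionError:
    print("mutual simulation never bottoms out")
```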
OK, I think I’ve found a source of confusion here.
There are two fundamentally different questions one could ask:
1. What is the optimal action for X/X* to perform?
2. What computations should X/X* perform in order to work out which action ve should perform?
The first question is the standard decision-theoretic question, and in that context the halting problem is of no relevance because we’re solving the problem from the outside, not from the inside.
On the other hand, there is no point in taking the inside or “embedded” view unless we specifically want to consider computational or real-world constraints. In that context, the answer is that it’s pretty stupid for the agent to run a simulation of itself, because that obviously won’t work.
Any decision-making algorithm in the real world has to be smart enough not to go into infinite loops. Of course, such an algorithm won’t be optimal, but it would be very silly to expect it to be optimal except in relatively easy cases.
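As one concrete reading of “smart enough not to go into infinite loops”, here is a small sketch; the five-second figure comes from the scenario above, everything else (the function name, the candidate actions, the evaluator) is hypothetical. The idea is simply to cap the deliberation budget and fall back to the best answer found so far, trading optimality for guaranteed termination.

```python
import time

def decide(candidate_actions, evaluate, budget_seconds=5.0, default="press"):
    """Return the best action found before the time budget runs out.

    Each call to evaluate(action) is assumed to terminate on its own;
    the clock check between calls keeps the overall deliberation bounded,
    so the procedure always answers, just not necessarily optimally.
    """
    deadline = time.monotonic() + budget_seconds
    best_action, best_value = default, float("-inf")
    for action in candidate_actions:
        if time.monotonic() >= deadline:
            break                      # out of time: stop deliberating
        value = evaluate(action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Toy usage with a deliberately cheap evaluator, so it finishes well within budget.
print(decide(["press", "wait"], evaluate=lambda a: 1.0 if a == "press" else 0.0))
```

The obvious cost is the one conceded above: such an agent is only optimal when the search happens to finish inside the budget.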