“If you ask the AI when it made its decision it will either point to the time after the analysis or it will be wrong.”
I use “decision” precisely to refer to the experience that we have when we make a decision, and this experience has no mathematical definition. So you may believe yourself right about this, but you don’t have (and can’t have) any mathematical proof of it.
(I corrected this comment so that it says “mathematical proof” instead of proof in general.)
I think most people on LessWrong are using “decision” in the sense used in Decision Theory.
Making a claim, and then, when given counter-arguments, claiming that one was using an exotic definition seems close to logical rudeness to me.
It also does his initial position a disservice. Rereading the original claim with the professed intended meaning changes it from “not quite technically true” to, basically, nonsense (at least insofar as it claims to pertain to AIs).
I don’t think my definition is either exotic or inconsistent with the sense used in decision theory.
You defined decision as a mathematically undefinable experience and suggested that it cannot be subject to proofs. That isn’t even remotely compatible with the sense used in decision theory.
It is compatible with it as an addition to it; the mathematics of decision theory does not have decisions happening at particular moments in time, but it is consistent with decision theory to recognize that in real life, decisions do happen at particular moments.
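(For concreteness, a minimal sketch of the point about the formalism, using the standard expected-utility choice rule; the symbols A, S, P, U are the usual textbook ones, not taken from this thread:

\[ a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{s \in S} P(s)\, U(a, s) \]

Here A is the set of available acts, S the set of states, P the agent’s credence over states, and U the utility function. The rule picks out an act; nothing in it indexes the choice to a particular moment in time.)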
If you believe that we can’t have any proof of it, then you’re wasting our time with arguments.
You might have a proof of it, but not a mathematical proof.
Also note that your comment that I would be “wasting our time” implies you think you couldn’t be wrong.
How many legs does an animal have if I call a tail a leg and believe all animals are quadrupeds?
How many legs does a dog have if I call a tail a leg?