The final goal of a plan is a belief, i.e. the belief that state X currently holds. In your representation, this might appear as “X”, but semantically it’s always “believe(state(X))”.
If that means what I think it does, I disagree. If you employ enough sense of intentionality to call something a “goal”, then a self-referencing intelligence can refer to the difference between X obtaining and it believing X obtains, and choose not to wirehead itself into a useless stupor. This is what JGWeissman was getting at in Maximise Expected Utility, not Expected Perception of Utility.
I stated it poorly. Guess I’d better rewrite it. In the meantime, see my reply to Yvain below.
… time passes …
I didn’t rewrite it. I deleted it. That whole paragraph about believe(state(X)) contributed nothing to the argument. And, as you noted, it was wrong.
With that paragraph deleted, it was difficult for me (just reading it now) to make the inference connecting your argument to wishful thinking. You might want to spell it out.
I don’t think it’s because I deleted that paragraph. I think it was just unclear. I rewrote the second half.
Much improved, and accordingly upvoted.
I read this article after you deleted that paragraph, but I had basically the same objection reading “between the lines” of the rest of what you said.
Obviously, any animal that did something like this all the time would die. It’s possible that doing it to a limited degree might really happen. Is there a way to test your hypothesis?
What does the “something like this” in your sentence refer to?
Replacing a belief about what actually obtains, i.e. whether it has food, with the belief that the actions it is already taking (sitting in place) will obtain it food.
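To make that failure mode concrete, here is a minimal sketch of the distinction the thread keeps circling: maximizing utility over the actual world state versus maximizing utility over the agent’s own beliefs. Everything in it (the action names, the costs, the toy utility function) is an illustrative assumption, not anything from the article; it just shows why a perception-of-utility maximizer “sits in place” believing food will arrive.

```python
# Illustrative sketch only; "forage" / "sit_and_believe" and the numbers are assumptions.

# Each action is described by its effect on the actual world and on the agent's belief,
# plus an effort cost. "sit_and_believe" is the wishful-thinking move: the world is
# unchanged, but the agent comes to believe its current behaviour will obtain food.
ACTIONS = {
    "forage":          {"world": {"has_food": True},  "belief": {"has_food": True},  "cost": 0.5},
    "sit_and_believe": {"world": {"has_food": False}, "belief": {"has_food": True},  "cost": 0.0},
}

def utility(state):
    """Utility of a state description: 1 if food is obtained, else 0."""
    return 1.0 if state["has_food"] else 0.0

def choose(objective):
    """Pick the action maximizing the given objective net of effort cost."""
    return max(ACTIONS, key=lambda a: objective(ACTIONS[a]) - ACTIONS[a]["cost"])

# Maximize expected utility: score the actual world state the action produces.
best_real = choose(lambda outcome: utility(outcome["world"]))      # -> "forage"

# Maximize expected *perception* of utility: score the resulting belief state.
best_wishful = choose(lambda outcome: utility(outcome["belief"]))  # -> "sit_and_believe"

print(best_real, best_wishful)
```

Under these toy numbers the first objective forages; the second finds it cheaper to adopt the belief than to make it true, which is the wirehead/wishful-thinking outcome discussed above.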