They’re both kinda wrong because trying to declare one thing the “actual terminal value” is the wrong exercise to be engaging in in the first place!
I disagree. I’m not talking about the intentional stance or other such “external” descriptions. I’m claiming that if you took the explicit algorithmic implementation of the human mind and looked it over, you would find some kind of distinct “planner” part, and that part would be something like an idealized utility-maximizer with a pointer to the shard economy in place of its utility function.
It’s not a frame that can be kinda wrong/awkward to use. It’s a specific mechanistic prediction that’s either flat-out right or flat-out wrong.
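To make that prediction concrete, here’s a minimal toy sketch (every name in it — Shard, ShardEconomy, Planner — is hypothetical and illustrative, not drawn from any actual model of the brain): the planner is a generic argmax-over-plans search whose scoring function is nothing more than a pointer to whatever the shard economy currently outputs.

```python
from typing import Callable, Iterable, List

# Hypothetical sketch: the "planner" is a generic search procedure whose
# scoring function is just a pointer to the shard economy's aggregate output.

class Shard:
    """A contextually-activated value: maps a predicted outcome to a bid."""
    def __init__(self, name: str, bid: Callable[[dict], float]):
        self.name = name
        self.bid = bid

class ShardEconomy:
    """Aggregates shard bids into a single score for a predicted outcome."""
    def __init__(self, shards: List[Shard]):
        self.shards = shards

    def evaluate(self, outcome: dict) -> float:
        return sum(shard.bid(outcome) for shard in self.shards)

class Planner:
    """Idealized utility-maximizer: picks the plan whose predicted outcome
    the pointed-to shard economy scores highest."""
    def __init__(self, utility: ShardEconomy):
        self.utility = utility  # a pointer to the economy, not a fixed goal

    def choose(self, plans: Iterable[Callable[[], dict]]):
        return max(plans, key=lambda plan: self.utility.evaluate(plan()))

# Usage: two shards bid on outcomes; the planner just maximizes their aggregate.
economy = ShardEconomy([
    Shard("sugar", lambda o: 1.0 if o.get("cupcake") else 0.0),
    Shard("health", lambda o: 0.5 if o.get("salad") else 0.0),
])
planner = Planner(economy)
best = planner.choose([lambda: {"cupcake": True}, lambda: {"salad": True}])
print(best())  # {'cupcake': True}
```

The point of the sketch is that the Planner has no goals of its own: swap in a different economy and the same search machinery optimizes for that instead.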
This is related to my other warning about the word “actual”: this idea that you’re “actually” the control algorithm and not the cupcake heuristic
Mm, I’m more willing to relax this assumption. It ties into my model of self-awareness — I suspect it might be the case that the planner is the thing that’s being fed summaries of the brain’s state, making it literally the thing that’s having qualia. But I haven’t fully worked out my model of that.