A human is not well modelled as a wrapper mind; do you disagree?
Certainly agree. That said, I feel the need to lay out my broader model here. The way I see it, a “wrapper-mind” is a general-purpose problem-solving algorithm hooked up to a static value function (see the code sketch after this list). As such:
Are humans proper wrapper-minds? No, certainly not.
Do humans have the fundamental machinery to be wrapper-minds? Yes.
Is any individual run of a human’s general-purpose problem-solving algorithm essentially equivalent to wrapper-mind-style reasoning? Yes.
Can humans choose to act as wrapper-minds on longer time scales? Yes, approximately, subject to constraints like force of will.
Do most humans, in practice, choose to act as wrapper-minds? No; we switch our targets all the time, and value drift is ubiquitous.
Is it desirable for a human to act as a wrapper-mind? That’s complicated.
On the one hand, yes, because consistent pursuit of instrumentally convergent goals would leave you with more resources to spend on whatever values you have.
On the other hand, no, because we terminally value this sort of value drift and self-inconsistency; it’s part of “being human”.
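To make the definition above concrete, here’s a minimal sketch of the architecture I have in mind. Everything in it (the names, the choose-the-highest-scoring-plan loop) is an illustrative assumption, a toy model rather than a claim about how any real agent is implemented:

```python
# Toy sketch of a "wrapper-mind": a general-purpose planner wrapped
# around a value function that never changes over the agent's lifetime.
# All names here are hypothetical, for illustration only.

from typing import Callable, Iterable, TypeVar

Plan = TypeVar("Plan")
State = TypeVar("State")

def wrapper_mind_step(
    state: State,
    candidate_plans: Iterable[Plan],
    value: Callable[[State, Plan], float],  # static: fixed once, forever
) -> Plan:
    """One run of the problem-solving algorithm: return the plan the
    (unchanging) value function scores highest in the current state."""
    return max(candidate_plans, key=lambda plan: value(state, plan))

# Toy usage: a "paperclip" wrapper-mind choosing among three plans.
plans = ["build factory", "buy clips", "do nothing"]
clip_yield = {"build factory": 1000.0, "buy clips": 10.0, "do nothing": 0.0}
best = wrapper_mind_step(state=None, candidate_plans=plans,
                         value=lambda s, p: clip_yield[p])
print(best)  # -> "build factory"
```

On this picture, any single call to wrapper_mind_step looks like wrapper-mind-style reasoning (the “individual run” point above), while a human is better modelled as something that also rewrites the value function between calls, which is exactly the value drift the answers above point at.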
In sum, for humans, there’s a tradeoff between approximating a wrapper-mind and being an incoherent human, and different people weight it differently in different contexts. E.g., if you really want to achieve something (earning your first million dollars, averting extinction), and you value it more than having fun being a human, you may choose to act as a wrapper-mind in the relevant context/at the relevant scale.
As such: humans aren’t wrapper-minds, but they can act like them, and it’s sometimes useful to act as one.