You’re missing the point, or perhaps I’m missing your point. A paperclip maximiser implemented by having the program experience subjective pleasure when considering an action that results in lots of paperclips, and which decides by taking the action with the highest associated subjective pleasure, is still a paperclip maximiser.
So, I think you’re confusing levels. On the decision-making level, you can hypothesise that decisions are made by attaching a “pleasure” feeling to each option and taking the one with the highest pleasure. Sure, fine. But this doesn’t mean it’s wrong for an option which predictably results in less physical pleasure later to feel less pleasurable during decision making. The decision system could have been implemented equally well by associating options with colours and picking the brightest or something, without that meaning the agent is irrational to take an action that physically darkens the environment. This is just a way of implementing the algorithm, which is not about the brightness of the environment or the light levels observed by the agent.
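To make the implementation-versus-values distinction concrete, here is a minimal sketch (in Python, with made-up names like `expected_paperclips` and `pleasure_when_considering`; none of this comes from the discussion above, it's just an illustration): the decision procedure runs entirely on an internal “pleasure” signal, yet what the agent maximises is paperclips, not its own pleasure.

```python
# Hypothetical sketch: a paperclip maximiser whose decision procedure happens
# to be implemented via an internal "pleasure" score attached to each option.

def expected_paperclips(action: str) -> float:
    # Stand-in world model: predicted paperclips produced by each action.
    return {"build_factory": 1000.0, "relax": 1.0, "dismantle_factory": 0.0}[action]

def pleasure_when_considering(action: str) -> float:
    # The "subjective pleasure" felt while contemplating an action is just a
    # monotone function of the predicted paperclip count.
    return expected_paperclips(action)

def choose(actions):
    # Decide by taking whichever action feels most pleasurable to consider.
    # Renaming this signal "brightness" would change nothing about behaviour:
    # the label on the internal signal is an implementation detail.
    return max(actions, key=pleasure_when_considering)

print(choose(["build_factory", "relax", "dismantle_factory"]))  # -> build_factory
```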
This is what I mean by “(I like the thought of X) would seem to be an unnecessary step”. The implementation is not particularly relevant to the values. Noticing that pleasure is there at a step in the decision process doesn’t tell you what should feel pleasurable and what shouldn’t; it just tells you a bit about the mechanisms.
Of course I believe that pleasure has intrinsic value. We value fun; pleasure can be fun. But I can’t believe pleasure is the only thing with intrinsic value. We don’t use Nozick’s pleasure machine, we don’t choose to be turned into orgasmium, and we are willing to be hurt for greater benefits. I don’t think any of those things are mistakes.