You’re mixing up different things:
(A) a program which will produce an optimal existence for me;
(B) the actual optimal existence for me.
You’re saying that if (A) is so fully understood that I feel no excitement studying it, then (B) will likewise be unexciting.
This doesn’t follow. Tiny fully understood programs produce hugely varied and unanticipated outputs.
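To make that concrete, here is a minimal sketch in Python (Rule 30 is my own choice of illustration): the entire update rule fits on one line and is completely understood, yet the pattern it grows from a single live cell is so irregular that its center column has been used as a pseudo-random number generator.

```python
# Rule 30: a tiny, fully understood program with wildly irregular output.
def rule30_step(cells):
    """One step of Rule 30 with periodic (wrap-around) boundaries."""
    n = len(cells)
    # Rule 30's whole logic: new cell = left XOR (center OR right)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1  # start from a single live cell

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Knowing the rule perfectly does not make the thirtieth row any less surprising to look at.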
If someone fully understands (and is bored by) the laws of quantum mechanics, it doesn’t follow that they are bored by art or architecture or economics, even though everything in the universe (including art or architecture or economics) is eventually an application (many, many layers removed) of particle physics.
Another point that doesn’t follow is your seeming assumption that “predictable” and “well-understood” are the same as “boring”. Not all feelings of beauty and appreciation stem from surprise or ignorance.
You’re saying that if (A) is so fully understood that I feel no excitement studying it, then (B) will likewise be unexciting.
Then I wasn’t clear enough, because that’s not what I tried to say. I tried to say that from the subjective perspective of a program that completely understands a human being and its complex values, the satisfaction of these complex values will be no more interesting than wireheading.
Tiny fully understood programs produce hugely varied and unanticipated outputs.
If someone fully understands (and is bored by) the laws of quantum mechanics, it doesn’t follow that they are bored by art or architecture or economics...
You can’t predict art from quantum mechanics. You can’t predictably self-improve if your program is unpredictable. Given that you accept planned self-improvement, I claim that the amount of introspection required to do so makes your formerly complex values appear simple.
Another point that doesn’t follow is your seeming assumption that “predictable” and “well-understood” are the same as “boring”. Not all feelings of beauty and appreciation stem from surprise or ignorance.
I never claimed that. The point is that a lot of what humans value now will be gone or strongly diminished.
Then I wasn’t clear enough, because that’s not what I tried to say.
I think you should stop using words like “emulation” and “computation” when they’re not actually needed.
I claim that the amount of introspection required to do so makes your formerly complex values appear simple.
Okay, then my answer is that I place value on things and people and concepts, but I don’t think I place terminal value on whether those things/people/concepts are simple or complex. So, again, I don’t think I’d care whether I would be considered simple or complex by someone else, or even by myself.