For realists, wireheading isn’t a good aim. For anti-realists, it is the only aim.
Realism doesn’t preclude ethical frameworks that endorse wireheading.
I’m less clear about the second part, though.
Rejecting (sufficiently well implemented) wireheading requires valuing things other than one’s own experience. I’m not yet clear on how one goes about valuing things other than one’s own experience in an instrumentalist framework, but then again I’m not sure I could explain to someone who didn’t already understand it how I go about valuing things other than my own experience in a realist framework, either.
See The Domain of Your Utility Function.
No, but they are a minority interest.
If someone accepts that reality exists, you have a head start. Why do anti-realists care about accurate prediction? They don’t think predictive models represent an external reality, and they don’t think accurate models can be used as a basis to change anything external. Either prediction is an end in itself, or it’s for improving inputs.
My understanding of shminux’s position is that accurate models can be used, somehow, to improve inputs.
I don’t yet understand how that is even in principle possible on his model, though I hope to improve my understanding.
Your last statement shows that you have much to learn from TheOtherDave about the principle of charity. Specifically, don’t assume the other person is stupider than you are without a valid reason. So, if you come up with a trivial objection to their point, consider that they might have come across it before and addressed it in some way. They might still be wrong, but likely not in the obvious ways.
So where did you address it?
The trouble, of course, is that sometimes people really are wrong in “obvious” ways. Probably not high-status LWers, I guess.
It happens, but this should not be the initial assumption. And I’m not sure who you mean by “high-status LWers”.
Sorry, just realized I skipped over the first part of your comment.
Doesn’t that depend on the prior? I think most holders of certain religious or political beliefs, for instance, do so for trivially wrong reasons*. Perhaps you mean it should not be the default assumption here?
*Most conspiracy theories, for example.
I was referring to you. PrawnOfFate should not have expected you to make such a mistake, given the evidence.