First of all, is the existence of such an agent implausible? Not really, considering that there are masochists out there and that, to some individuals, ignorance is bliss.
Why argue for plausibility of something when it clearly exists? I have personally met several people who fit your definition of obscurantist and I don’t doubt that you have too.
How much, then, will be left of an obscurantist’s identity upon coherently extrapolating their desires? The answer is probably not much, if anything at all.
Is there some argument for the probable answer? I don’t find it obvious.
Always good to be reminded that different people find different things obvious and, for exactly this reason, a little redundancy doesn’t hurt in the first case!
To answer your second question: an obscurantist might want to act as if they did not know certain propositions, but CEV extrapolates desires on the basis of knowledge that might include those same propositions, the ignorance of which constitutes a core part of the obscurantist’s identity.
There are obscurantists who wear their obscurantism as attire, proudly claiming that it is impossible to know whether God exists. It can be said, perhaps, that such an obscurantist has a preference for not knowing the answer to the question, for never storing a belief that God does (not) exist in his brain. Still, all the obscurantist’s decisions are the same as if he believed that there is no God; the obscurantist belief has no influence on his other preferences.

In such a case, you may well argue that the extrapolated volition of the obscurantist is to act as if he knew the answer, and that the obscurantist beliefs are therefore shattered. But this is also true of his non-extrapolated volition. If the non-extrapolated volition already ignores the obscurantist belief and can coexist with it, why is this possibility excluded for the extrapolated volition? Because of the “coherent” part? Does coherence of volition require that one is not mistaken about one’s actual desires?

(This is an honest question. I think that “volition” refers to the set of desires, which in the case of CEV is to be made coherent by extrapolation, but that it doesn’t refer to beliefs about the desires. I haven’t been interested in CEV that much, though, and may be mistaken about this.)
The more interesting case is an obscurantist who holds obscurantism as a worldview with real consequences. To stick to what is plausible (I am not sure whether this kind of obscurantist exists in non-negligible numbers), imagine a woman who holds that the efficacy of homoeopathic remedies can never be established with any reasonable certainty. Now suppose she gets cancer and has two treatment options: a conventional one, with a 10% chance of success, and a homoeopathic one, with a 0.1% chance (equal to that of a spontaneous remission). But, in accordance with her obscurantism, she believes that assigning anything other than 50% to homoeopathy working would mean that we know the answer here, and since we cannot know, homoeopathy indeed has a 50% chance of success.
Acting on these beliefs, she decides on the homoeopathic treatment. One of her desires is to survive, which upon extrapolation leads to choosing the conventional treatment, creating a conflict with her actual decision. But isn’t it plausible that another of her desires, namely to always decide as if the chance of homoeopathy working were 50%, is strong enough to survive the extrapolation and take precedence over the desire to survive? People have died for their beliefs many times.
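To put numbers on the conflict, here is a minimal worked comparison using only the figures from the example (the subscript labels for the two belief sets are mine, not CEV terminology):

$$P_{\text{obscurantist}}(\text{survival} \mid \text{homoeopathic}) = 0.5 \;>\; 0.1 = P(\text{survival} \mid \text{conventional})$$

$$P_{\text{informed}}(\text{survival} \mid \text{homoeopathic}) = 0.001 \;<\; 0.1 = P(\text{survival} \mid \text{conventional})$$

Given only the desire to survive, the first assignment recommends the homoeopathic treatment and the second the conventional one; the question above is whether her desire to keep the first assignment can itself outweigh the desire to survive under extrapolation.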
Holding that the efficacy of homeopathics can never be established with any reasonable certainty != assigning a success chance of 50%.

Tell that to the hypothetical obscurantist.

Edit: I find it mildly annoying when, in answering a comment or post, people point out obvious things whose relevance to the comment or post is dubious, without further explanation. If you think that the non-equivalence of the mentioned beliefs somehow implies the impossibility of extrapolating obscurantist values, please elaborate. If you just thought that I might have committed a sloppy inference and that it would be cool to correct me on it, please don’t do that. It (1) derails the discussion into uninteresting nitpickery and (2) motivates commenters to clutter their comments with disclaimers in order to avoid being suspected of sloppy reasoning.
To answer your second question: an obscurantist might want to act as if they did not know certain propositions, but CEV extrapolates desires on the basis of knowledge that might include those same propositions, the ignorance of which constitutes a core part of the obscurantist’s identity.
What definition of CEV do you use that gets around the “were more the people we wished we were (...) extrapolated as we wish that extrapolated, interpreted as we wish that interpreted” part of CEV (page 6 here), which would block such an extrapolation against the obscurantist’s desires?
CEV against the obscurantist’s desires is a contradictio in terminis.
What definition of CEV do you use that gets around the “were more the people we wished we were (...) extrapolated as we wish that extrapolated, interpreted as we wish that interpreted” part of CEV (page 6 here), which would block such an extrapolation against the obscurantist’s desires?
None, as I simply don’t get around that part of CEV.
CEV against the obscurantist’s desires is a contradictio in terminis.
Indeed it is, but so could be the CEV of the obscurantist’s desires in the first place; that’s one of the issues I’m raising, to which I genuinely don’t know the answer.
To see how that could happen, consider the following analogy. Let q ::= “all literals in this conjunction are true” in the unsatisfiable conjunction ‘p ∧ ¬p ∧ q’; here ‘p’ stands for “if we knew more” (a statement taken from the same paragraph you quoted), while ‘¬p’ and ‘q’ stand for consequences of CEV’s remaining requisites.
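To make the analogy concrete, here is a minimal propositional rendering that treats q as an opaque atom and sets aside its self-reference (⊨ denotes entailment and ⊥ falsum):

$$p \land \lnot p \;\models\; \bot, \quad\text{hence}\quad p \land \lnot p \land q \;\models\; \bot \;\text{ for any } q.$$

However the extra conjunct q is read, the conjunction as a whole has no satisfying assignment; analogously, if CEV’s requisites turned out to be jointly contradictory for some agent, no extrapolation could satisfy all of them at once.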
I’m not sure how this chimes with “Do this force us to renounce to the idea of personal CEV [emphasis mine]? Hardly so.”
There are infinitely many possible ways of extrapolating desires. But if you don’t get around the “more the people we wished we were” part (etc.), let’s not call your musings on extrapolation “CEV”, because they don’t fit the major criteria.
If an obscurantist (or anyone else, for that matter) does not wish for his desires to change in any way, there is no personal CEV for him. Simple as that.
There may be other sensible ways of extrapolating / streamlining a utility function. It’s an open question, and one that’s much bigger than just CEV; the CEV part (as it’s defined) is often answered easily enough.
Assume there’s no personal CEV for certain obscurantists. Then we are left with a theory that’s supposed to tell us how to make people happy (i.e. CEV) and an example of an agent who cannot be made happy through their personal CEV (i.e. an obscurantist); since the whole point of CEV is desire-satisfaction, if that fails to occur, the proposal isn’t exactly fulfilling its role. You’re correct that my musings aren’t only about CEV: they relate to the bigger question of what the correct desire-satisfaction theory of well-being is, which in turn might require figuring out how to extrapolate utility functions.