Extrapolating an Obscurantist’s Volition
Consider the case of an obscurantist, i.e., an irrational agent who is proudly ignorant and opposes the spread of (certain) knowledge, even to themselves. First of all, is the existence of such an agent implausible? Not really, considering that there are masochists out there and that, to some individuals, ignorance is bliss. How much, then, will be left of an obscurantist’s identity upon coherently extrapolating their desires? The answer is probably not much, if anything at all.
Does this force us to renounce the idea of personal CEV? Hardly so. Instead, do we decry the legitimacy of the obscurantist’s desires? Perhaps, but a convincing argument must be provided for the ethical aspects of such a line of thought; a utilitarian could draw support from the societal benefits of increased epistemic hygiene in the absence of obscurantists.
In any case, this (admittedly contrived) example illustrates that there are pressing issues regarding CEV and personal identity. On a related note, I recently heard a leading decision theorist say that their greatest concern with Ideal Advisor Theories was that the desires end up being no longer the individuals’ but, rather, those of their advisors; it may well be the case that personal CEV runs into the same issues, at least under the obscurantist’s conditions.
The limiting case above also reveals a subtle interplay between knowledge and volition; our desires might (implicitly) involve not wanting to know certain propositions, wanting to not know certain propositions, not wanting to act as if we knew certain propositions, or wanting to act as if we did not know certain propositions.
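To keep the four variants apart, here is one possible formalization; the operators $W$ (“the agent wants that”), $K$ (“the agent knows that”), and $A$ (“the agent acts as if”) are my own shorthand, not anything from the CEV literature:

```latex
% Four distinct knowledge-related desires about a proposition P, written
% with the hypothetical operators W (wants), K (knows), A (acts as if):
\begin{align*}
  \neg W\,(K P)      &\qquad \text{not wanting to know } P \\
  W\,(\neg K P)      &\qquad \text{wanting to not know } P \\
  \neg W\,(A\,K P)   &\qquad \text{not wanting to act as if one knew } P \\
  W\,(A\,\neg K P)   &\qquad \text{wanting to act as if one did not know } P
\end{align*}
```

Note that the first pair differs only in the scope of the negation: the first expresses the mere absence of a desire, the second an active desire for ignorance; the second pair draws the same distinction at the level of behaviour rather than belief.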
What I just presented is not a rejection of the idea of personal CEV or of similar desire-satisfaction theories of well-being; rather, it aims to be a pointer to complications one must keep in mind when developing such proposals.
If you take away a mercenary’s money, he is unhappy, but by no means no longer a mercenary. Similarly, giving an obscurantist knowledge makes him unhappy, but it doesn’t make him no longer an obscurantist.
Sure, but the whole point of CEV is desire-satisfaction—in other words, making people happy—so if that fails to occur the proposal isn’t exactly fulfilling its role.
In any case, CEV needn’t result in knowledge being fed directly to the agent, so it hardly affects an obscurantist who merely doesn’t want to know certain propositions; more problematic is an obscurantist who wants to act as if they didn’t know certain propositions.
Wait, so now we need “convincing arguments” for our terminal values (edit: which follows from ‘we can decry terminal values using convincing arguments’)? They need to provide e.g. “societal benefits”? According to whom, or whose standard? What else do we need convincing arguments for, sexual preferences? Or just obscurantism?
No, I do not claim any such thing. To clarify, that statement and the related paragraph assume that there might be grounds for claiming irrational behaviour is to be decried. The reason for this is that a given utility function, with some exceptions, cannot be maximised if its maximiser shuns knowledge as an obscurantist would; this is not generalisable to “sexual preferences” as such, but applies to behaviours conflicting with self-improvement, rationality, preservation of utility functions, prevention of counterfeit utilities, self-protectiveness, efficient acquisition and use of resources, et cetera. The exceptions are utility functions that directly conflict with the aforementioned drives; I’m personally unsure whether to consider them legitimate, but do concede that others might be more opinionated.
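As a toy illustration of why knowledge-shunning tends to hurt an otherwise ordinary utility function, here is a minimal sketch; the betting game, the 0.8 coin bias, and both agents are hypothetical constructions of mine:

```python
import random

random.seed(0)
TRUE_HEADS_PROB = 0.8  # hypothetical bias of a coin the agents bet on

def bet_heads(believed_p):
    # Bet on whichever side the agent believes is more likely;
    # flip a fair coin when exactly indifferent.
    if believed_p != 0.5:
        return believed_p > 0.5
    return random.random() < 0.5

def average_payoff(believed_p, n=100_000):
    # Win 1 for a correct call, lose 1 for an incorrect one.
    total = 0
    for _ in range(n):
        heads = random.random() < TRUE_HEADS_PROB
        total += 1 if bet_heads(believed_p) == heads else -1
    return total / n

# The curious agent has learned the bias; the obscurantist refuses to,
# stays at 50%, and so can do no better than chance.
print(average_payoff(TRUE_HEADS_PROB))  # roughly +0.6 per bet
print(average_payoff(0.5))              # roughly 0 per bet
```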
Why argue for plausibility of something when it clearly exists? I have personally met several people who fit your definition of obscurantist and I don’t doubt that you have too.
Is there some argument for the probable answer? I don’t find it obvious.
Always good to be reminded that different people find different things obvious and, for exactly this reason, a little redundancy doesn’t hurt in the first case!
To answer your second question: an obscurantist might want to act as if they did not know certain propositions, but CEV extrapolates desires on the basis of knowledge that might include those same propositions, the ignorance of which constitutes a core part of the obscurantist’s identity.
There are obscurantists who wear their obscurantism as attire, proudly claiming that it is impossible to know whether God exists. It can be said, perhaps, that such an obscurantist has a preference for not knowing the answer to the question, for never storing a belief of “God does (not) exist” in his brain. But still, all the obscurantist’s decisions are the same as if he believed that there is no God—the obscurantist belief has no influence on his other preferences. In such a case, you may well argue that the extrapolated volition of the obscurantist is to act as if he knew the answer, and therefore the obscurantist beliefs are shattered. But this is also true for his non-extrapolated volition. If the non-extrapolated volition already ignores the obscurantist belief and can coexist with it, why is this possibility excluded for the extrapolated volition? Because of the “coherent” part? Does coherence of volition require that one is not mistaken about one’s actual desires? (This is an honest question; I think that “volition” refers to the set of desires, which is to be made coherent by extrapolation in the case of CEV, but that it doesn’t refer to beliefs about the desires. I haven’t been interested in CEV that much, though, and may be mistaken about this.)
The more interesting case is an obscurantist who holds obscurantism as a worldview with real consequences. Sticking to what is plausible (I am not sure whether this kind of obscurantist exists in non-negligible numbers), imagine a woman who holds that the efficacy of homoeopathics can never be established with any reasonable certainty. Now she may get cancer and have two possibilities for treatment: a conventional one, with a 10% chance of success, and a homoeopathic one, with a 0.1% chance (equal to that of a spontaneous remission). But, in accordance with her obscurantism, she believes that assigning anything except 50% to homoeopathy working would mean that we know the answer here, and since we can’t know, homoeopathy indeed has a success chance of 50%.
Acting on these beliefs, she decides for the homoeopathic treatment. One of her desires is to survive, which leads to choosing the conventional treatment upon extrapolation, thus creating a conflict with the actual decision. But isn’t it plausible that another of her desires, namely to always decide as if the chance of homoeopathy working were 50%, is strong enough to survive the extrapolation and take precedence over the desire to survive? People have died for their beliefs many times.
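To make the conflict concrete, here is a minimal sketch using the numbers from the example above; the two belief dictionaries and the survival-maximising rule are my own modelling assumptions:

```python
# Toy model of the example: the same desire (to survive) selects different
# treatments under the agent's actual beliefs and under the extrapolated,
# better-informed beliefs.

# Success probabilities the obscurantist actually decides with: she insists
# on 50% for homoeopathy, "since we cannot know".
obscurantist_beliefs = {"conventional": 0.10, "homoeopathic": 0.50}

# Success probabilities after extrapolation ("if we knew more"):
# homoeopathy does no better than spontaneous remission.
extrapolated_beliefs = {"conventional": 0.10, "homoeopathic": 0.001}

def best_treatment(beliefs):
    # Choose the treatment that maximises the believed chance of survival.
    return max(beliefs, key=beliefs.get)

print(best_treatment(obscurantist_beliefs))  # homoeopathic
print(best_treatment(extrapolated_beliefs))  # conventional
```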
Holding that the efficacy of homeopathics can never be established with any reasonable certainty != assigning a success chance of 50%.
Tell that to the hypothetical obscurantist.
Edit: I find it mildly annoying when, answering a comment or post, people point out obvious things whose relevance to the comment / post is dubious, without further explanation. If you think that the non-equivalence of the mentioned beliefs somehow implies the impossibility of extrapolating obscurantist values, please elaborate. If you just thought that I might have committed a sloppy inference and it would be cool to correct me on it, please don’t do that. It (1) derails the discussion into issues of uninteresting nitpickery and (2) motivates commenters to clutter their comments with disclaimers in order to avoid being suspected of sloppy reasoning.
What definition of CEV do you use that you get around the “were more the people we wished we were (...) extrapolated as we wish that extrapolated, interpreted as we wish that interpreted” part of CEV (page 6 here), which would block such an extrapolation against the obscurantist’s desires?
CEV against the obscurantist’s desires is a contradictio in terminis.
None, as I simply don’t get around that part of CEV.
Indeed it is, but so could be CEV of the obscurantist’s desires in the first place; that’s one of the issues I’m raising, to which I genuinely don’t know the answer. To see how that could happen, consider the following analogy. Let q ::= “all literals in this conjunction are true” in the unsatisfiable conjunction ‘p ∧ ¬p ∧ q’; here ‘p’ stands for “if we knew more”—a statement taken from the same paragraph you quoted—while ‘¬p’ and ‘q’ stand for consequences of the rest of CEV’s requisites.
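Spelling the analogy out: treating q as an atomic variable (its self-referential definition aside), a brute-force check over all truth assignments confirms that the conjunction can never hold, since ‘p ∧ ¬p’ already rules out every assignment. A minimal sketch:

```python
from itertools import product

# Enumerate every truth assignment for p and q and test the conjunction
# p AND (NOT p) AND q; no assignment satisfies it, because the first two
# literals are already contradictory.
satisfiable = any(p and (not p) and q
                  for p, q in product([True, False], repeat=2))
print(satisfiable)  # False: the requisites cannot be jointly met
```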
I’m not sure how this chimes with “Does this force us to renounce the idea of personal CEV [emphasis mine]? Hardly so.”
There are infinitely many possible ways of extrapolating desires. But if you don’t get around the part about “more the people we wished we were” (etc.), let’s not call your musings on extrapolation “CEV”, because they don’t fit the major criteria.
If an obscurantist (or anyone else, for that matter) does not wish for his desires to change in any way, there is no personal CEV for him. Simple as that.
There may be other sensible ways of extrapolating / streamlining a utility function. It’s an open question, and one that’s much bigger than just CEV; the CEV part (as it’s defined) is often answered easily enough.
Assume there’s no personal CEV for certain obscurantists; then we are left with a theory that’s supposed to tell us how to make people happy—i.e. CEV—and the example of an agent who cannot be made happy through their personal CEV—i.e. an obscurantist. As the whole point of CEV is desire-satisfaction, if that fails to occur then the proposal isn’t exactly fulfilling its role. You’re correct that my musings aren’t only on CEV, as they relate to the bigger question of what is a correct desire-satisfaction theory of well-being, which in turn might require figuring out how to extrapolate utility functions.