If I were to write the case for this in my own words, it might be something like:
There are many different normative criteria we should give some weight to.
One of them is “maximizing EV according to moral theory A”.
But maximizing EV is an intuitively less appealing normative criterion when (i) it’s super unclear and non-robust what credences we ought to put on certain propositions, and (ii) the recommended decision is very different depending on what our exact credences on those propositions are (see the toy sketch after this list).
So in such cases, as a matter of ethics, you might have the intuition that you should give less weight to “maximize EV according to moral theory A” and more weight to e.g.:
Deontic criteria that don’t use EV.
EV-maximizing according to moral theory B (where B’s recommendations are less sensitive to the propositions that are difficult to put robust credences on).
EV-maximizing within a more narrow “domain”, ignoring the effects outside of that “domain”. (Where the effects within that “domain” are less sensitive to the propositions that are difficult to put robust credences on).
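To make point (ii) concrete, here’s a toy numerical sketch (mine, not part of the original case, with entirely made-up payoffs): two hypothetical actions where the EV-maximizing choice flips under small changes to a single hard-to-assess credence.

```python
# Toy sketch with made-up numbers: how the EV-maximizing action can flip
# under small changes in a single non-robust credence.

def expected_values(p: float) -> dict[str, float]:
    """EV of two hypothetical actions, given credence p in a key proposition."""
    # Action A: huge payoff if the proposition holds, a modest cost otherwise.
    ev_a = p * 1000 + (1 - p) * (-10)
    # Action B: a small but robust payoff either way.
    ev_b = 5.0
    return {"A": ev_a, "B": ev_b}

for p in (0.005, 0.010, 0.020, 0.050):
    evs = expected_values(p)
    best = max(evs, key=evs.get)
    print(f"credence={p:.3f}  EV(A)={evs['A']:+7.2f}  EV(B)={evs['B']:+7.2f}  -> choose {best}")
```

Here the recommendation flips around a credence of roughly 1.5%, well within the range a reasonable representor might span, which is exactly the situation where giving less weight to “maximize EV according to moral theory A” feels intuitive.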
I like this formulation because it seems pretty arbitrary to me where you draw the boundary between a credence that you include in your representor vs. not. (Like: What degree of justification is enough? We’ll always have the problem of induction to provide some degree of arbitrariness.) But if we put this squarely in the domain of ethics, I’m less fussed about this, because I’m already sympathetic to being pretty anti-realist about ethics, and to there being some degree of arbitrariness in choosing what you care about. (And I certainly feel some intuitive aversion to making choices based on very non-robust credences, and it feels interesting to interpret that as an ~ethical intuition.)
(I’ll reply to the point about arbitrariness in another comment.)
I think it’s generally helpful for conceptual clarity to analyze epistemics separately from ethics and decision theory. E.g., it’s not just EV maximization w.r.t. non-robust credences that I take issue with, it’s any decision rule built on top of non-robust credences. And I worry that without more careful justification, “[consequentialist] EV-maximizing within a more narrow “domain”, ignoring the effects outside of that “domain”” is pretty unmotivated / just kinda looking under the streetlight. And how do you pick the domain?
(Depends on the details, though. If it turns out that EV-maximizing w.r.t. impartial consequentialism is always sensitive to non-robust credences (in your framing), I’m sympathetic to “EV-maximizing w.r.t. those you personally care about, subject to various deontological side constraints etc.” as a response. Because “those you personally care about” isn’t an arbitrary domain, it’s, well, those you personally care about. The moral motivation for focusing on that domain is qualitatively different from the motivation for impartial consequentialism.)
So I’m hesitant to endorse your formulation. But maybe for most practical purposes this isn’t a big deal, I’m not sure yet.
To be clear: The “domain” thing was just meant to be a vague gesture at the sort of thing you might want to do. (I was trying to include my impression of what e.g. bracketed choice is trying to do.) I definitely agree that the gesture was vague enough to also include some options that I’d think are unreasonable.
it seems pretty arbitrary to me where you draw the boundary between a credence that you include in your representor vs. not. (Like: What degree of justification is enough? We’ll always have the problem of induction to provide some degree of arbitrariness.)
To spell out how I’m thinking of credence-setting: Given some information, we apply different (vague) non-pragmatic principles we endorse — fit with evidence, Occam’s razor, deference, etc.
Epistemic arbitrariness means making choices in your credence-setting that add something beyond these principles. (Contrast this with mere “formalization arbitrariness”, the sort discussed in the part of the post about vagueness.)
I don’t think the problem of induction forces us to be epistemically arbitrary. Occam’s razor (perhaps an imprecise version!) favors priors that penalize a hypothesis like “the mechanisms that made the sun rise every day in the past suddenly change tomorrow”. This seems to give us grounds for having prior credences narrower than (0, 1), even if there’s some unavoidable formalization arbitrariness. (We can endorse the principle underlying Occam’s razor, “give more weight to hypotheses that posit fewer entities”, without a circular justification like “Occam’s razor worked well in the past”. Admittedly, I don’t feel super satisfied with / unconfused about Occam’s razor, but it’s not just an ad hoc thing.)
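As a toy sketch of that last point (my own illustration, with entirely made-up numbers): even if the strength of the Occam penalty is itself only vaguely constrained, every reasonable penalty strength lands the prior on “the sun rises tomorrow” in a narrow band well away from the endpoints of (0, 1).

```python
# Toy sketch (made-up numbers): an Occam-style penalty on extra posited
# mechanisms keeps the prior on "the sun rises tomorrow" in a narrow band,
# even though the penalty strength itself is only vaguely constrained.

import math

def prior_sun_rises(penalty_per_extra_mechanism: float) -> float:
    """Prior on 'sun rises tomorrow' under a simple two-hypothesis model."""
    # H_stable: the mechanisms that made the sun rise before persist (0 extra mechanisms).
    # H_change: those mechanisms suddenly change tomorrow (1 extra posited mechanism).
    w_stable = math.exp(-penalty_per_extra_mechanism * 0)
    w_change = math.exp(-penalty_per_extra_mechanism * 1)
    p_stable = w_stable / (w_stable + w_change)
    # If the mechanisms persist the sun rises; if they change, call it a coin flip.
    return p_stable * 1.0 + (1 - p_stable) * 0.5

# Vagueness about the right penalty strength (formalization arbitrariness)
# still leaves prior credences roughly in [0.97, 1.0], not anywhere in (0, 1).
for penalty in (3.0, 5.0, 10.0):
    print(f"penalty={penalty:4.1f}  prior(sun rises tomorrow) = {prior_sun_rises(penalty):.4f}")
```

The residual spread across penalty strengths is the kind of formalization arbitrariness mentioned above; the shared commitment to penalizing extra posited mechanisms is the part that isn’t arbitrary.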
By contrast, pinning down a single determinate credence (in the cases discussed in this post) seems to require favoring epistemic weights for no reason. Or at best, a very weak reason that IMO is clearly outweighed by a principle of suspending judgment. So this seems more arbitrary to me than indeterminate credences, since it injects epistemic arbitrariness on top of formalization arbitrariness.
Thanks. It still seems to me like the problem recurs. The application of Occam’s razor to questions like “will the Sun rise tomorrow?” seems more solid than e.g. random intuitions I have about how to weigh up various considerations. But the latter do still seem like a very weak version of the former. (E.g. both do rely on my intuitions; and in both cases, the domain has something in common with cases where my intuitions have worked well before, and something not-in-common.) And so it’s unclear to me what non-arbitrary standards I can use to decide whether I should let both, neither, or just the latter be “outweighed by a principle of suspending judgment”.
(General caveat that I’m not sure if I’m missing your point.)
Sure, there’s still a “problem” in the sense that we don’t have a clean epistemic theory of everything. The weights we put on the importance of different principles, and how well different credences fulfill them, will be fuzzy. But we’ve had this problem all along.
There are options other than (1) purely determinate credences or (2) implausibly wide indeterminate credences. To me, there are very compelling intuitions behind the view that the balance among my epistemic principles is best struck by (3) indeterminate credences that are narrow in proportion to the weight of evidence and how far principles like Occam seem to go. This isn’t objective (neither are any other principles of rationality less trivial than avoiding synchronic sure losses). Maybe your intuitions differ, upon careful reflection. That doesn’t mean it’s a free-for-all. Even if it is, this isn’t a positive argument for determinacy.
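If it helps, here’s one way to picture option (3) (my sketch, not a claim about how anyone in this thread formalizes it): a representor of Beta priors whose spread of posterior means narrows as the weight of evidence grows.

```python
# Illustrative sketch (assumed model): represent an indeterminate credence as
# a set of Beta(a, b) priors and watch the interval of posterior means narrow
# as evidence accumulates.

# A crude representor: a few priors spanning the initial disagreement.
priors = [(1, 9), (1, 1), (9, 1)]  # Beta(a, b) hyperparameters

def posterior_mean_interval(successes: int, failures: int) -> tuple[float, float]:
    """Range of posterior means across the representor after seeing the data."""
    means = [(a + successes) / (a + b + successes + failures) for a, b in priors]
    return min(means), max(means)

for n in (0, 5, 50, 500):
    successes = int(0.7 * n)  # pretend roughly 70% of observations come out positive
    lo, hi = posterior_mean_interval(successes, n - successes)
    print(f"n={n:4d}  credence interval = [{lo:.3f}, {hi:.3f}]  width = {hi - lo:.3f}")
```

With little or no evidence the interval stays wide (i.e. judgment is largely suspended), and it only tightens toward something close to a determinate credence once the evidence actually warrants it, which is the “narrow in proportion to the weight of evidence” idea.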
both do rely on my intuitions
My intuitions about foundational epistemic principles are just about what I philosophically endorse — in that domain, I don’t know what else we could possibly go on other than intuition. Whereas, my intuitions about empirical claims about the far future only seem worth endorsing as far as I have reasons to think they’re tracking empirical reality.
Also, my sense is that many people are making decisions based on similar intuitions to the ones you have (albeit with much less of a formal argument for how this can be represented or why it’s reasonable). In particular, my impression is that people who are uncompelled by longtermism (despite being compelled by some type of scope-sensitive consequentialism) are often driven by an aversion to very non-robust EV estimates.