This is a very good point, thank you. I have some tentative thoughts in response, but I will have to think about it carefully.
Here’s a question in the meantime: do you think that what you say is addressed in / is essentially the same as what I write in this comment elsethread? Or is this something else entirely?
I think my point is different, although I have to admit I don’t entirely grasp your objection to Nostalgebraist’s objection. I think Nostalgebraist’s point about rules being gameable does overlap with my example of multi-agent systems, because clear-but-only-approximately-correct rules are exploitable. But I don’t think my argument is about it being hard to identify legitimate exceptions. In fact, astrophysicists would have no difficulty identifying when it’s the right time to stop using Newtonian gravity.
But my point with the physics analogy is that sometimes, even if you actually know the correct rule, and even if that rule is simple (Navier-Stokes is still just one equation), you still might accomplish a lot more by using approximations and just remembering when they start to break down.
That’s because Occam’s-razor-simple rules like “to build a successful business, just turn a huge profit!” or “air is perfectly described by this one-line equation!” can be very hard to apply: hard to synthesize into specific new business plans or airplane designs, and hard even to use for making predictions about existing business plans or airplane designs.
I guess a better example is: the various flavours of utilitarianism each convert complex moral judgements into simple, universal rules to maximize various measures of utility. But even with a firm belief in utilitarianism, you could still be stumped about the right action in any particular dilemma, just because it might be really hard to calculate the utility of each option. In this case, you don’t feel like you’ve reached an “exception” to utilitarianism at all—you still believe in the underlying principle—but you might find it easier to make decisions using an approximation like “try not to kill anybody”, until you reach edge-cases where that might break down, like in a war zone.
You might not even know whether eating a cookie will increase or decrease your utility, so you stick to an approximation like “I’m on a diet” to simplify your decision-making process until you reach an exception like “this is a really delicious-looking / unusually healthy cookie”, in which case you decide it’s worth dropping the approximation and reaching for the deeper rules of utilitarianism to make your choice.
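In code-sketch form, the pattern I have in mind looks something like this (a toy model: the class, the thresholds, and the utility function are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Cookie:
    deliciousness: float     # hypothetical 0-10 scale
    unusually_healthy: bool

def utility_of_eating(cookie: Cookie) -> float:
    # The "deeper rule": in principle the correct computation, in practice
    # costly and uncertain. A dummy stand-in here.
    return cookie.deliciousness - (0.0 if cookie.unusually_healthy else 5.0)

def approximation_breaks_down(cookie: Cookie) -> bool:
    # The recognized edge cases at which the cheap approximation stops
    # being trustworthy.
    return cookie.deliciousness > 9 or cookie.unusually_healthy

def should_eat(cookie: Cookie) -> bool:
    # Default: the cheap approximation "I'm on a diet" (never eat).
    if not approximation_breaks_down(cookie):
        return False
    # Edge case reached: drop the approximation, consult the deeper rule.
    return utility_of_eating(cookie) > 0
```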
> I think my point is different, although I have to admit I don’t entirely grasp your objection to Nostalgebraist’s objection.
Oh, I don’t object to what nostalgebraist says! I think it’s entirely right. (Also, to be clear, his post was written some time before my comment, so it’s not in any way a response to the latter.)
I say only that, although what he says does seem to pose a serious challenge to (or even a contradiction of) my post, the post’s thesis nonetheless survives that challenge intact, if not unscathed: mostly because no alternative to my approach deals with the challenge any better.
> I guess a better example is: the various flavours of utilitarianism …
Actually… I think this is a much worse example—because, in fact, I think such difficulties are entirely fatal to utilitarianism! (In fact I think that utilitarianism’s inadequacy as a moral theory is overdetermined—that is, that there are several reasons to reject it, each one sufficient on its own—but the sorts of problems you mention are certainly among those reasons.)
But let me return to your original examples, physics and business. Having thought about the matter a bit, it now seems to me that the position you are arguing against, which you (by implication) ascribe to me, is something of a strawman.
The sort of situation I am referring to is one where you have (a) a rule that is applicable to a given class of situations, and (b) some phenomenon by which exceptions to the rule [i.e., specific situations where you don’t follow the rule, but instead do something else] arise. The claim I am making at the end of the post is that (b) is not some unfathomable black box from which, unexpectedly and unpredictably, exceptional cases spring, but rather a comprehensible set of criteria; and that (a) and (b) together constitute the actual “rule”—which, by construction, lacks exceptions. (And then there is the additional claim that there’s a benefit to making all of this explicit, and basing your decisions on it; this is the primary subject of the post.)
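(If it helps, here is the structure of that claim as a code sketch; every name in it is invented purely for illustration:)

```python
# Sketch of the structure above: (a) a rule applicable to a class of
# situations, (b) explicit criteria by which exceptions arise, and the
# "actual rule" composing the two, which by construction has no exceptions.

KNOWN_EXCEPTIONAL_CASES = {"war zone", "unusually delicious cookie"}

def rule_a(situation: str) -> str:
    # (a): what the stated rule tells you to do.
    return "do the standard thing"

def exception_criteria_b(situation: str) -> bool:
    # (b): not an unfathomable black box, but a comprehensible set of
    # criteria. Any newly discovered "exception" gets folded in here.
    return situation in KNOWN_EXCEPTIONAL_CASES

def actual_rule(situation: str) -> str:
    # (a) + (b) together: the rule you actually follow, with no exceptions.
    if exception_criteria_b(situation):
        return "do the exceptional thing instead"
    return rule_a(situation)
```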
Now, it seems to me (and please correct me if I’m wrong here) that you are misreading me in two ways.
Firstly, it seems as if you are reading me as saying that (a) and (b) should be, or are, not two separate things but just one thing (and perhaps even that this one thing is, or should be, a simple thing). But I’m not saying anything of the sort! For instance, you say:
> In fact, astrophysicists would have no difficulty identifying when it’s the right time to stop using Newtonian gravity.
Well and good! This is entirely consistent with my point. Here the “actual rule” would be something like: “relativity, plus whatever criteria we use to determine when to use Newtonian physics instead”. Clearly, this rule has no exceptions! (And if it does, well, whence those exceptions? How do physicists decide those are exceptions? However they did, whatever criteria they used—into the rule they go…)
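In the same sketch form as above (with thresholds that are entirely illustrative, not tolerances physicists actually standardize on):

```python
C = 299_792_458.0  # speed of light, m/s

def choose_gravity_model(speed: float, grav_potential: float) -> str:
    """Illustrative 'actual rule' for gravity calculations: relativity,
    plus criteria for when the Newtonian approximation suffices."""
    slow = speed / C < 1e-3                         # v << c
    weak_field = abs(grav_potential) / C**2 < 1e-6  # |phi| / c^2 << 1
    if slow and weak_field:
        return "Newtonian gravity"  # the approximation is accurate enough
    return "general relativity"
```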
Secondly, the situations I am referring to are, as I said, those where you have a rule that’s applicable to a given class of situations. By this I mean that you have some rule that tells you precisely what to do, but sometimes instead of doing that thing, you do (or, at least, are tempted to do) a different thing (i.e., you sometimes encounter [potentially] exceptional cases).
For example, if you have the rule “don’t eat cookies”, and you encounter a cookie, your rule is very clear on what you are to do: don’t eat the cookie. There’s no ambiguity here, no confusion or uncertainty. Should you eat this cookie? The rule says: no. You should not eat the cookie. End of story. That you are sometimes tempted to ignore, a.k.a. break, the rule, does not change the fact that the rule unambiguously dictates your actions. (The question, then, is why you’re tempted to make the exception, and exactly in what sorts of cases, etc.)
But note that this is not the case in your examples! If the rule, supposedly, is “use the Navier-Stokes equation”, but that equation is, in practice, impossible to calculate, then the rule doesn’t actually dictate your actions! It’s not that you know exactly what the answer is but are unwilling to accept it; you just don’t have the answer! The supposed “rule” isn’t really any such thing. And in business it’s even worse: yes, “just turn a huge profit”, but what do I actually do? Specifically? I don’t know! I’m not tempted to break the rule, not at all; actually, I’d love to follow it, if only I knew how… but I don’t have any idea how! So I have to use something other than this purported “rule” in order to decide what to do.