Certain models of the Pentium processor had errors in their FPU. Some floating point calculations would give the wrong answers. The reason was that in a lookup table inside the FPU, a few values were wrong.
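The failure mode can be sketched in miniature. This is not the Pentium's actual radix-4 SRT divider; it's a hypothetical toy divider that looks up an approximate reciprocal by the divisor's leading bits, so that corrupting a single table entry silently breaks only the divisions whose divisor lands in that one bucket, while everything else stays exactly the same:

```python
# A toy analogy for the FDIV failure mode -- NOT the Pentium's actual
# radix-4 SRT divider.  Division here goes through a small reciprocal
# lookup table indexed by the divisor's leading bits, so corrupting a
# single table entry breaks only the divisions whose divisor falls
# into that one bucket.

ENTRIES = 64

def reciprocal_table():
    # table[i] approximates 1/b for divisors b in the bucket
    # [1 + i/ENTRIES, 1 + (i+1)/ENTRIES), using the bucket midpoint.
    return [1.0 / (1.0 + (i + 0.5) / ENTRIES) for i in range(ENTRIES)]

def table_divide(a, b, table):
    # Normalize the divisor into [1, 2), as a binary FPU would,
    # scaling the dividend to keep the quotient unchanged.
    assert a > 0 and b > 0
    while b >= 2.0:
        a, b = a / 2.0, b / 2.0
    while b < 1.0:
        a, b = a * 2.0, b * 2.0
    idx = int((b - 1.0) * ENTRIES)   # leading bits pick the bucket
    return a * table[idx]            # crude quotient: a * (1/b)

good = reciprocal_table()
bad = list(good)
bad[40] = good[40] * 1.0006          # one wrong entry, error ~6e-4

# Most operands never touch entry 40, so the two "chips" agree exactly:
assert table_divide(10, 3, good) == table_divide(10, 3, bad)
# But a divisor that lands in bucket 40 quietly comes out wrong:
assert table_divide(10, 1.0 + 40.5 / ENTRIES, good) != \
       table_divide(10, 1.0 + 40.5 / ENTRIES, bad)
```

The real bug had the same shape: a handful of missing entries in the SRT quotient-digit table, reached only by particular divisor bit patterns, which is why "lots of other examples it gets right" was both true and beside the point.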
Now, consider the following imaginary conversation:
Customer: “There’s a bug in your latest Pentium.” (Presents copious evidence ruling out all other possible causes of the errors.)
Intel: “Those aren’t errors, the chip’s working exactly as designed. Look, here’s the complete schematic of the chip, here’s the test results for the actual processor, you can see it’s working exactly as manufactured.”
Customer: “But the schematic is wrong. Look, these values that it lists for that lookup table are wrong, that’s why the chip’s giving wrong answers.”
Intel: “Those values are exactly the ones the engineers put there. What does it mean to say that they’re ‘wrong’?”
Customer: “It means they’re wrong, that’s what it means. The chip was supposed to do floating point divisions according to this other spec here.” (Gestures towards relevant standards document.) “It doesn’t. Somewhere between there and the lookup table, someone must have made a mistake.”
Intel: “The engineers designing it took the spec and made that lookup table. The table is exactly what they made it to be. It makes no sense to call it ‘wrong’.”
Customer: “The processor says that 4195835⁄3145727 = 1.333739068902037589. The right answer is 1.333820449136241002. That’s a huge error, compared with the precision it’s supposed to give.”
Intel: “It says 1.333739068902037589, so that’s the answer it was designed to give. What does that other computation have to do with it? That’s not the calculation it does. I still can’t see the problem.”
Customer: “It’s supposed to be doing division. It’s not doing division!”
Intel: “But there are lots of other examples it gets right. You’re presenting it with the wrong problems.”
Customer: “It’s supposed to be right for all examples. It isn’t.”
Intel: “It does exactly what it does. If it doesn’t do something else that you think it ought to be doing instead, that’s your problem. And if you want division, it’s still a pretty good approximation.”
I think this parallels a lot of the discussion on “biases”.
A version of Intel’s argument is used by Objectivists to prove that there is no perceptual error.
But there was a specification—IEEE 754—that the Pentium was supposed to be implementing, and wasn’t. There’s no similar objective standard for rationality.
There is.
That’s a poem, not a specification.
It’s a poem and a specification.
Not in any way that is meaningful from an engineering point of view.
I do not agree. (Point of view = Ph.D. biomedical engineering.)
I have a sad that you didn’t challenge me on my previous reply to you; that means that you’ve written me off as an interlocutor, probably on the suspicion that I’m a hopeless fanboy.
...which, on reflection, would be no more than I deserve for going into pissing-match mode and not being straightforward about my point of view. Oh well.
I felt that the discussion wasn’t going to become productive, hence I disengaged.
Upvoted, but I would like to point out that it is not immediately obvious that the template can be modified to suit instrumental rationality as well as epistemic rationality; at a casual inspection the litany appears to be about epistemology only.
The corresponding specification for instrumental rationality would be the VNM axioms, wouldn’t it?
No.
If working standing as opposed to sitting will increase my health,
I desire to have the habit of working standing.
If working standing as opposed to sitting will decrease my health,
I desire to have the habit of working sitting.
Let me not become attached to habits that do not serve my goals.
Note also that there are some delightful self-fulfilling prophecies that mix epistemic and instrumental rationality, with a hint of Löb’s Theorem:
If believing that (taking this sugar pill will cure my headache) will mean (taking this sugar pill will cure my headache),
I desire to believe that (taking this sugar pill will cure my headache).
If believing that (taking this sugar pill will not cure my headache) will mean (taking this sugar pill will not cure my headache),
I desire to believe that (taking this sugar pill will cure my headache).
Let me not become attached to self-fulfilling beliefs that disempower me.
For a much more in-depth look, see this article by LWer BrienneStrohl:
Lob’s Theorem Cured My Social Anxiety
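For reference, here is the theorem the litany above echoes. In provability logic (with □ read as "is provable", and in the litany's analogy, "is believed"):

```latex
% Löb's Theorem: for a theory T containing enough arithmetic,
% if T proves "if P is provable, then P", then T proves P outright.
\text{If } T \vdash \Box P \rightarrow P, \text{ then } T \vdash P.
% Internalized form: \Box(\Box P \rightarrow P) \rightarrow \Box P.
```

The sugar-pill schema swaps provability for belief: when believing P is itself what makes P true, the hypothesis "belief in P implies P" licenses adopting the belief.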
Yes, that’s roughly the reformulation I settled on. Except that I omitted ‘have the habit’ because it’s magical-ish—desiring to have the habit of X is not what actually achieves the habit of X; rather, desiring to X strongly enough to actually X is what builds the habit of X.