It’s sort of like thinking that a machine learning professional who did sales optimization for an orange company couldn’t possibly do sales optimization for a banana company, because their skills must be about oranges rather than bananas.
This is a terrible analogy. It assumes what you’re trying to prove, oversimplifies a complex issue, and isn’t even all that analogous to the issue at hand. Sales optimization for a banana company is obviously related to sales optimization in an orange company; not so with Oracle AI and Friendly AI.
I don’t see how it assumes what it’s trying to prove. The analogous case is not about the relationship between Oracle AI and Friendly AI. For A:B::C:D to be a good analogy, C:D should have the same relationship that you’re asserting A:B has, A:B should be relevantly similar to C:D, and A, B, C, and D should all be different things. You can argue that it fails at one or several of those, but it really isn’t begging the question unless you end up with something like A:B::A:B.
An analogy should be a simplification. In using an analogy, one is assuming the reader is not sufficiently versed in the complexities of A:B but will see the obviousness of C:D.
Thank you for putting it in such clear language. In this case, C and D (banana sales and orange sales) are defined to be obviously identical, even to the layperson. To claim A:B::C:D is a drastic oversimplification of the actual relationship between A and B, a relationship that has a number of properties that the relationship between C and D does not have. Moreover, the analogy does not demonstrate why A:B::C:D; it simply asserts that it would be oh-so-obvious to anyone that D is identical to C and then claims that the case of A and B is the same. Consequently, the analogy is used as an assertion, a way of insisting on A:B to the reader rather than demonstrating why it is so.
The analogy on its own is just an assertion. That assertion is backed up by detailed points in the rest of the article demonstrating the asserted similarities, like the required skills of looking at a mathematical specification of a program and predicting how that program will really behave, finding methods of choosing actions/plans that are less expensive than searching the entire solution space but still return a result high in the preference order, and specifying the preference order to actually reflect what we want.
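(An illustrative aside, not from the article or the thread: the second of the skills listed above, choosing a plan that scores high in a preference order without scanning the whole solution space, can be made concrete with a toy sketch. Everything below, including the names preference and hill_climb and the toy scoring rule, is invented for illustration.)

```python
# Toy sketch: "return a plan high in the preference order without
# enumerating the whole solution space."  All details here are made up.
import itertools
import random

N_STEPS = 16  # a "plan" is a tuple of 16 binary choices -> 2**16 possible plans

def preference(plan):
    """Toy preference order: higher score for plans whose successive choices alternate."""
    return sum(1 for a, b in zip(plan, plan[1:]) if a != b)

def exhaustive_search():
    """Baseline: scan the entire solution space and keep the best plan."""
    return max(itertools.product((0, 1), repeat=N_STEPS), key=preference)

def hill_climb(iterations=2000):
    """Cheaper method: local search that only ever looks at a tiny fraction
    of the space, but usually lands near the top of the preference order."""
    plan = [random.randint(0, 1) for _ in range(N_STEPS)]
    for _ in range(iterations):
        i = random.randrange(N_STEPS)
        candidate = plan[:]
        candidate[i] ^= 1  # flip one choice
        if preference(candidate) >= preference(plan):
            plan = candidate
    return tuple(plan)

if __name__ == "__main__":
    print("exhaustive:", preference(exhaustive_search()))
    print("hill-climb:", preference(hill_climb()))
```

Exhaustive search is only the conceptual baseline; the hill climber stands in for the whole family of methods that return a result high in the preference order at far lower cost, which is the kind of skill the comment above is describing.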
Right, but the analogy itself doesn’t demonstrate why the assertion is true—see my other reply to thomblake. Yudkowsky’s analogy is like a political pundit comparing the economy to a roller coaster, but then using quotes from famous economists to support his predictions about what the economy is going to do. The analogy is superfluous and is being used as a persuasive tool, not an actual argument.
I agree that the analogy was not an argument, but I disagree that it isn’t allowed to be an explanation of the position one is arguing for. The analogy itself doesn’t have to demonstrate why the assertion is true, because the supporting arguments do that.
I don’t agree—a well-done analogy should mirror the inner structure of the inference and demonstrate how it works. For example, consider this classic Feynman quote:
[T]he mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) – disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, ‘False!’
Compare this to, say, a pundit making an analogy between the economy and a roller coaster (“They both go up and down!”). In the pundit’s case, the economy has surface similarities with the roller coaster, but the way you’d predict the behavior of the economy and the way you’d predict the behavior of a roller coaster are completely different, so the analogy fails. In Feynman’s case, the imaginary colored balls behave in a logically similar way to the conditions of the proof, and this isomorphism is what makes the analogy work.
Most analogies don’t meet this standard, of course. But on a topic like this, precision is extremely important, and the banana/orange sales analogy struck me as particularly sloppy.
Writing nitpick:
Sales optimization for a banana company is obviously related to sales optimization in an orange company; not so with Oracle AI and Friendly AI.
The goal with an analogy is to have the reader see the connection as obvious in the analogous case. It’s not a flaw.
Yes, but the analogy is a drastic oversimplification of the Oracle/FAI case, and it assumes the conclusion it is supposed to be demonstrating.
I agree, though I would count that as a criticism of analogies done well, rather than a criticism that this one was done badly.
I agree