With all due respect, this post reminds me of why I find the expectation-calculation kind of rationality dangerous.
IMO examples such as the first, with known probabilities and a straightforward way to calculate utility, are a total red herring.
In more realistic examples, you'll have to make many judgment calls, such as the choice of model and your best estimates of the basic probabilities and utilities, which will ultimately be grounded in the fuzzy, biased intuitive level.
I think you might reply that this isn’t a specific fault with your approach, and that everyone has to start with some axioms somewhere. Granted.
Now the problem, as I see it, is that picking these axioms (including quantitative estimates) once and for all, and then proceeding deductively, will amplify any errors in those initial choices. (A silly metaphor: it's a bit like getting from one point to another by calculating the angle and then walking in a straight line, instead of making corrections as you go. Dropping the metaphor, I'm not just talking about divergence over time, but also along the deduction.)
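The amplification worry can be made concrete with a toy calculation (the 5% figure and step count are invented purely for illustration):

```python
# Toy illustration: a small error in an initial estimate, carried
# through a chain of multiplicative inference steps, compounds.
# A 5% overestimate, propagated through 20 such steps, more than
# doubles the final answer.
initial_error = 1.05          # 5% too high at the start
steps = 20                    # length of the deductive chain
compounded = initial_error ** steps
print(round(compounded, 2))   # 2.65 — a 165% error at the end
```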
So now you have a conclusion that is still based on the fuzzy and intuitive, but which has an air of mathematical exactness... If the model is complex enough, you can probably reach any desired conclusion by inconspicuous parameter twiddling.
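To see how little twiddling it can take, here is a minimal expected-utility sketch; all the numbers and the model itself are invented for illustration, not taken from the post:

```python
def expected_utility(p_success, benefit, cost):
    """Expected utility of acting: gain `benefit` with probability
    p_success, otherwise pay `cost`."""
    return p_success * benefit - (1 - p_success) * cost

# Two analysts whose intuitive probability estimates differ by
# only one percentage point reach opposite recommendations:
eu_a = expected_utility(p_success=0.10, benefit=1000, cost=110)
eu_b = expected_utility(p_success=0.09, benefit=1000, cost=110)

print(eu_a > 0, eu_b > 0)  # True False — the 0.01 shift flips the sign
```

With high stakes on one side of the ledger, the decision boundary sits well inside the error bars of any intuitive estimate of `p_success`, which is exactly where inconspicuous twiddling does its work.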
My argument is far from "Omg it's so coldhearted to mix math and moral decisions!". I think math is an important tool in the analysis (incidentally, I'm a math student ;)), but that you should know its limitations and hidden assumptions when applying it to the real world.
I would consider an act of (intuitively wrongful) violence based on a 500-page utility expectation calculation no better than one based on elaborate logic grounded in scripture or ideology.
I think that, after being informed by rationality about all the value-neutral facts, intuition, as fallible as it is, should be the final arbiter.
I think these sacred (no religion implied) values you mention, and especially kindness, do serve an important purpose, namely as a safeguard against the subtly flawed logic I’ve been talking about.
I understand your point, and I believe Eliezer isn’t as naive as you might think. Compare Ethical Injunctions (which starts out looking irrelevant to this, but comes around by the end)...
To add to orthonormal’s link (see also the other posts around these two in the list of all posts):
Ends Don’t Justify Means (Among Humans)