I plead guilty to changing it, but not guilty to changing it in order to be able to argue. If you look a couple of paragraphs earlier in the comment in question you will see that I argue, explicitly, that surely people saying this kind of thing can’t actually mean that no simple theory can be useful in ethics, because that’s obviously wrong, and that the interesting claim we should consider is something more like “simple moral theories cannot account for all of human values”.
this tendency to imagine someone’s words being stupider than they really are, and then to argue with them.
Yup, that’s a terrible thing, and I bet I do it sometimes, but on this particular occasion I was attempting to do the exact opposite (not to you but to the unspecified others Gram_Stone wrote about—though at the time I was under the misapprehension that it was actually Gram_Stone).
Hmm. So maybe let’s state the issue in a more nuanced way.
We have argument A and counter-argument B.
You adjust argument A in direction X to make it stronger and more valuable to argue against.
But applying the same adjustment to B is not enough. To make B stronger in a similar way, it might need adjusting in direction -X, or in some other direction Y.
Does this look like it describes a bug that might have happened here? If not, feel free to drop the issue.
I’m afraid your description here is another thing that may suffer from “not enough concreteness” :-). In your analogy, I take it A is “simple moral theories are too neat to do any real work in moral philosophy” and X is what takes you from there to “simple moral theories can’t account for all of human values”, but I’m not sure what B is, or what direction Y is, or where I adjusted B in direction X instead of direction Y.
So you may well be right, but I’m not sure I understand what you’re saying with enough clarity to tell whether you are.
You caught me red-handed at not being concrete! Shame on me!
By B I meant applying the idea from “Say not ‘Complexity’”.
Your adjusting B in direction X is what I pointed out when I accused you of changing my original comment.
By Y I mean something like our later consensus, which boils down to (Y1) “we can use the heuristic of ‘simple doesn’t work’ in this case, because in this case we have pretty high confidence that that’s how it really is; this still doesn’t make it a method we can use for finding solutions, and it is dangerous to use without sufficient basis”.
Or it could even become (Y2) “we can get something out of considering those simple and wrong solutions”, which is close to Gram_Stone’s original point.