Perhaps I’m misunderstanding you [..] make a simple model of human values by making it say something like “what humans value is valuable”
Yes, you are. It need not be trivial; consider e.g. the aforementioned Coherent Extrapolated Volition idea (Eliezer, 2004). Considering that “what humans value” is neither clear nor consistent, any analysis would proceed on two fronts:
1. extracting and defining what humans value,
2. constructing a thinking framework that allows us to put it all together.
So we can have insights about 2 without progress in 1.
Makes sense?
Again, I don’t think Gram_Stone was proposing
Well, if you actually look at Gram_Stone’s post, he was arguing against the “simple doesn’t work” heuristic in moral philosophy. I think you might have automatically assumed that each successive speaker’s position must be in opposition to the others? It’s not important who said what; it’s important to sort out all the relevant thoughts and come out with more accurate beliefs than before.
However, there is an issue of discussion hygiene. I don’t “disagree”; I suggest alternative ways of looking at an issue. I tend to do this even when I am personally less inclined to support the position I appear to be defending than the opposing one. In Eliezer’s words, you need to fight not only the bad argument, but also the most terrible horror that can be constructed from its corpse.
So it makes me sad that you see this as “disagreeing”. I do this in my own head, too, and there need not be emotional againstness in pitting various beliefs against each other.
I am at least 95% sure I’m not you
Now if I also say I don’t think I’m really you, it will be just another similarity.
we can have insights about 2 without progress in 1. Make sense?
Yes, but I don’t think people saying that simple moral theories are too simple are claiming that no theory about any aspect of ethics should be simple. At any rate, in so far as they are I think they’re boringly wrong and would prefer to contemplate a less silly version of their position. The more interesting claim is (I think) something more like “no very simple theory can account for all of human values”, and I don’t see how CEV offers anything like a counterexample to that.
if you actually look at Gram_Stone’s post [...]
Ahem. You would appear to be right. Therefore, when I’ve said “Gram_Stone” in the above, please pretend I said something like “those advocating the position that simple moral theories are too simple”. As you say, it doesn’t particularly matter who said what, but I regret foisting a position on someone who wasn’t actually advocating it.
it makes me sad that you see this as “disagreeing”
I regret making you sad. I wasn’t suggesting any sort of “emotional againstness”, though. And I think we actually are disagreeing. For instance, you are arguing that saying “we should reject very simple moral theories because they cannot rightly describe human values” is making a mistake, and that it’s the mistake Eliezer was arguing against when he wrote “Say not ‘Complexity’”. I think saying that is probably not a mistake, and certainly can’t be determined to be a mistake simply by recapitulating Eliezer’s arguments there. Isn’t that a disagreement?
But I take your point, and I too am in the habit of defending things I “disagree” with. I would say then, though, that I am disagreeing with bad arguments against those things—there is still disagreement, and that’s not the same as looking at an issue from multiple directions.
I have a very strong impression that we “disagree” only insofar as we interpret each other’s words to mean something we can argue with.
Just now, you treated my original remark in this way by changing the quoted phrase, which was (when I wrote my comment) “Simple moral theories are too neat to do any real work in moral philosophy” but became (in your version) “simple moral theories cannot rightly describe human values”. Notice the difference?
I’m not defending my original comment; it was pretty stupid as I phrased it, in any case.
So of course you were right to argue and to correct me, and I thank you for that.
But it is still worrying to have this tendency to imagine someone’s words as stupider than they really were, and then to argue with them.
That’s what I mean when I say I wish we all could give each other more credit, and interpret others’ words in the best possible way, not the worst...
Agreeing with me here?
But in any case, I also wanted to note that there was not enough concreteness in this discussion from the start.
http://lesswrong.com/lw/ic/the_virtue_of_narrowness/ etc.
I plead guilty to changing it, but not guilty to changing it in order to be able to argue. If you look a couple of paragraphs earlier in the comment in question you will see that I argue, explicitly, that surely people saying this kind of thing can’t actually mean that no simple theory can be useful in ethics, because that’s obviously wrong, and that the interesting claim we should consider is something more like “simple moral theories cannot account for all of human values”.
this tendency to imagine someone’s words as stupider than they really were, and then to argue with them.
Yup, that’s a terrible thing, and I bet I do it sometimes, but on this particular occasion I was attempting to do the exact opposite (not to you but to the unspecified others Gram_Stone wrote about—though at the time I was under the misapprehension that it was actually Gram_Stone).
Hmm. So maybe let’s state the issue in a more nuanced way.
We have argument A and counter-argument B.
You adjust argument A in direction X to make it stronger and more valuable to argue against.
But it does not follow that the same adjustment should be applied to B. To make B stronger in a similar way, it might need adjusting in the direction -X, or in some other direction Y.
Does that look like it describes a bug that might have happened here? If not, feel free to drop the issue.
I’m afraid your description here is another thing that may have “not enough concreteness” :-). In your analogy, I take it A is “simple moral theories are too neat to do any real work in moral philosophy” and X is what takes you from there to “simple moral theories can’t account for all of human values”, but I’m not sure what B is, or what direction Y is, or where I adjusted B in direction X instead of direction Y.
So you may well be right, but I’m not sure I understand what you’re saying with enough clarity to tell whether you are.
You caught me red-handed at not being concrete! Shame on me!
By B I meant applying the idea from “Say not ‘Complexity’”.
Your adjusting B in direction X is what I pointed out when I accused you of changing my original comment.
By Y I mean something like our later consensus, which boils down to (Y1): “we can use the ‘simple doesn’t work’ heuristic in this case, because here we have pretty high confidence that that is how things really are; this still doesn’t make it a method for finding solutions, and it is dangerous to use without sufficient grounds.”
Or it could even become (Y2): “we can get something out of considering those simple and wrong solutions”, which is close to Gram_Stone’s original point.