OK, so I agree that that’s part of what Eliezer is saying under “Say not ‘complexity’”. But let’s be a bit more precise about it. He makes (at least) two separate claims.
The first is that “complexity should never be a goal in itself”. I strongly agree with that, and I bet Gram_Stone does too and isn’t proposing to chase after complexity for its own sake.
[EDITED to add: Oops, as SquirrelInHell points out later I actually mean not Gram_Stone but whatever other people Gram_Stone had in mind who hold that theories of ethics should not be very simple. Sorry, Gram_Stone!]
The second is that “saying ‘complexity’ doesn’t concentrate your probability mass”. This I think is almost right, but that “almost” is important sometimes. Eliezer’s point is that there are vastly many “complex” things, which have nothing much in common besides not being very simple, so that “let’s do something complex” doesn’t give you any guidance to speak of. All of that is true. But suppose you’re trying to solve a problem whose solution you have good reason to think is complex, and suppose that for whatever reason you (or others) have a strong temptation to look for solutions that you’re pretty sure are simpler than the simplest actual solution. Then saying “no, that won’t do; the solution will not be that simple” does concentrate your probability mass and does guide you—by steering you away from something specific that won’t work and that you’d otherwise have been inclined to try.
Again, this is dependent on your being right when you say “no, the solution will not be that simple”. That’s often not something you can have any confidence in. But if what you’re trying to do is to model something formed by millions of years of arbitrary contingencies in a complicated environment—like, e.g., human values—I think you can be quite confident that no really simple model is very accurate. More so, if lots of clever people have looked for simple answers and not found anything good enough.
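(A toy numerical sketch of the "concentrates your probability mass" point. The hypothesis space and the prior numbers below are invented for illustration; they are not from the thread or from Eliezer's post. The sketch just shows both halves of the argument: ruling out the too-simple candidate boosts each remaining candidate only modestly, but it removes a specific place you would otherwise have spent search effort.)

```python
# Toy illustration with assumed numbers: how ruling out a "too simple"
# candidate re-concentrates probability mass on the remaining hypotheses.

# Prior over candidate models of human values: index 0 is the tempting
# super-simple model; indices 1..9 are messier candidates.
prior = [0.40] + [0.60 / 9] * 9

# The argument "no really simple model is very accurate" amounts to
# conditioning on candidate 0 being wrong, then renormalizing.
mass_remaining = sum(prior[1:])
posterior = [0.0] + [p / mass_remaining for p in prior[1:]]

print(f"prior on each complex candidate:     {prior[1]:.4f}")      # 0.0667
print(f"posterior on each complex candidate: {posterior[1]:.4f}")  # 0.1111
# With vastly many complex candidates the per-candidate boost shrinks toward
# nothing (Eliezer's point); the guidance comes from what gets excluded.
```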
Here’s another of Eliezer’s posts that maybe comes closer to agreeing explicitly with Gram_Stone: Value is Fragile. Central thesis: “Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.” Note that if our values could be adequately captured by a genuinely simple model, this would be false.
(I am citing things Eliezer has written not because there’s anything wrong with disagreeing with Eliezer, but because your application here of what he wrote in “Say not ‘complexity’” seems to lead to conclusions at variance with other things he’s written, which suggests that you might be misapplying it.)
Central thesis: “Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.” Note that if our values could be adequately captured by a genuinely simple model, this would be false.
I think you are not fully accurate in your reasoning here. It is still possible to have a relatively simple and describable transformation that takes “humans” as an input value; see, e.g., http://intelligence.org/files/CEV.pdf (Now I’m not saying this is true in this particular case, just noting it for the sake of completeness.)
seems to lead to conclusions at variance with other things he’s written, which suggests that you might be misapplying it.
I’d say the message is consistent if you resist lumping the meta level and the object level together. On the meta level, “we need more complexity/messiness” is still a bad heuristic. On the object level, we have determined that simple solutions don’t work, so we are suspicious of them.
Thanks for pointing out the inconsistency; it certainly makes the issue worth discussing in depth.
Then saying “no, that won’t do; the solution will not be that simple” does concentrate your probability mass and does guide you—by steering you away from something specific that won’t work and that you’d otherwise have been inclined to try.
In practice, there’s probably more value in confronting your simple solution and finding an error in it than in dismissing it out of hand because it’s “too simple”. You just repeat this until you stop making errors of this kind, and what you have learned will be useful in finding a real solution. In this sense it might be harmful to use the notion that “complexity” sometimes concentrates your probability mass a little bit.
Meta-note: reading paragraphs 2-3 of your comment gave me a funny impression that you are thinking and writing like you are a copy of me. MYSTERIOUS MAGICAL SOULMATES MAKE RAINBOW CANDY FALL FROM THE SKY
[...] a relatively simple and describable transformation that takes “humans” as an input value [...]
Perhaps I’m misunderstanding you, but it sounds as if you’re saying that we might be able to make a simple model of human values by making it say something like “what humans value is valuable”. I agree that in some sense that could be a simple model of human values, but it manages to be one only by not actually doing its job.
On the meta level [...] On the object level [...]
Sure, I agree that in general “need more complexity” is a bad heuristic. Again, I don’t think Gram_Stone was proposing it as a general-purpose meta-level heuristic—but as an observation about apparent constraints on models of human values.
[EDITED to add: Oops, as SquirrelInHell points out later I actually mean not Gram_Stone but whatever other people Gram_Stone had in mind who hold that theories of ethics should not be very simple. Sorry, Gram_Stone!]
[...] more value in confronting your simple solution [...]
Yes, that could well be true. But what if you’re just getting started and haven’t arrived at a simple candidate solution yet? It might be better to save the wasted effort and just spurn the seduction of super-simple solutions. (Of course doing that means that you’ll miss out if in fact there really is a super-simple solution. Unless you get lucky and find it by starting with a hairy complicated solution and simplifying, I guess.)
Meta-note:
I am at least 95% sure I’m not you, and I regret that if I have the ability to make rainbow candy fall from the sky (other than by throwing it in the air and watching it fall) I’ve not yet discovered it. But, um, hi, pleased to meet you. I hope we’ll be friends. (Unless you’re secretly trying to convert us all to Raëlianism, of course.) And congratulations on thinking at a high enough level of abstraction to see someone as thinking like a copy of you when they write something disagreeing with you, I guess :-).
Perhaps I’m misunderstanding you [...] make a simple model of human values by making it say something like “what humans value is valuable”
Yes you are? It need not necessarily be trivial, e.g. the aforementioned Coherent Extrapolated Volition idea by Eliezer (2004). Considering that “what humans value” is not clear or consistent, any analysis would proceed on two fronts:
1. extracting and defining what humans value,
2. constructing a thinking framework that allows us to put it all together.
So we can have insights about 2 without progress in 1.
Makes sense?
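(A hypothetical sketch of the “relatively simple and describable transformation that takes humans as input” idea. The names and types below are invented for illustration and are not from the CEV paper; the point is only that front 2, the framework, can be short and describable, while the messiness of front 1 lives in the input and in the extraction step the framework defers to.)

```python
# Hypothetical illustration only; names and types invented, not from the CEV paper.
from typing import Callable, List

Human = dict        # stand-in for everything messy about actual people
ValueModel = dict   # stand-in for whatever "extracted human values" would be

def extrapolated_values(humans: List[Human],
                        extract: Callable[[List[Human]], ValueModel]) -> ValueModel:
    """Front 2: a short, describable framework that defers the hard part."""
    # The framework is simple to state; the complexity enters through `humans`
    # and through `extract` (front 1), which this sketch deliberately leaves open.
    return extract(humans)
```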
Again, I don’t think Gram_Stone was proposing
Well, if you actually look at Gram_Stone’s post, he was arguing against the “simple doesn’t work” heuristic in moral philosophy. I think you might have automatically assumed that each successive speaker’s position is in opposition to the others? It’s not important who said what; it’s important to sort out all the relevant thoughts and come out having more accurate beliefs than before.
However, there is an issue of discussion hygiene. I don’t “disagree”; I suggest alternative ways of looking at an issue. I tend to do it even when I am less inclined to support the position I seem to be defending than the opposing position. In Eliezer’s words, you need to fight not only the bad argument, but the most terrible horror that can be constructed from its corpse.
So it makes me sad that you see this as “disagreeing”. I do this in my own head, too, and there need not be emotional againstness in pitting various beliefs against each other.
I am at least 95% sure I’m not you
Now if I also say I don’t think I’m really you, it will be just another similarity.
we can have insights about 2 without progress in 1. Make sense?
Yes, but I don’t think people saying that simple moral theories are too simple are claiming that no theory about any aspect of ethics should be simple. At any rate, in so far as they are I think they’re boringly wrong and would prefer to contemplate a less silly version of their position. The more interesting claim is (I think) something more like “no very simple theory can account for all of human values”, and I don’t see how CEV offers anything like a counterexample to that.
if you actually look at Gram_Stone’s post [...]
Ahem. You would appear to be right. Therefore, when I’ve said “Gram_Stone” in the above, please pretend I said something like “those advocating the position that simple moral theories are too simple”. As you say, it doesn’t particularly matter who said what, but I regret foisting a position on someone who wasn’t actually advocating it.
it makes me sad that you see this as “disagreeing”
I regret making you sad. I wasn’t suggesting any sort of “emotional againstness”, though. And I think we actually are disagreeing. For instance, you are arguing that saying “we should reject very simple moral theories because they cannot rightly describe human values” is making a mistake, and that it’s the mistake Eliezer was arguing against when he wrote “Say not ‘Complexity’”. I think saying that is probably not a mistake, and certainly can’t be determined to be a mistake simply by recapitulating Eliezer’s arguments there. Isn’t that a disagreement?
But I take your point, and I too am in the habit of defending things I “disagree” with. I would say then, though, that I am disagreeing with bad arguments against those things—there is still disagreement, and that’s not the same as looking at an issue from multiple directions.
I have a very strong impression that we disagree only insofar as we interpret each other’s words to mean something we can argue with.
Just now, you treated my original remark in this way by changing the quoted phrase, which was (when I wrote my comment) “Simple moral theories are too neat to do any real work in moral philosophy” but became (in your version) “simple moral theories cannot rightly describe human values”. Notice the difference?
I’m not defending my original comment; it was pretty stupid the way I had phrased it in any case.
So of course, you were right when you argued and corrected me, and I thank you for that.
But it still is worrying to have this tendency to imagine someone’s words being stupider than they really were, and then arguing with them.
That’s what I mean when I say I wish we all could give each other more credit, and interpret others’ words in the best possible way, not the worst...
Agreeing with me here?
But in any case, I also wanted to note that this discussion had not enough concreteness from the start (see http://lesswrong.com/lw/ic/the_virtue_of_narrowness/ etc.).
I plead guilty to changing it, but not guilty to changing it in order to be able to argue. If you look a couple of paragraphs earlier in the comment in question you will see that I argue, explicitly, that surely people saying this kind of thing can’t actually mean that no simple theory can be useful in ethics, because that’s obviously wrong, and that the interesting claim we should consider is something more like “simple moral theories cannot account for all of human values”.
this tendency to imagine someone’s words being stupider than they really are, and then arguing with them.
Yup, that’s a terrible thing, and I bet I do it sometimes, but on this particular occasion I was attempting to do the exact opposite (not to you but to the unspecified others Gram_Stone wrote about—though at the time I was under the misapprehension that it was actually Gram_Stone).
Hmm. So maybe let’s state the issue in a more nuanced way.
We have argument A and counter-argument B.
You adjust argument A in direction X to make it stronger and more valuable to argue against.
But it is not enough to apply the same adjustment to B. To make B stronger in a similar way, it might need adjusting in direction -X, or some other direction Y.
Does it look like it describes a bug that might have happened here? If not, feel free to drop the issue.
I’m afraid your description here is another thing that may have “not enough concreteness” :-). In your analogy, I take it A is “simple moral theories are too neat to do any real work in moral philosophy” and X is what takes you from there to “simple moral theories can’t account for all of human values”, but I’m not sure what B is, or what direction Y is, or where I adjusted B in direction X instead of direction Y.
So you may well be right, but I’m not sure I understand what you’re saying with enough clarity to tell whether you are.
You caught me red-handed at not being concrete! Shame on me!
By B I meant applying the idea from “Say not ‘Complexity’”.
Your adjusting B in direction X is what I pointed out when I accused you of changing my original comment.
By Y I mean something like our later consensus, which boils down to (Y1) “we can use the heuristic of ‘simple doesn’t work’ in this case, because in this case we have pretty high confidence that that’s how it really is; which still doesn’t make it a method we can use for finding solutions and is dangerous to use without sufficient basis”
Or it could even become (Y2) “we can get something out of considering those simple and wrong solutions” which is close to Gram_Stone’s original point.
Heh, it’s okay. I had no idea that the common ancestor comment had generated so much discussion.
Also, I agree that the complex approach doesn’t seem obviously wrong to me either, and that until there’s something that makes it seem obviously wrong, we might as well let the two research paths thrive.