not right to just implicitly assume that they are the same thing.
Yes, good point. I was just listing words that people tend to throw around for that sort of problem. “Awesome” is likewise not necessarily “good”. I wonder how I might make that clearer...
If we take an outcome to be a world history, then “being turned into a whale for a day” isn’t an outcome.
Thanks for pointing this out. I forgot to elaborate on that. I take “turned into a whale for a day” to be referring to the probability distribution over total world histories consistent with current observations and with the turned-into-a-whale-on-this-day constraint.
Maybe I should have explained what I was doing… I hope no one gets too confused.
I’m having trouble reconciling this
“Awesomeness” is IMO the simplest effective pointer to morality that we currently have, but that morality is still inconsistent and dynamic. I take the “moral philosophy” problem to be working out in explicit detail what exactly is awesome and what isn’t, from our current position in morality-space, with all its meta-intuitions. I think this problem is incredibly hard to solve completely, but most people can do better than usual by just using “awesomeness”. I hope this makes that clearer?
VNM, or just the concept of utility function, implies consequentialism
In some degenerate sense, yes, but you can easily think up a utility function that cares about which rules you followed in coming to a decision, which is generally not considered “consequentialism”. It is, after all, part of the world history and therefore available to the utility function.
We may have reached the point where we are looking at the problem in more detail than “consequentialism” is good for. We may need a new word to distinguish mere VNM from rules-don’t-matter type stuff.
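The point above can be sketched in a few lines. Everything here is hypothetical (the `rule_used` field, the welfare numbers); the structure is what matters: the rule the agent followed is part of the world history, so a utility function may inspect it without ceasing to be a VNM utility function.

```python
# Sketch (hypothetical): a well-defined utility function over world
# histories that cares which decision rule the agent ran.

def utility(history):
    u = history["welfare"]                        # ordinary consequence term
    if history["rule_used"] == "honest_deliberation":
        u += 5.0                                  # credit for *how* it was decided
    return u

same_welfare_a = {"welfare": 10.0, "rule_used": "honest_deliberation"}
same_welfare_b = {"welfare": 10.0, "rule_used": "coin_flip"}
# Identical narrow "consequences", different utilities: 15.0 vs 10.0.
```

So VNM alone doesn’t force the rules-don’t-matter reading of “consequentialism”.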
I take “turned into a whale for a day” to be referring to the probability distribution over total world histories consistent with current observations and with the turned-into-a-whale-on-this-day constraint.
I don’t think this works for your post, because “turned into a whale for a day” implies I’m probably living in a universe with magic, and my expected utility conditional on that would be mostly determined by what I expect will happen with the magic for the rest of time, rather than the particular experience of being a whale for a day. It would no longer make much sense to compare the utility of “turned into a whale for a day” with “day with an orgasm” and “day without an orgasm”.
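This worry can be made concrete with a toy Monte Carlo sketch. Every ingredient here is hypothetical (the sampler, the utility numbers, the whale/magic correlation); the structural point is that conditioning on “turned into a whale for a day” drags in “magic exists”, whose long-run consequences then swamp the day itself.

```python
import random

def sample_history(rng):
    """Sample a toy 'total world history' as a few boolean features."""
    magic = rng.random() < 0.01          # magic universes are rare a priori
    return {
        "magic_exists": magic,
        "whale_day": magic and rng.random() < 0.1,  # whale-days require magic
        "orgasm_day": rng.random() < 0.5,
    }

def utility(history):
    """Toy utility over a total world history (illustrative numbers)."""
    u = 0.0
    if history["whale_day"]:
        u += 10.0      # the day itself
    if history["orgasm_day"]:
        u += 1.0
    if history["magic_exists"]:
        u += 1000.0    # the rest of time, in a magical universe, dominates
    return u

def expected_utility_given(constraint, n=200_000, seed=0):
    """Monte Carlo estimate of E[U | histories satisfying the constraint]."""
    rng = random.Random(seed)
    total = count = 0
    for _ in range(n):
        h = sample_history(rng)
        if constraint(h):
            total += utility(h)
            count += 1
    return total / count if count else float("nan")
```

With these made-up numbers, `expected_utility_given(lambda h: h["whale_day"])` lands near 1010 while `expected_utility_given(lambda h: h["orgasm_day"])` lands near 11; almost the whole gap comes from the implied magic rather than from the whale-day or orgasm-day experience itself.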
but most people can do better than usual by just using “awesomeness”
It’s possible that I judged your previous post too harshly because I was missing the “most people” part. But what kind of people do you think can do better by using “awesomeness”? What about, for example, Brian Tomasik, who thinks his morality mostly has to do with reducing the amount of negative hedons in the universe (rather than whales and starships)?
Oops. I suppose that to patch that, we have to postulate that we at least believe we live in a world where a wizard turning you into a whale is normal enough that you don’t totally re-evaluate everything you believe about reality, but rare enough that it would still be pretty awesome.
Thanks for catching that. I can’t believe I missed it.
What about, for example, Brian Tomasik, who thinks his morality mostly has to do with reducing the amount of negative hedons in the universe (rather than whales and starships)?
I would put that guy in the “needs awesomeism” crowd, but maybe he would disagree, and I have no interest in pushing it.
I don’t much like his “morality as hostile meme-warfare” idea either. In fact, I disagree with almost everything in that post.
Last night, someone convinced me to continue the writing trend that the OP is part of, and end up with a sane attack, or at least a scouting mission, on moral philosophy and CEV or CEV-like strategies. I do have some ideas that haven’t been discussed around here, and a competent co-philosopher, so if I can merely stay on the rails (very hard), it should be interesting.
EDIT: And thanks a lot for your critical feedback; it’s really helpful given that so few other people come up with useful competent criticism.
I don’t much like his “morality as hostile meme-warfare” idea either. In fact, I disagree with almost everything in that post.
What do you mean by “don’t like”? Is it epistemically wrong, or instrumentally bad, to think that way? I’d like to see your reaction to that post in more detail.
And thanks a lot for your critical feedback; it’s really helpful given that so few other people come up with useful competent criticism.
It seems to me that people made a lot more competent critical comments when Eliezer was writing his sequences, which makes me think that we’ve driven out a bunch of competent critics (or they just left naturally and we haven’t done enough to recruit replacements).
“Awesomeness” is IMO the simplest effective pointer to morality that we currently have, but that morality is still inconsistent and dynamic.
The more I think about “awesomeness” as a proxy for moral reasoning, the less awesome it becomes and the more like the original painful exercise of rationality it looks.
It’s too late for me. It might work to tell the average person to use “awesomeness” as their black box for moral reasoning as long as they never ever look inside it. Unfortunately, all of us have now looked, and so whatever value it had as a black box has disappeared.
You can’t tell me now to go back and revert to my original version of awesome unless you have a supply of blue pills whenever I need them.
If the power of this tool evaporates as soon as you start investigating it, that strikes me as a rather strong point of evidence against it. It was fun while it lasted, though.
It’s too late for me. It might work to tell the average person to use “awesomeness” as their black box for moral reasoning as long as they never ever look inside it. Unfortunately, all of us have now looked, and so whatever value it had as a black box has disappeared.
You seem to be generalizing from one example. Have you attempted to find examples of people who have looked inside the box and not destroyed its value in the process?
I suspect that the utility of this approach is dependent on more than simply whether or not the person has examined the “awesome” label, and that some people will do better than others. Given the comments I see on LW, I suspect many people here have looked into it and still find value. (I will place myself into that group only tentatively; I haven’t looked into it in any particular detail, but I have looked. OTOH, that still seems like strong enough evidence to call “never ever look inside” into question.)
The more I think about “awesomeness” as a proxy for moral reasoning, the less awesome it becomes and the more like the original painful exercise of rationality it looks.
see this
tl;dr: don’t dereference “awesome” in verbal-logical mode.