BTW, that post alone gets you half the way to my variant of theism.
I think you mean that it would get you halfway there. Do you have good reason to think it would do the same for others who aren’t already convinced? (It seems like there could be non-question-begging reasons to think that—e.g., it might turn out that people who’ve read and understood it quite commonly end up agreeing with you about God.)
I think most of the disagreement would be about the use of the “God” label, not about the actual decision theory. Wei Dai asks:
Or is anyone tempted to bite this bullet and claim that we should apply pre-rationality to our utility functions as well?
This is very close to my variant of theism / objective morality, and gets you to the First and Final Cause of morality—the rest is discerning the attributes of said Cause, which we can do to some extent with algorithmic information theory, specifically the properties of Chaitin’s “number of wisdom”, Omega. I think I could argue quite forcefully that my God is the same God as the God of Aquinas and especially Leibniz (who was in his time already groping towards algorithmic information theory himself). Thus far the counterarguments I’ve seen amount to: “Their ‘language’ doesn’t mean anything; if it does mean something then it doesn’t mean what you think it means; if it does mean what you think it means then you’re both wrong, traitor.” I strongly suspect rationalization due to irrational allergies to the “God” word; most people who think that theism is stupid and worthless have very little understanding of what theology actually is. This is pretty much unrelated to the actual contents of my ideas about ethics and decision theory; it’s just a debate about labels.
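(To fix ideas, since I keep leaning on it: by Chaitin’s Omega I just mean the standard halting probability of a fixed prefix-free universal machine U, so nothing in this aside is original to me:

\[ \Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|} \]

The standard facts I have in mind are that \(\Omega_U\) is an algorithmically random real, that its binary expansion is uncomputable, and that knowing its first n bits would in principle settle the halting problem for every program of length at most n.)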
Anyway, what I meant wasn’t that reading the post halfway convinces the attentive reader of my variant of theism; I meant that it allows the attentive reader to halfway understand why I have the intuitions I do, whether or not the reader agrees with those intuitions.
(Apologies if I sound curmudgeonly; really stressed lately.)
Will, may I suggest that you try to work out the details of your objective morality first and explain it to us before linking it with theism/God? For example, how are we supposed to use Chaitin’s Omega to go about “discerning the attributes of said Cause”? I really have no idea at all what you mean by that, but it seems like it would make for a more interesting discussion than whether your God is the same God as the God of Aquinas and Leibniz, and it would also be less likely to trigger people’s “allergies”.
Actually, for the last few days I’ve been thinking about emailing you, because I’ve been planning on writing a long exegesis explaining my ideas about decision theory and theology, but you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions. Although I’ve independently noticed various ideas about decision theory (probably due to Steve’s influence), I haven’t at all contributed any new insights, and the only thing I would accomplish with my apologetics is to convince other people that I’m not obviously crazy. You, Nesov, and Steve have made comments indicating that you recognize that various of my intuitions might be correct, but of course that in itself isn’t anything noteworthy: it doesn’t help us build FAI. (Speaking of which, do you have any ideas about a better name than “FAI”? ‘Friendliness’ implies “friendly to humans”, which is itself a value judgment. Justified Artificial Intelligence, maybe? Not Regrettable Artificial Intelligence? I was using “computational axiology” for a while a few years ago, but if there’s not a fundamental distinction between epistemology and axiology then that too is sort of misleading.)
Now, I personally think that certain results about decision theory should actually affect what we think of as morally justified, and thus I think my intuitions are actually important for not being damned (whatever that means). But I could easily be wrong about that.
The reason I’ve made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of “we only have so much time, we can’t check out every crackpot thesis that comes our way”: in my experience such arguments are always, without exception, the result of motivated cognition.) I would hold this position about normative epistemology even if my intuitions about decision theory didn’t happen to support various theological hypotheses.
Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won’t be seen as unfairly proselytizing about my own justifiably-ignorable ideas.
(I’m pretty drunk right now, apologies for errors. I might respond to your comment again when I’m sober.)
my default position is to write up the aforementioned exegesis in Latin
OK, so now you’re just taking the piss.
Writing it in Latin selects to some extent for people who respect your opinions, but more strongly for people who happen to know quite a lot of Latin. It sounds as if what you actually want is to be able to say you’ve written up your position, without anyone actually reading it. I hope that isn’t really what you actually want.
(I’m pretty stupid; apologies for any mistakes I make.)
(Part of this stems from my looking for an excuse to manipulate myself into learning Latin. Thus far I’ve used a hot Catholic chick and a perceived moral obligation to express myself incoherently—a quite potent combination.)
It sounds as if what you actually want is to be able to say you’ve written up your position, without anyone actually reading it.
That actually sounds a lot like me. Could be true. Yay double negative moral obligations—they force us to be coherent on a higher level, and about more important things!
you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions
I will generally explain my intuitions but try not to waste too much time arguing for them if other people do not agree. So I think if you have any ideas that you have not already clearly explained, then you should do so. (And please, not in Latin.)
Speaking of which, do you have any ideas about a better name than “FAI”?
How about Minimally Wrong AI? :)
The reason I’ve made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous.
Making off-hand references to theology is not going to change our minds about this. Do you have an actual plan to do so? If not, you’re just wasting your credibility and making it less likely for us to take your other ideas seriously.
(Side note: This self-sabotage is purposeful, for reasons indicated by, e.g., this post.)
So I think if you have any ideas that you have not already clearly explained, then you should do so. (And please, not in Latin.)
Okay, thanks for the advice. I haven’t yet clearly explained most of my ideas. (Hm, “my” ideas?—I doubt any of them are actually “mine”.) Not sure I want to do so (hence the Latin), but it sort of seems like a moral imperative, so I guess I have to. bleh bleh bleh
Making off-hand references to theology is not going to change our minds about this. Do you have an actual plan to do so?
I’ve debated the meta-level issue of epistemic “charity” and how much importance we should assign it in our decision calculi a few times on LessWrong before, e.g. in a few debates with Nesov. You were involved in at least one of them. I think what eventually happened is that I became afraid I was committing the typical mind fallacy in advocating a sort of devil-may-care attitude to looking at weird or low-status beliefs; Nesov claimed that doing so had been harmful to him in the past, so I decided I’d rather collect more data before pushing my epistemic intuitions. Unfortunately I don’t know of an easy way to collect more data, so I’ve sort of stalled out on that particular campaign. The making-references-to-theism thing is a sort of middle-ground position I’ve taken up, presumably to escape various aversions that I don’t have immediate introspective access to. There’s also the matter of not going out of my way to not appear discreditable.
The making-references-to-theism thing is a sort of middle-ground position I’ve taken up, presumably to escape various aversions that I don’t have immediate introspective access to.
FWIW, I think this “middle-ground position” is the worst of both worlds.
There’s also the matter of not going out of my way to not appear discreditable.
Your comments have made me wonder if I’ve been too creditable, i.e., to the extent of making people take my ideas more seriously than they should. But it seems like a valid Umeshism that if there isn’t at least one person who has taken your ideas too seriously, then you’re not being creditable enough. I may be close to (or past) this threshold already, but you seem to still have quite a long way to go, so I suggest not worrying about this right now, especially since credibility is much harder to gain than to lose: if you ever find yourself having too much credibility, it shouldn’t be too late to do something about it then.
Your comment seems to me to be modally implicitly self-contradictory. For you say that you are worried that you’ve caused yourself to be too creditable, and yet the reason you are considering that hypothesis is that I, a mere peasant, have implicitly-suggested-if-only-categorically that that might be the case. If I am wrong to doubt the wisdom of my self-doubting, then by your lights I am right, and not right to do so! You’ve taken me seriously enough to doubt yourself—to some extent this implies that I have impressed my self too strongly upon you, for you and I and everyone else think that you are more justified than I. Again, modally—not necessarily self-contradictory, but it leans that way, at least connotationally-implicitly.
(Really quite drunk, again, apologies for errors, again.)
Damn it, why am I giving you advice on the proper level of credibility, when I should be telling you to stop drinking so much? Talk about cached selves...
It’s okay, I ran out of rum. But now I’m left with an existential question: Why is the rum gone?
Apologies in advance for the emotivist interpretation of morality espoused by this comment.
because I’ve been planning on writing a long exegesis explaining my ideas about decision theory and theology,
Yay!
but you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions.
Boo.
I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of “we only have so much time, we can’t check out every crackpot thesis that comes our way”: in my experience such arguments are always, without exception, the result of motivated cognition.)
YAAAAAY!
Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won’t be seen as unfairly proselytizing about my own justifiably-ignorable ideas.
Boo.
I may well be being obtuse, but it seems to me that there’s something very odd about the phrase “theism / objective morality”, with its suggestion that basically the two are the same thing.
Have you actually argued forcefully that your god is also Aquinas’s and Leibniz’s? I ask because first you say you could, which kinda suggests you haven’t actually done it so far (at least not in public), but then you start talking about “counterarguments”, which kinda suggests that you have and people have responded.
I agree with Wei_Dai that it might be interesting to know more about your version of objective morality and how one goes about discerning the attributes of its alleged cause using algorithmic information theory.
I may well be being obtuse, but it seems to me that there’s something very odd about the phrase “theism / objective morality”, with its suggestion that basically the two are the same thing.
This reflects a confusion I have about how popular philosophical opinion is in favor of moral realism, yet against theism. It seems that getting the correct answer to all possible moral problems would require prodigious intelligence, and so I don’t really understand the conjunction of moral realism and atheism. This likely reflects my ignorance of the existing philosophical literature, though to be honest, like most LessWrongers, I’m a little skeptical of the worth of the average philosopher’s opinion, especially about subjects outside of his specialty. Also if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism. Also, there’s the algorithm from music appreciation, which is like “look at what good musicians like”, which I think would strongly favor theism. Still, I admit I’m confused.
Have you actually argued forcefully that your god is also Aquinas’s and Leibniz’s? I ask because first you say you could, which kinda suggests you haven’t actually done it so far (at least not in public), but then you start talking about “counterarguments”, which kinda suggests that you have and people have responded.
I’ve kinda argued it on the meta-level, i.e. I’ve argued about when it is or isn’t appropriate to assume that you’re actually referring to the same concept versus just engaging in syncretism. But IIRC I haven’t yet forcefully argued that my god is Leibniz’s God. So, yeah, it’s a mixture.
I replied to Wei Dai’s comment here.
BTW, realistically, I won’t be able to reply to your comment re CEV/rightness, though as a result of your comment I do plan on re-reading the meta-ethics sequence to see if “right” is anywhere (implicitly or explicitly) defined as CEV.
(Inebriated, apologies for errors or omissions.)
Also if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism.
(nods) Very likely. To the extent that this technique is useful for rank-ordering philosophical positions I ought to adopt, I can also use it to rank-order various theological positions to determine which particular theology to adopt. (I’ve never done this, but I predict it’s one that endorses literacy.)
Surely typical moral realists, atheist or otherwise, don’t believe that they’ve got the correct answer to all possible moral problems. (Just as no one thinks they’re factually correct about everything.)
I don’t think “averaged philosophical opinion” is likely to have much value. Nor “averaged opinion of good musicians” when you’re talking about something that isn’t primarily musical, especially when you average over a period for much of which (e.g.) many of the best employment opportunities for musicians were working for religious organizations.
(Human with a finite brain; apologies for errors or omissions.)
Surely typical moral realists, atheist or otherwise, don’t believe that they’ve got the correct answer to all possible moral problems.
Apparently I mis-stated something. I’m a little too spent to fully rectify the situation, so here’s some word salad: moral realism implies belief in a Form of the Good, but ISTM that the Form of the Good has to be personal, because only intelligences can solve moral problems; specifically, I think a true Form of the Good has to be a superintelligence, i.e., a god, whom, if the god is also the Form of the Good, we call God. ISTM that belief in a Form of the Good that isn’t personal is an obvious error that any decent moral philosopher should recognize, and so I think there must be something wrong with how I’m formulating the problem or with how I’m conceptualizing others’ representation of the problem.