The core-sequence fail gets downvoted pretty reliably. I can’t say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.
The core-sequence fail gets downvoted pretty reliably. I can’t say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.
Isn’t the metaethics sequence not liked very much? I haven’t read it in a while, and so I’m not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche’s moral thought.
Downvoting people for not understanding that beliefs constrain expectation I’m okay with. Downvoting people for not agreeing with EY’s moral intuitions seems… mistaken.
Downvoting people for not understanding that beliefs constrain expectation I’m okay with.
Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when it interprets “belief” in such a naive fashion;—giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs “should” be about anticipation, but LessWrong, like everybody else, can’t reliably separate descriptive and normative claims, which is exactly why this “beliefs constrain anticipation” thing is misleading. …There’s a neat level-crossing thingy in there.
Downvoting people for not agreeing with EY’s moral intuitions seems… mistaken.
EY thinking of meta-ethics as a “solved problem” is one of the most obvious signs that he’s very spotty when it comes to philosophy and can’t really be trusted to do AI theory.
EY thinking of meta-ethics as a “solved problem” is one of the most obvious signs that he’s very spotty when it comes to philosophy and can’t really be trusted to do AI theory.
He does? I know he doesn’t take it as seriously as other knowledge required for AI but I didn’t think he actually thought it was a ‘solved problem’.
From my favorite post and comments section on Less Wrong thus far:
Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics?
Yes, it looks like Eliezer is mistaken there (or speaking hyperbolically).
I agree with:
what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics?
… but would weaken the claim drastically to “Take metaethics, a clearly reducible problem with many technical details to be ironed out”. I suspect you would disagree with even that, given that you advocate meta-ethical sentiments that I would negatively label “Deeply Mysterious”. This places me approximately equidistant from your respective positions.
I only weakly advocate certain (not formally justified) ideas about meta-ethics, and remain deeply confused about certain meta-ethical questions that I wouldn’t characterize as mere technical details. One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls “right”; I still don’t know what argument, technical or non-technical, could justify such an intuition, and I don’t know how Eliezer would make tradeoffs if the two did in fact have different referents. This strikes me as a significant problem in itself, and there are many more problems like it.
Are you sure Eliezer does equate reflective consistency with alignment with what-he-calls-”right”? Because my recollection is that he doesn’t claim either (1) that a reflectively consistent alien mind need have values at all like what he calls right, or (2) that any individual human being, if made reflectively consistent, would necessarily end up with values much like what he calls right.
(Unless I’m awfully confused, denial of (1) is an important element in his thinking.)
I think he is defining “right” to mean something along the lines of “in line with the CEV of present-day humanity”. Maybe that’s a sensible way to use the word, maybe not (for what it’s worth, I incline towards “not”) but it isn’t the same thing as identifying “right” with “reflectively consistent”, and it doesn’t lead to a risk of confusion if the two turn out to have different referents (because they can’t).
But the key notion is the idea that what we name by ‘right’ is a fixed question, or perhaps a fixed framework. We can encounter moral arguments that modify our terminal values, and even encounter moral arguments that modify what we count as a moral argument; nonetheless, it all grows out of a particular starting point. We do not experience ourselves as embodying the question “What will I decide to do?” which would be a Type 2 calculator; anything we decided would thereby become right. We experience ourselves as asking the embodied question: “What will save my friends, and my people, from getting hurt? How can we all have more fun? …” where the ”...” is around a thousand other things.
So ‘I should X’ does not mean that I would attempt to X were I fully informed.
Aghhhh this is so confusing. Now I’m left thinking both you and Wei Dai have furnished quotes supporting my position, User:thomblake has interpreted your quote as supporting his position, and neither User:thomblake nor User:gjm has replied to Wei Dai’s quote, so I don’t know if they’d interpret it as evidence of their position too! I guess I’ll just assume I’m wrong in the meantime.
Now two people have said the exact opposite things both of which disagree with me. :( Now I don’t know how to update. I plan on re-reading the relevant stuff anyway.
If you mean me and thomblake, I don’t see how we’re saying exact opposite things, or even slightly opposite things. We do both disagree with you, though.
I guess I can interpret User:thomblake two ways, but apparently my preferred way isn’t correct. Let me rephrase what you said from memory. It was like, “right is defined as the output of something like CEV, but that doesn’t mean that individuals won’t upon reflection differ substantially”. User:thomblake seemed to be saying “Eliezer doesn’t try to equate those two or define one as the other”, not “Eliezer defines right as CEV, he doesn’t equate it with CEV”. But you think User:thomblake intended the latter? Also, have I fairly characterized your position?
I don’t know whether thomblake intended the latter, but he certainly didn’t say the former. I think you said “Eliezer said A and B”, thomblake said “No he didn’t”, and you are now saying he meant “Eliezer said neither A nor B”. I suggest that he said, or at least implied, something rather like A, and would fiercely repudiate B.
Eliezer defines right as CEV, he doesn’t equate it with CEV
I definitely meant the latter, and I might be persuaded of the former.
Though “define” still seems like the wrong word. More like, “ ‘right’ is defined as *point at big blob of poetry*, and I expect it will be correctly found via the process of CEV.”—but that’s still off-the-cuff.
Thanks much; I’ll keep your opinion in mind while re-reading the meta-ethics sequence/CEV/CFAI. I might be being unduly uncharitable to Eliezer as a reaction to noticing that I was unduly (objectively-unjustifiably) trusting him. (This would have been a year or two ago.) (I notice that many people seem to unjustifiably disparage Eliezer’s ideas, but then again I notice that many people seem to unjustifiably anti-disparage (praise, re-confirm, spread) Eliezer’s ideas;—so I might be biased.)
(Really freaking drunk, apologies for errors, e.g. politically unmotivated adulation/anti-adulation, or excessive self-divulgation. (E.g., I suspect “divulgation” isn’t a word.))
But yeah, I just find it odd that it’s a couple of steps removed from the obvious usage. I ask myself, “Why science specifically?” and “Why public awareness rather than making the public aware?”
One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls “right”; I still don’t know what argument, technical or non-technical, could justify such an intuition, and I don’t know how Eliezer would make tradeoffs if the two did in fact have different referents.
If I understand you correctly then this particular example I don’t think I have a problem with, at least not when I assume the kind of disclaimers and limitations of scope that I would include if I were to attempt to formally specify such a thing.
This strikes me as a significant problem in itself, and there are many more problems like it.
I suspect I agree with some of your objections to various degrees.
I thought the upshot of Eliezer’s metaethics sequence was just that “right” is a fixed abstract computation, not that it’s (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).
(Indeed just saying that it’s a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it’s some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn’t constitute as massive progress as it might seem.)
The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts.
BTW, I’ve had numerous “wow” moments with philosophical insights, some of which made me spend years considering their implications. For example:
Bayesian interpretation of probability
AI / intelligence explosion
Tegmark’s mathematical universe
anthropic principle / anthropic reasoning
free will as the ability to decide logical facts
I expect that a correct solution to metaethics would produce a similar “wow” reaction. That is, it would be obvious in retrospect, but in an overwhelming instead of underwhelming way.
Is the insight about free will and logical facts part of the sequences? Or is it something you or others discuss in a post somewhere? I’d like to learn about it, but my searches failed.
I never wrote a post on it specifically, but it’s sort of implicit in my UDT post (see also this comment). Eliezer also has a free will sequence, which is somewhat similar/related, but I’m not sure if he would agree with my formulation.
“What is it that you’re deciding when you make a decision?”
What is “you”? And what is “deciding”? Personally I haven’t been able to come to any redefinition of free will that makes more sense than this one.
I haven’t read the free will sequence. And I haven’t read up on decision theory because I wasn’t sure if my math education is good enough yet. But I doubt that if I was going to read it I would learn that you can salvage the notion of “deciding” from causality and logical facts. The best you can do is look at an agent and treat it as a transformation. But then you’d still be left with the problem of identity.
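To make “treat the agent as a transformation” concrete, here is a minimal toy sketch (my own construction in Python; the scenario and names are invented, nothing canonical): the agent is just a deterministic function, and its “decision” is the logical fact of what that function outputs.

    # Toy sketch: an agent as a deterministic transformation from inputs to
    # actions. What it "decides" is a logical fact about this function.
    def agent(observation):
        # A fixed decision rule; its return value is settled by the
        # definition, the way any other logical fact is.
        return "one-box" if observation == "omega offers two boxes" else "do nothing"

    # Two physically separate copies are the same transformation, so they
    # settle the same logical fact; there is only one "decision" here.
    copy_1 = agent("omega offers two boxes")
    copy_2 = agent("omega offers two boxes")
    assert copy_1 == copy_2
    print(copy_1)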
(Agreed; I also think meta-ethics and ethics are tied into each other in a way that would require a solution to meta-ethics to at least theoretically solve any ethical problem. Given that I can think of hundreds or thousands of object-level ethical problems, and given that I don’t think my inability to answer at least some of them is purely due to boundedness, fallibility, self-delusion, or ignorance as such, I don’t think I have a solution to meta-ethics. (But I would characterize my belief in God as at least a belief that meta-ethics and ethical problems do at least have some unique (meta-level) solution. This might be optimistic bias, though.))
Wei Dai, have you read the Sermon on the Mount, particularly with superintelligences, Tegmark, (epistemic or moral) credit assignment, and decision theory in mind? If not I suggest it, if only for spiritual benefits. (I suggest the Douay-Rheims translation, but that might be due to a bias towards Catholics as opposed to Protestants.)
(Pretty damn drunk for the third day in a row, apologies for errors.)
Are you planning on starting a rationalist’s drinking club? A BYOB LessWrong meetup with one sober note-taker? You usually do things purposefully, even if they’re unusual purposes, so consistent drunkenness seems uncharacteristic unless it’s part of a plan.
(FWIW the “post-rationalist” label isn’t my invention, I think it mostly belongs to the somewhat separate Will Ryan / Nick Tarleton / Michael Vassar / Divia / &c. crowd; I agree with Nick and Vassar way more than I agree with the LessWrong gestalt, but I’m still off on my own plot of land. Jennifer Rodriguez-Mueller could be described similarly.)
I’m pretty sure the term “rationalist’s drinking club” wouldn’t be used ingenuously as a self-description. I have noticed the justifiable use of “post-rationalist” and distance from the LW gestalt, though. I think if there were a site centered around a sequence written by Steve Rayhawk with the kind of insights into other people’s minds he regularly writes out here, with Sark and a few others as heavy contributors, that would be a “more agenty less wrong” Will would endorse. I’d actually like to see that, too.
For a human this is a much huger blob of a computation that looks like, “Did everyone survive? How many people are happy? Are people in control of their own lives? …” Humans have complex emotions, have many values—the thousand shards of desire, the godshatter of natural selection. I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don’t really have—I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them. So that I can regard my present values, as an approximation to the ideal morality that I would have if I heard all the arguments, to whatever extent such an extrapolation is coherent. [link in the original]
ETA: Just in case you’re right and Eliezer somehow meant for that paragraph not to be part of his metaethics, and that his actual metaethics is just “morality is a fixed abstract computation”, then I’d ask, “If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don’t you think a complete “solved” metaethics should explain how morality differs from rationality?”
“If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don’t you think a complete “solved” metaethics should explain how morality differs from rationality?”
Rationality computation outputs statements about the world, morality evaluates them. Rationality is universal and objective, so it is unique as an abstract computation, not just fixed. Morality is arbitrary.
If we assume some kind of mathematical realism (which seems to be necessary for “abstract computation” and “uniqueness” to have any meaning) then there exist objectively true statements and computations that generate them. At some point there are Goedelian problems, but at least all of the computations agree on the primitive-recursive truths, which are therefore universal, objective, unique, and true.
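As a throwaway illustration of the “all the computations agree on the primitive-recursive truths” point (a toy check, not an argument), two definitions of addition that look nothing alike cannot help but agree on every instance you care to test:

    # Addition defined by primitive recursion on the second argument:
    # add(m, 0) = m; add(m, n + 1) = add(m, n) + 1.
    def add_pr(m, n):
        return m if n == 0 else add_pr(m, n - 1) + 1

    # The builtin "+" is a very different computation, but the two agree
    # on every instance we bother to check.
    assert all(add_pr(m, n) == m + n for m in range(20) for n in range(20))
    print("no disagreement found on this finite slice of arithmetic")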
Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means use math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique.
Of course, all these points are made by a fallible human brain and so may be wrong.
But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner’s Dilemma when playing against the right opponents.
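For concreteness, here is a toy rendering (my own, with “source code” reduced to a string the players can inspect) of how cooperating against the right opponents falls out of plain winning rather than out of a separate moral premise:

    # Toy program-equilibrium flavor of the Prisoner's Dilemma: cooperate
    # exactly when the opponent is running a syntactic copy of me.
    def clique_bot(my_source, opponent_source):
        return "C" if opponent_source == my_source else "D"

    SRC = "clique_bot"  # stand-in for this bot's actual source code

    print(clique_bot(SRC, SRC))           # "C": mutual cooperation with a copy
    print(clique_bot(SRC, "defect_bot"))  # "D": no exploitable niceness

Against a copy, mutual cooperation strictly beats mutual defection, so the cooperative-looking move needs no justification beyond the usual one about winning.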
But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner’s Dilemma when playing against the right opponents.
What is the distinction you are making between rationality and morality, then? What makes you think the former won’t be swallowed up by the latter (or vice versa!) in the limit of infinite reflection?
(Sorta drunk, apologies for conflating conflation of rationality and morality with lack of conflation of rationality and morality, probabilistically-shouldly.)
ETA: I don’t understand how my comments can be so awesome when I’m obviously so freakin’ drunk. ;P . Maybe I should get drunk all the freakin’ time. Or study Latin all the freakin’ time, or read the Bible all the freakin’ time, or ponder how often people are obviously wrong when they use the phrase “all the freakin’ time” (let alone “freakin[‘]”) (especially when they use the phrase “all the freakin’ time” all the freakin’ time, naturally-because-reflexively)....
What is the distinction you are making between rationality and morality, then? What makes you think the former won’t be swallowed up by the latter (or vice versa!) in the limit of infinite reflection?
That was the distinction—one is universal, the other arbitrary, in the limit of infinite reflection. I suppose “there is nothing arbitrary” is a valid (consistent) position, but I don’t see any evidence for it.
Interesting! You seem to be a moral realist (cognitivist, whatever) and an a-theist. (I suspect this is the typical LessWrong position, even if the typical LessWronger isn’t as coherent as you.) I’ll take note that I should pester you and/or take care to pay attention to your opinions (comments) more in the future. Also, I thank you for showing me what the reasoning process would be that would lead one to that position. (And I think that position has a very good chance of being correct—in the absence of justifiably-ignorable inside-view (non-communicable) evidence I myself hold.)
(It’s probably obvious that I’m pretty damn drunk. (Interesting that alcohol can be just as effective as LSD or cannabis. (Still not as effective as nitrous oxide or DMT.)))
Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means use math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique.
Assuming it started with the same laws of inference and axioms. Also I was mostly thinking of statements about the world, e.g., physics.
Assuming it started with the same laws of inference and axioms
Or equivalent ones. But no matter where it started, it won’t arrive at different primitive-recursive truths, at least according to my brain’s current understanding.
Also I was mostly thinking of statements about the world, e.g., physics.
Is there significant difference? Wherever there are regularities in physics, there’s math (=study of regularities). Where no regularities exist, there’s no rationality.
I think the poor things are already dead. More generally, I am aware of that post, but is it relevant? The possible mind design space is of course huge and contains lots of irrational minds, but here I am arguing about universality of rationality.
But rationality is defined by external criteria—it’s about how to win (=achieve intended goals). Morality doesn’t have any such criteria. Thus, “rational minds” is a natural category. “Moral minds” is not.
Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.
Can you give examples of beliefs that aren’t about anticipation?
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don’t relate to things that leave historical footprints. If you’ll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.
In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience and not a limitation of the universe. The same principles of ‘belief’ apply even though it has incidentally fallen out of the scope which I am able to influence or verify even in principle.
Beliefs that aren’t easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. “communism is good” with “correctly implemented communism is good”, or “whites and blacks have equal average IQ” with “whites and blacks would have equal average IQ if they’d had the same cultural privileges/disadvantages”. (Apologies for the necessary political examples. Please don’t use this as an opportunity to talk about communism or race.)
Many “beliefs” that aren’t politically relevant—which excludes most scientific “knowledge” and much knowledge of your self, the people you know, what you want to do with your life, et cetera—are better characterized as knowledge, and not beliefs as such. The answers to questions like “do I have one hand, two hands, or three hands?” or “how do I get back to my house from my workplace?” aren’t generally beliefs so much as knowledge, and in my opinion “knowledge” is not only epistemologically but cognitively-neurologically a more accurate description, though I don’t really know enough about memory encoding to really back up that claim (though the difference is introspectively apparent). Either way, I still think that given our knowledge of the non-fundamental-ness of Bayes, we shouldn’t try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn’t meant to describe or solve, even if it’s technically possible to do so.
Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. “communism is good” with “correctly implemented communism is good”, or “whites and blacks have equal average IQ” with “whites and blacks would have equal average IQ if they’d had the same cultural privileges/disadvantages”.
I believe the common term for that mistake is “no true Scotsman”.
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don’t relate to things that leave historical footprints. If you’ll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.
What do we lose by saying that doesn’t count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don’t separate out ones we can measure and ones we can’t, but reality does separate those, and our terminology fits reality)? Something else?
I’m not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they’re out of my future light cone whatever happens to them is a sunk cost- I don’t see what I (or they) get from my preferring or believing things about them.
Oops, I just realized that in my hypothetical scenario, by someone being tortured outside your light cone I meant someone being tortured somewhere your two future light cones don’t intersect.
Indeed; being outside of my future light cone just means whatever I do has no impact on them. But now not only can I not impact them, but they’re also dead to me (as they, or any information they emit, won’t exist in my future). I still don’t see what impact caring about them has.
Right, but for my actions to have an effect on them, they have to be in my future light cone at the time of action. It sounds like you’re interested in events that are in my future light cone but will not be in any of the past light cones centered at points in my future- like, for example, things that I can set in motion now which will not come to fruition until after I’m dead, or the person I care about pondering whether or not to jump into a black hole. Those things are worth caring about so long as they’re in my future light cone, and it’s meaningful to have beliefs about them to the degree that they could be in my past light cone in the future.
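(For whatever it’s worth, the bookkeeping I have in mind looks like this toy sketch, in flat spacetime with c = 1 and with invented numbers:)

    # An event at time-offset dt and spatial distance dx from here-and-now is
    # in my future light cone iff I could still reach it at light speed, and
    # in my past light cone iff a signal from it could have reached me.
    def in_future_light_cone(dt, dx):
        return dt > 0 and abs(dx) <= dt

    def in_past_light_cone(dt, dx):
        return dt < 0 and abs(dx) <= -dt

    # The rocket ten minutes after leaving my cone: later in time, but too far
    # away for anything I do to reach it or for news of it to reach me.
    print(in_future_light_cone(10, 25), in_past_light_cone(10, 25))  # False False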
The best illustration I’ve seen thus far is this one.
(Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it’s usually really bad: proffered explanations are variations on “memetic selection pressures”, “confirmation bias”, or other fully general “explanations”/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others’ beliefs is apparent in posts like “Our Phyg Is Not Exclusive Enough”.)
I agree that that post is the sort of thing that I want more of on LW.
It seems to me like Steve_Rayhawk’s comment is all about anticipation- I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you’re talking about- the position one takes on global warming is based on anticipations one has about politics, not the climate, but it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think public stated beliefs have to be about anticipation- but I do think that private beliefs have to be (should be?) about anticipation. I also think I’m much more sympathetic to the view that rationalizations can use the “beliefs are anticipation” argument as a weapon without finding the true anticipations in question (like Steve_Rayhawk did), but I don’t think that implies that “beliefs are anticipation” is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the word “beliefs”.
it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think public stated beliefs have to be about anticipation
You seem to be modeling the AGW disputant’s decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes;—as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible. That’s surely the case sometimes, but I don’t think that’s usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies) I’m not sure it’s wise to use “belief” to refer to only the (in many cases unidentifiable) “actual anticipation” part of decision policies, either for others or ourselves, especially when we don’t have enough time to be abnormally reflective about the causes and purposes of others’/our “beliefs”.
(Areas where such caution isn’t as necessary are e.g. decision science modeling of simple rational agents, or large-scale economic models. But if you want to model actual people’s policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.)
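(To be concrete about the kind of model I mean, here is a minimal hand-rolled sketch rather than anything from a real influence-diagram toolkit: a latent “actual belief about AGW” node, a latent political-identity node, and an observed public stance, with entirely invented numbers.)

    from itertools import product

    # Latent nodes: the disputant's actual belief about AGW and a crude
    # political-identity variable. Observed node: the public stance he takes.
    P_belief = {"agw_real": 0.5, "agw_not": 0.5}
    P_identity = {"blue": 0.5, "green": 0.5}

    # P(public stance = "affirm AGW" | belief, identity); invented numbers.
    P_affirm = {("agw_real", "blue"): 0.95, ("agw_real", "green"): 0.4,
                ("agw_not", "blue"): 0.6, ("agw_not", "green"): 0.05}

    def posterior_belief(stance="affirm"):
        """P(actual belief | observed public stance), by brute enumeration."""
        joint = {}
        for b, i in product(P_belief, P_identity):
            p_stance = P_affirm[(b, i)] if stance == "affirm" else 1 - P_affirm[(b, i)]
            joint[(b, i)] = P_belief[b] * P_identity[i] * p_stance
        z = sum(joint.values())
        return {b: sum(p for (bb, _), p in joint.items() if bb == b) / z
                for b in P_belief}

    print(posterior_belief("affirm"))  # the stance is only weak evidence about the belief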
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared rather than a cubed modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function is about self-coordination (e.g. for purposes of dynamic consistency). The ‘pure’ “anticipation” aspect of beliefs only seems relevant in certain cases, e.g. when you don’t have “anthropic” uncertainty (e.g. uncertainty about the extent to which your contexts are ambiently determined by your decision policy). Unfortunately people like me always have a substantial amount of “anthropic” uncertainty, and it’s mostly only in counterfactual/toy problems where I can use the naive Bayesian approach to epistemology.
(Note that taking the general decision theoretic perspective doesn’t lead to wacky quantum-suicide-like implications, otherwise I would be a lot more skeptical about the prudence of partially ditching the Bayesian boat.)
You seem to be modeling the AGW disputant’s decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes;—as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible.
I’m describing it that way but I don’t think the introspection is necessary- it’s just easier to talk about as if he had full access to his mind. (Private beliefs don’t have to be beliefs that the mind’s narrator has access to, and oftentimes are kept out of its reach for security purposes!)
But if you want to model actual people’s policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine?
I don’t think I’ve seen any Bayesian modeling of that sort of thing, but I haven’t gone looking for it.
Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it’s hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn’t have a person traverse them unaided.)
If you wanted to code a narrow AI that determined someone’s mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
Political positions / psychology seem tough. I could see someone do belief-mapping and correlation in a useful way, but I don’t see analysis on the level of Steve_Rayhawk’s post coming out of a computer-run Bayes net anytime soon, and I don’t think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely- we’ve got pretty sophisticated dedicated hardware for very similar things.
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination
Hmm. I’m going to need to sleep on this, but this sort of coordination still smells to me like anticipation.
(A general comment: this conversation has moved me towards thinking that it’s useful for the LW norm to be tabooing “belief” and using “anticipation” instead when appropriate, rather than trying to equate the two terms. I don’t know if you’re advocating for tabooing “belief”, though.)
(Complement to my other reply: You might not have seen this comment, where I suggest “knowledge” as a better descriptor than “belief” in most mundane settings. (Also I suspect that people’s uses of the words “think” versus “believe” are correlated with introspectively distinct kinds of uncertainty.))
Don’t my beliefs about primordial cows constrain my anticipation of the fossil record and development of contemporary species?
I think “most people’s beliefs” fit the anticipation framework- so long as you express them in a compartmentalized fashion, and my understanding of the point of the ‘belief=anticipation’ approach is that it helps resist compartmentalization, which is generally positive.
The metaethics sequence is a bit of a mess, but the point it made is important, and it doesn’t seem like it’s just some weird opinion of Eliezer’s.
After I read it I was like, “Oh, ok. Morality is easy. Just do the right thing. Where ‘right’ is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it’s OK that it’s not supernatural or ‘objective’, and we don’t have to ‘justify’ it to an ideal philosophy student of perfect emptiness”. Fake utility functions, and Recursive justification stuff helped.
Maybe there’s something wrong with Eliezer’s metaethics, but I haven’t seen anyone point it out, and have no reason to suspect it. Most of the material that contradicts it is obvious mistakes from just not having read and understood the sequences, not an enlightened counter-analysis.
Hm. I think I’ll put on my project list “reread the metaethics sequence and create an intelligent reply.” If that happens, it’ll be at least two months out.
There’s a difference between a metaethics and an ethical theory.
The metaethics sequence is supposed to help dissolve the false dichotomy “either there’s a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right”. It’s not immediately supposed to solve “So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?”
For the second question, we’d want to add an Ethics Sequence (in my opinion, Yvain’s Consequentialism FAQ lays some good groundwork for one).
Maybe there’s something wrong with Eliezer’s metaethics, but I haven’t seen anyone point it out, and have no reason to suspect it.
The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.
What would you like covered? Or is it just that vague “this isn’t enough” feeling?
I can’t fully remember—it’s been a while since I considered the topic so I mostly have the cached conclusion. More on preference aggregation is one thing. A ‘preferences are subjectively objective’ post. A post that explains more completely what he means by ‘should’ (he has discussed and argued about this in comments).
It’s much worse than that. Nobody on LW seems to be able to understand it at all.
Oh, ok. Morality is easy. Just do the right thing. Where ‘right’ is some incredibly complex set of preferences that are only represented implicitly in physical human brains.
Random factoid: The post by Eliezer that I find most useful for describing (a particular aspect of) moral philosophy is actually a post about probability.
(In general I use most of the same intuitions for values as I do for probability; they share a lot of the same structure, and given the oft-remarked-on non-unique-decomposability of decision policies they seem to be special cases of some more fundamental thing that we don’t yet have a satisfactory language for talking about. You might like this post and similar posts by Wei Dai that highlight the similarities between beliefs and values. (BTW, that post alone gets you half the way to my variant of theism.) Also check out this post by Nesov. (One question that intrigues me: is there a nonlinearity that results in non-boring outputs if you have an agent who calculates the expected utility of an action by dividing the universal prior probability of A by the universal prior probability of A (i.e., unity)? (The reason you might expect nonlinearities is that some actions depend on the output of the agent program itself, which is encoded by the universal prior but is undetermined until the agent fills in the blank. Seems to be a decent illustration of the more general timeful/timeless problem.)))
BTW, that post alone gets you half the way to my variant of theism.
I think you mean that it would get you halfway there. Do you have good reason to think it would do the same for others who aren’t already convinced? (It seems like there could be non-question-begging reasons to think that—e.g., it might turn out that people who’ve read and understood it quite commonly end up agreeing with you about God.)
I think most of the disagreement would be about the use of the “God” label, not about the actual decision theory. Wei Dai asks:
Or is anyone tempted to bite this bullet and claim that we should apply pre-rationality to our utility functions as well?
This is very close to my variant of theism / objective morality, and gets you to the First and Final Cause of morality—the rest is discerning the attributes of said Cause, which we can do to some extent with algorithmic information theory, specifically the properties of Chaitin’s number of wisdom, omega. I think I could argue quite forcefully that my God is the same God as the God of Aquinas and especially Leibniz (who was in his time already groping towards algorithmic information theory himself). Thus far the counterarguments I’ve seen amount to: “Their ‘language’ doesn’t mean anything; if it does mean something then it doesn’t mean what you think it means; if it does mean what you think it means then you’re both wrong, traitor.” I strongly suspect rationalization due to irrational allergies to the “God” word; most people who think that theism is stupid and worthless have very little understanding of what theology actually is. This is pretty much unrelated to the actual contents of my ideas about ethics and decision theory, it’s just a debate about labels.
Anyway what I meant wasn’t that reading the post halfway convinces the attentive reader of my variant of theism, I meant it allows the attentive reader to halfway understand why I have the intuitions I do, whether or not the reader agrees with those intuitions.
(Apologies if I sound curmudgeonly, really stressed lately.)
Will, may I suggest that you try to work out the details of your objective morality first and explain it to us before linking it with theism/God? For example, how are we supposed to use Chaitin’s Omega for “discerning the attributes of said Cause”? I really have no idea at all what you mean by that, but it seems like it would make for a more interesting discussion than whether your God is the same God as the God of Aquinas and Leibniz, and also less likely to trigger people’s “allergies”.
Actually for the last few days I’ve been thinking about emailing you, because I’ve been planning on writing a long exegesis explaining my ideas about decision theory and theology, but you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions. Although I’ve independently noticed various ideas about decision theory (probably due to Steve’s influence), I haven’t at all contributed any new insights, and the only thing I would accomplish with my apologetics is to convince other people that I’m not obviously crazy. You, Nesov, and Steve have made comments that indicate that you recognize that various of my intuitions might be correct, but of course that in itself isn’t anything noteworthy: it doesn’t help us build FAI. (Speaking of which, do you have any ideas about a better name than “FAI”? ‘Friendliness’ implies “friendly to humans”, which itself is a value judgment. Justified Artificial Intelligence, maybe? Not Regrettable Artificial Intelligence? I was using “computational axiology” for awhile a few years ago, but if there’s not a fundamental distinction between epistemology and axiology then that too is sort of misleading.)
Now, I personally think that certain results about decision theory should actually affect what we think of as morally justified, and thus I think my intuitions are actually important for not being damned (whatever that means). But I could easily be wrong about that.
The reason I’ve made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of “we only have so much time, we can’t check out every crackpot thesis that comes our way”: in my experience such arguments are always, without exception the result of motivated cognition.) I would hold this position about normative epistemology even if my intuitions about decision theory didn’t happen to support various theological hypotheses.
Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won’t be seen as unfairly proselytizing about my own justifiably-ignorable ideas.
(I’m pretty drunk right now, apologies for errors. I might respond to your comment again when I’m sober.)
my default position is to write up the aforementioned exegesis in Latin
OK, so now you’re just taking the piss.
Writing it in Latin selects to some extent for people who respect your opinions, but more strongly for people who happen to know quite a lot of Latin. It sounds as if what you actually want is to be able to say you’ve written up your position, without anyone actually reading it. I hope that isn’t really what you actually want.
(I’m pretty stupid; apologies for any mistakes I make.)
(Part of this stems from my looking for an excuse to manipulate myself into learning Latin. Thus far I’ve used a hot Catholic chick and a perceived moral obligation to express myself incoherently—a quite potent combination.)
It sounds as if what you actually want is to be able to say you’ve written up your position, without anyone actually reading it.
That actually sounds a lot like me. Could be true. Yay double negative moral obligations—they force us to be coherent on a higher level, and about more important things!
you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions
I will generally explain my intuitions but try not to waste too much time arguing for them if other people do not agree. So I think if you have any ideas that you have not already clearly explained, then you should do so. (And please, not in Latin.)
Speaking of which, do you have any ideas about a better name than “FAI”?
How about Minimally Wrong AI? :)
The reason I’ve made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous.
Making off-hand references to theology is not going to change our minds about this. Do you have an actual plan to do so? If not, you’re just wasting your credibility and make it less likely for us to take your other ideas seriously.
So I think if you have any ideas that you have not already clearly explained, then you should do so. (And please, not in Latin.)
Okay, thanks for the advice. I haven’t yet clearly explained most of my ideas. (Hm, “my” ideas?—I doubt any of them are actually “mine”.) Not sure I want to do so (hence the Latin), but it sort of seems like a moral imperative, so I guess I have to. bleh bleh bleh
Making off-hand references to theology is not going to change our minds about this. Do you have an actual plan to do so?
I’ve debated the meta-level issue of epistemic “charity” and how much importance we should assign it in our decision calculi a few times on LessWrong before, e.g. in a few debates with Nesov. You were involved in at least one of them. I think what eventually happened is that I became afraid I was committing typical mind fallacy in advocating a sort of devil-may-care attitude to looking at weird or low-status beliefs; Nesov claimed that doing so had been harmful to him in the past, so I decided I’d rather collect more data before pushing my epistemic intuitions. Unfortunately I don’t know of an easy way to collect more data, so I’ve sort of stalled out on that particular campaign. The making references to theism thing is a sort of middleground position I’ve taken up, presumably to escape various aversions that I don’t have immediate introspective access to. There’s also the matter of not going out of my way to not appear discreditable.
The making references to theism thing is a sort of middleground position I’ve taken up, presumably to escape various aversions that I don’t have immediate introspective access to.
FWIW, I think this “middleground position” is the worst of both worlds.
There’s also the matter of not going out of my way to not appear discreditable.
Your comments have made me wonder if I’ve been too creditable, i.e., to the extent of making people take my ideas more seriously than they should. But it seems like a valid Umeshism that if there isn’t at least one person who has taken your ideas too seriously, then you’re not being creditable enough. I may be close to (or past) this threshold already, but you seem to still have quite a long way to go, so I suggest not worrying about this right now. Especially since credibility is much harder to gain than to lose, so if you ever find yourself having too much credibility, it shouldn’t be too late to do something about it then.
Your comment seems to me to be modally implicitly self-contradictory. For you say that you are worried that you’ve caused yourself to be too creditable, and yet the reason you are considering that hypothesis is that I, a mere peasant, have implicitly-suggested-if-only-categorically that that might be the case. If I am wrong to doubt the wisdom of my self-doubting, then by your lights I am right, and not right to do so! You’ve taken me seriously enough to doubt yourself—to some extent this implies that I have impressed my self too strongly upon you, for you and I and everyone else thinks that you are more justified than I. Again, modally—not necessarily self-contradictory, but it leans that way, at least connotationally-implicitly.
(Really quite drunk, again, apologies for errors, again.)
Damn it, why am I giving you advice on the proper level of credibility, when I should be telling you to stop drinking so much? Talk about cached selves...
Apologies in advance for the emotivist interpretation of morality espoused by this comment.
because I’ve been planning on writing a long exegesis explaining my ideas about decision theory and theology,
Yay!
but you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid formally justified results that resulted from your intuitions.
Boo.
I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of “we only have so much time, we can’t check out every crackpot thesis that comes our way”: in my experience such arguments are always, without exception the result of motivated cognition.)
YAAAAAY!
Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won’t be seen as unfairly proselytizing about my own justifiably-ignorable ideas.
I may well be being obtuse, but it seems to me that there’s something very odd about the phrase “theism / objective morality”, with its suggestion that basically the two are the same thing.
Have you actually argued forcefully that your god is also Aquinas’s and Leibniz’s? I ask because first you say you could, which kinda suggests you haven’t actually done it so far (at least not in public), but then you start talking about “counterarguments”, which kinda suggests that you have and people have responded.
I agree with Wei_Dai that it might be interesting to know more about your version of objective morality and how one goes about discerning the attributes of its alleged cause using algorithmic information theory.
I may well be being obtuse, but it seems to me that there’s something very odd about the phrase “theism / objective morality”, with its suggestion that basically the two are the same thing.
This reflects a confusion I have about how popular philosophical opinion is in favor of moral realism, yet against theism. It seems that getting the correct answer to all possible moral problems would require prodigious intelligence, and so I don’t really understand the conjunction of moral realism and atheism. This likely reflects my ignorance of the existing philosophical literature, though to be honest like most LessWrongers I’m a little skeptical of the worth of the average philosopher’s opinion, especially about subjects outside of his specialty. Also if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism. Also, there’s the algorithm from music appreciation, which is like “look at what good musicians like”, which I think would strongly favor theism. Still, I admit I’m confused.
Have you actually argued forcefully that your god is also Aquinas’s and Leibniz’s? I ask because first you say you could, which kinda suggests you haven’t actually done it so far (at least not in public), but then you start talking about “counterarguments”, which kinda suggests that you have and people have responded.
I’ve kinda argued it on the meta-level, i.e. I’ve argued about when it is or isn’t appropriate to assume that you’re actually referring to the same concept versus just engaging in syncretism. But IIRC I haven’t yet forcefully argued that my god is Leibniz’s God. So, yeah, it’s a mixture.
BTW, realistically, I won’t be able to reply to your comment re CEV/rightness, though as a result of your comment I do plan on re-reading the meta-ethics sequence to see if “right” is anywhere (implicitly or explicitly) defined as CEV.
Also if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism.
(nods) Very likely. To the extent that this technique is useful for rank-ordering philosophical positions I ought to adopt, I can also use it to rank-order various theological positions to determine which particular theology to adopt. (I’ve never done this, but I predict it’s one that endorses literacy.)
Surely typical moral realists, atheist or otherwise, don’t believe that they’ve got the correct answer to all possible moral problems. (Just as no one thinks they’re factually correct about everything.)
I don’t think “averaged philosophical opinion” is likely to have much value. Nor “averaged opinion of good musicians” when you’re talking about something that isn’t primarily musical, especially when you average over a period for much of which (e.g.) many of the best employment opportunities for musicians were working for religious organizations.
(Human with a finite brain; apologies for errors or omissions.)
Surely typical moral realists, atheist or otherwise, don’t believe that they’ve got the correct answer to all possible moral problems.
Apparently I mis-stated something. I’m a little too spent to fully rectify the situation, so here’s some word salad: moral realism implies belief in a Form of the Good, but ISTM that the Form of the Good has to be personal, because only intelligences can solve moral problems; specifically, I think a true Form of the Good has to be a superintelligence, i.e. a god, whom, if the god is also the Form of the Good, we call God. ISTM that belief in a Form of the Good that isn’t personal is an obvious error that any decent moral philosopher should recognize, and so I think there must be something wrong with how I’m formulating the problem or with how I’m conceptualizing others’ representation of the problem.
The core-sequence fail gets downvoted pretty reliably. I can’t say the same for metaethics or AI stuff. We need more people to read those sequences so that they can point out and downvote failure.
Isn’t the metaethics sequence not liked very much? I haven’t read it in a while, and so I’m not sure that I actually read all of the posts, but I found what I read fairly squishy, and not even on the level of, say, Nietzsche’s moral thought.
Downvoting people for not understanding that beliefs constrain expectation I’m okay with. Downvoting people for not agreeing with EY’s moral intuitions seems… mistaken.
Beliefs are only sometimes about anticipation. LessWrong repeatedly makes huge errors when it interprets “belief” in such a naive fashion;—giving LessWrong a semi-Bayesian justification for this collective failure of hermeneutics is unwise. Maybe beliefs “should” be about anticipation, but LessWrong, like everybody else, can’t reliably separate descriptive and normative claims, which is exactly why this “beliefs constrain anticipation” thing is misleading. …There’s a neat level-crossing thingy in there.
EY thinking of meta-ethics as a “solved problem” is one of the most obvious signs that he’s very spotty when it comes to philosophy and can’t really be trusted to do AI theory.
(Apologies if I come across as curmudgeonly.)
He does? I know he doesn’t take it as seriously as other knowledge required for AI but I didn’t think he actually thought it was a ‘solved problem’.
From my favorite post and comments section on Less Wrong thus far:
Yes, it looks like Eliezer is mistaken there (or speaking hyperbolically).
I agree with:
… but would weaken the claim drastically to “Take metaethics, a clearly reducible problem with many technical details to be ironed out”. I suspect you would disagree with even that, given that you advocate meta-ethical sentiments that I would negatively label “Deeply Mysterious”. This places me approximately equidistant from your respective positions.
I only weakly advocate certain (not formally justified) ideas about meta-ethics, and remain deeply confused about certain meta-ethical questions that I wouldn’t characterize as mere technical details. One simple example: Eliezer equates reflective consistency (a la CEV) with alignment with the big blob of computation he calls “right”; I still don’t know what argument, technical or non-technical, could justify such an intuition, and I don’t know how Eliezer would make tradeoffs if the two did in fact have different referents. This strikes me as a significant problem in itself, and there are many more problems like it.
(Mildly inebriated, apologies for errors.)
Are you sure Eliezer does equate reflective consistency with alignment with what-he-calls-”right”? Because my recollection is that he doesn’t claim either (1) that a reflectively consistent alien mind need have values at all like what he calls right, or (2) that any individual human being, if made reflectively consistent, would necessarily end up with values much like what he calls right.
(Unless I’m awfully confused, denial of (1) is an important element in his thinking.)
I think he is defining “right” to mean something along the lines of “in line with the CEV of present-day humanity”. Maybe that’s a sensible way to use the word, maybe not (for what it’s worth, I incline towards “not”) but it isn’t the same thing as identifying “right” with “reflectively consistent”, and it doesn’t lead to a risk of confusion if the two turn out to have different referents (because they can’t).
He most certainly does not.
Relevant quote from Morality as Fixed Computation:
Thanks—I hope you’re providing that as evidence for my point.
Sort of. It certainly means he doesn’t define morality as extrapolated volition. (But maybe “equate” meant something looser than that?)
Aghhhh this is so confusing. Now I’m left thinking both you and Wei Dai have furnished quotes supporting my position, User:thomblake has interpreted your quote as supporting his position, and neither User:thomblake nor User:gjm has replied to Wei Dai’s quote, so I don’t know if they’d interpret it as evidence of their position too! I guess I’ll just assume I’m wrong in the meantime.
Now two people have said exactly opposite things, both of which disagree with me. :( I don’t know how to update. I plan on re-reading the relevant stuff anyway.
If you mean me and thomblake, I don’t see how we’re saying exact opposite things, or even slightly opposite things. We do both disagree with you, though.
I guess I can interpret User:thomblake two ways, but apparently my preferred way isn’t correct. Let me rephrase what you said from memory. It was like, “right is defined as the output of something like CEV, but that doesn’t mean that individuals won’t upon reflection differ substantially”. User:thomblake seemed to be saying “Eliezer doesn’t try to equate those two or define one as the other”, not “Eliezer defines right as CEV, he doesn’t equate it with CEV”. But you think User:thomblake intended the latter? Also, have I fairly characterized your position?
I don’t know whether thomblake intended the latter, but he certainly didn’t say the former. I think you said “Eliezer said A and B”, thomblake said “No he didn’t”, and you are now saying he meant “Eliezer said neither A nor B”. I suggest that he said, or at least implied, something rather like A, and would fiercely repudiate B.
I definitely meant the latter, and I might be persuaded of the former.
Though “define” still seems like the wrong word. More like, “ ‘right’ is defined as *point at big blob of poetry*, and I expect it will be correctly found via the process of CEV.”—but that’s still off-the-cuff.
Thanks much; I’ll keep your opinion in mind while re-reading the meta-ethics sequence/CEV/CFAI. I might be being unduly uncharitable to Eliezer as a reaction to noticing that I was unduly (objectively-unjustifiably) trusting him. (This would have been a year or two ago.) (I notice that many people seem to unjustifiably disparage Eliezer’s ideas, but then again I notice that many people seem to unjustifiably anti-disparage (praise, re-confirm, spread) Eliezer’s ideas;—so I might be biased.)
(Really freaking drunk, apologies for errors, e.g. politically unmotivated adulation/anti-adulation, or excessive self-divulgation. (E.g., I suspect “divulgation” isn’t a word.))
Not to worry, it means “The act of divulging” or else “public awareness of science” (oddly).
I mean, it’s not so odd. di-vulgar-tion; the result of making public (something).
Well,
divulge
divulgate
divulgation
But yeah, I just find it odd that it’s a couple of steps removed from the obvious usage. I ask myself, “Why science specifically?” and “Why public awareness rather than making the public aware?”
If I understand you correctly, I don’t think I have a problem with this particular example, at least not when I assume the kind of disclaimers and limitations of scope that I would include if I were to attempt to formally specify such a thing.
I suspect I agree with some of your objections to various degrees.
Part of my concern about Eliezer trying to build FAI also stems from his treatment of metaethics. Here’s a caricature of how his solution looks to me:
Alice: Hey, what is the value of X?
Bob: Hmm, I don’t know. Actually I’m not even sure what it means to answer that question. What’s the definition of X?
Alice: I don’t know how to define it either.
Bob: Ok… I don’t know how to answer your question, but what if we simulate a bunch of really smart people and ask them what the value of X is?
Alice: Great idea! But what about the definition of X? I feel like we ought to be able to at least answer that now...
Bob: Oh that’s easy. Let’s just define it as the output of that computation I just mentioned.
I thought the upshot of Eliezer’s metaethics sequence was just that “right” is a fixed abstract computation, not that it’s (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).
(Indeed just saying that it’s a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it’s some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn’t constitute as much progress as it might seem.)
BTW, I’ve had numerous “wow” moments with philosophical insights, some of which made me spend years considering their implications. For example:
Bayesian interpretation of probability
AI / intelligence explosion
Tegmark’s mathematical universe
anthropic principle / anthropic reasoning
free will as the ability to decide logical facts
I expect that a correct solution to metaethics would produce a similar “wow” reaction. That is, it would be obvious in retrospect, but in an overwhelming instead of underwhelming way.
Is the insight about free will and logical facts part of the sequences? Or is it something you or others discuss in a post somewhere? I’d like to learn about it, but my searches failed.
I never wrote a post on it specifically, but it’s sort of implicit in my UDT post (see also this comment). Eliezer also has a free will sequence, which is somewhat similar/related, but I’m not sure if he would agree with my formulation.
What is “you”? And what is “deciding”? Personally I haven’t been able to come to any redefinition of free will that makes more sense than this one.
I haven’t read the free will sequence. And I haven’t read up on decision theory because I wasn’t sure if my math education was good enough yet. But I doubt that if I were going to read it I would learn that you can salvage the notion of “deciding” from causality and logical facts. The best you can do is look at an agent and treat it as a transformation. But then you’d still be left with the problem of identity.
(Agreed; I also think meta-ethics and ethics are tied into each other in a way that would require a solution to meta-ethics to at least theoretically solve any ethical problem. Given that I can think of hundreds or thousands of object-level ethical problems, and given that I don’t think my inability to answer at least some of them is purely due to boundedness, fallibility, self-delusion, or ignorance as such, I don’t think I have a solution to meta-ethics. (But I would characterize my belief in God as at least a belief that meta-ethics and ethical problems do have some unique (meta-level) solution. This might be optimistic bias, though.))
Wei Dai, have you read the Sermon on the Mount, particularly with superintelligences, Tegmark, (epistemic or moral) credit assignment, and decision theory in mind? If not I suggest it, if only for spiritual benefits. (I suggest the Douay-Rheims translation, but that might be due to a bias towards Catholics as opposed to Protestants.)
(Pretty damn drunk for the third day in a row, apologies for errors.)
Are you planning on starting a rationalist’s drinking club? A BYOB LessWrong meetup with one sober note-taker? You usually do things purposefully, even if they’re unusual purposes, so consistent drunkenness seems uncharacteristic unless it’s part of a plan.
Will_Newsome isn’t a rationalist. (He has described himself as a ‘post-rationalist’, which seems as good a term as any.)
(FWIW the “post-rationalist” label isn’t my invention, I think it mostly belongs to the somewhat separate Will Ryan / Nick Tarleton / Michael Vassar / Divia / &c. crowd; I agree with Nick and Vassar way more than I agree with the LessWrong gestalt, but I’m still off on my own plot of land. Jennifer Rodriguez-Mueller could be described similarly.)
I’m pretty sure the term “rationalist’s drinking club” wouldn’t be used ingenuously as a self-description. I have noticed the justifiable use of “post-rationalist” and distance from the LW gestalt, though. I think if there were a site centered around a sequence written by Steve Rayhawk with the kind of insights into other people’s minds he regularly writes out here, with Sark and a few others as heavy contributors, that would be a “more agenty less wrong” Will would endorse. I’d actually like to see that, too.
In vino veritas et sanitas! (“In wine there is truth and health!”)
It’s mentioned here:
ETA: Just in case you’re right and Eliezer somehow meant for that paragraph not to be part of his metaethics, so that his actual metaethics is just “morality is a fixed abstract computation”, I’d ask: “If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don’t you think a complete ‘solved’ metaethics should explain how morality differs from rationality?”
A rationality computation outputs statements about the world; morality evaluates them. Rationality is universal and objective, so it is unique as an abstract computation, not just fixed. Morality is arbitrary.
How so? Every argument I’ve heard for why morality is arbitrary applies just as well to rationality.
If we assume some kind of mathematical realism (which seems to be necessary for “abstract computation” and “uniqueness” to have any meaning) then there exist objectively true statements and computations that generate them. At some point there are Goedelian problems, but at least all of the computations agree on the primitive-recursive truths, which are therefore universal, objective, unique, and true.
Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means using math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique.
Of course, all these points are made by a fallible human brain and so may be wrong.
But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner’s Dilemma when playing against the right opponents.
What is the distinction you are making between rationality and morality, then? What makes you think the former won’t be swallowed up by the latter (or vice versa!) in the limit of infinite reflection?
(Sorta drunk, apologies for conflating conflation of rationality and morality with lack of conflation of rationality and morality, probabilistically-shouldly.)
ETA: I don’t understand how my comments can be so awesome when I’m obviously so freakin’ drunk. ;P . Maybe I should get drunk all the freakin’ time. Or study Latin all the freakin’ time, or read the Bible all the freakin’ time, or ponder how often people are obviously wrong when they use the phrase “all the freakin’ time” (let alone “freakin[‘]”) (especially when they use the phrase “all the freakin’ time” all the freakin’ time, naturally-because-reflexively)....
That was the distinction—one is universal, the other arbitrary, in the limit of infinite reflection. I suppose “there is nothing arbitrary” is a valid (consistent) position, but I don’t see any evidence for it.
Interesting! You seem to be a moral realist (cognitivist, whatever) and an a-theist. (I suspect this is the typical LessWrong position, even if the typical LessWronger isn’t as coherent as you.) I’ll take note that I should pester you and/or take care to pay attention to your opinions (comments) more in the future. Also, I thank you for showing me what the reasoning process would be that would lead one to that position. (And I think that position has a very good chance of being correct—in the absence of justifiably-ignorable inside-view (non-communicable) evidence I myself hold.)
(It’s probably obvious that I’m pretty damn drunk. (Interesting that alcohol can be just as effective as LSD or cannabis. (Still not as effective as nitrous oxide or DMT.)))
Cognitivist, yes; moral realist, no. IIUC, it’s EY’s position (“morality is a computation”), so naturally it’s the typical LessWrong position.
Universally valid statements must have universally-available evidence, no?
Really nothing like LSD, which makes it impossible to write anything at all, at least for me.
Assuming it started with the same laws of inference and axioms. Also I was mostly thinking of statements about the world, e.g., physics.
Or equivalent ones. But no matter where it started, it won’t arrive at different primitive-recursive truths, at least according to my brain’s current understanding.
Is there a significant difference? Wherever there are regularities in physics, there’s math (= the study of regularities). Where no regularities exist, there’s no rationality.
What about the poor beings with an anti-inductive prior? More generally, read this post by Eliezer.
I think the poor things are already dead. More generally, I am aware of that post, but is it relevant? The possible mind design space is of course huge and contains lots of irrational minds, but here I am arguing about universality of rationality.
My point, as I stated above, is that every argument I’ve heard against universality of morality applies just as well to rationality.
I agree with your statement:
I would also agree with the following:
The possible mind design space is of course huge and contains lots of immoral minds, but here I am arguing about universality of morality.
But rationality is defined by external criteria—it’s about how to win (=achieve intended goals). Morality doesn’t have any such criteria. Thus, “rational minds” is a natural category. “Moral minds” is not.
Yeah: CEV appears to just move the hard bit. Adding another layer of indirection.
To take Eliezer’s statement one meta-level down:
What did he mean by “I tried that...”?
I’m not at all sure, but I think he means CFAI.
Possibly he means this.
He may have solved it; if only he or someone else could say what the solution was.
Can you give examples of beliefs that aren’t about anticipation?
Beliefs about things that are outside our future light cone possibly qualify, to the extent that the beliefs don’t relate to things that leave historical footprints. If you’ll pardon an extreme and trite case, I would have a belief that the guy who flew the relativistic rocket out of my light cone did not cease to exist as he passed out of that cone and also did not get eaten by a giant space monster ten minutes after. My anticipations are not constrained by beliefs about either of those possibilities.
In both cases my inability to constrain my anticipated experiences speaks to my limited ability to experience and not a limitation of the universe. The same principles of ‘belief’ apply even though it has incidentally fallen out of the scope which I am able to influence or verify even in principle.
Beliefs that aren’t easily testable also tend to be the kind of beliefs that have a lot of political associations, and thus tend not to act like beliefs as such so much as policies. Also, even falsified beliefs tend to be summarily replaced with new untested/not-intended-to-be-tested beliefs, e.g. “communism is good” with “correctly implemented communism is good”, or “whites and blacks have equal average IQ” with “whites and blacks would have equal average IQ if they’d had the same cultural privileges/disadvantages”. (Apologies for the necessary political examples. Please don’t use this as an opportunity to talk about communism or race.)
Many “beliefs” that aren’t politically relevant—which excludes most scientific “knowledge” and much knowledge of yourself, the people you know, what you want to do with your life, et cetera—are better characterized as knowledge, and not beliefs as such. The answers to questions like “do I have one hand, two hands, or three hands?” or “how do I get back to my house from my workplace?” aren’t generally beliefs so much as knowledge, and in my opinion “knowledge” is not only epistemologically but also cognitively/neurologically a more accurate description, though I don’t really know enough about memory encoding to really back up that claim (though the difference is introspectively apparent). Either way, I still think that given our knowledge of the non-fundamental-ness of Bayes, we shouldn’t try too hard to stretch Bayes-ness to fit decision problems or cognitive algorithms that Bayes wasn’t meant to describe or solve, even if it’s technically possible to do so.
I believe the common term for that mistake is “no true Scotsman”.
What do we lose by saying that doesn’t count as a belief? Some consistency when we describe how our minds manipulate anticipations (because we don’t separate out ones we can measure and ones we can’t, but reality does separate those, and our terminology fits reality)? Something else?
So if someone you cared about is leaving your future light cone, you wouldn’t care if he gets horribly tortured as soon as he’s outside of it?
I’m not clear on the relevance of caring to beliefs. I would prefer that those I care about not be tortured, but once they’re out of my future light cone whatever happens to them is a sunk cost- I don’t see what I (or they) get from my preferring or believing things about them.
Yes, but you can affect what happens to them before they leave.
Before they leave, their torture would be in my future light cone, right?
Oops, I just realized that in my hypothetical scenario, by someone being tortured outside your light cone I meant someone being tortured somewhere your two future light cones don’t intersect.
Indeed; being outside of my future light cone just means whatever I do has no impact on them. But now not only can I not impact them, but they’re also dead to me (as they, or any information they emit, won’t exist in my future). I still don’t see what impact caring about them has.
Ok, my scenario involves your actions having an effect on them before your two light cones become disjoint.
Right, but for my actions to have an effect on them, they have to be in my future light cone at the time of action. It sounds like you’re interested in events that are in my future light cone but will not be in any of the past light cones centered on points along my future worldline- like, for example, things that I can set in motion now which will not come to fruition until after I’m dead, or the person I care about pondering whether or not to jump into a black hole. Those things are worth caring about so long as they’re in my future light cone, and it’s meaningful to have beliefs about them to the degree that they could be in my past light cone in the future.
The best illustration I’ve seen thus far is this one.
(Side note: I desire few things more than a community where people automatically and regularly engage in analyses like the one linked to. Such a community would actually be significantly less wrong than any community thus far seen on Earth. When LessWrong tries to engage in causal analyses of why others believe what they believe it’s usually really bad: proffered explanations are variations on “memetic selection pressures”, “confirmation bias”, or other fully general “explanations”/rationalizations. I think this in itself is a damning critique of LessWrong, and I think some of the attitude that promotes such ignorance of the causes of others’ beliefs is apparent in posts like “Our Phyg Is Not Exclusive Enough”.)
I agree that that post is the sort of thing that I want more of on LW.
It seems to me like Steve_Rayhawk’s comment is all about anticipation- I hold position X because I anticipate it will have Y impact on the future. But I think I see the disconnect you’re talking about- the position one takes on global warming is based on anticipations one has about politics, not the climate, but it’s necessary (and/or reduces cognitive dissonance) to state the political position in terms of anticipations one has about the climate.
I don’t think public stated beliefs have to be about anticipation- but I do think that private beliefs have to be (should be?) about anticipation. I also think I’m much more sympathetic to the view that rationalizations can use the “beliefs are anticipation” argument as a weapon without finding the true anticipations in question (like Steve_Rayhawk did), but I don’t think that implies that “beliefs are anticipation” is naive or incorrect. Separating out positions, identities, and beliefs seems more helpful than overloading the word “beliefs”.
You seem to be modeling the AGW disputant’s decision policy as if he is internally representing, in a way that would be introspectively clear to him, his belief about AGW and his public stance about AGW as explicitly distinguished nodes;—as opposed to having “actual belief about AGW” as a latent node that isn’t introspectively accessible. That’s surely the case sometimes, but I don’t think that’s usually the case. Given the non-distinguishability of beliefs and preferences (and the theoretical non-unique-decomposability (is there a standard economic term for that?) of decision policies) I’m not sure it’s wise to use “belief” to refer to only the (in many cases unidentifiable) “actual anticipation” part of decision policies, either for others or ourselves, especially when we don’t have enough time to be abnormally reflective about the causes and purposes of others’/our “beliefs”.
(Areas where such caution isn’t as necessary are e.g. decision science modeling of simple rational agents, or large-scale economic models. But if you want to model actual people’s policies in complex situations then the naive Bayesian approach (e.g. with influence diagrams) doesn’t work or is way too cumbersome. Does your experience differ from mine? You have a lot more modeling experience than I do. Also I get the impression that Steve disagrees with me at least a little bit, and his opinion is worth a lot more than mine.)
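(To make the contrast concrete, here is a toy sketch of the kind of explicit model I mean, with the “actual belief” as a latent node that an outside observer infers rather than something the disputant reads off by introspection. Every number, name, and table below is invented purely for illustration; this is a cartoon, not anyone’s actual model.)

```python
# Toy three-node net: a latent "actual belief about AGW" with two observable
# children, the public stance taken and the anticipation stated aloud.
# All probabilities are made up for illustration.

p_belief = {"agw_real": 0.6, "agw_not_real": 0.4}   # P(latent belief)

p_stance = {                                        # P(public stance | belief)
    "agw_real":     {"affirm": 0.7, "deny": 0.3},
    "agw_not_real": {"affirm": 0.2, "deny": 0.8},
}

p_stated = {                                        # P(stated anticipation | belief)
    "agw_real":     {"warming": 0.9, "no_warming": 0.1},
    "agw_not_real": {"warming": 0.4, "no_warming": 0.6},
}

def posterior_belief(stance, stated):
    """P(latent belief | observed stance, observed stated anticipation)."""
    joint = {b: p_belief[b] * p_stance[b][stance] * p_stated[b][stated]
             for b in p_belief}
    z = sum(joint.values())
    return {b: v / z for b, v in joint.items()}

# Someone who publicly denies AGW while conceding that the data show warming:
print(posterior_belief("deny", "warming"))
```

(Even this cartoon needs a pile of conditional probabilities that nobody can actually estimate; scaling it up to a real person’s politics is exactly where it stops being worth doing by hand, which is the cumbersomeness I mean.)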
Another more theoretical reason I encourage caution about the “belief as anticipation” idea is that I don’t think it correctly characterizes the nature of belief in light of recent ideas in decision theory. To me, beliefs seem to be about coordination, where your choice of belief (e.g. expecting a squared rather than a cubed modulus Born rule) is determined by the innate preference (drilled into you by ecological contingencies and natural selection) to coordinate your actions with the actions and decision policies of the agents around you, and where your utility function is about self-coordination (e.g. for purposes of dynamic consistency). The ‘pure’ “anticipation” aspect of beliefs only seems relevant in certain cases, e.g. when you don’t have “anthropic” uncertainty (e.g. uncertainty about the extent to which your contexts are ambiently determined by your decision policy). Unfortunately people like me always have a substantial amount of “anthropic” uncertainty, and it’s mostly only in counterfactual/toy problems where I can use the naive Bayesian approach to epistemology.
(Note that taking the general decision theoretic perspective doesn’t lead to wacky quantum-suicide-like implications, otherwise I would be a lot more skeptical about the prudence of partially ditching the Bayesian boat.)
I’m describing it that way but I don’t think the introspection is necessary- it’s just easier to talk about as if he had full access to his mind. (Private beliefs don’t have to be beliefs that the mind’s narrator has access to, and oftentimes are kept out of its reach for security purposes!)
I don’t think I’ve seen any Bayesian modeling of that sort of thing, but I haven’t gone looking for it.
Bayes nets in general are difficult for people, rather than computers, to manipulate, and so it’s hard to decide what makes them too cumbersome. (Bayes nets in industrial use, like for fault diagnostics, tend to have hundreds if not thousands of nodes, but you wouldn’t have a person traverse them unaided.)
If you wanted to code a narrow AI that determined someone’s mood by, say, webcam footage of them, I think putting your perception data into a Bayes net would be a common approach.
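Something in this spirit, as a minimal sketch rather than anything anyone has actually deployed: a latent mood node with discretized webcam features as its children. The feature names and every probability are made up; a real system would learn the tables from labeled footage.

```python
# Naive-Bayes sketch: infer a latent "mood" from discretized webcam features.
# All priors, likelihoods, and feature names are invented for illustration.

p_mood = {"happy": 0.5, "neutral": 0.3, "sad": 0.2}   # P(mood)

p_feature = {                                         # P(feature present | mood)
    "smile":         {"happy": 0.8, "neutral": 0.3, "sad": 0.1},
    "furrowed_brow": {"happy": 0.1, "neutral": 0.3, "sad": 0.6},
}

def infer_mood(observed):
    """observed maps feature name -> bool; returns P(mood | observations)."""
    scores = {}
    for mood, prior in p_mood.items():
        score = prior
        for feature, present in observed.items():
            p = p_feature[feature][mood]
            score *= p if present else (1.0 - p)
        scores[mood] = score
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

print(infer_mood({"smile": True, "furrowed_brow": False}))
```

The hard part is the perception step that turns pixels into “smile detected”; the net sitting on top of it is the easy, mechanical part.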
Political positions / psychology seem tough. I could see someone do belief-mapping and correlation in a useful way, but I don’t see analysis on the level of Steve_Rayhawk’s post coming out of a computer-run Bayes net anytime soon, and I don’t think drawing out a Bayes net would help significantly with that sort of analysis. Possible but unlikely- we’ve got pretty sophisticated dedicated hardware for very similar things.
Hmm. I’m going to need to sleep on this, but this sort of coordination still smells to me like anticipation.
(A general comment: this conversation has moved me towards thinking that it’s useful for the LW norm to be tabooing “belief” and using “anticipation” instead when appropriate, rather than trying to equate the two terms. I don’t know if you’re advocating for tabooing “belief”, though.)
(Complement to my other reply: You might not have seen this comment, where I suggest “knowledge” as a better descriptor than “belief” in most mundane settings. (Also I suspect that people’s uses of the words “think” versus “believe” are correlated with introspectively distinct kinds of uncertainty.))
Beliefs about primordial cows, etc. Most people’s beliefs. He’s talking descriptively, not normatively.
Don’t my beliefs about primordial cows constrain my anticipation of the fossil record and development of contemporary species?
I think “most people’s beliefs” fit the anticipation framework- so long as you express them in a compartmentalized fashion, and my understanding of the point of the ‘belief=anticipation’ approach is that it helps resist compartmentalization, which is generally positive.
The metaethics sequence is a bit of a mess, but the point it made is important, and it doesn’t seem like it’s just some weird opinion of Eliezer’s.
After I read it I was like, “Oh, ok. Morality is easy. Just do the right thing. Where ‘right’ is some incredibly complex set of preferences that are only represented implicitly in physical human brains. And it’s OK that it’s not supernatural or ‘objective’, and we don’t have to ‘justify’ it to an ideal philosophy student of perfect emptiness”. The fake utility functions and recursive justification stuff helped.
Maybe there’s something wrong with Eliezer’s metaethics, but I haven’t seen anyone point it out, and I have no reason to suspect it. Most of the material that contradicts it consists of obvious mistakes from just not having read and understood the sequences, not enlightened counter-analysis.
Hm. I think I’ll put on my project list “reread the metaethics sequence and create an intelligent reply.” If that happens, it’ll be at least two months out.
I look forward to that.
Has it ever been demonstrated that there is a consensus on what point he was trying to make, and that he in fact demonstrated it?
He seems to reach a conclusion, but I don’t believe he demonstrated it, and I never got the sense that he carried the day in the peanut gallery.
Try actually applying it to some real life situations and you’ll quickly discover the problems with it.
There’s a difference between a metaethics and an ethical theory.
The metaethics sequence is supposed to help dissolve the false dichotomy “either there’s a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right”. It’s not immediately supposed to solve “So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?”
For the second question, we’d want to add an Ethics Sequence (in my opinion, Yvain’s Consequentialism FAQ lays some good groundwork for one).
such as?
Well, for starters determining whether something is a preference or a bias is rather arbitrary in practice.
I struggled with that myself, but then figured out a rather nice quantitative solution.
Eliezer’s stuff doesn’t say much about that topic, but that doesn’t mean it fails at it.
I don’t think your solution actually resolves things since you still need to figure out what weights to assign to each of your biases/values.
You mean that it’s not something that I could use to write an explicit utility function? Of course.
Beyond that, whatever weight all my various concerns have is handled by built-in algorithms. I just have to do the right thing.
The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.
What would you like covered? Or is it just that vague “this isn’t enough” feeling?
I can’t fully remember—it’s been a while since I considered the topic so I mostly have the cached conclusion. More on preference aggregation is one thing. A ‘preferences are subjectively objective’ post. A post that explains more completely what he means by ‘should’ (he has discussed and argued about this in comments).
It’s much worse than that. Nobody on LW seems to be able to understand it at all.
Nah. Subjectivism. Euthyphro.
Random factoid: The post by Eliezer that I find most useful for describing (a particular aspect of) moral philosophy is actually a post about probability.
That is an excellent point.
(In general I use most of the same intuitions for values as I do for probability; they share a lot of the same structure, and given the oft-remarked-on non-unique-decomposability of decision policies they seem to be special cases of some more fundamental thing that we don’t yet have a satisfactory language for talking about. You might like this post and similar posts by Wei Dai that highlight the similarities between beliefs and values. (BTW, that post alone gets you half the way to my variant of theism.) Also check out this post by Nesov. (One question that intrigues me: is there a nonlinearity that results in non-boring outputs if you have an agent who calculates the expected utility of an action by dividing the universal prior probability of A by the universal prior probability of A (i.e., unity)? (The reason you might expect nonlinearities is that some actions depend on the output of the agent program itself, which is encoded by the universal prior but is undetermined until the agent fills in the blank. Seems to be a decent illustration of the more general timeful/timeless problem.)))
I think you mean that it would get you halfway there. Do you have good reason to think it would do the same for others who aren’t already convinced? (It seems like there could be non-question-begging reasons to think that—e.g., it might turn out that people who’ve read and understood it quite commonly end up agreeing with you about God.)
I think most of the disagreement would be about the use of the “God” label, not about the actual decision theory. Wei Dai asks:
This is very close to my variant of theism / objective morality, and gets you to the First and Final Cause of morality—the rest is discerning the attributes of said Cause, which we can do to some extent with algorithmic information theory, specifically the properties of Chaitin’s number of wisdom, omega. I think I could argue quite forcefully that my God is the same God as the God of Aquinas and especially Leibniz (who was in his time already groping towards algorithmic information theory himself). Thus far the counterarguments I’ve seen amount to: “Their ‘language’ doesn’t mean anything; if it does mean something then it doesn’t mean what you think it means; if it does mean what you think it means then you’re both wrong, traitor.” I strongly suspect rationalization due to irrational allergies to the “God” word; most people who think that theism is stupid and worthless have very little understanding of what theology actually is. This is pretty much unrelated to the actual contents of my ideas about ethics and decision theory, it’s just a debate about labels.
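(For concreteness, since that reference is doing real work in the sentence above: for a prefix-free universal machine U, Chaitin’s Omega is the halting probability

Ω_U = Σ_{p : U(p) halts} 2^(−|p|),

whose bits are algorithmically random and encode the halting problem; that is the sense in which it gets called a “number of wisdom”. How much about the attributes of said Cause one can actually read off of such properties is, admittedly, the part I still owe an argument for.)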
Anyway what I meant wasn’t that reading the post halfway convinces the attentive reader of my variant of theism, I meant it allows the attentive reader to halfway understand why I have the intuitions I do, whether or not the reader agrees with those intuitions.
(Apologies if I sound curmudgeonly, really stressed lately.)
Will, may I suggest that you try to work out the details of your objective morality first and explain it to us before linking it with theism/God? For example, how are we supposed to use Chaitin’s Omega for “discerning the attributes of said Cause”? I really have no idea at all what you mean by that, but it seems like it would make for a more interesting discussion than whether your God is the same God as the God of Aquinas and Leibniz, and also less likely to trigger people’s “allergies”.
Actually for the last few days I’ve been thinking about emailing you, because I’ve been planning on writing a long exegesis explaining my ideas about decision theory and theology, but you’ve stated that you don’t think it’s generally useful to proselytize about your intuitions before you have solid, formally justified results that came out of those intuitions. Although I’ve independently noticed various ideas about decision theory (probably due to Steve’s influence), I haven’t at all contributed any new insights, and the only thing I would accomplish with my apologetics is to convince other people that I’m not obviously crazy. You, Nesov, and Steve have made comments that indicate that you recognize that various of my intuitions might be correct, but of course that in itself isn’t anything noteworthy: it doesn’t help us build FAI. (Speaking of which, do you have any ideas about a better name than “FAI”? ‘Friendliness’ implies “friendly to humans”, which itself is a value judgment. Justified Artificial Intelligence, maybe? Not Regrettable Artificial Intelligence? I was using “computational axiology” for a while a few years ago, but if there’s not a fundamental distinction between epistemology and axiology then that too is sort of misleading.)
Now, I personally think that certain results about decision theory should actually affect what we think of as morally justified, and thus I think my intuitions are actually important for not being damned (whatever that means). But I could easily be wrong about that.
The reason I’ve made references to theology is actually a matter of epistemology, not decision theory: I think LessWrong and others’ contempt for theology and other kinds of academic philosophy is unjustified and epistemically poisonous. (Needless to say, I am extremely skeptical of arguments along the lines of “we only have so much time, we can’t check out every crackpot thesis that comes our way”: in my experience such arguments are always, without exception the result of motivated cognition.) I would hold this position about normative epistemology even if my intuitions about decision theory didn’t happen to support various theological hypotheses.
Anyway, my default position is to write up the aforementioned exegesis in Latin; that way only people that already give my opinions a substantial amount of weight will bother to read it, and I won’t be seen as unfairly proselytizing about my own justifiably-ignorable ideas.
(I’m pretty drunk right now, apologies for errors. I might respond to your comment again when I’m sober.)
OK, so now you’re just taking the piss.
Writing it in Latin selects to some extent for people who respect your opinions, but more strongly for people who happen to know quite a lot of Latin. It sounds as if what you actually want is to be able to say you’ve written up your position, without anyone actually reading it. I hope that isn’t really what you actually want.
(I’m pretty stupid; apologies for any mistakes I make.)
(Part of this stems from my looking for an excuse to manipulate myself into learning Latin. Thus far I’ve used a hot Catholic chick and a perceived moral obligation to express myself incoherently—a quite potent combination.)
That actually sounds a lot like me. Could be true. Yay double negative moral obligations—they force us to be coherent on a higher level, and about more important things!
I will generally explain my intuitions but try not to waste too much time arguing for them if other people do not agree. So I think if you have any ideas that you have not already clearly explained, then you should do so. (And please, not in Latin.)
How about Minimally Wrong AI? :)
Making off-hand references to theology is not going to change our minds about this. Do you have an actual plan to do so? If not, you’re just wasting your credibility and make it less likely for us to take your other ideas seriously.
(Side note: This self-sabotage is purposeful, for reasons indicated by, e.g., this post.)
Okay, thanks for the advice. I haven’t yet clearly explained most of my ideas. (Hm, “my” ideas?—I doubt any of them are actually “mine”.) Not sure I want to do so (hence the Latin), but it sort of seems like a moral imperative, so I guess I have to. bleh bleh bleh
I’ve debated the meta-level issue of epistemic “charity” and how much importance we should assign it in our decision calculi a few times on LessWrong before, e.g. in a few debates with Nesov. You were involved in at least one of them. I think what eventually happened is that I became afraid I was committing typical mind fallacy in advocating a sort of devil-may-care attitude to looking at weird or low-status beliefs; Nesov claimed that doing so had been harmful to him in the past, so I decided I’d rather collect more data before pushing my epistemic intuitions. Unfortunately I don’t know of an easy way to collect more data, so I’ve sort of stalled out on that particular campaign. The making references to theism thing is a sort of middleground position I’ve taken up, presumably to escape various aversions that I don’t have immediate introspective access to. There’s also the matter of not going out of my way to not appear discreditable.
FWIW, I think this “middleground position” is the worst of both worlds.
Your comments have made me wonder if I’ve been too creditable, i.e., to the extent of making people take my ideas more seriously than they should. But it seems like a valid Umeshism that if there isn’t at least one person who has taken your ideas too seriously, then you’re not being creditable enough. I may be close to (or past) this threshold already, but you seem to still have quite a long way to go, so I suggest not worrying about this right now. Especially since credibility is much harder to gain than to lose, so if you ever find yourself having too much credibility, it shouldn’t be too late to do something about it then.
Your comment seems to me to be modally implicitly self-contradictory. For you say that you are worried that you’ve caused yourself to be too creditable, and yet the reason you are considering that hypothesis is that I, a mere peasant, have implicitly-suggested-if-only-categorically that that might be the case. If I am wrong to doubt the wisdom of my self-doubting, then by your lights I am right, and not right to do so! You’ve taken me seriously enough to doubt yourself—to some extent this implies that I have impressed my self too strongly upon you, for you and I and everyone else think that you are more justified than I. Again, modally—not necessarily self-contradictory, but it leans that way, at least connotationally-implicitly.
(Really quite drunk, again, apologies for errors, again.)
Damn it, why am I giving you advice on the proper level of credibility, when I should be telling you to stop drinking so much? Talk about cached selves...
It’s okay, I ran out of rum. But now I’m left with an existential question: Why is the rum gone?
Apologies in advance for the emotivist interpretation of morality espoused by this comment.
Yay!
Boo.
YAAAAAY!
Boo.
I may well be being obtuse, but it seems to me that there’s something very odd about the phrase “theism / objective morality”, with its suggestion that basically the two are the same thing.
Have you actually argued forcefully that your god is also Aquinas’s and Leibniz’s? I ask because first you say you could, which kinda suggests you haven’t actually done it so far (at least not in public), but then you start talking about “counterarguments”, which kinda suggests that you have and people have responded.
I agree with Wei_Dai that it might be interesting to know more about your version of objective morality and how one goes about discerning the attributes of its alleged cause using algorithmic information theory.
This reflects a confusion I have about how popular philosophical opinion is in favor of moral realism, yet against theism. It seems that getting the correct answer to all possible moral problems would require prodigious intelligence, and so I don’t really understand the conjunction of moral realism and atheism. This likely reflects my ignorance of the existing philosophical literature, though to be honest like most LessWrongers I’m a little skeptical of the worth of the average philosopher’s opinion, especially about subjects outside of his specialty. Also if I averaged philosophical opinion over, say, the last 500 years, then I think theism would beat atheism. Also, there’s the algorithm from music appreciation, which is like “look at what good musicians like”, which I think would strongly favor theism. Still, I admit I’m confused.
I’ve kinda argued it on the meta-level, i.e. I’ve argued about when it is or isn’t appropriate to assume that you’re actually referring to the same concept versus just engaging in syncretism. But IIRC I haven’t yet forcefully argued that my god is Leibniz’s God. So, yeah, it’s a mixture.
I replied to Wei Dai’s comment here.
BTW, realistically, I won’t be able to reply to your comment re CEV/rightness, though as a result of your comment I do plan on re-reading the meta-ethics sequence to see if “right” is anywhere (implicitly or explicitly) defined as CEV.
(Inebriated, apologies for errors or omissions.)
(nods) Very likely. To the extent that this technique is useful for rank-ordering philosophical positions I ought to adopt, I can also use it to rank-order various theological positions to determine which particular theology to adopt. (I’ve never done this, but I predict it’s one that endorses literacy.)
Surely typical moral realists, atheist or otherwise, don’t believe that they’ve got the correct answer to all possible moral problems. (Just as no one thinks they’re factually correct about everything.)
I don’t think “averaged philosophical opinion” is likely to have much value. Nor “averaged opinion of good musicians” when you’re talking about something that isn’t primarily musical, especially when you average over a period for much of which (e.g.) many of the best employment opportunities for musicians were working for religious organizations.
(Human with a finite brain; apologies for errors or omissions.)
Apparently I mis-stated something. I’m a little too spent to fully rectify the situation, so here’s some word salad: moral realism implies belief in a Form of the Good, but ISTM that the Form of the Good has to be personal, because only intelligences can solve moral problems; specifically, I think a true Form of the Good has to be a superintelligence, i.e. a god; and if that god is also the Form of the Good, we call it God. ISTM that belief in a Form of the Good that isn’t personal is an obvious error that any decent moral philosopher should recognize, and so I think there must be something wrong with how I’m formulating the problem or with how I’m conceptualizing others’ representation of the problem.
Point taken. There is certainly a lack along those lines.