I thought the upshot of Eliezer’s metaethics sequence was just that “right” is a fixed abstract computation, not that it’s (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops, it is).
(Indeed just saying that it’s a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it’s some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn’t constitute as much progress as it might seem.)
The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts.
BTW, I’ve had numerous “wow” moments with philosophical insights, some of which made me spend years considering their implications. For example:
Bayesian interpretation of probability
AI / intelligence explosion
Tegmark’s mathematical universe
anthropic principle / anthropic reasoning
free will as the ability to decide logical facts
I expect that a correct solution to metaethics would produce a similar “wow” reaction. That is, it would be obvious in retrospect, but in an overwhelming instead of underwhelming way.
Is the insight about free will and logical facts part of the sequences? Or is it something you or others discuss in a post somewhere? I’d like to learn about it, but my searches failed.
I never wrote a post on it specifically, but it’s sort of implicit in my UDT post (see also this comment). Eliezer also has a free will sequence, which is somewhat similar/related, but I’m not sure if he would agree with my formulation.
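(Not from Wei Dai’s post, just a minimal sketch of one way to read “free will as the ability to decide logical facts”: the agent is a program, and what that program outputs is a fixed logical fact, yet it is the agent’s own deliberation that settles it. The names and payoffs below are hypothetical, chosen only for illustration.)

    # Hedged sketch: "what does decide() return?" is a fixed logical fact
    # about this program; the deliberation inside decide() settles it,
    # and that settling is the "deciding".

    def utility(action):
        # Hypothetical payoffs, used only for the illustration.
        return {"A": 10, "B": 1}[action]

    def decide():
        # Running this computation determines the logical fact
        # "decide() == 'A'"; nothing outside the program fixes it first.
        return max(("A", "B"), key=utility)

    print(decide())  # prints "A": the logical fact is now decided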
“What is it that you’re deciding when you make a decision?”
What is “you”? And what is “deciding”? Personally I haven’t been able to come to any redefinition of free will that makes more sense than this one.
I haven’t read the free will sequence. And I haven’t read up on decision theory because I wasn’t sure if my math education is good enough yet. But I doubt that if I were to read it I would learn that you can salvage the notion of “deciding” from causality and logical facts. The best you can do is look at an agent and treat it as a transformation. But then you’d still be left with the problem of identity.
(Agreed; I also think meta-ethics and ethics are tied into each other in a way that implies a solution to meta-ethics would at least in principle solve any ethical problem. Given that I can think of hundreds or thousands of object-level ethical problems, and given that I don’t think my inability to answer at least some of them is purely due to boundedness, fallibility, self-delusion, or ignorance as such, I don’t think I have a solution to meta-ethics. (But I would characterize my belief in God as at least a belief that meta-ethics and ethical problems do have some unique (meta-level) solution. This might be optimistic bias, though.))
Wei Dai, have you read the Sermon on the Mount, particularly with superintelligences, Tegmark, (epistemic or moral) credit assignment, and decision theory in mind? If not, I suggest it, if only for spiritual benefits. (I suggest the Douay-Rheims translation, but that might be due to a bias towards Catholics as opposed to Protestants.)
(Pretty damn drunk for the third day in a row, apologies for errors.)
Are you planning on starting a rationalist’s drinking club? A BYOB LessWrong meetup with one sober note-taker? You usually do things purposefully, even if they’re unusual purposes, so consistent drunkenness seems uncharacteristic unless it’s part of a plan.
Will_Newsome isn’t a rationalist. (He has described himself as a ‘post-rationalist’, which seems as good a term as any.)
(FWIW the “post-rationalist” label isn’t my invention, I think it mostly belongs to the somewhat separate Will Ryan / Nick Tarleton / Michael Vassar / Divia / &c. crowd; I agree with Nick and Vassar way more than I agree with the LessWrong gestalt, but I’m still off on my own plot of land. Jennifer Rodriguez-Mueller could be described similarly.)
I’m pretty sure the term “rationalist’s drinking club” wouldn’t be used ingenuously as a self-description. I have noticed the justifiable use of “post-rationalist” and distance from the LW gestalt, though. I think if there were a site centered around a sequence written by Steve Rayhawk with the kind of insights into other people’s minds he regularly writes out here, with Sark and a few others as heavy contributors, that would be a “more agenty less wrong” Will would endorse. I’d actually like to see that, too.
In vino veritas et sanitas!
It’s mentioned here:
For a human this is a much huger blob of a computation that looks like, “Did everyone survive? How many people are happy? Are people in control of their own lives? …” Humans have complex emotions, have many values—the thousand shards of desire, the godshatter of natural selection. I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don’t really have—I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them. So that I can regard my present values, as an approximation to the ideal morality that I would have if I heard all the arguments, to whatever extent such an extrapolation is coherent. [link in the original]
ETA: Just in case you’re right and Eliezer somehow meant for that paragraph not to be part of his metaethics, and that his actual metaethics is just “morality is a fixed abstract computation”, then I’d ask, “If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don’t you think a complete ‘solved’ metaethics should explain how morality differs from rationality?”
“If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don’t you think a complete ‘solved’ metaethics should explain how morality differs from rationality?”
A rationality computation outputs statements about the world; morality evaluates them. Rationality is universal and objective, so it is unique as an abstract computation, not just fixed. Morality is arbitrary.
How so? Every argument I’ve heard for why morality is arbitrary applies just as well to rationality.
If we assume some kind of mathematical realism (which seems to be necessary for “abstract computation” and “uniqueness” to have any meaning) then there exist objectively true statements and computations that generate them. At some point there are Goedelian problems, but at least all of the computations agree on the primitive-recursive truths, which are therefore universal, objective, unique, and true.
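(To make “all of the computations agree on the primitive-recursive truths” concrete, here is a minimal sketch, mine rather than the commenter’s: two independently written implementations of a primitive-recursive function cannot disagree on any instance they both compute.)

    # Addition defined by primitive recursion on n, checked against the
    # built-in implementation: add(m, 0) = m; add(m, n+1) = add(m, n) + 1.
    # Correct computations of a primitive-recursive function agree on
    # every instance, which is the sense in which such truths are universal.

    def add_rec(m, n):
        return m if n == 0 else add_rec(m, n - 1) + 1

    assert all(add_rec(a, b) == a + b
               for a in range(25) for b in range(25))
    print("independent implementations agree on all tested instances")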
Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means using math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique.
Of course, all these points are made by a fallible human brain and so may be wrong.
But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner’s Dilemma when playing against the right opponents.
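(As a concrete reading of “cooperating in the Prisoner’s Dilemma when playing against the right opponents”, here is my sketch of the standard program-equilibrium idea, not code from this thread: a program that cooperates exactly with copies of itself makes mutual cooperation the winning move for both copies, while remaining unexploitable by unconditional defectors.)

    import inspect

    # Standard payoff matrix: (first player's score, second player's score).
    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def clique_bot(own_src, opp_src):
        # Cooperate only with an exact copy of this program.
        return "C" if opp_src == own_src else "D"

    def defect_bot(own_src, opp_src):
        return "D"

    def play(a, b):
        # Each program sees its own source and the opponent's source.
        sa, sb = inspect.getsource(a), inspect.getsource(b)
        return PAYOFFS[(a(sa, sb), b(sb, sa))]

    print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation
    print(play(clique_bot, defect_bot))  # (1, 1): the defector gains nothing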
But there is nothing even like that for morality. In fact, when a moral statement seems universal under sufficient reflection, it stops being a moral statement and becomes simply rational, like cooperating in the Prisoner’s Dilemma when playing against the right opponents.
What is the distinction you are making between rationality and morality, then? What makes you think the former won’t be swallowed up by the latter (or vice versa!) in the limit of infinite reflection?
(Sorta drunk, apologies for conflating conflation of rationality and morality with lack of conflation of rationality and morality, probabilistically-shouldly.)
ETA: I don’t understand how my comments can be so awesome when I’m obviously so freakin’ drunk. ;P . Maybe I should get drunk all the freakin’ time. Or study Latin all the freakin’ time, or read the Bible all the freakin’ time, or ponder how often people are obviously wrong when they use the phrase “all the freakin’ time” (let alone “freakin[‘]”) (especially when they use the phrase “all the freakin’ time” all the freakin’ time, naturally-because-reflexively)....
What is the distinction you are making between rationality and morality, then? What makes you think the former won’t be swallowed up by the latter (or vice versa!) in the limit of infinite reflection?
That was the distinction—one is universal, the other arbitrary, in the limit of infinite reflection. I suppose “there is nothing arbitrary” is a valid (consistent) position, but I don’t see any evidence for it.
Interesting! You seem to be a moral realist (cognitivist, whatever) and an a-theist. (I suspect this is the typical LessWrong position, even if the typical LessWronger isn’t as coherent as you.) I’ll take note that I should pester you and/or take care to pay attention to your opinions (comments) more in the future. Also, I thank you for showing me what the reasoning process would be that would lead one to that position. (And I think that position has a very good chance of being correct—in the absence of justifiably-ignorable inside-view (non-communicable) evidence I myself hold.)
(It’s probably obvious that I’m pretty damn drunk. (Interesting that alcohol can be just as effective as LSD or cannabis. (Still not as effective as nitrous oxide or DMT.)))
Cognitivist yes, moral realist, no. IIUC, it’s EY’s position (“morality is a computation”), so naturally it’s the typical LessWrong position.
Universally valid statements must have universally-available evidence, no?
Really nothing like LSD, which makes it impossible to write anything at all, at least for me.
Any rational agent (optimization process) in any world with some regularities would exploit these regularities, which means using math. A reflective self-optimizing rational agent would arrive at the same math as us, because the math is unique.
Assuming it started with the same laws of inference and axioms. Also I was mostly thinking of statements about the world, e.g., physics.
Assuming it started with the same laws of inference and axioms
Or equivalent ones. But no matter where it started, it won’t arrive at different primitive-recursive truths, at least according to my brain’s current understanding.
Also I was mostly thinking of statements about the world, e.g., physics.
Is there a significant difference? Wherever there are regularities in physics, there’s math (=study of regularities). Where no regularities exist, there’s no rationality.
What about the poor beings with an anti-inductive prior? More generally, read this post by Eliezer.
I think the poor things are already dead. More generally, I am aware of that post, but is it relevant? The possible mind design space is of course huge and contains lots of irrational minds, but here I am arguing about universality of rationality.
My point, as I stated above, is that every argument I’ve heard against universality of morality applies just as well to rationality.
I agree with your statement:
The possible mind design space is of course huge and contains lots of irrational minds, but here I am arguing about universality of rationality.
I would also agree with the following:
The possible mind design space is of course huge and contains lots of immoral minds, but here I am arguing about universality of morality.
But rationality is defined by external criteria—it’s about how to win (=achieve intended goals). Morality doesn’t have any such criteria. Thus, “rational minds” is a natural category. “Moral minds” is not.