The upshot does feel kind of underwhelming and obvious. This might be because I just don’t remember how confusing the issue looked before I read those posts.
BTW, I’ve had numerous “wow” moments with philosophical insights, some of which made me spend years considering their implications. For example:
Bayesian interpretation of probability
AI / intelligence explosion
Tegmark’s mathematical universe
anthropic principle / anthropic reasoning
free will as the ability to decide logical facts
I expect that a correct solution to metaethics would produce a similar “wow” reaction. That is, it would be obvious in retrospect, but in an overwhelming instead of underwhelming way.
Is the insight about free will and logical facts part of the Sequences? Or is it something you or others discuss in a post somewhere? I’d like to learn about it, but my searches have failed.
I never wrote a post on it specifically, but it’s sort of implicit in my UDT post (see also this comment). Eliezer also has a free will sequence, which is somewhat similar/related, but I’m not sure if he would agree with my formulation.
“What is it that you’re deciding when you make a decision?”
What is “you”? And what is “deciding”? Personally, I haven’t been able to arrive at any redefinition of free will that makes more sense than this one.
I haven’t read the free will sequence, and I haven’t read up on decision theory because I wasn’t sure my math education was good enough yet. But I doubt that if I were to read them I would learn that you can salvage the notion of “deciding” from causality and logical facts. The best you can do is look at an agent and treat it as a transformation. But then you’d still be left with the problem of identity.
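To make the “agent as a transformation” picture concrete, here is a minimal sketch (my own illustration, not taken from the UDT post or the free will sequence; the Newcomb-style input string is just an assumed example). The agent is a pure function from observations to actions, and its “decision” is a logical fact about that function that holds whether or not anyone runs it:

```python
# A minimal sketch of "agent as a transformation" (illustrative only,
# not from the posts discussed above): the agent is a pure function
# from observations to actions.

def agent(observation: str) -> str:
    """A deterministic policy: a transformation from inputs to actions."""
    if observation == "Newcomb: two boxes offered":  # assumed example input
        return "one-box"
    return "do nothing"

# "agent('Newcomb: two boxes offered') == 'one-box'" is a logical fact
# about this source code, fixed before anyone executes it. On the view
# sketched above, "deciding" just is settling that fact; a predictor
# simulating the same source code settles it the same way.
print(agent("Newcomb: two boxes offered"))  # prints: one-box
```

On this picture, the remaining puzzle is exactly the one noted above: which physical systems count as instances of this function, i.e. the problem of identity.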
(Agreed; I also think meta-ethics and ethics are tied into each other in such a way that a solution to meta-ethics would, at least theoretically, solve any ethical problem. Given that I can think of hundreds or thousands of object-level ethical problems, and given that I don’t think my inability to answer at least some of them is purely due to boundedness, fallibility, self-delusion, or ignorance as such, I don’t think I have a solution to meta-ethics. (But I would characterize my belief in God as at least a belief that meta-ethics and ethical problems do have some unique (meta-level) solution. This might be optimistic bias, though.))
Wei Dai, have you read the Sermon on the Mount, particularly with superintelligences, Tegmark, (epistemic or moral) credit assignment, and decision theory in mind? If not I suggest it, if only for spiritual benefits. (I suggest the Douay-Rheims translation, but that might be due to a bias towards Catholics as opposed to Protestants.)
(Pretty damn drunk for the third day in a row, apologies for errors.)
Are you planning on starting a rationalist’s drinking club? A BYOB LessWrong meetup with one sober note-taker? You usually do things purposefully, even if your purposes are unusual, so consistent drunkenness seems uncharacteristic unless it’s part of a plan.
Will_Newsome isn’t a rationalist. (He has described himself as a ‘post-rationalist’, which seems as good a term as any.)
(FWIW the “post-rationalist” label isn’t my invention; I think it mostly belongs to the somewhat separate Will Ryan / Nick Tarleton / Michael Vassar / Divia / &c. crowd. I agree with Nick and Vassar way more than I agree with the LessWrong gestalt, but I’m still off on my own plot of land. Jennifer Rodriguez-Mueller could be described similarly.)
I’m pretty sure the term “rationalist’s drinking club” wouldn’t be used ingenuously as a self-description. I have noticed the justifiable use of “post-rationalist” and distance from the LW gestalt, though. I think if there were a site centered around a sequence written by Steve Rayhawk with the kind of insights into other people’s minds he regularly writes out here, with Sark and a few others as heavy contributors, that would be a “more agenty less wrong” Will would endorse. I’d actually like to see that, too.
In vino veritas et sanitas!