Requesting that lukeprog get around to this. LessWrong metaethics, given that it rejects a large amount of rubbish (coherentism being the main offender), is the best in the field today and needs further advancing.
Requesting that people upvote this post if they agree with me that getting around to metaethics is the best use of Lukeprog's time, and downvote it if they disagree.
Getting around to metaethics should rank high among Lukeprog's priorities: [pollid:573]
I would love to see Luke (the other Luke, but maybe you, too) and hopefully others (like Yvain) explicate their views on metaethics, given that Eliezer's sequence is at best unclear (though quite illuminating). This seems essential, since a clear metaethics is necessary to achieve MIRI's stated purpose: averting AGI x-risk by developing FAI.
Creating a "balance Karma" post. Asking people to use this comment for conventional karma on my post above, or to balance out their upvotes/downvotes there. This way my karma will remain fair.