Just reminded me of them, but they weren’t all that similar; we’ve learned a lot in the last 20 years. In fact, the discussions of metaethics etc were on a pretty high level and I am glad we had them. But, as Eli hints, I think that for MIRIx purposes, a math focus without discussion of philosophical underpinnings is best.
In fact, the discussions of metaethics etc were on a pretty high level
Actually, they were pretty godawful. From appearances, several members of LW-TA are pretty actively worried that they don't know of any universally compelling ethical arguments. The problem is, just because the field of Normative Ethics in philosophy concerns itself with finding universally compelling arguments… doesn't mean there are universally compelling ethical arguments, or at least, none that are even close to our own moral intuitions or values.
Mind, there are of course universally compelling arguments in mathematics, and thus (via Solomonoff :-p) in science. These are universally compelling because any agent who does not feel compelled by them will be killed very quickly by natural forces; a mind that doesn't accept Modus Ponens will die out in favor of one that does. Natural selection doesn't select in favor of true beliefs, but it does select against trivially inconsistent or incorrect logics.
(EDIT: I think I can and should refine the above statement. Let us say that an argument is universally compelling when its acceptance increases optimization power as such in any possible optimization process. We can then consider the issue of whether particular arguments or statements are universally compelling to classes of minds or processes which share some or all of their utility function.)
And since there are universally compelling arguments in mathematics (that is, running a logic as a computation from a fixed set of axioms generates a fixed set of conclusions; see the Curry-Howard Isomorphism), there are almost certainly universally compelling arguments in decision theory as well. So there can be universally normative decision theories, just not universally normative value-beliefs over which those decision theories compute optimal decisions.
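To make the "logic as computation" point concrete, here's a minimal sketch (my own illustration, not anything from the thread): given a fixed set of atomic facts and implication rules, repeatedly applying Modus Ponens always yields the same deductive closure, no matter who or what runs it.

```python
def deductive_closure(axioms, rules):
    """axioms: set of atomic facts; rules: set of (premise, conclusion) pairs,
    each pair read as the implication premise -> conclusion."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # Modus Ponens: from `premise` and `premise -> conclusion`,
            # infer `conclusion`.
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"p"}
implications = {("p", "q"), ("q", "r"), ("s", "t")}
print(sorted(deductive_closure(facts, implications)))  # -> ['p', 'q', 'r']
```

Every run from the same axioms reaches the same fixed point; "s -> t" never fires because "s" is never derived. That determinism is the sense in which the conclusions are compelled.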
Those value-beliefs are always parameters to the decision theory. Which is sort of our whole problem in a nutshell.
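A toy sketch of what "values as parameters" means (assuming, purely for illustration, that simple expected-utility maximization stands in for "a decision theory"; the world model and utility tables here are made up): the decision procedure itself is fixed, but swapping the utility function changes which action comes out as optimal.

```python
def best_action(actions, outcome_probs, utility):
    """Expected-utility maximizer. The procedure is fixed;
    the utility function is an external parameter."""
    def expected_utility(action):
        return sum(p * utility(outcome)
                   for outcome, p in outcome_probs(action).items())
    return max(actions, key=expected_utility)

# Hypothetical toy world: two actions with probabilistic outcomes.
def outcome_probs(action):
    return {"save": {"lives_saved": 0.9, "nothing": 0.1},
            "hoard": {"wealth": 1.0}}[action]

# Two agents running the *same* decision theory with different values.
values_a = {"lives_saved": 10, "wealth": 1, "nothing": 0}
values_b = {"lives_saved": 1, "wealth": 10, "nothing": 0}

print(best_action(["save", "hoard"], outcome_probs, values_a.get))  # -> save
print(best_action(["save", "hoard"], outcome_probs, values_b.get))  # -> hoard
```

Same procedure, same world model, different utility parameter, different "optimal" choice: an argument could be universally compelling about the procedure without compelling anything about the parameter.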