Oy, it’s only halfway through the month! I still have two weeks to reimplement the proofs of correctness for the Damas-Milner type system and its inference algorithm, Algorithm W. I’ve already gotten through almost all of Benjamin Pierce’s Software Foundations, finishing nearly all of the lambda-calculus and type-theory chapters just last week.
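(For the curious, here’s a minimal sketch in Haskell of the core of Algorithm W: substitutions, unification with the occurs check, and the variable/abstraction/application cases. It deliberately omits let-generalization, which is the distinctively Damas-Milner part, and it’s my own illustration, not the Coq development I’m actually reimplementing.)

    import qualified Data.Map as Map

    -- Types are type variables and function arrows; Expr is the bare
    -- lambda calculus. Let-generalization is omitted to keep this small.
    data Type = TVar Int | TFun Type Type deriving (Eq, Show)
    data Expr = Var String | Lam String Expr | App Expr Expr

    type Subst = Map.Map Int Type

    -- Apply a substitution to a type.
    apply :: Subst -> Type -> Type
    apply s t@(TVar v) = Map.findWithDefault t v s
    apply s (TFun a b) = TFun (apply s a) (apply s b)

    -- compose s2 s1 behaves like applying s1 first, then s2.
    compose :: Subst -> Subst -> Subst
    compose s2 s1 = Map.union (Map.map (apply s2) s1) s2

    occurs :: Int -> Type -> Bool
    occurs v (TVar u)   = v == u
    occurs v (TFun a b) = occurs v a || occurs v b

    -- Most general unifier, with the occurs check.
    unify :: Type -> Type -> Maybe Subst
    unify (TVar v) t
      | t == TVar v = Just Map.empty
      | occurs v t  = Nothing
      | otherwise   = Just (Map.singleton v t)
    unify t tv@(TVar _)         = unify tv t
    unify (TFun a b) (TFun c d) = do
      s1 <- unify a c
      s2 <- unify (apply s1 b) (apply s1 d)
      return (compose s2 s1)

    -- infer env n e returns (substitution, type of e, next fresh variable).
    infer :: Map.Map String Type -> Int -> Expr -> Maybe (Subst, Type, Int)
    infer env n (Var x) = do
      t <- Map.lookup x env
      return (Map.empty, t, n)
    infer env n (Lam x e) = do
      let tv = TVar n
      (s, t, n') <- infer (Map.insert x tv env) (n + 1) e
      return (s, TFun (apply s tv) t, n')
    infer env n (App f a) = do
      (s1, tf, n1) <- infer env n f
      (s2, ta, n2) <- infer (Map.map (apply s1) env) n1 a
      let tv = TVar n2
      s3 <- unify (apply s2 tf) (TFun ta tv)
      return (compose s3 (compose s2 s1), apply s3 tv, n2 + 1)

    main :: IO ()
    main = print (infer Map.empty 0 (Lam "x" (Var "x")))
    -- Just (fromList [],TFun (TVar 0) (TVar 0),1), i.e. the identity gets a -> a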
Well, anyway, in the past month I published a nine-page decision-theory article here on LW and more or less organized MIRIx Tel-Aviv (I just have to link my PayPal account to my bank account). At one point, this involved causing the LW-TA mailing list to erupt into debates that rather reminded JoshuaFox of the Extropian Mailing List in the ’90s; I consider this slightly shameful, but also an actual accomplishment.
Actually, the productivity technique used for that was really several techniques: 1) the sheer chutzpah to apply in the first place, 2) soliciting people for information on when and where they could do things, 3) shutting down discussions that diverged. You’d be surprised how far this gets you, provided anyone actually responds to anything ever.
I’ve been getting coursework done on time and with really good marks, but that’s not real productivity for a graduate student.
Overall, things are going very smoothly: I’m managing my time well and finding myself with lots of time to invest in my long-term goals.
It just reminded me of them, but they weren’t all that similar; we’ve learned a lot in the last 20 years. In fact, the discussions of metaethics etc. were at a pretty high level, and I am glad we had them. But, as Eli hints, I think that for MIRIx purposes, a math focus without discussion of philosophical underpinnings is best.
“In fact, the discussions of metaethics etc. were at a pretty high level”
Actually, they were pretty godawful. From appearances, several members of LW-TA are actively worried that they don’t know any universally compelling ethical arguments. The problem is, just because the field of normative ethics in philosophy concerns itself with finding universally compelling arguments… that doesn’t mean there are universally compelling ethical arguments, or at least none that are even close to our own moral intuitions or values.
Mind, there are of course universally compelling arguments in mathematics, and thus (via Solomonoff :-p) in science. These are universally compelling because any agent who does not feel compelled by them will be killed off very quickly by natural forces; a mind that doesn’t accept modus ponens will die out in favor of one that does. Natural selection doesn’t select in favor of true beliefs, but it does select against trivially inconsistent or incorrect logics.
(EDIT: I think I can and should refine the above statement. Let us say that an argument is universally compelling when accepting it increases optimization power, as such, in any possible optimization process. We can then ask whether particular arguments or statements are universally compelling to classes of minds or processes that share some or all of a utility function.)
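One hypothetical way to write that refinement down, in LaTeX; the notation is mine, with power(P) standing in for whatever measure of optimization power one prefers, and P + a denoting the process P after accepting the argument a:

    % a is universally compelling iff accepting it increases optimization
    % power for every possible optimization process P
    \[ \mathrm{UC}(a) \;\iff\; \forall P \in \mathcal{O}.\ \mathrm{power}(P + a) > \mathrm{power}(P) \]
    % relativized version: quantify only over the class of processes
    % sharing (some of) a utility function U
    \[ \mathrm{UC}_U(a) \;\iff\; \forall P \in \mathcal{O}_U.\ \mathrm{power}(P + a) > \mathrm{power}(P) \]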
And since there are universally compelling arguments in mathematics (that is, running a logic as a computation from a fixed set of axioms generates a fixed set of conclusions; see the Curry-Howard isomorphism), there are almost certainly universally compelling arguments in decision theory as well. So there can be universally normative decision theories, just not universally normative value-beliefs over which those decision theories compute optimal decisions.
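To make the Curry-Howard point concrete: under the propositions-as-types reading, a proof of “A implies B” is a function of type a -> b, modus ponens is just function application, and checking the proof amounts to running the type checker. A tiny Haskell illustration (my example):

    -- Under Curry-Howard, modus ponens is function application: from a
    -- proof of (a -> b) and a proof of a, we obtain a proof of b.
    modusPonens :: (a -> b) -> a -> b
    modusPonens ab a = ab a

    main :: IO ()
    main = print (modusPonens (+ 1) (41 :: Int))  -- prints 42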
Those value-beliefs are always parameters to the decision theory, which is sort of our whole problem in a nutshell.