June Monthly Bragging Thread
Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you’ve done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you’re dying to tell everyone about. This is the place to do just that.
Remember, however, that this isn’t any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not “will do”. Not “are working on”. Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.
So, what’s the coolest thing you’ve done this month?
I reached and exceeded economic break-even for the first time in 6 years.
This is kind of a big deal for me.
Third Flatiron has published my short story, “The Right Books,” in the Master Minds anthology. After 20-some-odd nonfiction books, and probably hundreds of articles, this is my first published work of fiction.
I don’t want to give too much away to those who may read it, but this story is heavily informed by LessWrong. Indeed I very much doubt I could or would have written anything like this before discovering LessWrong. Enjoy!
By the way, if anyone would like to do a review of this collection for LessWrong, the publisher would be happy to send you a review copy. Just let me know. The theme of the anthology is Intelligence, a subject of some interest here. :-)
I went on the Fox Business TV show Stossel to talk about what we should do if robots cause mass unemployment. The segment has not yet aired.
What do you mean, if? Given the economic incentives for business-owners, that’s more of a when thing.
Well, in the past, when technology destroyed jobs, new ones always appeared, so from an outside viewpoint we should be skeptical that robots will cause mass unemployment.
Tsk tsk. They didn’t appear for the same workers, unless you can point me to the current large employer of draft horses.
Since we are talking about whether robots will cause “mass unemployment”, it doesn’t matter whether jobs appear for “the same workers”.
Interestingly, your horse point was basically brought up by another guest of the show.
Oy, it’s only halfway through the month! I still have two weeks to reimplement the proofs of correctness for the Damas-Milner type system and its inference algorithm W. I’ve already worked through almost all of Benjamin Pierce’s Software Foundations, finishing nearly all of the lambda-calculus and type-theory chapters just last week.
Well, anyway, in the past month I published a nine-page decision theory article here on LW and more-or-less organized MIRIx Tel-Aviv (I just have to attach my PayPal to my bank account). At one point, this involved causing the LW-TA mailing list to erupt into debates that rather reminded JoshuaFox of the Extropian Mailing List in the ’90s: I consider this slightly shameful, but also actually an accomplishment.
Actually, the productivity technique used for that was multifold: 1) the sheer chutzpah of applying, 2) soliciting people for information on when and where they can do things, and 3) closing down discussions that diverge. You’d be surprised how far this gets you, provided anyone actually responds to anything ever.
I’ve been getting coursework done on-time and with really good marks, but that’s not real productivity for a graduate student.
Overall, things are just going very smoothly, time is getting managed well, and I’m finding myself with lots of time to invest in my long-term goals.
It just reminded me of them, but they weren’t all that similar; we’ve learned a lot in the last 20 years. In fact, the discussions of metaethics etc. were on a pretty high level, and I am glad we had them. But, as Eli hints, I think that for MIRIx purposes, a math focus without discussion of philosophical underpinnings is best.
Actually, they were pretty godawful. From appearances, several members of LW-TA are pretty actively worried about not knowing any universally compelling ethical arguments. The problem is, just because the field of Normative Ethics in philosophy concerns itself with finding universally compelling arguments… doesn’t mean there are universally compelling ethical arguments, or at least, universally compelling arguments that are even close to our own moral intuitions or values.
Mind, there are of course universally-compelling arguments in mathematics, and thus (via Solomonoff :-p) in science. These are universally compelling because any agent who does not feel compelled by them will be killed very quickly by natural forces; a mind that doesn’t accept Modus Ponens will die out in favor of one that does. Natural selection doesn’t select in favor of true beliefs, but it does select against trivially inconsistent or incorrect logics.
(EDIT: I think I can and should refine the above statement. Let us say that an argument is universally compelling when its acceptance increases optimization power as such in any possible optimization process. We can then consider the issue of whether particular arguments or statements are universally compelling to classes of minds or processes which share some or all of their utility function.)
And since there are universally-compelling arguments in mathematics (that is, running a logic as a computation from a fixed set of axioms can generate a fixed set of conclusions, see: Curry-Howard Isomorphism), that means there are almost definitely universally compelling arguments in decision theory, by the way. So there can be universally normative decision theories, just not universally normative value-beliefs over which those decision theories compute optimal decisions.
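To make the Curry-Howard point concrete: under the correspondence, an implication is a function type, so Modus Ponens is literally function application. A minimal illustrative sketch in Lean (my own example, not from the thread):

```lean
-- Under Curry-Howard, a proof of P → Q is a function taking proofs of P
-- to proofs of Q, so Modus Ponens is just applying that function.
theorem modus_ponens (P Q : Prop) (h : P → Q) (p : P) : Q := h p
```

Running the type-checker on this is exactly "running a logic as a computation": the fixed axioms (the hypotheses) mechanically determine which conclusions are derivable.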
Those are always parameters to the decision theory. Which is sort of our whole problem in a nutshell.
Met several rationalists online, met two at an Austin meetup, and started a meetup myself in San Antonio.