Open Thread June 2018
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
Check if there is an active Open Thread before posting a new one (search for "Open Thread").
Monthly open threads seem to get lost and maybe we should switch to fortnightly.
What accomplishments are you celebrating from the last month?
What are you reading?
I made a small programming tutorial for beginners, and tried it on kids in a programming class I’m teaching. It seems to work pretty well. Don’t want to publish it yet, but I’d like to try it on someone online, preferably someone who never learned programming and doesn’t feel naturally gifted at it. Any takers?
Last month I got published on an Australian national media website.
I also moved into Sydney’s first LessWrong group house.
I read a book about schema therapy (post coming soon).
I read Scott Adams’s “How to Fail at Almost Everything and Still Win Big”.
I also read my first paper book this year (it was a novel).
I also discovered bone-conduction headphones and am impressed with the quality.
I am reading “Principles of Neural Science”.
Do you have a recommendation? I’m constantly on the lookout for new headphone styles; I have weird ear holes that nothing fits in.
eBay is where I got mine. They are “AfterShokz Bluez 2S”. I would buy them again in a heartbeat. I’ll have to wait a month before I can decide whether they are worth it, or whether the upgrade was worth it, but I suspect the answer is yes.
Curious about this as well, since neither of these recently updated articles from the NYTimes-owned (meta)review site The Wirecutter mentions being able to find any bone-conduction headphones they liked.
I am moving to the Bay Area from the east coast, and have been looking for a job out there for some time. I signed an offer letter last week from a company I am excited to start working with.
Yesterday I published an app that will send you to a random slatestarcodex article. You can find it as a subdomain of my blog: http://random-ssc.pulsarcoffee.com.
I am reading The Fall of Hyperion by Dan Simmons, and Programming in Scala by Martin Odersky.
OpenAI has recently released this charter outlining their strategic approach.
My reaction was that this sounds like incredibly good and pretty important news. It reads as very genuine, and as distinct from merely trying to appease critics. But I haven’t seen anyone on LW mention it, which leaves me wondering whether I’m being naive.
So I guess I’m just curious about other opinions here. I’m also reminded of this post which seemed reasonable to me at the time.
I have a LW draft, but I’m only mostly sure that publishing it would help AI safety research more than AI research, if it works. We should have a review process for this. I could just send Eliezer the .html, but surely he has random internet people sending him quite enough already.
On recommendation from the #lesswrong IRC channel, I’m sending it to a particular LW account for review.
I have an exercise app that feeds me lotus if I work out every day. (In the form of streaks, achievements, and eventually unlocking more exercises.) I want not to work out every day.
I’m solving this dilemma by lying to the app.
Can you set it up so the lotus is conditional on you not having lied? For example, the app could praise you for not lying, so your guilt would outweigh the feeling of achievement if you lie. It could say that the app only keeps track of an approximation to the true, platonic streaks and achievements, which is only as accurate as the information you give it, and if you lie it becomes much harder for you to figure out whether you receive lotus.
Perhaps it could just ask you questions of the form “Have you lied in this time period?” and use that information only in ways that won’t push you to lie about that too. For example, there might be a “What if?” tool which lets you select a subset of your statements and shows you what your progress would have been if those were all you had stated.
I feel like you missed what I was getting at. (Either that, or I missed what you’re getting at.) Context is noticing the taste of lotus.
It kind of sounds like you think that I’m lying because I lack willpower, or something along those lines. But that’s not it. The point of lying is to decouple the lotus from whether I’m actually exercising, so that I exercise when I choose to, not when the app thinks I should. (I think I should probably exercise more than I currently choose to, but less than the app thinks I should.)
With that in mind, I’m not really sure why “only get lotus when I tell the truth” is something I would want.
That said, it would be kind of nice if I could tag my lies and have the app show me only truthful workouts. As it is I can’t tell how often I’m actually exercising. (It occurs to me I can get some of that benefit by creating a custom workout named “fake”.)
So you don’t like the gamification on the app. Have you considered… using a less gamified app to track workouts? Or not using an app at all?
I have. I like having the app better than not having an app. It’s likely that a better app exists, but finding it is higher activation energy.
Actually, that sounds like a good idea not just because you’d get more accurate information about how often you exercise, but also for the following reason: what often happens (at least to me) when I’m tracking something I want to do is that when I have to log a failed instance, I feel guilty. Due to Goodhart’s Imperius, this then disincentivizes me from tracking the behavior in the first place (especially if I’m failing often), because I get negative feedback from the tracking, so the simplest solution from the monkey brain’s perspective is to stop tracking. But if you get the lotus whether or not you did the thing, conditional on entering that information into the app, then that gives the proper incentive to track. So I would predict this would work well.
I finished my senior thesis and graduated from college (okay technically the thesis was done in April, but it was at the end of April and the presentation was in May).
I am reading:
Other Minds: The Octopus, The Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith
r!Animorphs: The Reckoning by Duncan Sabien
Volume II of On What Matters by Derek Parfit (Also about halfway through Reasons and Persons, but that’s on hold for the moment)
I am also rereading Nausicaä of the Valley of the Wind, and when I’m done with Other Minds I intend to start Sidgwick’s Methods of Ethics.
Also, I have a question: What do people think of the His Dark Materials series? I see a decent bit of discussion of Ender’s Game around here, and while I love Ender’s Game I think His Dark Materials is on a similar level, and should be similarly revered by rationalists. Granted, Lyra is not portrayed as extraordinarily intelligent like Ender, but she is extremely strong, and the series has several rationalist themes, e.g. s-risk (gur haqrejbeyq*), x-risk (gur fhogyr xavsr*), saving the world from these, the Problem of Evil, many-worlds (kind of), etc. Is it just that not as many people have read His Dark Materials, or is there some other reason it’s not really talked about?
*rot13
I enjoyed His Dark Materials but felt that the quality of the writing went downward as the amount of anti-religious axe-grinding went up. (Not because I have an axe to grind; I am an atheist myself and enjoy anti-religious axe-grinding when it’s done well.) I wouldn’t say that the books feel particularly rationalist, for what it’s worth, despite the relevant themes you mention.
Yep, I agree (ETA: about the fact that the books aren’t especially “rationalist”; I don’t remember thinking that the quality of the writing went down as the amount of anti-religious axe-grinding went up, but it’s been long enough since I read the books that maybe if I reread them with that claim in mind I would agree). I’m rereading Ender’s Game and have changed my mind about His Dark Materials being especially rationalist since writing that comment. ETA: Ender’s Game has a ton more stuff in it than I remembered that could basically have come straight out of the Sequences, so my mental baseline for “especially rationalist-y fiction” was a lot lower than it probably should have been. There’s also probably some halo effect going on: I like the books, I like rationalism, so my brain wanted to associate them.
On reading this again, I suppose the technically correct answer to my question is probably something like, “discussing books is not the primary purpose of this site, so the vast majority of books will never be discussed here; it shouldn’t be surprising that [x book series] is not discussed here.” I don’t really intend the comment to be asking that question literally, though, but more as 1) a query about people’s opinions of the books, and 2) a suggestion that these books might be good candidates to earn a status around here similar to, e.g., Ender’s Game, for purposes of referencing for metaphors, inspiration, etc. (Of course a big factor here is “how many people read these as a kid and were influenced by them”. If it turns out that very few people have even read the books, that would be a reason not to give them that status, because not many people would get the references. But if many people have read them, what I’m doing here is something like putting in a bid to make the books more culturally salient.)
Test comment. Please don’t upvote etc. etc.
(Sorry moderators, I assume you deleted my other ones, but I can’t really try to debug notification breakage without creating more.)
Test reply.
Test reply-reply.