I’ve tried using HabitRPG before, but didn’t stick with it. I’ve started using Lift, working out every day following http://7-min.com. Somehow the expectation of checking off today’s habits keeps me going through the motions, and the automated timer reduces the friction of shifting into the mental state appropriate for exercising.
There’s even a special page on the Amazon website for the express purpose of cancelling ebook purchases made within the last 7 days: http://www.amazon.com/gp/help/customer/display.html?nodeId=200144510
Could you name some actual writers’ IRC channels? I’ve never seen any.
Sounds like a case of extreme discounting or a very close planning horizon.
On Will Newsome’s IRC channel someone mentioned the idea that you could totally automate the ITT into a mass-league game with Elo ratings and everything (assuming there was some way to verify true beliefs at the beginning). Make it happen, somebody.
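For what it’s worth, the rating half is the easy part. Here’s a minimal sketch of the standard Elo update such a league could run after every judged round (the K-factor of 32 and the 400-point scale are just the usual chess defaults, not anything from that IRC discussion; verifying true beliefs up front remains the hard part):

```python
# Minimal Elo sketch for a hypothetical automated ITT league.
# K=32 and the 400-point scale are the standard chess defaults.

def expected_score(r_a: float, r_b: float) -> float:
    """Elo's logistic model: probability that A beats B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings; score_a is 1.0 (A won), 0.5 (draw), or 0.0 (A lost)."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# E.g. a 1500-rated impersonator who fools a 1600-rated judge:
print(update(1500, 1600, score_a=1.0))  # -> (~1520.5, ~1579.5)
```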
Ooh, this would be so great!
What if Bludgers, being modelled after naive physics, have an inherent knocking-people-out property? Wouldn’t that be in line with how canon is handled in HPMOR?
If we’re taking seriously the possibility of basilisks actually being possible and harmful, isn’t your invitation really dangerous? After all, what if Axel has thought of an entirely new cognitive hazard, different from everything you may already be familiar with? What if you succumb to it? I’m not saying that it’s probable, only that it should warrant the same precautions as the original basilisk debacle, which led to enacting censorship.
Aye. If you need another nudge, I’d like to say that it’s a great idea, and yes, I would help you test the resulting decks.
I’m not so sure that an AI suggesting murder is clear evidence of its being unfriendly. After all, it could have a good reason to believe that if it doesn’t stop a certain researcher ASAP and at all costs, then humanity is doomed. One way around that is to assign infinite positive value to human life, but can you really expect CEV to be handicapped in such a manner?
It may be benevolent and cooperative in its present state even if it believes FAI to be provably impossible.
That’s what I figured, but I hoped I was wrong, and there’s still a super-secret beer-lovers’ club which opens if you say “iftahh ya simsim” (“open, sesame”) thrice or something. Assuming you would let me in on the secret, of course.
Wait, I thought that library.nu was shut down back in the spring. What am I missing?
This is an interesting way to look at things. I would assign a higher probability, so I’m voting up. Even a slight tweaking (x+ε, m-ε) is enough. I’m imagining a continuous family of mappings starting with the identity. These would preserve the structures we already perceive while accentuating certain features.
Fairly certain (85%–98%).
I was confused about getting several upvotes quickly but without prompting any debate. I began to wonder whether my proposition pattern-matched to something less interesting to discuss.
What a fun game! I notice that I’m somewhat confused, too. I see a couple of different approaches; maybe some of the upvoters would step in and explain themselves.
Irrationality game
Moral intuitions are very simple. A general idea of what it means for somebody to be human is enough to severely restrict the variety of moral intuitions you would expect it to be possible for them to have. Thus, conditioned on Adam’s humanity, you would need very little additional information to get a good idea of Adam’s morals, while Bob the alien would need to explain his basic preferences at length for you to model his moral judgements accurately. It follows that the tricky part of explaining moral intuitions to a machine is explaining humans, and that it’s not possible to cheat by formalizing morality separately.
Dark arts are very toxic, in the sense that you naturally and necessarily use any and all of your relevant beliefs to construct self-serving arguments on most occasions. Moreover, once you happen to successfully use some rationality technique in a self-serving manner, you become more prone to using it that way on future occasions. Thus, once you catch other people using dark arts and understand what’s going on, you are more likely to use the same tricks yourself. >80% sure (I don’t have an intuitive feeling for amounts of evidence, but here I would need at least 6 dB of evidence to become uncertain).
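(For reference, the decibel convention here is the usual one of ten times the base-10 log of the odds ratio; the only number of mine is the 80% above, the rest is arithmetic:

$$\text{dB} = 10\log_{10}(\text{odds}), \qquad 80\% \leftrightarrow 4\!:\!1 \text{ odds}, \qquad 10\log_{10}4 \approx 6.02\ \text{dB}$$

so roughly 6 dB of counter-evidence is exactly what would pull 4:1 odds back to about 1:1, i.e. genuine uncertainty.)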
Registered.
Do you have a citation for 15–30 minutes being a reasonable time for blood glucose levels to change in response to eating a banana? I remember reading that it takes significantly longer than that, up to 150 minutes, but I can’t find a proper source at the moment. The closest I can find is The 4-Hour Body, and I don’t know how trustworthy it is. It also says that fructose may lower blood glucose levels.