For it is a sad rule that whenever you are most in need of your art as a rationalist, that is when you are most likely to forget it.
Yes. I think Dumbledore was trying to talk about either Slytherin or himself, but was accidentally foreshadowing Voldemort.
This doesn’t seem significantly different from the loop Harry already tried, which didn’t work. Don’t summon Azathoth.
No, but Hermione’s life is on the line—he’d bite off his own fingers to save her.
Wow, that actually describes a pretty sane heuristic.
He fixes things a lot. There is practically never a notice.
I have heard rumors that cool things happen elsewhere, but I do not believe them. Though Akihabara is pretty cool.
What he means is that he wishes books on memory charms fit that description; in fact they’re not guarded at all, nor even kept in the restricted section of the library.
Trying to find web developer work in the SF Bay area.
Because SF is awesome and where all the great stuff in webdev is happening.
I prefer the theory that qhzoyrqber hfrq svraqsler gb qrfgebl gur qvnel ubepehk jura ur gubhtug gur ubhfr jnf rzcgl, naq gura pynvzrq perqvg sbe anepvffn’f qrngu fb gung ure fnpevsvpr jbhyq abg or zrnavatyrff
Current favicon is best favicon.
I’m surprised nobody brought this up at the time, but it’s telling that you’ve only picked out examples of humans when discussing intelligence, not bacteria or rocks or the color blue. I submit that the property is not as unknowable as you would suggest.
Furthermore, if you draw the graph the way Neel seems to suggest, then the bodyguard is adding the antidote without dependence on the actions of the assassin, and so there is no longer any reason to call one “assassin” and the other “bodyguard”, or one “poison” and the other “antidote”. The bodyguard in that model is trying to kill the king as much as the assassin is, and the assassin’s timely intervention saved the king as much as the bodyguard’s.
Upvoted for the multilayered pun.
I think steelmanning would instead be listing more realistic dangers of that place, rather than more extreme ones.
I think you missed what was going on there. In the hypothetical, Feynman’s mom was concerned about the plague, and for the steelman Feynman corrected it to TB. The assumption there is that TB is a more realistic threat than the plague.
I don’t read a lot of other people’s stuff about your ideas (e.g. Mark Waser), but I have read most of the things you’ve published. I’m surprised to hear you’ve said it many times before.
This post does answer some questions I had regarding the relevance of mathematical proof to AI safety, and the motivations behind using mathematical proof in the first place. I don’t believe I’ve seen this bit before:
the idea that something-like-proof might be relevant to Friendly AI is not about achieving some chimera of absolute safety-feeling
I think the concept you’re looking for is the principle of charity. Steel man is what you do to someone else’s argument in order to make sure yours is good, after you’ve defeated their actual argument. Principle of charity is what you do in discourse to make sure you’re having the best possible discussion.
If you think Eliezer should have steelmanned your argument then you think he has already defeated it—before he even commented!
Why was this post downvoted like crazy? Is Less Wrong not the sort of place to post this sort of question?
Should we have a Q&A site for this sort of purpose? It’s been discussed before.
Or is it just that this should have been posted to Discussion or the open thread?
Just a general rule of thumb. The time loop is a powerful optimization process with outcomes that are not intuitive to humans. It’s analogous to invoking evolution. If ‘the world is destroyed by an asteroid’ is the only stable outcome, then it seems that’s what you’re going to get.