As I’m waiting to watch the Trump Obama meeting, I’ve changed my mind and will elaborate. I’ve never really been an active participant in the LW community, and if I’m going to distance myself further, so be it. As an example, compare this to this and this. If Eliezer actually believed that politics is the mind-killer and had any interest in intellectual honesty, he would admit he was hoodwinked by that live-action roleplay game of his. He won’t, hence my disgust.
I fully agree with this.
edit: someone may think this comment doesn’t contribute at all. the someone that did also took the additional step of downvoting the OP, so make of that what you will.
I have taken the survey.
So, after what happened... it turns out I was both wrong and right.
If a viable solution is posted before 12:01AM Pacific Time (8:01AM UTC) on Tuesday, March 3rd, 2015, the story will continue to Ch. 121.
Otherwise you will get a shorter and sadder ending.
So failure would have just meant the end, and yet there was nothing to worry about: the much larger audience managed to figure out a space of much more effective solutions, along with a much more hilarious space of failures.
The trick is to evaluate right to left.
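(The original puzzle isn’t quoted here, so this is only a generic illustration of evaluation direction in Racket, not the puzzle’s actual solution: foldr consumes a list right to left, foldl left to right, and any non-commutative step exposes the difference.)

    #lang racket
    ;; Generic illustration only; the puzzle this thread discusses
    ;; isn't shown. foldr walks the list right to left, foldl walks
    ;; it left to right, so cons makes the direction visible.
    (foldr cons '() '(1 2 3))  ; => '(1 2 3)
    (foldl cons '() '(1 2 3))  ; => '(3 2 1)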
single Scheme lambda
What scaffolding are you going to use for the tests? (For example: #!racket seems to be implied. I’d like to be sure of all of your details.)
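For what it’s worth, a minimal scaffold might look like the sketch below. rackunit and the name `solution` are my assumptions here; the thread doesn’t name a harness, and `solution` just stands in for the submitted lambda.

    #lang racket
    ;; Minimal test-scaffold sketch. rackunit and `solution` are
    ;; assumptions; substitute the actual submitted lambda.
    (require rackunit)
    (define solution (lambda (x) (* x 2)))  ; placeholder lambda under test
    (check-equal? (solution 3) 6)
    (check-equal? (solution 0) 0)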
][>:=~+
Is Omega Impossible?
No, Omega is possible. I have implemented Newcomb’s Game as a demonstration. This is not a probabilistic simulation; this Omega is never wrong.
It’s really very obvious if you think about it like a game designer. To the obvious objection: Would a more sophisticated Omega be any different in practice?
For my next trick, I shall have an omnipotent being create an immovable object and then move it.
edit: sorry about the bugs in the demo. it’s rather embarrassing, i have not used these libraries in ages.
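The linked demonstration isn’t reproduced above, but the game-designer trick can be sketched in a few lines, assuming the player submits a deterministic strategy as a procedure (the names below are mine, not from the demo). Since a deterministic strategy answers the same both times it is run, the prediction and the actual play can never diverge.

    #lang racket
    ;; Sketch of a never-wrong Omega, assuming the player's strategy is
    ;; a deterministic thunk returning #t to one-box, #f to two-box.
    ;; All names here are illustrative, not from the original demo.
    (define (newcomb strategy)
      (define predicted-one-box? (strategy))           ; Omega's "prediction"
      (define box-b (if predicted-one-box? 1000000 0)) ; fill box B accordingly
      (define chose-one-box? (strategy))               ; the actual play
      (if chose-one-box?
          box-b             ; player takes only box B
          (+ 1000 box-b)))  ; player takes both boxes

    (newcomb (lambda () #t))  ; => 1000000
    (newcomb (lambda () #f))  ; => 1000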
I think it is bad taste to cynl cenaxf jura gur fvgr’f pybpx qbrf abg fnl gung vg vf gur svefg bs ncevy. Vg’f onq rabhtu yvivat va tzg cyhf gjryir.
Yes. The exact phrasing of the challenge was:
With a sudden motion, the Confessor’s arm swept out...
… and anesthetized the Lord Pilot.
… [This option will become the True Ending only if someone suggests it in the comments before the previous ending is posted tomorrow. Otherwise, the first ending is the True one.]
Is there anyone keeping a history of the story? I suspect there are some clues to be gleaned from the edits.
(Note: I originally asked specifically for what was chapter 76 but is now 77, but then I realized that the thing I was looking for was there all along. Regardless, I am still interested in a history.)
There’s nothing to worry about. We were presented with the same challenge in Three Worlds Collide. If we don’t succeed, we will just get a false ending instead of a true ending.
You can find it in chapter 63:
I will say this much, Mr. Potter: You are already an Occlumens, and I think you will become a perfect Occlumens before long. Identity does not mean, to such as us, what it means to other people. Anyone we can imagine, we can be; and the true difference about you, Mr. Potter, is that you have an unusually good imagination. A playwright must contain his characters, he must be larger than them in order to enact them within his mind. To an actor or spy or politician, the limit of his own diameter is the limit of who he can pretend to be, the limit of which face he may wear as a mask. But for such as you and I, anyone we can imagine, we can be, in reality and not pretense. While you imagined yourself a child, Mr. Potter, you were a child. Yet there are other existences you could support, larger existences, if you wished. Why are you so free, and so great in your circumference, when other children your age are small and constrained? Why can you imagine and become selves more adult than a mere child of a playwright should be able to compose? That I do not know, and I must not say what I guess. But what you have, Mr. Potter, is freedom.
The one time I tried this, it backfired terribly. It seemed like a logical sale, but the war games don’t start until a fair way in; meanwhile, the first ten chapters (which is what the first chapter recommends trying before giving up) don’t have that sort of flavour.
Ungrowth wasn’t talked about in the novels. I remember the opposite complaint: that the overly strict implementation of the Three Laws turned humanity into kittens, with the wireheads at the extreme. Ungrowth sounds almost as bad as Peer’s arbitrary obsessions in Permutation City.
Holding all other implementation details equal, Lawrence’s insistence that PI not look into people’s brains results in a much better world than it otherwise would have. I get the impression that Roger thinks his genie could have handled people better if it had analyzed them that deeply.
The critique of the Three Laws as portrayed should have focused not on how limited they are, but on how restrictive they are; that is a premise Roger disagrees with, given that he says Lawrence did not install a more robust ethical system even though his design allowed for it. The star map above Lawrence’s house gave us a glimpse of that design, and it made clear how Lawrence messed up: not just in creating a thoughtless genie, but in not allowing corrections, since everything the AI believed was not only interdependent but centered on those three pillars.
Edit: There were more words here, but your later emphasis confuses me. I’m going to pretend you didn’t do that. If I’m not being clear here, please help me help you.
I think this essay drifts considerably further away from SIAI/LW thinking than his story does, though I might have forgotten things.
Actually, given a moment to reflect, I was conflating the essay’s points with my own impressions of the story. If he thought like this while writing the novel, then he spectacularly failed to reach me. For that I’m glad.
Rather than unfriendly AI, I think he means a Friendly AI that’s only Friendly to one person (or very few people). If we’re going to be talking about this concept then we need a better term for it. My inner nerd prefers Suzumiya AI.
Would the Institute consider hiring telecommuters (both inside and outside the US)?
Update: this question was left unanswered in the second Q&A.