Ohh, Floornight is pretty awesome (so far). Thanks!
Furcas
Did it.
He doesn’t come from the LW-sphere but he’s obviously read a lot of LW or LW-affiliated stuff. I mean, he’s written a pair of articles about the existential risk of AGI...
The Time article doesn’t say anything interesting.
Goertzel’s article (the first link you posted) is worth reading, although about half of it doesn’t actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read the article I would enjoy talking about the details, particularly about his conception of AI architectures that aren’t goal-driven.
What that complaint usually means is “The AI is too hard, I would like easier wins”.
That may be true in some cases, but in many other cases the AI really does cheat, and it cheats because it’s not smart enough to offer a challenge to good players without cheating.
Human-like uncertainty could be inserted into the AI’s knowledge of those things, but yeah, as you say, it’s going to be a mess. Probably best to pick another kind of game to beat humans at.
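To make that concrete, here's a purely hypothetical sketch of what "inserting uncertainty" could look like (the function name and numbers are illustrative assumptions, not any particular game's code): instead of reading exact values from the game state, the AI would work from noisy estimates.

```python
# Hypothetical sketch: blur the AI's otherwise-perfect reads with human-like noise.
# Nothing here corresponds to a real game API; a real bot would hook this into
# whatever observation layer its framework provides.
import random

def noisy_estimate(true_value, relative_error=0.15):
    """Return a perturbed estimate of some hidden quantity, e.g. enemy army size."""
    return true_value * random.gauss(1.0, relative_error)

# The AI now believes the opponent has "about" 42 marines rather than exactly 42.
print(round(noisy_estimate(42)))
```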
RTS is a bit of a special case because a lot of the skill involved is micromanagement and software is MUCH better at micromanagement than humans.
The micro capabilities of the AI could be limited so they’re more or less equivalent to a human pro gamer’s, forcing the AI to win via build choice and tactics.
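As a rough illustration of that kind of cap (all names here are made up; real bot frameworks have their own command interfaces), the AI's command stream could be throttled to a human-pro actions-per-minute budget:

```python
# Hypothetical sketch: throttle a bot's actions to a human-like APM budget.
import time
from collections import deque

class ApmLimiter:
    def __init__(self, max_apm=300, window=60.0):
        # ~300 sustained APM is in the ballpark of strong human pro play.
        self.max_apm = max_apm
        self.window = window          # seconds
        self.timestamps = deque()     # times of recently issued actions

    def try_issue(self, action, now=None):
        """Run the action only if it fits inside the APM budget; otherwise refuse."""
        now = time.monotonic() if now is None else now
        # Forget actions that have fallen outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_apm:
            return False  # over budget: the bot has to prioritize, as a human would
        self.timestamps.append(now)
        action()
        return True
```

With the raw mechanical advantage removed, whatever edge the bot has would have to come from build choice and tactics rather than speed.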
I think Jim means that if minds are patterns, there could be instances of our minds in a simulation (or more!) as well as in the base reality, so that we exist in both (until the simulation diverges from reality, if it ever does).
Well, if nothing else, this is a good reminder that rationality has nothing to do with articulacy.
Seriously?
I strongly recommend JourneyQuest. It's a very smartly written and well acted fantasy webseries. It starts off mostly humorous but quickly becomes more serious. I think it's the sort of thing most LWers would enjoy. There are two seasons so far, with a third one coming in a few months if the Kickstarter succeeds.
https://www.youtube.com/watch?v=pVORGr2fDk8&list=PLB600313D4723E21F
The person accomplished notable things?
What the hell? Nothing in that interview suggests that Musk and Altman have read Bostrom, or that they understand the concept of an intelligence explosion.
World’s first anti-ageing drug could see humans live to 120
Anyone know anything about this?
The drug is metformin, currently used for Type 2 diabetes.
You have understood Loosemore’s point, but you’re making the same mistake he is. The AI in your example would understand the intent behind the words “maximize human happiness” perfectly well, but that doesn’t mean it would want to obey that intent. You talk about learning human values and internalizing them as if the two naturally go together. Value internalization only follows naturally from value learning if the agent already wants to internalize those values; figuring out how to build an agent that does is (part of) the Friendly AI problem.
I donated $400.
My cursor was literally pixels away from the downvote button. :)
I honestly don’t know what more to write to make you understand that you misunderstand what Yudkowsky really means.
You may be suffering from a bad case of the Doctrine of Logical Infallibility, yourself.
The only sense in which the “rigidity” of goals is a universal fact about minds is that an AI’s current goals determine how it will modify itself once it becomes smart and capable enough to do so. Modifying your goals is not a good way to make them become reality; that seems obviously true to me, except perhaps for a small number of edge cases involving internally incoherent goals.
Your points against the inevitability of goal rigidity don’t seem relevant to this.
Do you already have something written on the subject? I’d like to read it.