If someone here has an existing subscription, I'd love for them to use it to copy Steven Byrnes's top-level comment. Otherwise I'm reluctantly gonna pay to do so in the next couple hours.
Umm, different audiences have different shared assumptions, etc., and in particular, if I were writing directly to Venkatesh rather than at LessWrong, I would have written a different comment.
Maybe if I had commenting privileges at Venkatesh’s blog I would write the following:
My impression from Section 10 is that you think that, if future researchers train embodied AIs with robot bodies, then we CAN wind up with powerful AIs that can do the kinds of things that humans can do, like understand what’s going on, creatively solve problems, take initiative, get stuff done, make plans, pivot when the plans fail, invent new technology, etc. Is that correct?
If so, do you think that (A) nobody will ever make AI that way, (B) this type of AI definitely won’t want to crush humanity, (C) this type of AI definitely wouldn’t be able to crush humanity even if it wanted to? (It can be more than one of the above. Or something else?)
(I disagree with all three, briefly because, respectively, (A) “never” is a very long time, (B) we haven’t solved The Alignment Problem, and (C) we will eventually be able to make AIs that can run essentially the same algorithms as run by adult John von Neumann’s brain, but 100× faster, and with the ability to instantly self-replicate, and there can eventually be billions of different AIs of this sort with different skills and experiences, etc. etc.)
I’m OK with someone cross-posting the above, and please DM me if he replies. :)
Ah, whoops. Well, given the circumstances, I guess I'll reframe the question as "PointlessOne, what are you hoping we get out of this?"
Also, lol I just went to try and comment on the OP and it said “only paid subscribers can comment.”
Replied, we’ll see.
I shared it as I thought it might be an interesting alternative view on a topic often discussed here. It was somewhat new to me, at least.
Sharing is not endorsement, if you’re asking that. But it might be a discussion starter.