Umm, different audiences have different shared assumptions etc., and in particular, if I were writing directly to Venkatesh, rather than commenting at lesswrong, I would have written a different comment.
Maybe if I had commenting privileges at Venkatesh’s blog I would write the following:
My impression from Section 10 is that you think that, if future researchers train embodied AIs with robot bodies, then we CAN wind up with powerful AIs that can do the kinds of things that humans can do, like understand what’s going on, creatively solve problems, take initiative, get stuff done, make plans, pivot when the plans fail, invent new technology, etc. Is that correct?
If so, do you think that (A) nobody will ever make AI that way, (B) this type of AI definitely won’t want to crush humanity, (C) this type of AI definitely wouldn’t be able to crush humanity even if it wanted to? (It can be more than one of the above. Or something else?)
(I disagree with all three, briefly because, respectively, (A) “never” is a very long time, (B) we haven’t solved The Alignment Problem, and (C) we will eventually be able to make AIs that can run essentially the same algorithms as run by adult John von Neumann’s brain, but 100× faster, and with the ability to instantly self-replicate, and there can eventually be billions of different AIs of this sort with different skills and experiences, etc. etc.)
I’m OK with someone cross-posting the above, and please DM me if he replies. :)
Replied, we’ll see.