I bounced off this a short way in because it seemed to be focused on consciousness and something-it-is-like-to-be-ness, which just has very little to do with AI fears as commonly described on LessWrong. I tried skipping to the end to see whether it would tie the gestalt of the argument together and whether I'd missed something.
Can you give a brief high-level overview of who you’re arguing against and what you think their position is? Or: what are the most important takeaways of your position, regardless of whether they’re arguing against anything in particular?
You’re saying “you”, but the blog post was written by Venkatesh Rao, who AFAIK does not have a LessWrong account.
I think that Rao thinks that he is arguing against AI fears as commonly described on LessWrong. I think he thinks that something-it-is-like-to-be-ness is a prerequisite to being an effective agent in the world, and that’s why he brought it up. Low confidence on that though.
Ah, whoops. Well, then I guess, given the circumstances, I’ll reframe the question as “PointlessOne, what are you hoping we get out of this?”
Also, lol, I just went to try and comment on the OP and it said “only paid subscribers can comment.”
If someone here has an existing subscription, I’d love for them to use it to copy Steven Byrnes’s top-level comment. Otherwise I’m gonna pay to do so, reluctantly, in the next couple of hours.
Umm, different audiences have different shared assumptions etc., and in particular, if I were writing directly to Venkatesh, rather than at LessWrong, I would have written a different comment.
Maybe if I had commenting privileges at Venkatesh’s blog I would write the following:
My impression from Section 10 is that you think that, if future researchers train embodied AIs with robot bodies, then we CAN wind up with powerful AIs that can do the kinds of things that humans can do, like understand what’s going on, creatively solve problems, take initiative, get stuff done, make plans, pivot when the plans fail, invent new technology, etc. Is that correct?
If so, do you think that (A) nobody will ever make AI that way, (B) this type of AI definitely won’t want to crush humanity, (C) this type of AI definitely wouldn’t be able to crush humanity even if it wanted to? (It can be more than one of the above. Or something else?)
(I disagree with all three, briefly because, respectively, (A) “never” is a very long time, (B) we haven’t solved The Alignment Problem, and (C) we will eventually be able to make AIs that can run essentially the same algorithms as run by adult John von Neumann’s brain, but 100× faster, and with the ability to instantly self-replicate, and there can eventually be billions of different AIs of this sort with different skills and experiences, etc. etc.)
I’m OK with someone cross-posting the above, and please DM me if he replies. :)
Replied, we’ll see.
I shared it as I thought it might be an interesting alternative view on a topic often discussed here. It was somewhat new to me, at least.
Sharing is not endorsement, if that’s what you’re asking. But it might be a discussion starter.