One of the most interesting things that I’m taking away from this conversation is that there seem to be severe barriers to AGIs taking over or otherwise becoming extremely powerful. These large-scale problems are present in a variety of different fields. Coming from a math/comp-sci perspective gives me strong skepticism about rapid self-improvement, while apparently coming from a neuroscience/cog-sci background gives you strong skepticism about the AI’s ability to understand or manipulate humans even if it is extremely smart. Similarly, chemists seem highly skeptical of the strong nanotech sort of claims. It looks like much of the AI risk worry may come primarily from no one having enough across-the-board expertise to say “hey, that’s not going to happen” to every single issue.
What if people try to teach it about sarcasm or the like? Or simply have it learn by downloading a massive amount of literature and movies and looking at those? And there are more subtle ways to learn about lying: AI being used for games is a common idea, so how long will it take before someone decides to use a smart AI to play poker?
Yes. If we have an AGI, and someone sets out to teach it how to lie, I will get worried.
I am not worried about an AGI developing such an ability spontaneously.