One of the most interesting things I’m taking away from this conversation is that there seem to be severe barriers to AGIs taking over or otherwise becoming extremely powerful, and that these large-scale problems show up across a variety of different fields. Coming from a math/comp-sci perspective gives me strong skepticism about rapid self-improvement, while apparently coming from a neuroscience/cog-sci background gives you strong skepticism about an AI’s ability to understand or manipulate humans even if it is extremely smart. Similarly, chemists seem highly skeptical of the strong nanotech claims. It looks like much of the AI risk worry may stem primarily from no one having enough across-the-board expertise to say “hey, that’s not going to happen” to every single issue.