I did mean a more comprehensive/coherent sense. Here’s my thinking.
Fallacy: “If it’s a super-intelligent machine, the very nature of intelligence means it must be wise and good and therefore it won’t kill us all.”
Rejoinder: “Wait, that’s totally not true. Lots of minds could be very powerful at thinking without valuing anything that we value. And that could kill us all. A paperclip maximizer would be a disaster—but it would still be an intelligence.”
Rejoinder to the rejoinder: “Sure, Clippy is a mind, and Clippy is deadly, but are humans likely to produce a Clippy? When we build AIs, we’ll probably build them as models of how we think, and so they’ll probably resemble us in some ways. If we built AIs and the alien race of Vogons also built AIs, I’d bet ours would be a little more like us, relatively speaking, and the Vogon AIs would be a little more like Vogons. We’re not drawing at random from mindspace; we’re drawing from the space of minds that humans are likely to build (on purpose or by accident). That doesn’t mean our AIs won’t be dangerous, but they’re not necessarily going to be as alien as 2 thinks.”
Sure, it seems plausible that an AI developed by humans will on average end up in an at-least-marginally different region of mindspace than an AI developed by nonhumans.
And an AI designed to develop new pharmaceuticals will on average end up in an at-least-marginally different region of mindspace than one designed to predict stock market behavior. Sure.
None of that implies safety, as far as I can tell.