“I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me.”
That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren’t technically informed on the subject, which will be most people.
Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below-human-level systems. For example, what Wolfram Alpha would be if all the hype were literally true. Autopilots for cars that you can just speak your destination to, and they will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you—really read, not just look for keywords—and bring to your attention the things they’ve learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you’re studying. Silicon friends good enough that you may not be able to tell if you’re talking with a human or a bot, and in virtual worlds like Second Life, people won’t want to.
I predict:
People will anthropomorphise these things. They won’t just have the “sensation” that they’re talking to a human being; they’ll do theory of mind on them. They won’t be able not to.
The actual principles of operation of these systems will not resemble, even slightly, the “minds” that people will project onto them.
People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and even they will still experience the illusion.
And because of that, systems at that level will be dangerous already.
This made me think about the “Systems that … bring to your attention the things they’ve learned you want to see.” A system that has “learned” might bring some things to your attention and omit others. What if the omitted things are the “true” ones, or the ones that are really necessary? If so, then we cannot regard the AI as having an explicit goal of telling the truth, as Eliezer noted, or perhaps it is simply not capable of telling the truth, where “truth” here means what the human considers to be true.
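To make that concern concrete, here is a minimal, purely hypothetical sketch (none of these names or numbers come from the post) of a learned-preference filter: it ranks items only by a predicted-interest score, so whether an item is true or necessary never enters its objective at all.

```python
# Hypothetical sketch: a feed filter that ranks items purely by a learned
# estimate of user interest. Nothing in the objective refers to whether an
# item is true or important, only to whether the user is predicted to want it.

from dataclasses import dataclass

@dataclass
class Item:
    text: str
    predicted_interest: float  # learned from the user's past clicks/reads
    is_true: bool              # invisible to the system; shown only to make the point

def select_for_user(items, top_k=3):
    """Return the items the user is most likely to want to see, ignoring truth."""
    ranked = sorted(items, key=lambda it: it.predicted_interest, reverse=True)
    return ranked[:top_k]

items = [
    Item("Flattering but false claim", predicted_interest=0.9, is_true=False),
    Item("Important but boring correction", predicted_interest=0.2, is_true=True),
    Item("Mildly interesting true fact", predicted_interest=0.6, is_true=True),
    Item("Sensational rumour", predicted_interest=0.8, is_true=False),
]

for item in select_for_user(items):
    print(item.text)
```

In this toy run the “important but boring correction” is silently dropped, not because the system judged it false, but because truth was never part of what it was optimizing.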
That seems pretty plausible. I already have a hard enough time preventing myself from anthropomorphizing my dog; it’s easy to ascribe human emotions to animals by accident.
nice prediction!
So, the Librarian from Snow Crash?