This post brings to mind a fault I see with trying to create a trust system for Friendly AIs. Humans are inherently untrustworthy and random. A robotic AI that is built to be friendly with humans (or at least to interact with them) would, on most conceptions, have a set of rules to work with that minimize its desire to kill us all and turn us into computer fuel. Even in an imperfect world, AIs would be trained to deal with humans in a forthright and honest fashion, to give the truth when asked, and to build assumptions on facts and real information.
Humans, however, are irrational creatures that lie, cheat, and steal when it is in our own best interest to do so, and we do it on a regular basis. For those of you who disagree with that premise, consider the litany of laws we are asked to follow on a daily basis, starting with traffic laws.
Imagine a world of AI drivers and place a human in their midst. Then take away all ‘rules’ that force every driver to move in direction x at speed y on a given roadway. The AI drivers would move with purpose, programmed to weigh their speed and direction against the purpose of their travel. Those heading to work at a leisurely pace would drive slower and congregate in one or two areas of the road. The AIs running a speedy errand, or responding to an emergency, would move faster and be programmed to take the slower vehicles into account.
But a human in their midst would not care about the others so much as about their own personal issues. They would want to move faster because they like driving fast, or would want to stay in the right lane because the left lanes make them nervous. Or perhaps they would drive wherever gave them the best view of the sunset, and slow down to enjoy it, forcing the AIs behind them to slow down as well.
And take the example of an AI that works with humans as a receptionist: what does it do when faced with a human who lies to get past it? If the human lies convincingly and the AI lets him go, how will the AI react when it finds out the human lied? Are all humans bad? If the same human returns and is now part of the company, will the AI no longer ‘trust’ that human’s information? If a human uses the AI to mess with another human (don’t tell me people never use computers to play pranks on each other), how will the AI ‘feel’ about being used in such a manner?
As humans, we have a set of emotions and memories that allow us to deal with people who do such things. Perhaps we would have a stern chat with the guy who tried to get past us, or play a prank back on the gal who messed with us last time. But should computers be equipped with such a mechanism? I really do not believe so. It is a slippery slope for a robot to play tricks on a human. Unless they are very advanced (equipped, say, with body scanners that double as lie detectors), there is little room for them to do anything but trust us.
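To make the ‘trust’ question concrete, here is a minimal toy sketch of the kind of mechanism I am arguing against: a per-person trust score that drops sharply when a lie is detected and slowly recovers with honest interactions. Everything in it (the TrustLedger class, the thresholds, the decay and recovery rates) is my own hypothetical invention for illustration, not a description of any real system.

```python
# Toy, hypothetical sketch: a receptionist AI tracking per-person trust.
# All names and numbers here are invented for illustration.

class TrustLedger:
    def __init__(self, initial_trust=0.8, floor=0.05):
        self.scores = {}            # person_id -> trust score in [floor, 1.0]
        self.initial = initial_trust
        self.floor = floor

    def trust(self, person_id):
        return self.scores.get(person_id, self.initial)

    def record_lie(self, person_id):
        # A detected lie cuts trust sharply, but never to zero:
        # the AI stays wary without writing the person off forever.
        self.scores[person_id] = max(self.floor, self.trust(person_id) * 0.3)

    def record_honesty(self, person_id):
        # Honest interactions slowly rebuild trust toward 1.0.
        self.scores[person_id] = min(1.0, self.trust(person_id) + 0.05)

    def admit(self, person_id, threshold=0.5):
        return self.trust(person_id) >= threshold


ledger = TrustLedger()
ledger.record_lie("visitor-42")          # caught lying to get past the desk
print(ledger.admit("visitor-42"))        # False: trust fell below the threshold
for _ in range(10):
    ledger.record_honesty("visitor-42")  # now an employee, behaving honestly
print(ledger.admit("visitor-42"))        # True: trust recovered over time
```

Even in this toy version, the open questions above reappear as arbitrary design choices: how sharply should one lie cut trust, and how quickly should honesty rebuild it?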
Have you read the sequences yet? It seems like you’re anthropomorphizing AI to an unreasonable degree (yes, arguing about how they’re going to be different from us can still be too anthropomorphizing), and the claim that humans are “inherently untrustworthy and random” is a pretty confused statement. Humans are chaotic (difficult to predict without very complete information), but not random (outcomes chosen arbitrarily from among the available options), and as for “inherently untrustworthy,” it’s not really even clear what such a statement would mean. That may sound overly critical or pedantic, but it’s really not obvious, for instance, what, if anything, you think would qualify as not inherently untrustworthy, and why you think humans are different.