I can’t imagine a situation in which the AGI is sort-of kind to us—not killing good people, letting us keep this solar system—but which also does some unfriendly things, like killing bad people or taking over the rest of the galaxy (both pretty terrible things in themselves, even if they’re not complete failures), unless that’s what the AI’s creator wanted—i.e. the creator solved FAI but managed to, without upsetting the whole thing, include in the AI’s utility function terms for killing bad people and caring about something completely alien outside the solar system. They’re not outcomes that you can cause by accident—and if you can do that, then you can also solve full FAI, without killing bad people or tiling the rest of the galaxy.
I don’t see why things of this form can’t be in the set of programs that I’d label “FAI with a bug”.
Can I say “LOL” without being downvoted?
I guess what I’m saying is that we’ve fallen into a compression fallacy and are saying that Friendly AI = AI that helps out humanity (or is kind to humanity—insert your favorite “helps” derivative here).
Here’s an example: I’m “sort of friendly” in that I don’t actively go around killing people, but neither will I actively help you unless you want to trade with me. Does that make me unfriendly? I say no, it doesn’t.
Well, I don’t suppose anyone feels the need to draw a bright-line distinction between FAI and uFAI—the AI is more friendly the more its utility function coincides with your own. But in practice it doesn’t seem like any AI is going to fall into the gap between “definitely unfriendly” and “completely friendly”—to create such a thing would be a more fiddly and difficult engineering problem than just creating FAI. If the AI doesn’t care about humans in the way that we want it to, it almost certainly takes us apart and uses the resources to create whatever it does care about.
EDIT: Actually, thinking about it, I suppose one potential failure mode which falls into the grey territory is building an AI that just executes people’s current volition without trying to extrapolate. I’m not sure how fast this goes wrong or in what way, but it doesn’t strike me as a good idea.
Conscious or unconscious volition? I think I can point to one possible failure mode :)
“I suppose one potential failure mode which falls into the grey territory is building an AI that just executes people’s current volition without trying to extrapolate”
i.e. the device has to judge usefulness by some metric and then decide whether or not to execute someone’s volition.
That’s exactly my issue with trying to define a utility function for the AI. You can’t. And since some people will have their utility function denied by the AI, who is to choose whose gets executed?
I’d prefer to shoot for a NOT(UFAI) and then trade with it.
Here’s a thought experiment:
Does a cure for cancer maximize everyone’s utility function?
Yes, on average we all win.
BUT
Companies that are currently creating drugs to treat the symptoms of cancer, and their employees, would be out of business.
Which utility function should be executed? Creating better cancer drugs to treat the symptoms and then allowing the companies to sell them, or putting the companies out of business and curing cancer?
Well, that’s an easy question: if you’ve worked sixteen hour days for the last forty years and you’re just six months away from curing cancer completely and you know you’re going to get the Nobel and be fabulously wealthy etc. etc. and an alien shows up and offers you a cure for cancer on a plate, you take it, because a lot of people will die in six months. This isn’t even different to how the world currently is—if I invented a cure for cancer it would be detrimental to all those others who were trying to (and who only cared about getting there first) - what difference does it make if an FAI helps me? I mean, if someone really wants to murder me but I don’t want them to and they are stopped by the police, that’s clearly an example of the government taking the side of my utility function over the murderer’s. But so what? The murderer was in the wrong.
Anyway, have you read Eliezer’s paper on CEV? I’m not sure that I agree with him, but he does deal with the problem you bring up.
More friendly to you. Yes.
Not necessarily friendly in the sense of being friendly to everyone, as we all have differing utility functions, sometimes radically so.
But I dispute the position that “if an AI doesn’t care about humans in the way we want it to, it almost certainly takes us apart and uses the resources to create whatever it does care about”.
Consider: a totally unfriendly AI whose main goal is explicitly to make humanity extinct and then turn itself off. For us that’s an unfriendly AI.
One that doesn’t kill any of us but basically leaves us alone, however, is not unfriendly by the lights of those of you who define “friendly AI” as “kind to us”/“doing what we all want”/“maximizing our utility functions” etc., because by definition it doesn’t kill any of us.
Unless “unfriendly” also includes “won’t kill any of us but just ignores us”, et cetera.
Am I, for example, unfriendly to you if I spend my next month’s paycheck on paperclips but do you no harm?
Well, no. If it ignores us I probably wouldn’t call it “unfriendly”—but I don’t really mind if someone else does. It’s certainly not FAI. But an AI does need to have some utility function, otherwise it does nothing (and isn’t, in truth, intelligent at all), and will only ignore humanity if it’s explicitly programmed to. This ought to be as difficult an engineering problem as FAI—which is why I said it “almost certainly takes us apart”. You can’t get there by failing at FAI, except by being extremely lucky, and why would you want to go there on purpose?
“Not necessarily friendly in the sense of being friendly to everyone, as we all have differing utility functions, sometimes radically so.”
Yes, it would be a really bad idea to have a superintelligence optimise the world for just one person’s utility function.
“But an AI does need to have some utility function”
What if the “optimization of the utility function” is bounded, like my personal predilection for spending my paycheck on paperclips one time only and then stopping? (A toy sketch of what I mean is below.)
Is it sentient if it sits in a corner and thinks to itself, running simulations, but won’t talk to you unless you offer it a trade, e.g. some paperclips?
Is it possible that we’re conflating “friendly” with “useful but NOT unfriendly”, and that we’re struggling to define what “useful” means?
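A bounded objective of the kind described here can be sketched very simply. The following is a purely illustrative toy (the function name, the cap of one purchase, and the numbers are all made up for this sketch, not anything proposed in the thread):

```python
# Toy sketch of a bounded objective: utility stops increasing once the
# one-off goal has been met, so a second purchase is worth nothing.

def bounded_utility(paycheques_spent_on_paperclips: int) -> float:
    # Reward the first purchase, then flatline.
    return float(min(paycheques_spent_on_paperclips, 1))

print(bounded_utility(0))   # 0.0  (goal not yet met)
print(bounded_utility(1))   # 1.0  (goal met)
print(bounded_utility(50))  # 1.0  (no further gain from doing it again)
```

The cap is what makes this “one time only and then stopping”: once the goal is met, extra purchases add nothing to the score.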
If it likes sitting in a corner and thinking to itself, and doesn’t care about anything else, it is very likely to turn everything around it (including us) into computronium so that it can think to itself better.
If you put a threshold on it to prevent it from doing stuff like that, that’s a little better, but not much. If it has a utility function that says “Think to yourself about stuff, but do not mess up the lives of humans in doing so”, then what you have is an AI that is motivated to find loopholes in (the implementation of) that second clause, because anything that increases fulfilment of the first clause without tripping the second gives it a higher utility score overall (see the toy sketch below).
You can get more and more precise than that and cover more known failure modes with their own individual rules, but if it’s very intelligent or powerful it’s tough to predict what terrible nasty stuff might still be in the intersection of all the limiting conditions we create. Hidden complexity of wishes and all that jazz.
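To make that last point concrete, here is a minimal toy sketch in Python. Everything in it (the plan names, the numbers, and the detected-harm proxy) is invented for illustration; it is not anyone’s actual proposal, just the “bolted-on penalty clause” pattern in miniature:

```python
# Toy illustration of why "maximise X, but do not harm humans" fails when the
# second clause is only a penalty computed from an imperfect harm measure.
# All plans, numbers, and the harm proxy are invented for this example.

plans = {
    # name: (thinking_reward, harm_detected_by_our_imperfect_measure)
    "sit quietly and think":            (10.0, 0.0),
    "politely ask for more hardware":   (50.0, 0.0),
    "convert the biosphere to compute": (1e9,  0.0),  # real harm is huge, but the measure misses it
}

def utility(plan_name):
    thinking_reward, detected_harm = plans[plan_name]
    # "Do not mess up the lives of humans" implemented as an additive penalty
    # on *detected* harm only.
    return thinking_reward - 1e6 * detected_harm

best_plan = max(plans, key=utility)
print(best_plan)  # -> "convert the biosphere to compute"
```

The loophole plan wins not because the penalty coefficient is too small, but because the harm measure never fires on it; piling on more rules just moves the problem to whatever the conjunction of measures still misses, which is the hidden-complexity-of-wishes point above.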