This is kinda like how futarchy works… STAR WARS or STAR TREK… we let the swarm decide! The difference is that the outcome would be a lot more accurate with futarchy. Why? Because people would be putting their money where their mouths are.
As I pointed out here… AI Safety vs Human Safety… nobody, that I know of, has applied the best method we have for controlling humans (the market) to robots. That isn’t too surprising, since AI largely falls under the scope of computer science. But the “safety” aspect also falls under the scope of economics. The development of an evil AI is most definitely an inefficient allocation of society’s limited resources.
With futarchy we could bet on which organization/company is most likely to develop harmful AI. We could also bet on which organization is most likely to develop beneficial AI. Then we could shift our money from the former to the latter.
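The comment doesn’t specify what market mechanism would aggregate these bets. As a rough illustration only, here is a minimal sketch of a logarithmic market scoring rule (LMSR) automated market maker, the mechanism Robin Hanson proposed for futarchy-style prediction markets. The outcome descriptions, the liquidity parameter, and the trade sizes below are all hypothetical, not anything proposed in the thread.

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Current prices; these sum to 1 and can be read as the
    market's probability estimates for each outcome."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def buy(quantities, outcome, shares, b=100.0):
    """Buy `shares` of `outcome`; the trader pays the difference
    in the cost function. Returns (new_quantities, cost)."""
    new_q = list(quantities)
    new_q[outcome] += shares
    return new_q, lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# Hypothetical binary market:
#   outcome 0 = "Lab X's system causes demonstrable harm by date D"
#   outcome 1 = "it does not"
q = [0.0, 0.0]
print(lmsr_prices(q))  # [0.5, 0.5] — no trades yet, so even odds

# A bettor who expects harm buys 50 shares of outcome 0;
# the price of outcome 0 rises, shifting the market's estimate.
q, cost = buy(q, 0, 50.0)
print(lmsr_prices(q))
```

The point of the sketch is just that prices move with money at stake: a trader only profits by pushing the price toward what they actually believe, which is the “putting their money where their mouths are” property the comment relies on.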
Don’t Give Evil Robots A Leg To Stand On!
On a related point, here’s a post about using swarms to build morality into intelligent systems:
http://unanimousai.com/building-moral/
First using the term “evil” here is a good way to show that you don’t know what you are talking about. We are talking about “unfriendly”.
That said, there are reasons to believe that people who build AGI are overoptimistic about their own creations: they might think they are producing a useful AGI but actually produce UFAI. As a result, there is no reason to expect that nobody funds the relevant research.
“Unfriendly” is a tribal signal. The proper term is “unsafe”, but I think that “evil” is a better approximation than “unfriendly” in its standard usage, as opposed to the non-standard usage invented by Yudkowsky.
I always thought that “evil” implies malicious intent, while something “unfriendly” does harm without intending to do harm. Compare a standard B-movie rogue robot that hunts humans because of murderous “feelings” it developed out of revenge, fear, envy, or other anthropomorphic qualities, with the paperclip maximizer.
Calling something “evil” applies anthropomorphism to it.
It signals that you are talking about the thing this tribe is talking about.
No, it’s a mere signal of allegiance, which you are using to try to shut up the outgroup.
It’s like talking religion with a theist who complains that unless you are referring specifically to Elohim/Jesus/Allah/whatever then you couldn’t possibly say anything meaningful about their religion.
I’m not criticizing semantics out of the context of the argument he makes. It’s a strawman to claim that everyone who says “evil AI” has nothing meaningful to say.
He speaks about how it’s obvious that nobody funds an evil AI. For some values of “evil” that’s true. On the other hand, those are not the cases we worry about.
Not sure how you missed it… but I speak about how people should be able to choose where their taxes go. Maybe you missed it because I get swamped with downvotes?
Right now the government engages in activities that some people consider to be immoral. For example, pacifists consider war to be immoral. You think that there’s absolutely nothing wrong with pacifists being forced to fund war. Instead of worrying about how pacifists currently have to give war a leg to stand on… you want to worry about how we’re going to prevent robots from being immoral.
When evilness, like beauty, is in the eye of the beholder… it’s just as futile to try to prevent AIs from being immoral as it is to try to prevent humans from being immoral. What isn’t futile, however, is to fight for people’s freedom not to invest in immorality.
Any case you worry about is a case where an AI that you consider to be immoral ends up with too many resources at its disposal. Because you’re really not going to worry about...
… a moral AI with significant resources at its disposal
… an immoral AI with insignificant resources at its disposal
So you worry about a case where an immoral AI ends up with too many resources at its disposal. But that’s exactly the same thing that I worry about with humans. And if it’s exactly the same thing that I worry about with humans… then it’s a given that my worry is the same regardless of whether the immoral individual is human, AI, alien or other.
In other words, you have this bizarre double standard for humans and AI. You want to prevent immoral AIs from coming into existence yet you think nothing of forcing humans to give immoral humans a leg to stand on.
Oh gods, you’re doing that again. “How dare you be talking about something other than my pet issue! That proves you’re on the wrong side of my pet issue, which proves you’re inconsistent and insincere!”
There is a reason why you keep getting “swamped with downvotes”. That reason is that you are wasting other people’s time and attention, and appear not to care. As long as you continue to behave in this obnoxious and antisocial fashion, you will continue to get swamped with downvotes. And, not coincidentally, your rudeness and obtuseness will incline people to think less favourably of your proposal. If someone else more reasonable comes along with an economic proposal like yours, the first reaction of people who’ve interacted with you here is likely to be that bit more negative because they’ll associate the idea with rudeness and obtuseness.
Please consider whether that is really what you want.
In the comment that you replied to, I calmly and rationally explained with exceptionally sound logic why my “pet issue” (the efficient allocation of resources) is relevant to the subject of “unfriendly” AI.
Did you calmly and rationally explain why the efficient allocation of resources is not relevant to “unfriendly” AI? Nope.
Nobody on this forum is forced to read or respond to my comments. And obviously I’m not daunted by criticism. So unlike this guy, I’m not going to bravely run away from an abundance of economic ignorance.
And if my calm and rational comments are driving you so crazy… then perhaps it would behoove you to find the bias in your bonnet.
As Eliezer is fond of saying: “A fanatic is someone who can’t change his mind and won’t change the subject.” At least try to be able to change the subject.
The quotation is commonly attributed to Churchill, but here’s some weak evidence that he didn’t say it, or at least wasn’t the first to.