It signals that you are talking about the thing this tribe is talking about.
No, it’s a mere signal of allegiance, which you are using to try to shut up the outgroup.
It’s like talking religion with a theist who complains that unless you are referring specifically to Elohim/Jesus/Allah/whatever then you couldn’t possibly say anything meaningful about their religion.
I’m not criticizing semantics out of context to the argument he makes. It’s a strawman to claim that everyone who says “evil AI” has nothing meaningful to say.
He speaks about how it’s obvious that nobody funds an evil AI. For some values of “evil” that’s true. On the other hand, those aren’t the cases we worry about.
Not sure how you missed it… but I talk about how people should be able to choose where their taxes go. Maybe you missed it because I get swamped with downvotes?
Right now the government engages in activities that some people consider to be immoral. For example, pacifists consider war to be immoral. You think that there’s absolutely nothing wrong with pacifists being forced to fund war. Instead of worrying about how pacifists currently have to give war a leg to stand on… you want to worry about how we’re going to prevent robots from being immoral.
When evilness, like beauty, is in the eye of the beholder… it’s just as futile to try and prevent AIs from being immoral as it is to try and prevent humans from being immoral. What isn’t futile however is to fight for people’s freedom not to invest in immorality.
Any case you worry about is a case where an AI that you consider to be immoral ends up with too many resources at its disposal. Because you’re really not going to worry about...
… a moral AI with significant resources at its disposal
… an immoral AI with insignificant resources at its disposal
So you worry about a case where an immoral AI ends up with too many resources at its disposal. But that’s exactly the same thing that I worry about with humans. And if it’s exactly the same thing that I worry about with humans… then it’s a given that my worry is the same regardless of whether the immoral individual is human, AI, alien or other.
In other words, you have this bizarre double standard for humans and AI. You want to prevent immoral AIs from coming into existence yet you think nothing of forcing humans to give immoral humans a leg to stand on.
Oh gods, you’re doing that again. “How dare you be talking about something other than my pet issue! That proves you’re on the wrong side of my pet issue, which proves you’re inconsistent and insincere!”
There is a reason why you keep getting “swamped with downvotes”. That reason is that you are wasting other people’s time and attention, and appear not to care. As long as you continue to behave in this obnoxious and antisocial fashion, you will continue to get swamped with downvotes. And, not coincidentally, your rudeness and obtuseness will incline people to think less favourably of your proposal. If someone else more reasonable comes along with an economic proposal like yours, the first reaction of people who’ve interacted with you here is likely to be that bit more negative because they’ll associate the idea with rudeness and obtuseness.
Please consider whether that is really what you want.
In the comment that you replied to, I calmly and rationally explained with exceptionally sound logic why my “pet issue” (the efficient allocation of resources) is relevant to the subject of “unfriendly” AI.
Did you calmly and rationally explain why the efficient allocation of resources is not relevant to “unfriendly” AI? Nope.
Nobody on this forum is forced to read or respond to my comments. And obviously I’m not daunted by criticism. So unlike this guy, I’m not going to bravely run away from an abundance of economic ignorance.
And if my calm and rational comments are driving you so crazy… then perhaps it would behoove you to find the bias in your bonnet.
As Eliezer is fond of saying: “A fanatic is someone who can’t change his mind and won’t change the subject.” At least try to be able to change the subject.
That quotation is commonly attributed to Churchill, but here’s some weak evidence that he didn’t say it, or at least wasn’t the first to.