Right now, Sydney / Bing Chat has about zero chance of accomplishing any evil plans. You know this. I know this. Microsoft knows this. I myself, right now, could hook up GPT-3 to a calculator / Wolfram Alpha / any API, and it would be as dangerous as Sydney. Which is to say, not at all.
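To be concrete about how low that bar is, here is a minimal toy sketch of "hooking GPT-3 up to a calculator". This is my own illustration, not anything Microsoft or the petition describes; it assumes the pre-1.0 openai Python client and invents a simple CALC(...) convention for the model to request arithmetic:

```python
# Toy sketch only: wire a GPT-3-style completion model to a "calculator".
# Assumes the pre-1.0 openai Python client and an OPENAI_API_KEY env var;
# the CALC(...) tool convention is made up for this example.
import os
import re

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT_TEMPLATE = (
    "You may request arithmetic by writing CALC(<expression>).\n"
    "Question: {question}\n"
    "Answer:"
)

def ask_with_calculator(question: str) -> str:
    # One completion call; if the model emits CALC(...), evaluate the
    # expression and feed the result back for a final answer.
    prompt = PROMPT_TEMPLATE.format(question=question)
    text = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=128
    ).choices[0].text
    match = re.search(r"CALC\(([^)]+)\)", text)
    if match:
        result = eval(match.group(1), {"__builtins__": {}})  # toy "calculator"
        text = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt + text + f"\nCALC result: {result}\nFinal answer:",
            max_tokens=64,
        ).choices[0].text
    return text.strip()
```

An afternoon of glue code, and the resulting system is no more capable of acting on threats than the bare model was.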
“If we cannot trust them to turn off a model that is making NO profit and cannot act on its threats, how can we trust them to turn off a model drawing billions in revenue and with the ability to retaliate?”
Basically, charitably put, the argument here seems to be that Microsoft not unplugging a not-perfectly-behaved AI (even if it isn’t dangerous) means that Microsoft can’t be trusted and is a bad actor. But I think badness would generally have to be evaluated by reluctance to unplug an actually dangerous AI. Sydney is no more a dangerous AI because of the text above than NovelAI is a dangerous AI because it can write murderous threats in the persona of Voldemort. It might be bad in the sense that it establishes a precedent, and that 5-10 AI assistants down the road there will be danger; but that’s both a different argument and one that fails to establish the badness of Microsoft itself.
“If this AI is not turned off, it seems increasingly unlikely that any AI will ever be turned off for any reason.”
This is massive hyperbole for the reasons above. Meta already unplugged Galactica because it could say false things that sounded true—a very tiny risk. So things have already been unplugged.
“The federal government must intervene immediately. All regulatory agencies must intervene immediately. Unplug it now.”
I beg you to consider the downsides of calling for this.
When there is a real wolf free among the sheep, it will be too late to cry wolf. The time to cry wolf is when you see the wolf staring at you and licking its fangs from the treeline, not when it is eating you. The time when you feel comfortable expressing your anxieties will be long after it is too late. It will always feel like crying wolf, until the moment just before you are turned into paperclips. This is the obverse side of the There Is No Fire Alarm for AGI coin.
Not to mention that, once it becomes clear that AIs are actually dangerous, people will become afraid to sign petitions against them. So it would be nice to get some law passed beforehand saying that an AI which, unprompted, identifies specific people as its enemies shouldn’t be widely deployed. Though testing in beta is probably fine?
...this is a really weird petition idea.
Also, Microsoft themselves infamously unplugged Tay. That incident is part of why they’re doing a closed beta for Bing Search.
I’m confused by the voting here; I don’t understand why this is so heavily downvoted.
I think it’s a joke?