There are costs to even temporarily shutting down Bing Search. Microsoft would take a significant reputational hit—Google’s stock price fell 8% after a minor kerfuffle with their chatbot.
The risks of imminent harmful action by Sydney are negligible. Microsoft doesn’t need to take dramatic action; it is reasonable for them to keep using user feedback from the beta period to attempt to improve alignment.
I think this petition will be perceived as crying wolf by those we wish to convince of the dangers of AI. It is enough to simply point out that Bing Search is not aligned with Microsoft’s goals for it right now.
I mostly agree and have strongly upvoted. However, I have one small but important nitpick about this sentence:
The risks of imminent harmful action by Sydney are negligible.
I think when it comes to x-risk, the correct question is not “what is the probability that this will result in existential catastrophe”.
Suppose that there is a sequence of potentially harmful and increasingly risky AIs that have some probabilities p1, p2, … of causing existential catastrophe unless you press a stop button. If the probabilities grow sufficiently slowly, then existential catastrophe will most likely first happen at an n where pn is still low. A better question to ask is “what is the probability of existential catastrophe happening for some i ≤ n?”
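To make the arithmetic concrete, here is a minimal sketch; the linear risk schedule pn = n/1000 is purely an illustrative assumption, not a forecast:

```python
# Sketch: cumulative existential risk across a sequence of increasingly risky AIs.
# Illustrative assumption: the n-th AI has catastrophe probability p_n = n / 1000.
survival = 1.0  # probability that no catastrophe has occurred through step n
for n in range(1, 1001):
    p_n = n / 1000
    survival *= 1 - p_n  # no catastrophe at step n either
    if 1 - survival > 0.5:
        print(f"Cumulative risk passes 50% at n = {n}, where p_n is only {p_n:.1%}")
        break
```

Under this schedule the cumulative risk crosses 50% at n = 37, where the individual pn is still only 3.7%: most of the catastrophe probability is spent while every single AI, viewed in isolation, still looks nearly negligible.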
There are costs to even temporarily shutting down Bing Search. Microsoft would take a significant reputational hit—Google’s stock price fell 8% after a minor kerfuffle with their chatbot.
Doesn’t that support the claim being made in the original post? Admitting that the new AI technology has flaws carries reputational costs, so Microsoft/Google/Facebook will not admit that their technology has flaws, and they will continue tinkering with it long past the threshold where any reasonable external observer would call for it to be shut down. The costs of shutdown are real, tangible, and immediate, while the risk of a catastrophic failure is abstract, hard to quantify, and in the future.
What we are witnessing is the beginning of a normalization of deviance process, similar to that which Diane Vaughan documented in her book, The Challenger Launch Decision. Bing AI behaving oddly is, to me, akin to the O-rings in the Shuttle solid rocket booster joints behaving oddly under pressure tests in 1979.
The difference between these two scenarios is that the Challenger explosion only killed seven astronauts.
Isn’t the whole point to be able to say “we cried wolf and no one came, so if you say we can just cry wolf when we see one and we will be saved, you are wrong”? I don’t think Eneasz thinks that a petition on change.org will be successful. (Eneasz, please correct me if I am wrong.)
I didn’t sign the petition.
There are costs to even temporarily shutting down Bing Search. Microsoft would take a significant reputational hit—Google’s stock price fell 8% after a minor kerfuffle with their chatbot.
The risks of imminent harmful action by Sydney are negligible. Microsoft doesn’t need to take dramatic action; it is reasonable for them to keep using user feedback from the beta period to attempt to improve alignment.
I think this petition will be perceived as crying wolf by those we wish to convince of the dangers of AI. It is enough to simply point out that Bing Search is not aligned with Microsoft’s goals for it right now.
I must admit, I have removed my signature from the petition, and I have learned an important lesson.
Let this be a lesson for us not to pounce on the first signs of problems.
Note that you can still retract your signature for 30 days after signing. See here:
https://help.change.org/s/article/Remove-signature-from-petition?language=en_US
I retracted my signature, and I will edit my top post.
Isn’t the whole point to be able to say “we cried wolf and no one came, so if you say we can just cry wolf when we see one and we will be saved, you are wrong”? I don’t think Eneasz thinks that a petition on change.org will be successful. (Eneasz, please correct me if I am wrong.)
Definitely not wrong: the petitions almost certainly won’t change anything. Change.org is not where one goes to actually change things.
No, the point is to not signal false alarms, so that when there is a real threat we are less likely to be ignored.
It proves little if others dismiss a clearly false alarm.