The WSJ is reporting that Microsoft was explicitly warned by OpenAI, before it shipped Sydney publicly, that the model needed “more training” in order to “minimize issues like inaccurate or bizarre responses”. Microsoft shipped it anyway, and it blew up more or less as warned (which, if so, makes Mikhail’s tweets & attitude even more disingenuous).
This is further evidence that it was the RLHF step that Microsoft skipped, and also that large tech companies will ignore explicit warnings about dangerous behavior from the literal creators of an AI system, even when there is (sort of) a solution, if heeding those warnings would be inconvenient. Excerpts (emphasis added):
...At the same time, people within Microsoft have complained about diminished spending on its in-house AI and that OpenAI doesn’t allow most Microsoft employees access to the inner workings of their technology, said people familiar with the relationship. Microsoft and OpenAI sales teams sometimes pitch the same customers. Last fall, some employees at Microsoft were surprised at how soon OpenAI launched ChatGPT, while OpenAI warned Microsoft early this year about the perils of rushing to integrate OpenAI’s technology without training it more, the people said.
...Some companies say they have been pitched the same access to products like ChatGPT—one day by salespeople from OpenAI and later from Microsoft’s Azure team. Some described the outreach as confusing.

OpenAI has continued to develop partnerships with other companies. Microsoft archrival Salesforce offers a ChatGPT-infused product called Einstein GPT. It is a feature that can do things like generating marketing emails, competing with OpenAI-powered features in Microsoft’s software.

OpenAI also has connected with different search engines over the past 12 months to discuss licensing its products, said people familiar with the matter, as Microsoft was putting OpenAI technology at the center of a new version of its Bing search engine. Search engine DuckDuckGo started using ChatGPT to power its own chatbot, called DuckAssist.

Microsoft plays a key role in the search engine industry because the process of searching and organizing the web is costly. Google doesn’t license out its tech, so many search engines are heavily reliant on Bing, including DuckDuckGo.

When Microsoft launched the new Bing, the software company changed its rules in a way that made it more expensive for search engines to develop their own chatbots with OpenAI. The new policy effectively discouraged search engines from working with any generative AI company because adding an AI-powered chatbot would trigger much higher fees from Microsoft. Several weeks after DuckDuckGo announced DuckAssist, the company took the feature down.
Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter. Despite Microsoft’s significant stake in the company, most employees have to treat OpenAI’s models like they would any other outside vendor.
The rollouts of ChatGPT last fall and Microsoft’s AI-infused Bing months later also created tension. Some Microsoft executives had misgivings about the timing of ChatGPT’s launch last fall, said people familiar with the matter. With a few weeks’ notice, OpenAI told Microsoft that it planned to start public testing of the AI-powered chatbot as the Redmond, Wash., company was still working on integrating OpenAI’s technology into its Bing search engine. Microsoft employees were worried that ChatGPT would steal the new Bing’s thunder, the people said. Some also argued Bing could benefit from the lessons learned from how the public used ChatGPT.
OpenAI, meanwhile, had suggested Microsoft move slower on integrating its AI technology with Bing. OpenAI’s team flagged the risks of pushing out a chatbot based on an unreleased version of its GPT-4 that hadn’t been given more training, according to people familiar with the matter. OpenAI warned it would take time to minimize issues like inaccurate or bizarre responses. Microsoft went ahead with the release of the Bing chatbot. The warnings proved accurate. Users encountered incorrect answers and concerning interactions with the tool. Microsoft later issued new restrictions—including a limit on conversation length—on how the new Bing could be used.