My assumption has been that Bing was so obviously rushed and botched that it’s probably less persuasive as a demonstration of the problems with aligning AI than ChatGPT is. To the common person, ChatGPT has the appearance of a serious product by a company trying to take safety seriously, but still frequently failing. I think that “someone trying really hard and still doing badly” looks more concerning than “someone not really even trying and then failing”.
I haven’t actually talked to any laypeople to try to check this impression, though.
The majority of popular articles also seem to be talking specifically about ChatGPT rather than Bing, suggesting that ChatGPT has vastly more users. Regular use affects people’s intuitions much more than a few one-time headlines.
Though when I said “ChatGPT”, I was actually thinking not just about ChatGPT itself, but also about the steps that led there: GPT-2 and GPT-3 as well. Microsoft didn’t contribute to those.