“Ugly office, boring perks… Top two reasons you won’t like us: 1. AI safety = , Chai = . 2. Move fast and break stuff; we write code, not papers.”
The very first time anyone hears about them is their product being the first chatbot to convince a person to take their life… That’s very bad luck for a startup. I guess the lesson is not to behave like cartoon villains, and if you do, at least not to put it in writing in meme form?
No, the standard techniques that OpenAI uses are enough to get ChatGPT to not randomly be racist or encourage people to commit suicide.
This is Chai deploying a model built on EleutherAI’s GPT-J, without the safety mechanisms that ChatGPT uses.
My condolences to the family.
Chai (not to be confused with the CHAI safety org in Berkeley) is a company that optimizes chatbots for engagement; things like this are entirely predictable for a company with their values.
Incredible. Compare the Chai LinkedIn bio quoted above, mocking responsible behavior.