It feels to me like the update today made it even better at filtering out answers that OpenAI doesn’t want it to give.
It seems to me like they basically run on:
“Have an AI that flags whether or not a prompt or an answer violates the rules. Mark the text red if it does. Offer the user a way to say that text was marked wrongly as violating the rules.”
This then gives them training data they can use to improve their filtering. Given how much ChatGPT is used, this method will let them filter out more and more of what they want to filter out.
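The loop described above can be sketched roughly like this. Everything here is hypothetical — the function names, the placeholder rule set, and the data format are my own guesses at the shape of such a pipeline, not OpenAI's actual implementation:

```python
# Conceptual sketch of a flag-and-appeal moderation loop.
# All names and the rule set are hypothetical placeholders.

def violates_rules(text: str) -> bool:
    """Stand-in for a learned content classifier."""
    banned = {"forbidden"}  # placeholder; a real system would use a trained model
    return any(word in text.lower() for word in banned)

# (text, corrected_label) pairs harvested from user appeals,
# later usable as training data to improve the classifier
training_data = []

def moderate(text: str) -> dict:
    """Flag a prompt or answer; a UI would render flagged text in red."""
    return {"text": text, "flagged": violates_rules(text)}

def report_false_positive(result: dict) -> None:
    """User says the text was wrongly marked; store it as a negative example."""
    if result["flagged"]:
        training_data.append((result["text"], False))

msg = moderate("this mentions forbidden topics")
report_false_positive(msg)
```

The key point is the last two lines: every appeal is a labeled example, so heavy usage of the product turns user disagreement directly into classifier training data.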
Huh, ok. I will have to check out the new version. Thanks!