Does anyone have any guesses what caused this ban?
From what I understand, the reason has to do with GDPR, the EU’s data protection law. It’s pretty strict stuff: it essentially says you can’t store people’s data without their active permission; you can’t store their data without a demonstrable need (one that isn’t just “I wanna sell it and make money off it”); you can’t keep their data past the end of that need; and you always have to give people the right to delete their data whenever they wish.
Now, this puts ChatGPT in an awkward position. Suppose you have a conversation that includes some personal data, and that conversation gets used to fine-tune the model. If you later want to back out, how do you do that? Could the model one day just spit out your personal information to someone else? Who knows?
It’s the interpretability and black-box behaviour problem all over again: you can’t guarantee the model won’t violate the law, because you don’t even know how the fuck it works.