I’m aware of the paper because of the impact it had. I might personally not have chosen to draw their attention to the issue, since the main effect seems to be making some research significantly more difficult, and I haven’t heard of any attempts to deliberately exfiltrate weights that this would be preventing.
On reflection I somewhat endorse pointing the risk out after discovering it, in the spirit of open collaboration, as you did. It was just really frustrating when all my experiments suddenly broke for no apparent reason. But that’s mostly on OpenAI for not announcing the change to their API (aside from emails sent to a few people). Apologies for grouching in your direction.