Yeah, the plan the team I'm working with has is "take these results privately to politicians and ask that legislation be put into place to make the irresponsible inclusion of highly dangerous technical information in chatbot training data an illegal act". Not sure what else can be done, and there's no way to redact the models that have already been released, so… bad news is what it is. Bad news. Not unexpected, but bad.
I'm exploring a path where AI systems can deal safely with harmful technical information present in their training data. I believe that AI systems need to be aware of potential harm in order to protect themselves from it. We just need to figure out how to teach them this.