Why are you concerned in that scenario? Any more concrete details on what you expect to go wrong?
I don’t think there’s a cure-all solution, except “don’t build it”, and even that might be counterproductive in some edge cases.
My concerns are very broad, but here are two more or less arbitrary example risks:
During training, the model hacks out of my cluster and sends a copy of itself or a computer virus elsewhere on the internet. Later on, chaos ensues.
An AI lawyer has me assassinated and impersonates me to steal my company.