That very much depends on how you understand "safe", which is a large part of the difference between the ethical AI people (safe means it doesn't offend anyone, leak private information, give biased answers, etc.) and the notkilleveryoneism people (safe means it doesn't decide to remove humanity). These goals aren't mutually incompatible, but they require focusing on different things.
There is also safe in the PR sense, meaning that no output will cause the LLM producer/supplier/whoever to get sued or into any other kind of trouble.
"Safe" is one of those funny words that everyone understands differently, yet assumes everyone else understands the same way.