But how do we know that ANY data is safe for AI consumption? What if the scientific theories we feed the AI models contain fundamental flaws, such that when an AI runs off and does its own experiments in, say, physics or germline editing based on those theories, it triggers a global disaster?
I guess the best analogy for this dilemma is "The Chinese Farmer" (the old man who lost his horse): I think we simply do not know which data will turn out to be good or bad in the long run.