Wouldn’t it be challenging to create relevant digital goods if the training set had no references to humans and computers? Also, wouldn’t the existence and properties of humans and computers be deducible from other items in the dataset?
Depends on the digital goods you're trying to produce. I have in mind simulating things like detailed, beautiful 3D environments filled with complex ecosystems of plants and animals, or evolving new strategy or board games by having AI agents play against each other. Stuff like that. For things like medical research, I'd instead say we should keep the AI narrow and non-agentic. The need for carefully blinded simulations is more about researching the limits of intelligence, agency, and self-improvement, where you're unsure what might emerge next and want to make sure you can study the results safely before risking releasing them.
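To make the "evolve games via self-play" idea a bit more concrete, here's a minimal toy sketch. None of it comes from the discussion above: the rule-set encoding, the random-policy agents, and the fitness heuristic (decisive but balanced games) are all illustrative assumptions, and every name and parameter is made up for the example.

```python
import random

def random_ruleset():
    """A candidate game, encoded as a tiny bag of parameters."""
    return {
        "board_size": random.choice([3, 4, 5]),
        "win_length": random.choice([3, 4]),
        "max_turns": random.choice([10, 20, 30]),
    }

def play_game(rules):
    """Two random-policy agents alternately claim free cells on an
    n x n board; a player wins by owning `win_length` cells in one row.
    Returns 1 or 2 for the winner, 0 for a draw."""
    n, k = rules["board_size"], rules["win_length"]
    owner = {}
    free = list(range(n * n))
    random.shuffle(free)
    for turn in range(min(rules["max_turns"], len(free))):
        player = 1 + turn % 2
        cell = free[turn]
        owner[cell] = player
        row = cell // n
        if sum(owner.get(row * n + c) == player for c in range(n)) >= k:
            return player
    return 0

def fitness(rules, games=50):
    """Made-up heuristic: prefer rule sets whose games are decisive
    (few draws) but balanced between first and second player."""
    wins = {0: 0, 1: 0, 2: 0}
    for _ in range(games):
        wins[play_game(rules)] += 1
    decisiveness = 1 - wins[0] / games
    balance = 1 - abs(wins[1] - wins[2]) / games
    return decisiveness * balance

def mutate(rules):
    """Copy a rule set and re-roll one randomly chosen parameter."""
    child = dict(rules)
    key = random.choice(list(child))
    child[key] = random_ruleset()[key]
    return child

def evolve(generations=20, pop_size=16):
    """Keep the fitter half each generation, refill with mutants."""
    pop = [random_ruleset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    print("Most promising rule set:", evolve())
```

In a serious version the interesting work would be in replacing the random policies with learned agents and the toy fitness heuristic with whatever notion of an "interesting game" you actually care about; the loop structure stays the same.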