No, I’m not talking about the interpretation of experiments so much as the risks while learning. People grow up (if at all fortunate) with the chance to make a lot of low-stakes attempts at dealing with other people. Even so, very few end up as skilled politicians.
If the AI needs to be able to navigate complex negotiations and signalling, it isn’t going to start with the benefit of a child’s slack for learning. If it needs to practice with people rather than simulations (I’m guessing it will), it could take years to build up to being a super-negotiator.
I could be wrong. Perhaps the AI can build on algorithms which aren’t obvious to people, and/or use improved sensory abilities, and/or be able to capitalize on a huge bank of information.
The Internet provides plenty of opportunities for anonymous interaction with people, perfect for running safe experiments. And the amount of raw information that you could process before ever needing to run your own experiments is just enormous—here I’m not talking about the scientific papers, but all the forum threads, mailing list archives, chat logs, etc. that exist online. These not only demonstrate how online social interaction works, but also contain plenty of data where people report on and analyze their various real-life social interactions (“hey guys, the funniest thing happened today...”, “I was so creeped out on my ride home”, “I met the most charming person”). That is just an insane corpus of what works and what all the different failure modes are. I would expect it would take you a long time before you could come up with hypotheses that weren’t, in principle, better answered by searching that corpus than by running your own experiments.
Of course, that is highly unstructured data, and it might be quite hard to effectively categorize it in a way that allows for efficient searches. So doing your own experiments might still be more effective. But again, plenty of anonymous forums exist for that.
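To make the “search the corpus instead of experimenting” idea a bit more concrete, here is a minimal sketch in Python of the kind of similarity search one could run over such data. The three-post corpus, the query, and the crude bag-of-words TF-IDF scoring are purely illustrative assumptions on my part, not a claim about how an AI would actually organize or search that data.

```python
import math
import re
from collections import Counter

# Toy stand-in for the corpus of forum posts, chat logs, etc.
# (Illustrative examples only.)
corpus = [
    "I was so creeped out on my ride home, the guy would not stop talking",
    "I met the most charming person at the conference, we talked for hours",
    "Funniest thing happened today, my joke completely bombed in the meeting",
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

# Per-document token counts and document frequencies for a crude TF-IDF weighting.
doc_tokens = [Counter(tokenize(doc)) for doc in corpus]
df = Counter()
for tokens in doc_tokens:
    df.update(set(tokens))

def tfidf(tokens):
    n = len(corpus)
    return {t: c * math.log((n + 1) / (df.get(t, 0) + 1)) for t, c in tokens.items()}

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

doc_vectors = [tfidf(t) for t in doc_tokens]

def search(query, k=2):
    """Return the k corpus entries most similar to the query."""
    qv = tfidf(Counter(tokenize(query)))
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(qv, doc_vectors[i]),
                    reverse=True)
    return [(corpus[i], cosine(qv, doc_vectors[i])) for i in ranked[:k]]

# Query for situations resembling a hypothesis you'd otherwise test in person.
for text, score in search("joke fell flat in a meeting"):
    print(f"{score:.2f}  {text}")
```

Even this toy version shows both halves of the point: a lot of “what happens if I try X socially” questions reduce to retrieval over reports people have already written, but the retrieval is only as good as the weighting and categorization you impose on very messy text.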