I am going to ask a painfully naive, dumb question here: what if the training data were curated to contain only agents that can reasonably be taken to be honest and truthful? What if all the 1984, John le Carré and suchlike fiction (and the occasional real-life accounts of conspiracy, duplicity, etc.) were purged from the training data? Would that require too much human labour to sort and assess? Would it mean losing too much good information, and with it cognitive capacity? Or would it just not work: would the model still somehow simulate waluigis?
Since my natural bent is always to find ways to criticize my own ideas, here is one, potentially: doing so would produce an extremely naive AI, with no notion that people can even be deceitful. Fallen into the wrong human's hands, that is an AI that is also extremely easy to manipulate, and dangerous as such. Or, in an oversimplified version: "The people in country X have assured us that they are all tired of living and find the living experience extremely painful. They have officially let us know, and confirmed multiple times, that they all want to experience a quick death as soon as possible." Having no notion of deceit, the AI would probably accept that as true just because it was told so, and might agree to advance plans to precipitate the quick death of everyone in country X on that basis.