Thank you, this is clearer than it was before, and it does seem like a potentially useful technique. I see a couple of limitations:
First, it still seems that the whole plan rests on having a good selection of questions, and the mechanism for choosing them is unclear. If they are chosen by some structured method that thoroughly covers the AI’s representation of the prior, the questions asked of the human are unlikely to capture the most important aspects of the update from the new evidence. Most of the differences between the prior and the posterior could be insignificant from a human perspective, so even if the human “understands” the posterior in a broad sense, they are unlikely to know the answers to all of these questions. And even if they do answer them correctly, that does not necessarily test whether they are aware of the differences that matter most.
Second, the requirement for the two AIs to have a common prior, and differ only by some known quantum of new evidence, seems like it might restrict the applications considerably. In simple cases you might handle this by “rolling back” a copy of the first AI to a time when it had not yet processed the new evidence, and making that the starting point for the second AI. But if the processing of the evidence occurred before some other update that you want included in the prior, then you would need some way of working backward to a state that never previously existed.
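To make that second point concrete, here is a minimal sketch, with entirely hypothetical names and a toy update log, of the “rolling back” idea and of where it breaks down: if the new evidence was not the most recent update, no stored state corresponds to “the prior without that evidence but with the later updates.”

```python
# Hedged sketch of the rollback workaround. The update log and checkpoint
# store are hypothetical; assume a checkpoint is saved before each update.

updates = ["U1", "E", "U2"]            # chronological updates the first AI made
checkpoints = {
    0: "state_before_U1",
    1: "state_before_E",               # state just before evidence E was processed
    2: "state_before_U2",
}

def prior_ai_state(evidence_to_exclude: str) -> str:
    """Return a stored state that could serve as the second (prior) AI."""
    idx = updates.index(evidence_to_exclude)
    if idx == len(updates) - 1:
        # Simple case: the evidence was the most recent update, so the
        # checkpoint taken just before it is exactly the desired prior-AI.
        return checkpoints[idx]
    # Hard case: later updates (here U2) should also be in the prior,
    # but no stored state ever contained them without the evidence.
    raise ValueError(
        f"desired state (without {evidence_to_exclude!r} but with "
        f"{updates[idx + 1:]}) never existed"
    )

print(prior_ai_state("U2"))            # simple case works
# prior_ai_state("E")                  # hard case raises
```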
Your first point is indeed an issue, and I’m thinking about it. The second is less of a problem: now that we have a description of the goal, actually implementing it seems like the more tractable part.
Possibly a third, adversarial AI? Have an AI that generates the questions based on P, and is rewarded when the second AI evaluates their probability as close to 50%, when the first AI can get them right based on P’, and when the human gets them wrong.
That’s probably not quite right; we want the AI to generate hard but not impossible questions. Possibly some sort of term about the AIs predicting whether the human will get a question right?
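To make that concrete, here is a minimal sketch of what the adversarial generator’s reward might look like, assuming we already have (a) the second AI’s probability for the question under P, (b) the probability that the first AI answers correctly from P’, and (c) the AIs’ prediction of whether the human will get it right. The function names, the equal weights, and the 0.25 “hard but not impossible” target are all hypothetical placeholders, not part of the proposal.

```python
# Hedged sketch of a reward for the adversarial question-generating AI.
# All inputs are assumed to be probabilities in [0, 1].

def generator_reward(
    prior_prob_true: float,          # second AI's probability for the question under P
    posterior_correct_prob: float,   # probability the first AI answers correctly using P'
    predicted_human_correct: float,  # AIs' prediction that the human answers correctly
) -> float:
    # Term 1: the question should be maximally uncertain under the prior P,
    # i.e. the second AI should assign it a probability near 50%.
    uncertainty_term = 1.0 - 2.0 * abs(prior_prob_true - 0.5)

    # Term 2: the question should be answerable from the posterior P',
    # i.e. the first AI should be able to get it right.
    answerable_term = posterior_correct_prob

    # Term 3: "hard but not impossible" for the human. Instead of simply
    # rewarding human failure, peak the reward when the predicted chance of
    # the human answering correctly is near an assumed target (0.25 here).
    target = 0.25
    difficulty_term = 1.0 - abs(predicted_human_correct - target) / max(target, 1.0 - target)

    # Equal weighting is an arbitrary placeholder.
    return (uncertainty_term + answerable_term + difficulty_term) / 3.0


# Toy usage: a question the prior is unsure about, the posterior answers
# confidently, and the human is predicted to find hard but not hopeless.
print(generator_reward(prior_prob_true=0.52,
                       posterior_correct_prob=0.95,
                       predicted_human_correct=0.30))
```

The last term replaces the bare “human gets it wrong” reward with the “hard but not impossible” shaping suggested above; the obvious open question is how far to trust the AIs’ prediction of human performance.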