I feel mostly confused by the way that things are being framed. ELK is about the human asking for various poly-sized fragments and the model reporting what those actually were instead of inventing something else. The model should accurately report all poly-sized fragments the human knows how to ask for.
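To make that framing concrete, here is a toy sketch (purely illustrative — the `Predictor`, `honest_reporter`, and `human_simulator` names are hypothetical, not from the ELK report) of the difference between reporting a queried fragment of the predictor's latent state as it actually is versus inventing the answer a human observer would find plausible:

```python
# Toy illustration only; all names are hypothetical stand-ins.
# The human asks for specific fragments of the predictor's latent state;
# the reporter's job is to return what those fragments actually are,
# not to invent answers the human would find plausible.

from dataclasses import dataclass


@dataclass
class Predictor:
    """Stand-in for the model whose latent knowledge we want to elicit."""
    latents: dict  # e.g. {"is_diamond_in_vault": False, "camera_tampered": True}


def honest_reporter(predictor: Predictor, query: str):
    """Reports the queried fragment of the latent state as it actually is."""
    return predictor.latents.get(query)


def human_simulator(predictor: Predictor, query: str):
    """Failure mode: answers with whatever a human observer would conclude,
    ignoring the predictor's actual latent knowledge."""
    what_human_would_believe = {"is_diamond_in_vault": True, "camera_tampered": False}
    return what_human_would_believe.get(query)


predictor = Predictor(latents={"is_diamond_in_vault": False, "camera_tampered": True})

# The human only ever asks for fragments it knows how to ask for:
for query in ["is_diamond_in_vault", "camera_tampered"]:
    print(query, "->", honest_reporter(predictor, query))  # reports the truth
    print(query, "->", human_simulator(predictor, query))  # invents something else
```

Note the human is the one posing the queries in this sketch; the model's only job is to answer each one truthfully.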
The thing that seems weird to me here is that you can’t simultaneously require that the elicited knowledge be ‘relevant’ and ‘comprehensible’ and also cover these obfuscated-debate-like scenarios.
I don’t know what you mean by “relevant” or “comprehensible” here.
Does it seem right to you that ELK is about eliciting latent knowledge that causes an update in the correct direction, regardless of whether that knowledge is actually relevant?
> I feel mostly confused by the way that things are being framed. ELK is about the human asking for various poly-sized fragments and the model reporting what those actually were instead of inventing something else. The model should accurately report all poly-sized fragments the human knows how to ask for.
I think this is what I was missing. I was incorrectly thinking of the system as generating poly-sized fragments.
> Does it seem right to you that ELK is about eliciting latent knowledge that causes an update in the correct direction, regardless of whether that knowledge is actually relevant?

This doesn’t seem right to me.
Thanks for taking the time to explain this!