Maybe we should think of “explainability” as the quality of the AI’s lossy compression of its theories. In that case it has to be evaluated jointly with our own perceptual and cognitive abilities, just as all modern lossy compression is designed around the human ear, eye, and brain. It could then be measured by how closely our reconstruction matches the real theory for each compression.
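To make the measurement idea concrete, here is a minimal sketch of my own (not from the comment or the linked paper): treat a complex model as the “real theory”, treat a small human-readable surrogate as the lossy compression, and score the compression by how faithfully it reconstructs the original model’s behavior. The specific models and fidelity metric are illustrative assumptions.

```python
# Sketch: explainability as reconstruction fidelity of a lossy compression.
# (Illustrative assumption: a shallow decision tree stands in for the
# human-readable "reconstruction" of a more complex model's "theory".)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The "real theory": a model too complex to read directly.
theory = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
theory_preds = theory.predict(X)

# The "compression": small trees trained to imitate the theory's outputs.
# Fidelity = how often the compressed explanation reproduces the theory.
for depth in (2, 4, 8):
    surrogate = DecisionTreeClassifier(max_depth=depth, random_state=0)
    surrogate.fit(X, theory_preds)
    fidelity = accuracy_score(theory_preds, surrogate.predict(X))
    print(f"max_depth={depth}: reconstruction fidelity = {fidelity:.3f}")
```

The trade-off this exposes (deeper surrogate, higher fidelity, but harder for a human to read) is exactly the compression-quality knob the analogy points at.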
My view is along these lines; see the first link for an interesting example vis-à-vis this (start at minute 17, or just read the linked paper in the description).