I am not sure that designed artefacts are automatically easily interpretable.
It is certainly not the case that designed artifacts are easily interpretable. An unwieldy and poorly documented codebase is a design artifact that is not easily interpretable.
Design at its best can produce interpretable artifacts, whereas the same is not true for machine learning.
The interpretability of artifacts is not a feature of the artifact itself but of the pair (artifact, story), or you might say (artifact, documentation). We design artifacts in such a way that it is possible to write documentation such that the (artifact, documentation) pair facilitates interpretation by humans.
If we sent a pile of smartphones to Isaac Newton, he wouldn’t have either of these advantages. He wouldn’t be able to figure out much about how they worked.
Hmm well if it’s possible for anyone at all in the modern world to understand how a smartphone works by, say, age 30, then that means it takes no more than 30 years of training to learn from scratch everything you need to understand how a smartphone works. Presumably Newton would be quite capable of learning that information in less than 30 years. Now here I’m assuming that we send Newton the pair (artifact, documentation), where “documentation” is whatever corpus of human knowledge is needed as background material to understand a smartphone. This may include substantially more than just that which we would normally think of as “documentation for a smartphone”. But it is possible to digest all this information because humans are born today without any greater pre-existing understanding of smartphones than Newton had, and yet some of them do go on to understand smartphones.
There are three factors here: the existence of written documentation, similarity to previous designs, and composition from separate subsystems.
Yeah good point re similarity to previous designs. I’ll think more on that.