This is a good question. I think ways of thinking about Marr’s levels are themselves underdetermined, and therefore worth trying to crux on. Let’s take the example of birds again. On the implementation level we can talk about the physical systems of a bird interacting with its environment. On the algorithmic level we can talk about patterns of behavior, supported by the physical environment, that allow the bird to do certain tasks. On the computational (intentional) level we can talk about why those tasks are useful in terms of some goal architecture like survival and sexual selection. Underdetermination enters when we have any notion that different goals might in theory be instantiated by two otherwise similar birds, when we have a notion of different strategies achieving the same goal, or when we think about the same goals and strategies instantiated on a different substrate (simulations).
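A minimal sketch of the "different strategies, same goal" kind of underdetermination (my own toy example, not from Marr): two distinct algorithmic-level strategies that realize the same computational-level task, so observing input/output behavior alone cannot tell you which one is running.

```python
# Toy illustration: two algorithms, one computational-level task (sorting).
# An observer who only sees inputs and outputs cannot distinguish them.

def sort_by_comparison(xs):
    # Strategy A: insertion sort (compare and shift elements into place).
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_counting(xs):
    # Strategy B: counting sort (tally occurrences, then replay in order).
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    out = []
    for key in sorted(counts):
        out.extend([key] * counts[key])
    return out

# Both strategies agree on every input, so the task-level description
# underdetermines the algorithm-level one.
assert sort_by_comparison([3, 1, 2]) == sort_by_counting([3, 1, 2]) == [1, 2, 3]
```

The same point extends downward: each of these strategies could in turn run on many different substrates, giving implementation-level underdetermination.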
One of the reasons I think this topic is confusing is that in reality we only ever have access to the algorithmic level. We don’t have direct access to the implementation level (physics); we just have algorithms that more or less reliably return physics-shaped invariances. Likewise, we don’t have direct access to our goals; we just have algorithms that return Goodharted proxies, which we use to triangulate on inferred goals. We improve the accuracy of these algorithms over time through a pendulum swing between modifying the representation and modifying the traversal (this split is also from Marr).
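The representation/traversal pendulum can be sketched with a toy lookup task (my framing, not Marr's example): the same question can be answered faster either by modifying the representation the data is held in, or by modifying the traversal procedure run over it, and real improvement tends to alternate between the two.

```python
# Toy sketch: one task ("is target present?"), improved by swinging between
# representation changes and traversal changes.
import bisect

data = [9, 2, 7, 4]

def contains_linear(xs, target):
    # Baseline: unordered representation, linear traversal.
    return any(x == target for x in xs)

# Swing 1: modify the REPRESENTATION (maintain a sorted copy)...
sorted_data = sorted(data)

# ...which enables Swing 2: modify the TRAVERSAL (binary search).
def contains_binary(sorted_xs, target):
    i = bisect.bisect_left(sorted_xs, target)
    return i < len(sorted_xs) and sorted_xs[i] == target

# Both answer the same task-level question; only the internals differ.
assert contains_linear(data, 7) == contains_binary(sorted_data, 7) == True
```

Neither swing changes what is computed, only how; which is why improvements at this level leave the computational-level description untouched.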