This isn’t just a reduction of a goal to a program: predicting the robot’s behavior from its supposed goal and predicting it from its actual program give different results.
If goals reduce to a program like the robot’s in any way, it’s in the way that Einsteinian mechanics “reduces” to Newtonian mechanics: giving good results in most cases, but being fundamentally different and making different predictions in border cases. Because there are other programs that goals do reduce to, like the previously mentioned Robot-1, I don’t think it’s appropriate to describe what the blue-minimizer is doing as pursuing a “goal”.
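To make the divergence concrete, here is a minimal sketch (my own illustration, not anything from the robot’s original specification; the names `Percept` and `fire_reduces_blue` are invented for the example): a “program-based” predictor that fires at whatever looks blue, and a “goal-based” predictor that fires only when firing would actually reduce blue in the world. They agree on ordinary cases and come apart on a border case like a hologram of a blue object.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    looks_blue: bool          # what the robot's camera reports
    fire_reduces_blue: bool   # whether firing would actually remove blue from the world

def program_based_action(p: Percept) -> str:
    """The blue-minimizer's actual program: fire at anything that looks blue."""
    return "fire" if p.looks_blue else "wait"

def goal_based_action(p: Percept) -> str:
    """A hypothetical agent whose goal is 'minimize blue in the world':
    fire only when firing is expected to reduce blue."""
    return "fire" if (p.looks_blue and p.fire_reduces_blue) else "wait"

# Ordinary case: a real blue object. Both predictions agree.
real_blue = Percept(looks_blue=True, fire_reduces_blue=True)
assert program_based_action(real_blue) == goal_based_action(real_blue) == "fire"

# Border case: a hologram of a blue object. The predictions diverge:
# the program fires uselessly; the goal-based agent would not.
hologram = Percept(looks_blue=True, fire_reduces_blue=False)
assert program_based_action(hologram) == "fire"
assert goal_based_action(hologram) == "wait"
```

The two predictors coincide almost everywhere, which is why the goal-talk looks like a good “reduction”, but the border case shows they are different models making different predictions.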
If you still disagree, can you say exactly what goal you think the robot is pursuing, so I can examine your argument in more detail?
I recall that a big problem we had before was trying to unpack what different people meant by the words “goal”, “model”, etc. But your description of at least this distinction you’re drawing, between the things you’re calling “goals” and the things you’re calling “programs”, is very good, IMO!