In AI research, “agents”, “environments” and “goals” sometimes refer to intuitive concepts, and sometimes refer to mathematics that attempts to formalize those concepts. So they are both mathematical and non-mathematical words, just like “point” and “line” are. This is just how almost every field of research works. Consider “temperature” in physics, “secure” in cryptography, “efficient” in economics, etc.
New terminology only makes sense when the phenomena it describes have new qualities on top of the basic phenomena. Every process is an optimizer, because anything that changes state optimizes toward something (say, a new state). Thus, “maximizer,” “intelligent agent,” etc. may be said to be redundant.
Certainly true, yet just because this is how almost every field of research works doesn’t mean it is how those fields should work, and I like shminux’s point.