For example, agent foundations research sometimes assumes that AGI has infinite compute or that it has a strict boundary between its internal decision processes and the outside world.
It’s one of the most standard results in ML that neural nets are universal function approximators. In the context of that proof, ML de facto also assumes that you have infinite computing power. It’s just a standard tool in ML, AI, or CS to see what models predict when you take them to infinity. Indeed, it’s really one of the most standard tools in the modern math toolbox, used by every STEM discipline I can think of.
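As a concrete illustration (my own sketch, not something from the original discussion): a one-hidden-layer network with random tanh features, where only the linear output layer is fit by least squares. Watching the approximation error on a fixed target function tend to shrink as the width grows is exactly the “take it to infinity” style of reasoning behind universal approximation results. All names, target functions, and widths here are arbitrary choices for illustration.

```python
# Illustrative sketch (hypothetical example, not from the original comment):
# a one-hidden-layer net with random tanh features. Only the output layer is
# fit, via least squares. Approximation error on a fixed target function
# tends to shrink as the hidden width grows, which is the limiting-width
# intuition behind universal-approximation results.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 500)[:, None]
y = np.sin(2 * x) + 0.3 * x**2          # arbitrary target function

for width in [4, 16, 64, 256, 1024]:
    # Fixed random hidden layer: random weights and biases, tanh activation.
    W = rng.normal(size=(1, width))
    b = rng.normal(size=width)
    H = np.tanh(x @ W + b)
    # Fit only the linear readout by least squares.
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    err = np.max(np.abs(H @ coef - y))
    print(f"width={width:5d}  max abs error = {err:.4f}")
```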
Similarly, assuming a boundary between internal decision processes and the outside world continues to be a standard assumption in ML. It’s really hard to avoid: everything gets very loopy and tricky, and yes, we have to deal with that loopiness and trickiness. But if anything, agent foundations people were the ones actually trying to figure out how to handle it, whereas the ML community has done very little to handle it. Contrary to your statement here, people on LW have been pointing out for years that embedded agency is really important, and have been dismissed by active practitioners who think the Cartesian boundary is just fine for “real” and “grounded” applications like “predicting the next token”, which clearly have no relevance to these weird and crazy scenarios about power-seeking AIs developing contextual awareness that you are talking about.
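To make the Cartesian-boundary point concrete, here is a minimal sketch (my own, with hypothetical names) of the standard agent/environment interface that most of ML and RL quietly assumes: the agent’s internals sit strictly on one side, the world on the other, and the only traffic across the boundary is observations and actions. Embedded agency is precisely about what breaks when the agent’s decision process is itself part of the state the environment can read or overwrite, which this interface cannot express.

```python
# Illustrative sketch (hypothetical names): the standard agent/environment
# loop with a strict Cartesian boundary. The environment never touches the
# agent's internals; the agent only ever sees observations and emits actions.
class Environment:
    def __init__(self):
        self.state = 0.0

    def step(self, action: float):
        # Updates depend only on the environment's own state and the action;
        # the agent's internal computation is invisible from this side.
        self.state += action
        observation = self.state
        reward = -abs(self.state - 1.0)
        return observation, reward


class Agent:
    def act(self, observation: float) -> float:
        # Decision process kept entirely "inside" the agent.
        return 1.0 - observation


env, agent = Environment(), Agent()
obs = 0.0
for _ in range(3):
    action = agent.act(obs)
    obs, reward = env.step(action)
    print(obs, reward)
```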