You might say bounded rationality is our primary framework for thinking about AI agents, just as it is in AI textbooks such as Russell & Norvig’s. So that question sounds to me like it might sound to a biologist if she were asked whether her sub-area had any connections to that “Neo-Darwinism” thing. :)
That makes sense. I’m surprised that I haven’t found any explicit reference to that in the literature I’ve been looking at. Is that because it is considered to be implicitly understood?
One way to talk about optimization power, maybe, would be to consider a spectrum between unbounded, Laplacean rationality and the dumbest things around. There seems to be a move away from this, though, because it’s too tied to notions of intelligence and doesn’t look enough at outcomes?
It’s this move that I find confusing.