That makes sense. I’m surprised that I haven’t found any explicit reference to that in the literature I’ve been looking at. Is that because it is considered to be implicitly understood?
One way to talk about optimization power, maybe, would be to consider a spectrum between unbounded Laplacean rationality and the dumbest things around. There seems to be a move away from this, though, because it's too tied to notions of intelligence and doesn't look enough at outcomes?
It’s this move that I find confusing.