It’s not much, but: see our brief footnote #3 in IE:EI and the comments and sources I give in “What is intelligence?”
Thanks. That’s very helpful.
I’ve been thinking about Stuart Russell lately, which reminds me...bounded rationality. Isn’t there a bunch of literature on that?
http://en.wikipedia.org/wiki/Bounded_rationality
Have you ever looked into any connections there? Any luck with that?
You might say bounded rationality is our primary framework for thinking about AI agents, just as it is in AI textbooks like Russell & Norvig’s. So that question sounds to me like it might sound to a biologist if she were asked whether her sub-area had any connections to that “Neo-Darwinism” thing. :)
That makes sense. I’m surprised that I haven’t found any explicit reference to that in the literature I’ve been looking at. Is that because it is considered to be implicitly understood?
One way to talk about optimization power, maybe, would be to consider a spectrum between unbounded, Laplacean rationality and the dumbest things around. There seems to be a move away from this, though, because it’s too tied to notions of intelligence and doesn’t look enough at outcomes?
It’s this move that I find confusing.
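For what it’s worth, one concrete reading of the outcome-focused framing is Yudkowsky’s measure of optimization power: instead of asking how clever the agent is, you ask how small a slice of the possible outcomes is at least as good as the one it actually achieved, and express that in bits. The sketch below is only a minimal illustration of that idea, assuming a finite outcome space with a uniform base measure; the function name and the toy utility are made up for the example.

```python
import math

def optimization_power_bits(outcomes, achieved, utility):
    """Outcome-based optimization power, in the spirit of Yudkowsky's
    'Measuring Optimization Power': how small a fraction of the outcome
    space is at least as good (by `utility`) as what was actually
    achieved, reported in bits. Assumes a finite list of outcomes with
    a uniform base measure."""
    at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Toy example: 1,000 possible outcomes scored by a simple utility.
# Landing in the top half is ~1 bit of optimization; hitting the
# single best outcome out of 1,000 is ~10 bits.
outcomes = list(range(1000))
utility = lambda o: o
print(optimization_power_bits(outcomes, achieved=500, utility=utility))  # ~1.0
print(optimization_power_bits(outcomes, achieved=999, utility=utility))  # ~10.0
```

On this view the “spectrum” is measured entirely by how improbable the achieved outcome would be under a null process, with no reference to the internals of the agent, which is one way to make sense of the shift away from intelligence-centric language.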