Humans model the world at different levels of complexity at different times, and at each of those levels different considerations come into play for making decisions. An agent behaving this way can appear to be behaving VNM-irrationally when really it is just trying to use cognitive resources efficiently by not modeling the world at the maximum level of complexity all the time. Non-human animals may model the world at more similar levels of complexity over time, so they behave more VNM-rationally even if they have less overall optimization power than humans.
Notice the obvious implications for the ability of superhuman AIs to behave VNM-rationally.
Which are what? The AI that is managing some sort of upload society could trade its clock time for utility.
It's no different from humans: you can either waste your time pondering whether you're being rational about how jumpy you get when you see a moving shadow that looks sort of like a sabre-toothed tiger, or you can figure out how to tie a rock to a stick. In modern times: ponder which is the better deal at the store, versus try to invent something and make a lot of money.
It still has to deal with the external world.
But the point is that its computing time costs utility, so it can't waste it on things that will not gain it enough utility.
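A toy sketch of that tradeoff (my own hypothetical numbers, not anything from the thread): a bounded agent should only deliberate further when the expected improvement in its decision exceeds the utility it forgoes by burning clock time on the deliberation.

```python
# Hypothetical opportunity cost of compute: utility the agent could have
# earned per second spent doing something other than deliberating.
UTILITY_PER_SECOND_OF_COMPUTE = 0.5

def worth_deliberating(expected_gain_from_thinking: float, seconds_needed: float) -> bool:
    """Deliberate only if the expected gain beats the utility cost of the clock time."""
    cost = seconds_needed * UTILITY_PER_SECOND_OF_COMPUTE
    return expected_gain_from_thinking > cost

# Checking whether that shadow is really a sabre-toothed tiger: cheap, high stakes.
print(worth_deliberating(expected_gain_from_thinking=10.0, seconds_needed=1.0))   # True
# Auditing your own jumpiness for perfect rationality: expensive, little to gain.
print(worth_deliberating(expected_gain_from_thinking=0.2, seconds_needed=30.0))   # False
```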
If you consider a 2×1×1 block to have a probability of 1/6 of landing on each side, you can still be VNM-rational about that: you won't be Dutch-booked, but you will lose money, because that block is not a fair die and you will accept losing bets. The real world is like that. It doesn't give cookies for non-Dutch-bookability, it gives cookies for correct predictions of what is actually going to happen.
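A minimal sketch of the point (the "actual" face probabilities below are made-up illustrative numbers, not measurements of a real 2×1×1 block): the agent's uniform beliefs are internally coherent, so no combination of bets priced off them can be turned into a Dutch book, yet a counterparty who knows the real odds profits on every bet it chooses to take.

```python
# Agent prices bets as if the block were a fair die.
believed = {face: 1 / 6 for face in range(1, 7)}
# Hypothetical true distribution: the two large faces dominate.
actual = {1: 0.40, 6: 0.40, 2: 0.05, 3: 0.05, 4: 0.05, 5: 0.05}

# The agent treats as fair any bet that stakes 1 unit to win 1/believed[face]
# units if that face lands. Its expected value under the actual distribution:
for face in sorted(believed):
    ev_back = actual[face] / believed[face] - 1   # agent backs the face
    ev_lay = -ev_back                             # agent takes the other side
    print(f"face {face}: back EV = {ev_back:+.2f}, lay EV = {ev_lay:+.2f}")

# The believed probabilities sum to 1, so the prices are coherent (no Dutch book),
# but a counterparty simply picks whichever side of each bet has positive EV for
# itself, leaving the agent with a losing bet every time.
```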