(Which, for instance, seems true of humans, at least in some cases: if humans had the computational capacity, they would lie a lot more and calculate personal advantage a lot more. But since both of those are computationally expensive, and can therefore be caught out by other humans, the heuristic / value of “actually care about your friends” is competitive with “always be calculating your personal advantage.”
I expect this sort of thing to be less common in AI systems, which can have much bigger “cranial capacity”. But then again, I’d guess that at any given brain size there will be some problems that are too inefficient to solve the “proper” way, and for which comparatively simple heuristics / values work better.
But maybe at a high enough level of cognitive capability, you just have a flexible, fully general process for choosing the right level of approximation for any given problem, and the binary distinction between doing things the “proper” way and using comparatively simple heuristics goes away. You just use whatever level of cognition makes sense in any given micro-situation.)
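To make that picture concrete, here is a toy sketch (all the strategy names, costs, error rates, and stakes numbers below are made-up assumptions, not a claim about how real minds work): a metareasoner that picks whichever cognitive strategy minimizes compute spent plus stakes-weighted expected error.

```python
# Toy sketch of "use whatever level of cognition makes sense in any given
# micro-situation". Everything here (strategy names, costs, error rates,
# the stakes values) is an illustrative assumption.

from dataclasses import dataclass


@dataclass
class Strategy:
    name: str
    compute_cost: float    # price of running this level of cognition
    expected_error: float  # how far off its answers tend to be


STRATEGIES = [
    Strategy("cached heuristic",      compute_cost=0.01, expected_error=0.30),
    Strategy("explicit deliberation", compute_cost=1.00, expected_error=0.05),
    Strategy("full search",           compute_cost=50.0, expected_error=0.001),
]


def pick_strategy(stakes: float) -> Strategy:
    # Minimize total expected loss: compute spent plus error scaled by
    # how much getting this particular problem right actually matters.
    return min(STRATEGIES, key=lambda s: s.compute_cost + stakes * s.expected_error)


print(pick_strategy(stakes=1.0).name)     # low stakes -> "cached heuristic"
print(pick_strategy(stakes=100.0).name)   # medium stakes -> "explicit deliberation"
print(pick_strategy(stakes=5000.0).name)  # high stakes -> "full search"
```

On this picture there is no binary “heuristic vs. proper reasoning” switch; the crossover points just fall out of the cost/stakes tradeoff.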
+1; this seems basically similar to the cached argument I have for why human values might be more arbitrary than we’d like—very roughly speaking, they emerged on top of a solution to a specific set of computational tradeoffs while trying to navigate a specific set of repeated-interaction games, and then a bunch of contingent historical religion/philosophy on top of that. (That second part isn’t in the argument you [Eli] gave, but it seems relevant to point out; not all historical cultures ended up valuing egalitarianism/fairness/agency the way we seem to.)