I know this is basically downvote farming on LW, but I find the idea of morality being downstream from the free energy principle very interesting.
Jezos obviously misses a bunch of game-theoretic problems that arise, and the FEP lacks explanatory power in that domain, so it is quite clear to me that we shouldn't act on this. Still, I do think it's fundamentally true, just as utilitarianism is fundamentally true; the only problem is that he's applying it naively.
I don’t want to bet the future of humanity on this belief, but what if is = ought, and we have just misconstrued it by adopting proxy goals along the way? (IGF gang rise!)
I find the idea of morality being downstream from the free energy principle very interesting
I agree that there are some theoretical curiosities in the neighbourhood of the idea. Like:
Morality is downstream of generally intelligent minds reflecting on their heuristics/shards.
Which are downstream of said minds’ cognitive architecture and reinforcement circuitry.
Which are downstream of the evolutionary dynamics.
Which are downstream of abiogenesis and various local environmental conditions.
Which are downstream of the fundamental physical laws of reality.
Thus, in theory, if we plug all of these dynamics into one another and then simplify the resultant expression, we should actually get the (probability distribution over the) utility function that is “most natural” for this universe to generate! And the expression may indeed be relatively simple and have something to do with thermodynamics, especially if some additional simplifying assumptions are made.
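A toy way to formalize that chain (my own sketch, not anything from Jezos or the FEP literature): treat each stage as a conditional distribution and marginalize through it,

$$P(U \mid \text{physics}) \;=\; \sum_{a,\,e,\,c,\,s} P(U \mid s)\,P(s \mid c)\,P(c \mid e)\,P(e \mid a)\,P(a \mid \text{physics}),$$

where $a$ ranges over abiogenesis/environmental conditions, $e$ over evolutionary dynamics, $c$ over cognitive architectures and reinforcement circuitry, and $s$ over the resulting heuristics/shards. The “most natural” utility function would then be something like $\arg\max_U P(U \mid \text{physics})$, with the obvious caveat that none of these distributions is remotely computable.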
That actually does seem pretty exciting to me! In an insight-porn sort of way.
Not in any sort of practical way, though[1]. All of this is screened off by the actual values actual humans actually have, and if the noise introduced at every stage of this process caused us to be aimed at goals wildly diverging from the “most natural” utility function of this universe… Well, sucks to be that utility function, I guess, but the universe screwed up installing corrigibility into us and the orthogonality thesis is unforgiving.
At least, not with regard to AI Alignment or human morality. It may be useful for e.g. acausal trade/acausal normalcy: figuring out the prior over what kinds of values aliens are most likely to have, etc.[2]
Or maybe for roughly figuring out what values the AGI that kills us all is likely to have, if you’ve completely despaired of preventing that, and for founding an apocalypse cult worshiping it. Wait a minute...