we need to produce a vague explanation of what we think morality is, and set an optimization process on these explanations to produce a better description of morality
Agreed, but this post isn’t that, and wasn’t meant to be. This post basically nails down what form morality should take. Perhaps it could be expressed as a summation of all our thousand shards of desire. To actually compute this, we would use what Yudkowsky calls Coherent Extrapolated Volition, which he describes as roughly “what we would come to believe if we knew all empirical facts and had a million years to think about it”. Actually calculating morality is left as an exercise for the reader.