Eliezer: Robin Brandt, is whatever increasing technology does to a society, moral progress by definition, or does increasing technology only tend to cause moral progress?
I see; I answered quite a different question there. I had a funny feeling about that while writing that comment.
Increasing technology tends to cause moral progress, yes, by making moral choices economically and experientially (that is, in our experience of things) more strategic and optimal. It all boils down to satisfying our adapted pattern-recognizers, which give us pleasure or a feeling of righteousness. And the human brain is calibrated to exercise an absolutely optimal general morality only in a very limited way, because of limited mental and limited physical (food, mates, power) resources. But the “absolute” general morality is by itself just a set of strategies, a solution to a game-theoretic problem. It can never be moral in itself until some mind gives it that meaning. So morality without agents is just one mathematical structure among others. But when you are a mind, you perceive your approximation of morality (regulated by genes and learning) as a strong emotion, parts of it close to what we call preference, parts of it very absolute.