Are human ethics/morals just an evolutionary mess of incomplete and inconsistent heuristics? One idea I heard that made sense is that evolution has been optimizing our emotions for long-term 'fairness'. I got a sense of it when watching the monkey fairness experiment (the capuchins who happily take cucumber until they see a peer getting grapes for the same task).
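For anyone who wants to poke at that intuition, here is a toy evolutionary ultimatum-game sketch. It's entirely my own construction for illustration, not a model anyone here proposed: agents carry an 'offer' (the split they propose) and a 'threshold' (the minimum split they'll accept), and both traits evolve under fitness-proportional selection. In this bare, well-mixed version selection tends to drag offers and thresholds down; fairness only stabilizes once you add things like reputation or partner choice, which is roughly what 'long-term fairness' would be buying.

```python
import random

# Toy ultimatum game under selection: each agent proposes a split ("offer")
# and rejects any incoming offer below its "threshold". A rejection leaves
# both players with nothing, so a threshold acts like a fairness emotion.
# All parameters are arbitrary choices for illustration.

POP, PAIRINGS, GENS, MUT = 100, 500, 200, 0.05

def play(proposer, responder):
    """One ultimatum game; returns (proposer payoff, responder payoff)."""
    if proposer["offer"] >= responder["threshold"]:
        return 1.0 - proposer["offer"], proposer["offer"]
    return 0.0, 0.0  # offer rejected: both walk away empty-handed

def next_generation(pop):
    fitness = [0.0] * len(pop)
    for _ in range(PAIRINGS):
        i, j = random.sample(range(len(pop)), 2)
        pi, pj = play(pop[i], pop[j])
        fitness[i] += pi
        fitness[j] += pj
    total = sum(fitness)
    children = []
    for _ in range(len(pop)):
        if total <= 0:
            parent = random.choice(pop)          # degenerate case: no payoffs
        else:
            r, acc, parent = random.uniform(0, total), 0.0, pop[-1]
            for agent, f in zip(pop, fitness):   # fitness-proportional pick
                acc += f
                if acc >= r:
                    parent = agent
                    break
        children.append({k: min(1.0, max(0.0, v + random.gauss(0, MUT)))
                         for k, v in parent.items()})
    return children

pop = [{"offer": random.random(), "threshold": random.random()}
       for _ in range(POP)]
for _ in range(GENS):
    pop = next_generation(pop)

print("mean offer:     %.2f" % (sum(a["offer"] for a in pop) / POP))
print("mean threshold: %.2f" % (sum(a["threshold"] for a in pop) / POP))
```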
My issue is with 'friendly AI'. If our ethics are inconsistent, then we won't be choosing a good AI, just the least bad one. A crap sandwich either way.
The worst part is that we will have to hurry to be the first to build AI, or some other culture will select the dominant AI.
One idea I heard that made sense is that evolution has been optimizing our emotions for long-term 'fairness'. I got a sense of it when watching the monkey fairness experiment.
Evolution is optimizing us for inclusive genetic fitness. Anything else is just a means to an end.
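For readers who haven't met the term: the standard formalization of when inclusive fitness favors helping a relative is Hamilton's rule (textbook population genetics, not something from this thread):

$$ rB > C $$

where $r$ is the genetic relatedness between actor and recipient, $B$ the reproductive benefit to the recipient, and $C$ the reproductive cost to the actor. On this view, fairness emotions are heuristics that tend to satisfy inequalities like this across many interactions; they aren't anything evolution 'cares about' directly.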