When people talk about “human values” in this context, I think they usually mean something like “goals that are Pareto optimal for the values of individual humans”- and the things you listed definitely aren’t that.
artifex0
The marketing company Salesforce was founded in Silicon Valley in ’99 and has been hugely successful. It’s often ranked as one of the best companies in the U.S. to work for. I went to one of their conferences recently, and the whole thing was a massive status display- they’d built an arcade with Salesforce-themed video games just for that one conference, and had a live performance by Gwen Stefani, among other things.
...But the marketing industry is one massive collective action problem. It consumes a vast amount of labor and resources, distorts the market in ways that harm healthy competition, creates incentives for social media to optimize for engagement rather than quality, and develops dangerous tools for propagandists, all while producing nothing of value in aggregate. Without our massive marketing industry, we’d have to pay a subscription fee or a tax for services like Google and Facebook, but everything else would be cheaper by an amount that would necessarily dwarf that cost (since the vast majority of what we spend on marketing doesn’t go to useful services)- and we’d probably have a much less sensationalist media on top of that.
People in Silicon Valley are absolutely willing to grant status to people who gained wealth purely through collective action problems.
Do you think it’s plausible that the whole deontology/consequentialism/virtue ethics confusion might arise from our idea of morality actually being a conflation of several different things that serve separate purposes?
Like, say there’s a social technology that evolved to solve intractable coordination problems by getting people to rationally pre-commit to acting against their individual interests in the future; additionally, a lot of people have started to extend our instinctive compassion and tribal loyalties to the entirety of humanity; people also have a lot of ideas about which sorts of behaviors take us closer to some sort of Pareto frontier; and maybe, on top of that, there’s some sort of acausal bargain that a lot of different terminal values converge toward.
If you tried to maximize just one of those, you’d obviously run into conflicts with the others- and if you then used the same word to describe all of them, that might look like a paradox. How can something be clearly good and not good at the same time, you might wonder- not realizing that you’ve used the word to mean something different each time.
If I’m right about that, it could mean that when encountering the question “what is most moral” in situations where different moral systems give different answers, the best response might not be so much “I can’t tell, since each option would commit me to things I think are immoral,” as “‘Morality’ isn’t a very well-defined word; could you be more specific?”