I’m surprised no one seems to doubt HA’s basic premise. It sure seems to me that toddlers display enough intelligence (especially in choosing what they observe) to make one suspect self-awareness.
I’m really glad you will write about morality, because I was going to ask about it. Just a data dump from my brain, in case anyone finds this useful:
Obviously, by “We should do X” we mean “I/We will derive utility from doing X”, but we don’t mean only that. Mostly we apply it to things that have to do with altruism—the utility we derive from helping others.
There is no Book of Morality written somewhere in reality, like the color of the sky, about which you can do Bayesian magic as if it were a fact, though in extreme circumstances it can be a good idea to act as if there were. E.g., if almost everyone values human life as a terminal value and someone doesn’t, I’ll call them a psychopath and mistaken. Unlike facts, utility functions depend on agents. If we are good Bayesian wannabes, we will agree on whether doing X will result in A, but I can’t see why the hell we’d agree on whether A is terminally desirable.
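(A toy sketch of what I mean, in Python; the agents, priors, and numbers are made up purely for illustration. Shared evidence drives two Bayesians to agree on the factual question “does doing X result in A?”, but nothing in the updating ever touches their terminal utilities for A.)

    # Toy illustration (made-up agents and numbers): two Bayesians with different
    # priors about a factual claim ("doing X results in A") see the same evidence.
    # Their beliefs about the fact converge; their terminal utilities for A, which
    # no observation ever updates, stay exactly as far apart as they started.

    def posterior(prior, lik_if_true, lik_if_false):
        """Bayes' rule for a single binary hypothesis."""
        num = prior * lik_if_true
        return num / (num + (1 - prior) * lik_if_false)

    # Each shared observation is 4x as likely if "X results in A" is true.
    evidence = [(0.8, 0.2)] * 10

    beliefs = {"Alice": 0.2, "Bob": 0.9}        # different priors about the fact
    utilities = {"Alice": +10.0, "Bob": -10.0}  # different terminal values for A

    for lik_true, lik_false in evidence:
        for agent in beliefs:
            beliefs[agent] = posterior(beliefs[agent], lik_true, lik_false)

    print(beliefs)    # both near 1.0: the factual disagreement washes out
    print(utilities)  # untouched: evidence never updates what the agents care about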
That’s a big problem. Our utility functions are what we care about, but they were built by a process we see as outright evil. The intuition that says “I shouldn’t torture random people on the street” and the one that says “I must save my own life even if I need to kill a bunch of people to survive” come from the same source, and there is no global objective morality to call one good and the other bad; there is only yet another intuition that also comes from that source.
Also, our utility functions differ. The birth lottery made me a liberal ( http://faculty.virginia.edu/haidtlab/articles/haidt.graham.2007.when-morality-opposes-justice.pdf ). It doesn’t seem like I should let my values depend on such a random event, but I just can’t bring myself to think of ingroup/outgroup and authority as moral foundations.
The confusing part is this: we care about the things we care about for a reason we consider evil. There is no territory of Things worth caring about out there, but we have maps of it and we just can’t throw them away without becoming rocks.
I’ll bang my head on the problem some more.
Obviously the moral “should” is not the instrumental “should”.