Terminal values are part of the map, not the territory.
What does this mean? That terminal values are techniques by which we predict future phenomena? That doesn’t sound like we’re talking about values anymore, but my only understanding of what it would mean for something to be part of the map is that it would be part of how we model the world, i.e. how we predict future occurrences.
The agents that we describe in philosophical or mathematical problems have terminal values. But how confident are we that these problems map accurately onto the messy real world? To what extent do theories that use the “terminal values” concept accurately predict events in the real world? Do people — or corporations, nations, sub-agents, memes, etc. — behave as if they had terminal values?
I think the answer is “sometimes” at best.
Sometimes humans can be money-pumped or Dutch-booked. Sometimes not. Sometimes humans can end up in situations that look like wireheading, such as heroin addiction or ecstatic religion … but sometimes they can escape them, too. Sometimes humans are selfish, sometimes spendthrift, sometimes altruistic, sometimes apathetic, sometimes self-destructive. Some humans insist that they know what humans’ terminal values are (go to heaven! have lots of rich, smart babies! spread your memes!) but other humans deny having any such values.
Humans are (famously) not fitness-maximizers. I suggest that we are not necessarily anything-maximizers. We are an artifact of an in-progress amoral optimization process (biological evolution) and possibly others (memetic evolution; evolution of socioeconomic entities); but we may very well not be optimizers ourselves at all.
They’re theories by which we predict future mental states (such as satisfaction) — our own or those of others.