Against Against Boredom

I’m trying to clarify some feelings I had after reading the post Utopic Nightmares. Specifically, this bit:
But in a future world where advancing technology’s returns on the human condition stop compensating for a state less than perfect hedonism, we can imagine editing boredom out of our lives
I would like to describe a toy moral theory that—while not exactly what I believe—gets at why I would consider “eliminating boredom” morally objectionable.
Experience-maximizing hedonism
Consider an agent that perceives external reality through a set of sensors $S_0, S_1, \ldots, S_n$. It uses these sensors to build a model of external reality and to estimate its position in that reality at time $t$ as a state $M_t$. It also has a number of actions $A_{t,i}$ available to it at any given time.
The agent estimates the number of reachable future states, $V(t) = E\left(\left|\{\, M_i \;\text{s.t.}\; \exists \text{ a path } M_t \to \cdots \to M_i \,\}\right|\right)$, and “chooses” its actions so as to maximize the value of $V(t)$ for some future time $t$. Obviously, if the agent is dead, it cannot perceive or affect its future state, so it estimates $V(\text{dead}) = 1$.
Internally, the agent is running some kind of hill-climbing algorithm, so after choosing an action $A_{t,i}$ at time $t$ it experiences a reward of the form $R(t) = V(t) - V(t-1)$. In this way, the agent experiences pleasure when it takes actions that increase $V(t)$ and pain when it takes actions that decrease $V(t)$, and over time it learns to take actions that maximize $V(t)$.
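To make this concrete, here is a minimal sketch of such an agent in Python. The toy state graph, the BFS estimate of $V$, and the greedy action loop are my own illustrative assumptions, not anything from the original post; the only load-bearing piece is that the felt reward is the change in the count of reachable states.

```python
from collections import deque

# A toy world: states are nodes, actions are moves along directed edges.
# "dead" is absorbing: nothing is reachable from it except itself.
WORLD = {
    "start":   ["hub", "cliff"],
    "hub":     ["garden", "library", "start"],
    "garden":  ["hub"],
    "library": ["hub"],
    "cliff":   ["dead"],
    "dead":    [],
}

def reachable_count(state):
    """V(t): the number of states reachable from `state`, found by BFS."""
    seen, frontier = {state}, deque([state])
    while frontier:
        for nxt in WORLD[frontier.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)  # reachable_count("dead") == 1, matching V(dead) = 1

def step(state):
    """Greedy hill-climb: pick the action whose outcome maximizes V.

    The felt reward is the change in V, i.e. R(t) = V(t) - V(t-1).
    """
    v_before = reachable_count(state)
    new_state = max(WORLD[state], key=reachable_count, default=state)
    return new_state, reachable_count(new_state) - v_before

state = "start"
for _ in range(3):
    state, reward = step(state)
    print(state, reward)  # the agent avoids "cliff": stepping toward death costs V
```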
Infinite Boredom
Now consider the infinite boredom of Utopic Nightmares. In this case the agent reaches a local maximum of $V(t)$, and $R(t)$ is now constant (and equal to zero). But of course there is no reason why $R(t)$ need be zero when $V(t)$ is constant. There’s no reason why we couldn’t have instead used $R_c(t) = V(t) - V(t-1) + c$ for our hill-climb. The agent would experience endless bliss (for positive values of $c$) or endless suffering (for negative values of $c$). Human experience suggests that our personal setting for $c$ is in fact significantly negative (as humans suffer greatly from boredom).
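In terms of the sketch above, “editing boredom out” amounts to a one-line change to the reward. At a plateau, where $V$ has stopped changing, the shifted reward is simply $c$. A minimal variant, reusing the hypothetical `reachable_count` and `WORLD` from before:

```python
def step_shifted(state, c=0.0):
    """Same greedy step, but the felt reward is offset by a constant c.

    At a plateau (V no longer changing) the agent feels exactly c:
    endless bliss if c > 0, the familiar ache of boredom if c < 0.
    """
    v_before = reachable_count(state)
    new_state = max(WORLD[state], key=reachable_count, default=state)
    reward = reachable_count(new_state) - v_before + c  # R_c(t) = V(t) - V(t-1) + c
    return new_state, reward
```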
What might be the value of using a large negative value for $c$? Consider, perhaps, simulated annealing, where the algorithm intentionally makes “wrong” moves in order to escape a local maximum it is trapped in. The key consideration is that $R(t)$ is not the thing being optimized; $V(t)$ is. Changing $R(t)$ in a way that doesn’t increase $V(t)$ doesn’t actually improve the situation of our agent, only its perception of the situation. In any case, our minds are the product of evolution, so it appears that historically the fitness-maximizing value of $c$ has been negative.
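To make the annealing analogy concrete, here is a rough sketch of an annealing-style step over the same toy world. The Metropolis acceptance rule and the temperature parameter are standard simulated-annealing machinery, not something from the post: with some probability the agent accepts a $V$-decreasing move, and that occasional negative reward is exactly what lets it escape a local maximum.

```python
import math
import random

def annealing_step(state, temperature):
    """Try a random move; if it lowers V, accept it only sometimes.

    The occasional "wrong" move (negative reward) is the price paid for a
    chance to escape a local maximum of V.
    """
    v_before = reachable_count(state)
    candidate = random.choice(WORLD[state]) if WORLD[state] else state
    delta = reachable_count(candidate) - v_before
    # Metropolis criterion: always accept improvements, sometimes accept losses.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        return candidate, delta
    return state, 0
```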
What SHOULD we do?
At this point, an important objection can be made: one cannot derive an ought from an is. Just because humans have an existing bias toward a negative value of $c$, what does that tell us about what ought to be? Why shouldn’t humans be happy even if relegated to endless boredom? One argument is Chesterton’s fence: until we are quite sure why we dislike boredom so much, we ought not mess with it. Another is that if humans ever become “content” with boredom, we cut off all possibility of further growth (however small).
My main point, though, is that I would consider eliminating boredom wrong because it optimizes for our feelings, $R(t)$, rather than our well-being, $V(t)$.