I take exception to this passage, and feel that it is an unnecessary attack:
I have actually heard several smart people po-facedly lament the fact that the universe will end with a whimper. If this seriously bothers you psychologically, then your psychology is severely divorced from the reality that you inhabit.
It’s a reasonable point, if one considers “eventual cessation of thought due to thermodynamic equilibrium” to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?
There are plenty of transhumanists here who believe that (with some nonnegligible probability) the heat death of the universe will be the relevant upper bound on their experience of life.
Which is fair enough, I suppose, but it sounds bizarrely optimistic to me. We’re talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.
I believe we have a duty to attempt to predict the future as far as we possibly can. I don’t see how we can take moral or ethical stances without predicting what will happen as a result of our actions.
We need to predict as far as we can; ethical decision-making requires that we take into account all foreseeable consequences of our actions. But given the unavoidable complexity of society, there are serious limits on how far it is reasonable even to attempt to look ahead; the impossibility of anyone (or even any group) seeing very far is one reason centralized economies don’t work. And the complexity of all social interactions is at least an order of magnitude greater than that of strictly economic interactions.
I’ve been trying to think of a good way to explain my problem with evaluating [utility | goodness | rightness] given that we’re very bad at predicting the future. I haven’t had much luck coming up with something I was willing to post, though I consider the topic extremely important.
For example, how much effort should Clippy put into predicting and simplifying the future (basic research, modeling, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips?
The answer “however much it predicts will be useful” seems like a circular problem.
They are circular problems; they share a general structure with adaptation problems, though, and I have found it helpful to read serious books on evolution (some of Dawkins’s are particularly good) and on economics (try Sowell’s Knowledge and Decisions). These types of problems cannot be solved outright; at best you can get incrementally improved answers, depending on the costs of acquiring and analyzing further information versus the expected value of that information.
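To make that incremental framing concrete, here is a minimal sketch of the stopping rule it suggests. Everything in it (the function names, the flat cost, the diminishing-returns curve, the numbers) is an illustrative assumption, not anyone’s actual model: keep buying prediction effort only while the expected value of the next unit of information exceeds its cost.

```python
# A toy sketch of "buy more information only while it pays for itself".
# All curves and numbers are illustrative assumptions.

def expected_value_of_info(effort_units: int) -> float:
    """Hypothetical diminishing-returns curve: each additional unit of
    prediction effort is worth less than the one before it."""
    return 100.0 * (0.8 ** effort_units)

def cost_of_info(effort_units: int) -> float:
    """Hypothetical flat cost per unit of prediction effort."""
    return 10.0

def choose_prediction_effort(max_units: int = 50) -> int:
    """Buy effort one unit at a time, stopping when the marginal expected
    value of information no longer covers the marginal cost."""
    units = 0
    while units < max_units:
        if expected_value_of_info(units) <= cost_of_info(units):
            break
        units += 1
    return units

if __name__ == "__main__":
    # With these toy numbers the loop stops at 11 units of effort,
    # even though further information would still help a little.
    print(choose_prediction_effort())
```

The point of the sketch is only that the circularity never fully goes away: the value-of-information curve is itself a prediction, so you can only refine it incrementally, never settle it once and for all.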
I’m sorry you feel that way but, to be honest, I don’t repent of my statement. I simply can’t imagine why the ultimate fate of an (at that point uninhabited) cosmos should matter to a puny hoo-man (except intellectually). It’s like a mayfly worrying about the Andromeda galaxy colliding with the Milky Way.
I think the confusion here is similar to the fear of being dead (not fear of dying). You sort of imagine how horrible it’ll be to be a corpse, just sitting around in a grave. But there will be no one there to experience how bad being dead is, and when the universe peters out in the end, no one will be there to be disappointed. If you care emotionally about entropic heat death, you should logically also feel bad every time an ice cube melts.
I care about what to measure (utility function) as much as I care about when to measure it (time function). For any measure, there’s a way to maximize it, and I’d like to see whatever measure humans decide is appropriate to be maximized across as much time as possible. So worrying about far future events is important insofar as I’d like my values to be maximized even then.
As for worrying about ice cubes, you’re right, it would be inconsistent of me to say otherwise, so I will say that I do. However, I apply a weighted scale of care, and our future galactic empire tends to weigh pretty heavily when compared with something like that.
ETA: My care about ice-cube loss is so small I can’t feel it. On entropy / resource consumption, my caring gets large enough that I can start feeling it around the point of owning and operating large home appliances, automobiles, etc., and it ramps up drastically for things like inefficient power plants, creating new humans, and war.
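For what it’s worth, that weighted scale of care can be made concrete with a toy sketch; every event and number below is purely illustrative, chosen only to show how large-scale entropy sinks would swamp an ice cube under any such weighting:

```python
# A toy sketch (entirely illustrative numbers) of a "weighted scale of care":
# each event's entropy cost is scaled by how much one cares about events at
# that scale, so ice cubes contribute essentially nothing while large-scale
# outcomes dominate the total.

# Hypothetical (event, entropy_cost, care_weight) triples.
EVENTS = [
    ("melting ice cube",        1e-6, 1e-9),
    ("running a refrigerator",  1e-2, 1e-3),
    ("driving a car",           1e-1, 1e-2),
    ("inefficient power plant", 1e+1, 1e+0),
    ("galactic-scale waste",    1e+6, 1e+3),
]

def total_concern(events):
    """Sum each event's entropy cost scaled by its care weight."""
    return sum(cost * weight for _, cost, weight in events)

if __name__ == "__main__":
    for name, cost, weight in EVENTS:
        print(f"{name:>25}: contribution = {cost * weight:.3e}")
    print(f"{'total':>25}: {total_concern(EVENTS):.3e}")
```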