It’s a reasonable point, if one considers “eventual cessation of thought due to thermodynamic equilibrium” to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?
There are plenty of transhumanists here who believe that (with some nonnegligible probability) the heat death of the universe will be the relevant upper bound on their experience of life.
Which is fair enough, I suppose, but it sounds bizarrely optimistic to me. We’re talking about a time span many orders of magnitude longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to hold over that kind of range.
I believe we have a duty to attempt to predict the future as far as we possibly can. I don’t see how we can take moral or ethical stances without predicting what will happen as a result of our actions.
We need to predict as far as we can; ethical decision-making requires that we take into account all foreseeable consequences of our actions. But given the unavoidable complexity of society, there are serious limits to how far it is reasonable even to attempt to look ahead; the impossibility of anyone (or even any group) seeing very far is one reason centralized economies don’t work. And the complexity of social interactions in general is at least an order of magnitude greater than that of strictly economic interactions.
I’ve been trying to think of a good way to explain my problem with evaluating [utility | goodness | rightness] given that we’re very bad at predicting the future. I haven’t had much luck coming up with something I was willing to post, though I consider the topic extremely important.
For example, how much effort should Clippy put into predicting and simplifying the future (basic research, modeling, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips?
The answer “however much it predicts will be useful” seems like a circular problem.
They are circular problems, but they share a general structure with adaptation problems, and I have found serious books on evolution (some of Dawkins’s are particularly good) and on economics (try Sowell’s Knowledge and Decisions) to be helpful. These types of problems cannot be solved outright; at best you can get incrementally improved answers, depending on the cost of acquiring and analyzing further information versus the expected value of that information.
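To make the “cost of further information versus its expected value” criterion concrete, here is a minimal sketch (mine, not the original poster’s) of an expected-value-of-perfect-information calculation for a toy two-state, two-action decision problem. The scenario labels and every number are hypothetical; a real agent would of course have uncertain estimates of these quantities too.

```python
# Toy illustration of the "is more research worth its cost?" criterion,
# using expected value of perfect information (EVPI). All numbers are
# made up for illustration only.

def expected_value_with_current_info(prior, payoffs):
    """Best expected payoff if the agent must act now, with no further research.

    prior:   dict state -> probability
    payoffs: dict action -> dict state -> payoff
    """
    return max(
        sum(prior[s] * payoffs[a][s] for s in prior)
        for a in payoffs
    )

def expected_value_with_perfect_info(prior, payoffs):
    """Expected payoff if a (hypothetical) oracle revealed the true state first."""
    return sum(
        prior[s] * max(payoffs[a][s] for a in payoffs)
        for s in prior
    )

# Should Clippy keep the current paperclip process, or stop and retool?
# Which is better depends on an uncertain state of the world.
prior = {"state_favors_A": 0.6, "state_favors_B": 0.4}
payoffs = {
    "keep_current_process": {"state_favors_A": 100, "state_favors_B": 20},
    "retool":               {"state_favors_A": 30,  "state_favors_B": 90},
}

ev_now = expected_value_with_current_info(prior, payoffs)       # 68.0
ev_perfect = expected_value_with_perfect_info(prior, payoffs)   # 96.0
evpi = ev_perfect - ev_now  # upper bound on what any research could be worth

research_cost = 15  # hypothetical cost of the modeling / basic research
print(f"EV acting now:        {ev_now:.1f}")
print(f"EV with perfect info: {ev_perfect:.1f}")
print(f"EVPI (research cap):  {evpi:.1f}")
print("Worth doing the research?", evpi > research_cost)
```

The circularity doesn’t disappear in this framing: the prior, the payoffs, and the research cost are themselves estimates that could be refined with still more research. The point is only that you can bound the tradeoff and improve it incrementally, not solve it once and for all.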