I believe we have a duty to attempt to predict the future as far as we possibly can. I don’t see how we can take moral or ethical stances without predicting what will happen as a result of our actions.
We need to predict as far as we can; ethical decision making requires that we take into account all foreseeable consequences of our actions. But given the unavoidable complexity of society, there are serious limits to how far it is reasonable even to attempt to look ahead; the impossibility of anyone (or even a group) seeing very far is one reason centralized economies don’t work. And the complexity of all social interactions is at least an order of magnitude greater than that of strictly economic interactions.
I’ve been trying to think of a good way to explain my problem with evaluating [utility | goodness | rightness] given that we’re very bad at predicting the future. I haven’t had much luck coming up with something I was willing to post, though I consider the topic extremely important.
For example, how much effort should Clippy put into predicting and simplifying the future (basic research, modeling, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips?
The answer “however much it predicts will be useful” seems like a circular problem.
They are circular problems, but they share a general structure with adaptation problems, and I have found reading serious books on evolution (some of Dawkins’s are particularly good) and on economics (try Sowell’s Knowledge and Decisions) to be helpful. These kinds of problems cannot be solved outright; at best you can get incrementally improved answers, depending on the costs of acquiring and analyzing further information versus the expected value of that information.
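As a rough illustration of that stopping rule, here is a minimal sketch in Python. The diminishing-returns curve and the numbers are illustrative assumptions, not a model of any real decision problem: the point is only that you keep buying prediction effort while its expected marginal value exceeds its marginal cost, and stop when it no longer does.

```python
# Minimal sketch of a value-of-information stopping rule.
# The payoff curve and costs below are made-up assumptions for illustration.

def expected_value_of_next_study(studies_done: int) -> float:
    """Assumed diminishing returns: each extra round of research is worth half the last."""
    return 100.0 * (0.5 ** studies_done)

def cost_of_next_study(studies_done: int) -> float:
    """Assumed flat cost per additional round of research or modeling."""
    return 10.0

def rounds_worth_funding(max_rounds: int = 20) -> int:
    """Fund another round only while its expected value exceeds its cost."""
    n = 0
    while n < max_rounds:
        if expected_value_of_next_study(n) <= cost_of_next_study(n):
            break  # marginal value no longer justifies marginal cost
        n += 1
    return n

if __name__ == "__main__":
    # Under these assumptions, four rounds of research are worth funding.
    print(rounds_worth_funding())
```

The circularity doesn’t disappear: the payoff curve is itself a prediction, and estimating it better costs something too. The sketch just shows why the answer is incremental rather than exact.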