So, what do we do if there is more than one basin of attraction that a moral reasoner considering all the arguments could land in? And what if there are no basins at all?
This is a really insightful question, and it hasn’t been answered convincingly in this thread. Does anybody know if it has been discussed more completely elsewhere?
One option would be to say that the FAI only acts where there is coherence. Another would be to specify a procedure for acting when there are multiple basins of attraction (perhaps by weighting each basin by the proportion of starting points and orderings of arguments that lead to it, when such a measure is well defined, or by some other 'impartial' procedure).
But still, what if it turns out that most of the difficult extrapolations that we would really care about bounce around without ever settling down, or otherwise behave undesirably? No human being has ever done anything like the sorts of calculations that would be involved in a deep extrapolation, so our intuitions based on the extrapolations that we have imagined and that seem to cohere (which all have paths shorter than [e.g.] 1000 steps) might be unrepresentative of the sorts of extrapolations that an FAI would actually have to perform.
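To make the weighting idea above a bit more concrete, here is a minimal toy sketch in Python, under the (strong) assumption that an extrapolation can be modelled as repeatedly applying a fixed set of arguments to a state until it stops changing. The names (`extrapolate`, `basin_weights`, `apply_argument`) and the step cap are hypothetical, chosen only for illustration; a path that exceeds the cap is recorded as reaching no basin, which is the 'bouncing around' case.

```python
from collections import Counter
from itertools import permutations

def extrapolate(start, ordering, apply_argument, max_steps=1000):
    """Follow one extrapolation path: keep applying the arguments in the
    given order until the state stops changing (a basin is reached) or
    the step budget runs out (the path never settles)."""
    state = start
    for _ in range(max_steps):
        new_state = state
        for arg in ordering:
            new_state = apply_argument(new_state, arg)
        if new_state == state:   # fixed point: this path has settled
            return state
        state = new_state
    return None                  # bounced around without settling

def basin_weights(starts, arguments, apply_argument):
    """Weight each basin by the fraction of (starting point, argument
    ordering) pairs whose path ends there; the None key collects the
    paths that never settle within the step budget."""
    outcomes = Counter()
    total = 0
    for start in starts:
        for ordering in permutations(arguments):
            outcomes[extrapolate(start, ordering, apply_argument)] += 1
            total += 1
    return {basin: count / total for basin, count in outcomes.items()}
```

Even in this toy setting the two failure modes can be read off directly: the weight on None says how many paths never settle within the bound, and more than one non-None key says there are multiple basins that the 'impartial' procedure would have to arbitrate between.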