Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person’s life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive. Can you really defend a situation in which it is preferable to have living people today die from malaria?
The problem with MIRI-hypothesized AI (beyond its implausibility) is that we don’t get to sum over all possible results. We get one result. Even if the chance of a good result is 80%, the chance of a disastrous result is still way too high for comfort.
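The distinction between summing over many independent draws and getting exactly one draw can be made concrete with a toy simulation. All the numbers here (harm rates, payoffs, the 80% figure) are invented for illustration, not probabilities anyone in this thread endorses:

```python
import random

random.seed(1)

# Regime 1: many independent draws (lives saved by bednets). Each saved
# life is almost always a modest positive; the rare very bad outcome
# (the hypothetical serial killer) averages out over the ensemble.
def saved_life():
    # made-up numbers: 1-in-10,000 chance of great harm (-1000)
    # versus +1 for an ordinary life
    return -1000.0 if random.random() < 1e-4 else 1.0

total = sum(saved_life() for _ in range(1_000_000))
print(total > 0)  # True: the sum is reliably positive

# Regime 2: one draw (a singleton technology such as the first
# superintelligent AI). Even with an 80% chance of a good outcome,
# 1 run in 5 ends in disaster -- and there is only one run, so
# nothing averages out.
def singleton_outcome():
    return "good" if random.random() < 0.8 else "disaster"

print(singleton_outcome())  # no ensemble to sum over
```

The point of the sketch is structural: in the first regime the law of large numbers does the work, while in the second regime expected value alone cannot console you about the one-in-five tail.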
Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person’s life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive.
Most obviously, it could cause an increase in world GDP without a commensurate acceleration in various risk-prevention mechanisms. Species can evolve themselves to extinction, and in a similar way humans could easily develop themselves to extinction if they are not careful or lucky. Messing around with various aspects of the human population would influence this… in one direction or another. It’s damn hard to predict.
Having a heuristic of “short-term lives saved == good” is useful. It massively simplifies calculations, and if you have no information either way about the side effects of the intervention then it works well enough. But it would be a significant epistemic error to mistake a heuristic for operating under uncertainty for confident knowledge of the unpredictable (or difficult-to-predict) system in which you are operating.
Can you really defend a situation in which it is preferable to have living people today die from malaria?
What is socially defensible is not the same thing as what is accurate. But that isn’t the point here. All else being equal, I would prefer AMF to have an extra million dollars to spend than to not have that extra million dollars. The expected value is positive. What I criticise is the claim “very likely under all reasonable hypotheses”, which is just way off. I do not have the epistemic resources to arrive at that confidence, and I believe that you are arriving at that conclusion in error, not because of additional knowledge or probabilistic computational resources.
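The gap between “positive in expectation” and “positive under all reasonable hypotheses” is easy to illustrate with a toy credence-weighted model. The hypotheses, weights, and payoffs below are entirely invented for the example:

```python
# Hypothetical credences over hypotheses about the long-run effect of
# an extra $1M to a charity, with an invented value for each.
hypotheses = [
    ("direct benefit dominates",         0.70, +10.0),
    ("offsetting systemic side effects", 0.25,  -2.0),
    ("rare catastrophic interaction",    0.05, -50.0),
]

# Expected value is a credence-weighted sum over all hypotheses.
expected_value = sum(p * v for _, p, v in hypotheses)
print(round(expected_value, 2))  # 4.0 with these invented numbers

# But positive expected value does not mean the outcome is positive
# under every reasonable hypothesis: here 30% of the credence sits on
# models where the effect is negative.
prob_negative = sum(p for _, p, v in hypotheses if v < 0)
print(round(prob_negative, 2))  # 0.3
```

The two printed quantities can come apart arbitrarily far: an action can have positive expected value while most hypotheses, or the most likely hypothesis, say it is harmful, and vice versa.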