I’m in essential agreement with Wei here. Nonparametric extrapolation sounds like a contradiction to me (though I’m open to counterexamples).
The “nonparametric” part of the FAI process is where you capture a detailed picture of human psychology as a starting point for extrapolation, instead of trying to give the AI Four Great Moral Principles. Applying extrapolative processes like “reflect to obtain self-judgments” or “update for the AI’s superior knowledge” to this picture is not particularly nonparametric—in a sense it’s not an estimator at all, it’s a constructor. But yes, the “extrapolation” part is definitely not a nonparametric extrapolation, I’m not really sure what that would mean.
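For readers less at home with the statistical terminology, here is a minimal illustrative sketch (Python, with made-up data, not any actual FAI design) of the parametric/nonparametric distinction being traded on: a parametric estimator compresses all the observations into a few parameters, while a nonparametric estimator keeps the detailed data points themselves around and smooths over them at query time.

```python
# Illustrative only: contrast a parametric fit (a few numbers summarize
# everything) with a nonparametric kernel smoother (the detailed data
# points are the "model"). Data and bandwidth are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * x) + 0.3 * rng.standard_normal(200)   # noisy observations

# Parametric: compress the data into two parameters (slope, intercept).
slope, intercept = np.polyfit(x, y, 1)
parametric_fit = slope * x + intercept

# Nonparametric: a Nadaraya-Watson kernel smoother; predictions lean on
# the full set of stored observations, weighted by closeness to the query.
def kernel_estimate(query, x_obs, y_obs, bandwidth=0.05):
    weights = np.exp(-0.5 * ((query - x_obs) / bandwidth) ** 2)
    return np.sum(weights * y_obs) / np.sum(weights)

nonparametric_fit = np.array([kernel_estimate(q, x, y) for q in x])
```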
But every extrapolation process starts with gathering detailed data points, so it confused me that you focused on “nonparametric” as a response to Robin’s argument. If Robin is right, an FAI should discard most of the detailed picture of human psychology it captures during its extrapolation process as errors and end up with a few simple moral principles on its own.
Can you clarify which of the following positions you agree with?
1. An FAI will end up with a few simple moral principles on its own.
2. We might as well do the extrapolation ourselves and program the results into the FAI.
3. Robin’s argument is wrong or doesn’t apply to the kind of moral extrapolation an FAI would do. It will end up with a transhuman morality that’s no less complex than human morality.
(Presumably you don’t agree with 2. I put it in just for completeness.)
2, certainly disagree. 1 vs. 3, don’t know in advance. But an FAI should not discard its detailed psychology as “error”; an AI is not subject to most of the “error” that we are talking about here. It could, however, discard various conclusions as specifically erroneous after having actually judged the errors, which is not at all the sort of correction represented by using simple models or smoothed estimators.
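To make that last contrast concrete, here is a toy sketch (hypothetical numbers, Python) of the difference between correction-by-smoothing, which treats all deviation from a simple model as noise, and correction of specifically diagnosed errors, which preserves whatever detail it does not actually judge mistaken.

```python
# Illustrative only: two notions of "correcting error" in a set of
# detailed judgments (toy data, not a proposal for how an FAI works).
import numpy as np

judgments = np.array([0.9, 0.8, 0.85, 0.1, 0.88, 0.92, 0.87])  # one outlier

# Smoothed-estimator style correction: replace the detail with a simple
# summary statistic, treating every deviation from it as error.
simple_model = np.full_like(judgments, judgments.mean())

# Case-by-case correction: keep the detail, flag only the judgments
# actually diagnosed as erroneous (a crude outlier test stands in here
# for a substantive judgment of the specific error), and fix just those.
median = np.median(judgments)
flagged = np.abs(judgments - median) > 0.5
corrected = judgments.copy()
corrected[flagged] = median
```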