(1) In any case, his argument seems reasonable: it may not be possible to have provable Friendliness, so it makes more sense to take an incremental approach to AGI than to refrain from AGI until Friendliness is proven.
That it’s impossible to find a course of action that is knowably good is not an argument for the goodness of pursuing a course of action that isn’t known to be good.
Certainly, but it is an argument for (2) the goodness of pursuing a course of action that is known to have a chance of being good.
You point out a correct statement (2) for which the incorrect argument (1) apparently argues. This doesn’t argue for correctness of the argument (1).
(A course of action that is known to have a chance of being good is already known to be good, in proportion to that chance (unless it’s also known to have a sufficient chance of being sufficiently bad). For AI to be Friendly doesn’t require absolute certainty in its goodness, but beware the fallacy of gray.)