Great post! It’s really nice to see some engagement with modern philosophy :)
I do wonder slightly how useful this particular comparison is, though. CEV and Ideal Advisor theories are aimed at quite different things. Moreover, since Ideal Advisor theories deal very much in ideals, the “advisors” they consider are usually supposed to be very much like actual humans. CEV, on the other hand, is precisely supposed to be an effective approximation, so it would be surprising if it actually proceeded by modelling a large number of instances of a person and then enhancing them cognitively. If it instead proceeds by some more approximate (or less brute-force) method, then it’s not clear that our usual reasoning about human beings applies to the “values advisor” you’d get out at the end of CEV. That seems to undermine Sobel’s arguments as applied to CEV.