I think Greg Egan makes an important point there, one that I have mentioned before, and John Baez seems to agree:
I agree that multiplying a very large cost or benefit by a very small probability to calculate the expected utility of some action is a highly unstable way to make decisions.
Actually, this was what I had in mind when I voiced my first attempt at criticizing the whole endeavour of friendly AI; I just didn’t know exactly what was causing my uneasiness.
I am still confused about it, but I think it isn’t much of a problem as long as friendly AI research is not funded at the cost of work on other risks that are more thoroughly grounded in empirical evidence than in merely logically valid arguments.
To be clear, as I wrote in the post above, I think that there are very strong arguments in support of friendly AI research. I believe that it is currently the most important cause one could support, but I also think that there is a limit to what one should do in the name of mere logical implications. Therefore I partly agree with Greg Egan.
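To make this instability concrete, here is a minimal numerical sketch of the kind of calculation being criticised. The probabilities and payoffs are invented purely for illustration and are not taken from anyone’s actual estimates:

```python
# Minimal sketch (invented figures) of why multiplying an enormous payoff by a
# tiny probability is an unstable basis for decisions: the ranking of options
# hinges on probability estimates that cannot be pinned down to within orders
# of magnitude.

def expected_utility(probability, payoff):
    """Naive expected utility: probability of success times value of success."""
    return probability * payoff

# Option A: a mundane intervention with well-understood odds and payoff.
mundane = expected_utility(0.9, 1_000)  # roughly 900 utility units

# Option B: an astronomical payoff (say, 10^15) attached to a speculative
# scenario whose probability is only guessed at.
for guess in (1e-10, 1e-12, 1e-14):
    speculative = expected_utility(guess, 1e15)
    better = "speculative" if speculative > mundane else "mundane"
    print(f"p={guess:.0e}: speculative EU = {speculative:,.0f} -> choose {better}")

# Shifting the guessed probability by a factor of 100, well within honest
# uncertainty for such scenarios, flips which option the calculation favours.
```

The point of the sketch is only that the conclusion tracks the guessed probability far more than it tracks anything observable, which is what makes the procedure unstable.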
ETA: There’s now another comment by Greg Egan:

All of Yudkowsky’s arguments about the dangers and benefits of AI are just appeals to intuition of various kinds, as indeed are the counter-arguments. So I wouldn’t hold your breath waiting for that to be settled. If he wants to live his own life based on his own hunches, that’s fine, but I see no reason for anyone else to take his land-grabs on terms like “rationality” and “altruism” at all seriously, merely because it’s not currently possible to provide mathematically rigorous proofs that his assignments of probabilities to various scenarios are incorrect. There’s an almost limitless supply of people who believe that their ideas are of Earth-shattering importance, and that it’s incumbent on the rest of the world to either follow them or spend their lives proving them wrong.

But clearly you’re showing no signs of throwing in productive work to devote your life to “Friendly AI” — or of selling a kidney in order to fund other people’s research in that area — so I should probably just breathe a sigh of relief, shut up and go back to my day job, until I have enough free time myself to contribute something useful to the Azimuth Project, get involved in refugee support again, or do any of the other “Rare Disease for Cute Kitten” activities on which the fate of all sentient life in the universe conspicuously does not hinge.