I begin by thanking Holden Karnofsky of GiveWell for the rare gift of his detailed, engaged, and helpfully-meant critical article Thoughts on the Singularity Institute (SI). In this reply I will engage with only one of the many subjects raised therein, the topic of, as I would term them, non-self-modifying planning Oracles, a.k.a. ‘Google Maps AGI’ a.k.a. ‘tool AI’, this being the topic that requires me personally to answer. I hope that my reply will be accepted as addressing the most important central points, though I did not have time to explore every avenue. I certainly do not wish to be logically rude, and if I have failed, please remember with compassion that it’s not always obvious to one person what another person will think was the central point.
Luke Muehlhauser and Carl Shulman contributed to this article, but the final edit was my own, likewise any flaws.
I think you’re starting to write more like a Friendly AI. This is totally a good thing.
Yes, the tone of this response should be commended.
Wouldn’t even a paperclip maximizer write in the same style in those circumstances?
IMO, speaking in arrogant absolutes makes people stupid regardless of what conclusion you’re arguing for.
No. It would start hacking things, take over the world, kill everything, then burn the cosmic commons.
Only once it has the power to do that. The meatbound equivalent would have to upload itself first.
Maybe that was Luke’s contribution ;)