This is definitely an underrated point. In general, I tend to think that worlds where Eliezer Yudkowsky, Nate Soares, Connor Leahy, and other very doomy people are right are worlds where humans just can't do much of anything about AI extinction risk, and that the rational response is essentially to do what Elizabeth's friend did in the link below: leave AI safety/AI governance and do something else worthwhile, since neither you nor anyone else can change the outcome:
https://www.lesswrong.com/posts/tv6KfHitijSyKCr6v/?commentId=Nm7rCq5ZfLuKj5x2G
In general, I think you need certain assumptions in order to justify working on either AI safety or AI governance, and those assumptions set at least a soft ceiling on how doomy you can coherently be. One of them is the assumption that feedback loops are available, which quite obviously rules out a lot of sharp-left-turn scenarios. More broadly, there's a limit to how extreme your scenarios for the difficulty of safety can get before you can't do anything at all, and I think a lot of classic LessWrong people like Nate Soares and Eliezer Yudkowsky, as well as more modern people like Connor Leahy, are way over the line of useful difficulty.