Years ago at the Singularity Institute, the Board was entertaining a proposal to expand somewhat. I wasn’t sure our funding could support the expansion, so I insisted that—if we started running out of money—we decide in advance who got fired and what got shut down, in what order. Even over the electronic aether, you could hear the uncomfortable silence. …
People are really, really reluctant to plan in advance for the abyss. But what good reason is there not to? How can you be worse off for knowing in advance what you’ll do in the worst cases?
I don’t suppose you can. But the process of deciding in advance can cause a lot of trouble. It would be necessary for people to argue in favour of, e.g., firing Eliezer Yudkowsky first, rather than anybody else. Then you might have to go on working with the person who made that argument. Perhaps after arguing, yourself, that they should be fired first instead.
Perhaps it would have been easier to decide in advance that everyone should take a pay cut…
Short version: People try to avoid hard choices because they are hard. If the choice will not have to be implemented for a long time, if ever, there is a lot of pressure to defer making it. After all, if you defer it long enough, you might never have to make it at all.
I don’t have to tell you that it’s easier to get a Singularity that goes horribly wrong than one that goes just right.
Don’t the acceleration-of-history arguments suggest that there will be another Singularity, a century or so after the next one? And another one shortly after that, etc.?
What are the chances that they will all go exactly right for us?