One way to plan for the future is to slow down the machinery taking us there, reducing at least some of the uncertainty about what is coming.
Another way to plan for the future is to do what I’ve done, which is to get old (70) so that you have far fewer chips on the table in the face of the uncertainty. Ok, sorry, not very helpful. But on the other hand, it’s most likely going to happen whether you plan for it or not, and some comfort might be taken from knowing that sooner or later we all earn a “get out of jail free” card.
For today, one of the things we have some hope of controlling is our relationship with risk, with living and dying. In an era characterized by historic uncertainty, that pursuit seems like a good investment.
AI safety is not the world’s most pressing problem. It is a symptom of the world’s most pressing problem: our unwillingness and/or inability to learn how to manage the pace of the knowledge explosion.
Our outdated relationship with knowledge is the problem. Nuclear weapons, AI, genetic engineering, and other technological risks are symptoms of that problem. EA writers insist on confusing the sources with the symptoms.
To make this less abstract, consider a factory assembly line. The factory is the source. The products rolling off the end of the assembly line are the symptoms.
EA writers (and the rest of the culture) insist on focusing on each product as it comes off the end of the assembly line, while the assembly line itself keeps accelerating. While you’re focused on the latest shiny product, the assembly line is ramping up to overwhelm you with a tsunami of new products.