To me it seems reasonable to focus on self-improving AI rather than on wars and nanotechnology. If we get the AI right, we can then task it with solving our problems with wars, nanotechnology, et cetera (the “suboptimal singleton” problem is included in “getting the AI right”). One solution helps us reach all the others.
As an analogy, imagine yourself as an intelligent designer of your favorite species. You can choose to give them one upgrade: fast feet, thick fur, improved senses, or a human-like brain. Of course you should choose the human-like brain, because it lets them fix their problems with feet, fur, and senses on their own. Likewise, when you have the opportunity to give them Friendly AI as the next upgrade, you should take it, because it will help them fix many other problems too.
This reasoning breaks down if the chance of getting Friendly AI right is extremely low while the chances of fixing the other problems are much higher; in that case it makes sense to fix the other problems first. The important point is that in the long term we want to fix all of these problems, so the question is not whether “A” is better than “B”, but whether “A, then B” is better than “B, then A”.
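To make the ordering question concrete, here is a toy expected-value sketch. All the numbers (the problem count, `P_FIX`, the `p_fai` values) are invented for illustration and are not from the argument above; the model assumes N problems and a budget of N project slots, where a direct fix spends one slot to solve one problem with probability `P_FIX`, and a Friendly-AI project spends one slot and, if it succeeds, solves everything at once.

```python
# Toy expected-value comparison of "AI first" vs. "direct fixes first".
# All numbers are hypothetical; only the shape of the comparison matters.

N = 5          # hypothetical count of problems: wars, nanotech, ...
P_FIX = 0.6    # assumed chance a direct fix solves one problem

def expected_solved(p_fai: float, fai_first: bool) -> float:
    """Expected number of problems solved within the N-slot budget."""
    direct_only = N * P_FIX          # all N slots spent on direct fixes;
                                     # the budget runs out before the AI try
    if not fai_first:
        return direct_only
    fallback = (N - 1) * P_FIX       # one slot went to the AI attempt
    return p_fai * N + (1 - p_fai) * fallback

for p_fai in (0.4, 0.05):
    print(f"p_fai={p_fai}: "
          f"AI first = {expected_solved(p_fai, True):.2f}, "
          f"fixes first = {expected_solved(p_fai, False):.2f}")
# p_fai=0.4 : AI first = 3.44, fixes first = 3.00 -> try the AI upgrade
# p_fai=0.05: AI first = 2.53, fixes first = 3.00 -> fix problems directly
```

With these made-up numbers, the AI-first ordering wins when the chance of Friendly AI is moderate and loses when it is tiny, which is exactly the caveat above.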