This is one of the issues where LW potentially disagrees with the rest of humanity, and I think the LW position (or at least the position you articulate) is actually wrong. I see many object-level ways to help the world once we have a magical optimizer: solve protein folding, model plasma containment, etc. And these are just the opportunities we can take advantage of with existing tech, but the optimizer can also help design new tech.
We can do lots of useful things, sure (this is not a point where we disagree), but they don’t add up to “saving the world”. These are just short-term benefits. Technological progress makes it easier to screw things up irrecoverably; advanced tech is the enemy. One generally shouldn’t advance technology if a distant end of the world is weighed as more important than immediate benefits (this value judgment may well be a real point of disagreement).
Modeling physical systems is already hard. I don’t think we can yet write down the dynamics of physical systems well enough (or rather, we don’t yet understand which characteristics matter most) to come up with a precise formulation of the major problems in synthetic biology or nanotechnology. I certainly concede that an optimizer would be helpful in solving many subproblems, and would considerably increase the speed of new developments in pretty much every field. I don’t think it solves many problems on its own, though.
But even if you could solve narrow existing technological problems or develop new technologies at a steady pace, it seems like you should be able to do more. Suppose the box can do in a minute what takes existing humans a million years. Then our only upper bound on our capabilities using the box is whatever we expect of a million years of progress at the current pace. I don’t know about you, but I expect pretty much everything.
Even having a magical computational-efficiency optimizer won’t currently help with saving the world. It could easily help with destroying it, though.
I agree with Nesov’s response, and would be interested to know if you’ve changed your mind since writing this comment.