You’re right, it is (2)! If we build an artificial intelligence that smart, with such absurd resources, then we _will_ be in danger. Doing this thing implies we lose.
However, that does not mean that not doing this thing implies we do not lose. A ⇒ B doesn’t mean ¬A ⇒ ¬B; that’s the classic fallacy of denying the antecedent. Just because simulating trillions of humans and then giving them internet access would be dangerous, that doesn’t mean it’s the only dangerous thing in the universe; that would be absurd. By that logic, we’d be immune to nuclear weapons or nanotech just because we don’t have enough computronium to simulate the solar system.
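If you want the fallacy made formal, here’s a minimal sketch in Lean 4 (my choice of formalism, not anything from the original argument) of the standard counterexample: take A := False and B := True, so A ⇒ B holds vacuously while ¬A ⇒ ¬B would amount to True ⇒ False.

```lean
-- Denying the antecedent: (A → B) does not entail (¬A → ¬B).
-- Standard counterexample: A := False, B := True.
-- Then A → B holds vacuously, while ¬A → ¬B would be True → False.
example : ¬ (∀ (A B : Prop), (A → B) → (¬A → ¬B)) := by
  intro h
  -- Instantiate h at the counterexample: it hands us ¬True,
  -- which applied to trivial : True yields the contradiction.
  exact h False True (fun _ => trivial) (fun f => f) trivial
```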
Your conclusion simply doesn’t follow. (Plus, the premise of the argument’s totally a strawman, but there’s no point killing a dead argument deader.)