Ultimately, yes. This whole debate is arguing that the critical threshold at which this becomes a problem is farther away than feared, and that we humans should empower ourselves with helpful, low-level superintelligences immediately.
It’s always better to be powerful than helpless, and helpless is the current situation. We are helpless against aging, death, pollution, resource shortages, enemy nations with nuclear weapons, disease, asteroid strikes, and so on. Hell, even just bad software: something current LLMs are likely only months away from empowering us to fix.
And Eliezer is saying not to take one more step towards fixing this because it MIGHT be hostile, when the entire universe is already against us. The universe already plans to kill us, whether through aging, the near-inevitability of nuclear war over a long enough timespan, or the sun eventually engulfing the Earth.
Eliezer is saying not to take one more step towards fixing this because it MIGHT be hostile
His position is to avoid taking one more step because it DEFINITELY kills everyone. I think it’s very clear that his position is not that it MIGHT be hostile.
Sure, and if there were some way to quantify the risks accurately, I would agree with pausing AGI research if the expected cost of the risks exceeded the potential benefit.
Oh, and if pausing were even possible.
All it takes is a rival power (and there are several), or just a rival company, and you have no choice. You must take the risk, because the technology might be a poisoned banana, or it might be a rocket launcher handed to the other primate in a sticks-and-stones society.
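The race dynamic described above is essentially a prisoner's dilemma. A minimal sketch, where the payoff numbers are purely illustrative assumptions (only the ordering of outcomes matters, not the values):

```python
# Hypothetical payoff matrix for the "pause vs. race" dynamic.
# Numbers are assumptions chosen only to encode the ordering of outcomes.
# Each entry is (our payoff, rival payoff); higher is better.
PAYOFFS = {
    ("pause", "pause"): (3, 3),  # coordinated pause: safest outcome
    ("pause", "race"):  (0, 4),  # rival gets the "rocket launcher" first
    ("race",  "pause"): (4, 0),  # we get it first
    ("race",  "race"):  (1, 1),  # both take the poisoned-banana risk
}

def best_response(rival_choice: str) -> str:
    """Return our payoff-maximizing move given the rival's move."""
    return max(("pause", "race"),
               key=lambda ours: PAYOFFS[(ours, rival_choice)][0])

# Racing is the dominant strategy: it is the best response to either
# rival move, which is why "you have no choice" even though a mutual
# pause would leave everyone better off.
print(best_response("pause"), best_response("race"))
```

Under these assumed payoffs, racing dominates pausing regardless of what the rival does, which is the structure of the argument above.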
This does explain why EY is so despondent. If he’s right, it doesn’t matter: the AI wars have begun, and only if the technology fails at a technical level will things ever slow down again.
The correctness of EY’s position (which is infeasible to assess) is unrelated to the question of what his position actually is, and the latter is what I was commenting on.
When you argue against the position that AGI research should be stopped because it might be dangerous, there is no need to additionally claim that someone in particular holds that position, especially when it seems clear that they don’t.
(My position is that there might be some steps that don’t kill everyone immediately, but probably still do immediately thereafter, while giving a bit more of a chance than doing all the other things that do kill us directly. Doing none of these things would be preferable, because at least aging doesn’t kill the civilization, but Moloch is the one in charge.)