If it’s true AI labs aren’t likely to be the cause of extinction, why is everyone upset about the arms race they’ve begun?
You can’t have it both ways: either the progress these labs are making is scary—in which case anything that disrupts them (and hence slows them down even if it doesn’t stop them) is good—or they’re on the wrong track, in which case we’re all fine.
I refer back to the first sentence of the message you’re replying to. I’m not having it both ways; you’re conflating different people’s opinions. My view is that the only remarkable thing about the labs is that they get there slightly sooner by having bigger computers. Even killing everyone at every big lab wouldn’t undo how much compute exists in the world, so at most it buys a year, at a steep cost to rule-based morality and to our knowledge of how to stop disaster. If you disagree with an argument someone else made, please lay it out. I probably never agreed with that person’s doom model in the first place.