Even in a crowd of AI doomers, no one person speaks for AI doomers. But plenty think it likely they're mistaken somehow. I personally just think the big labs aren't disproportionately likely to be the cause of an extinction-strength AI, so violence is overdeterminedly off the table as an effective strategy, before even considering whether it's justified, legal, or understandable. The only way we solve this is by constructing the better world.
If it’s true AI labs aren’t likely to be the cause of extinction, why is everyone upset at the arms race they’ve begun?
You can’t have it both ways: either the progress these labs are making is scary—in which case anything that disrupts them (and hence slows them down even if it doesn’t stop them) is good—or they’re on the wrong track, in which case we’re all fine.
I refer back to the first sentence of the message you're replying to. I'm not having it both ways; you're confusing different people's opinions. My view is that the only remarkable thing about the labs is that they get there slightly sooner by having bigger computers; even killing everyone at every big lab wouldn't undo how much compute there is in the world, so it buys at most a year, at an intense cost to rule morality and to our knowledge of how to stop disaster. If you disagree with an argument someone else made, lay it out, please. I probably never agreed with that person's doom model anyway.