Because it’s anti-social (in most cases; things like law enforcement are usually fine), and the only good timelines (by any metric) are pro-social.
Consider what happens if the situation came to resemble the Troubles in Northern Ireland. Does alignment get solved in that environment? No. What you get instead is people building AI war machines, and they don't bother with alignment because they're trying to gain an advantage over the enemy, not benefit everyone. Each side is incentivised to push capabilities as far as it can, right up to the singularity threshold; worse, there's no disincentive to crossing that threshold either, since each side is merely neutral about that outcome. So the real danger isn't that the AIs are war machines, it's that they're unaligned.
It’s a general principle that anti-social acts tend to harm utility overall, because second-order effects wash out the short-sighted first-order gains. Alignment is an explicitly pro-social endeavor!