How about a clear, brief, and correct strategy? The Sequences are often unclear, often go off on tangents that have nothing to do with AI safety, and are often wrong. How about making sure you’re right before you start shouting?
I really second this. If we’ve got clear arguments as to why the end of the world is nigh, then I don’t know them and I’d appreciate a link. And yes, I have read the fucking Sequences. Every last word. And I love them to bits; they are the 21st century’s most interesting literature by a very long way.
I’d be stunned if someone suddenly pulled a solution out of their ass, but it wouldn’t cause me an existential crisis like seeing an order-2 element in an order-9 group would.
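(For anyone who hasn’t met Lagrange’s theorem, a quick sketch of why that would be world-breaking rather than merely surprising; the theorem is standard, the gloss is mine. For a finite group $G$ and any element $g \in G$, the order of $g$ divides the order of $G$:
\[ \operatorname{ord}(g) \mid |G|. \]
With $|G| = 9$, an element of order 2 would need $2 \mid 9$, which is false. So spotting one wouldn’t mean I’d misjudged some evidence; it would mean the proof machinery itself was broken, which is a different kind of crisis from someone just being cleverer than I expected.)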
Whenever I try to convince someone about AI risk, it takes about a year to get them to realise that AI might be dangerous, and then once they do, they go off on one about quantum consciousness or something.
Some people just snap straight to the ‘oh shit’ moment like I did, but there aren’t many of them. (I’ve got a couple already this year, but it’s not much of a harvest.)
And I’m talking about clever people whose intelligence I respect more than my own in other contexts. I don’t even bother trying with normies.
I do wonder if it would have been the same trying to convince people that nuclear bombs would be dangerous when all you had was a vague idea of how a chain reaction might work, rather than being able to point at Hiroshima.
So I conclude that either:
(1) we’re wrong,
or (2) we’re not being clear,
or (3) I’m not very good at communicating,
or (4) really clever people are so bad at thinking that they can stare really good arguments in the face and not be moved by them.
And this matters.
If computer programmers and mathematicians were as united in believing in AI risk as the climate people are about climate change, there’d at least be a chance that governments would take it seriously, in the same half-hearted, fuck-everything-up, run-around-like-headless-chickens way that they’ve taken climate change seriously.
And fucking everything up is not a bad idea here. One thing governments can provably do is stifle innovation. If we kick the can down the road far enough, maybe the horse will sing?