“if I thought the chance of doom was 1% I’d say ‘full speed ahead!’”
This is not a reasonable view. Not on Longtermism, nor on mainstream common sense ethics. This is the view of someone willing to take unacceptable risks for the whole of humanity.
Why not ask him for his reasoning, then evaluate it?
If a person thinks there’s a 10% x-risk over the next 100 years if we don’t develop superhuman AGI, and only a 1% x-risk if we do, then he’d suggest that anybody in favour of pausing AI progress was taking “unacceptable risks for the whole of humanity”.
I totally get where you’re coming from, and if I thought the chance of doom was 1% I’d say “full speed ahead!”
As it is, at fifty-three years old, I’m one of the corpses I’m prepared to throw on the pile to stop AI.
Hell yes. That’s been needed rather urgently for a while now.
The reasoning was given in the comment prior to it: we want fast progress in order to get to immortality sooner.