My response is ‘the argument from the existence of new self made billionaires’.
There are giant holes in our collective understanding of the world, and giant opportunities. There are things that everyone misses until someone doesn’t.
A thing much smarter than human beings is simply going to be able to see things that we don’t notice. That is what it means for it to be smarter than us.
Given how high dimensional the universe is, it would be really weird, in my view, if none of the things that something way smarter than us can notice point to highly certain pathways for gaining enough power to wipe out humanity.
I mean sure, this is a handwavy, thought-experiment-level counterargument. And I can’t really think of any concrete physical evidence that might convince me otherwise. But, despite the weakness of this thought experiment as evidence, I’d be shocked if I ever viewed it as highly unlikely (i.e. less than one percent, or even less than ten percent) that a much smarter than human AI would be able to kill us.
And remember: To worry, we don’t need to prove that it can, just that it might.