People have suggested dozens of scenarios, from taking over the Internet to hacking militaries to producing nanoassemblers & eating everything.
But none of them make any sense to me, see below.
That is not what you said. I’ll requote it:
Wait, your quote said what I said I said you said I didn’t say.
Because that’s the best way to take over...
I have no idea. You don’t have any idea either or you’d have told me by now. You are just saying that magic will happen and the world will be ours. That’s the problem with risks from AI.
Let’s see, I’m too dumb to see that I am systematically underinvesting in research despite the high returns when I do investigate something other than X, and apparently I’m also too dumb to notice that I am underperforming compared to those oh-so-diverse humans’ research programs.
See, that’s the problem. The AI can’t acquire the resources that are necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?
Taking over the Internet is no answer, because the question is how. Building nanoassemblers is no answer, because the question is how.
I have no idea. You don’t have any idea either or you’d have told me by now. You are just saying that magic will happen and the world will be ours. That’s the problem with risks from AI.
We have plenty of ideas. Yvain posted a Discussion thread filled with ideas for how. “Alternate history” is an old sub-genre dating back at least to Mark Twain (who makes many concrete suggestions about how his Connecticut Yankee would do something similar).
But what’s the point? See my reply to Bugmaster—it’s impossible (or would defeat the point of the discussion) to actually execute the strategies, and anything short of execution is vulnerable to ‘that’s magic!11!!1’
The AI can’t acquire the resources that are necessary to acquire resources in the first place. It might figure out that it will need to pursue various strategies or build nanoassemblers, but how does it do that?
By reading the many discussions of what could go wrong and implementing whatever is easiest, like hacking computers. Oh the irony!