This is going to be very unpopular here, but I find the whole exercise quite ridiculous. If there are no constraints on what kind of AI you are allowed to imagine, the vague notion of “intelligence” used here amounts to a fully general counterargument.
It really comes down to the following recipe:
(1) Leave your artificial intelligence (AI) as vague as possible so that nobody can outline flaws in the scenario that you want to depict.
(2) Claim that almost any AI is going to be dangerous because all AIs want to take over the world. For example, if you ask the AI “Hey AI, calculate 1+1”, the AI goes FOOOOM and the end of the world follows seconds later.
(2.1) If someone has doubts, just use buzzwords such as ‘anthropomorphic bias’ to ridicule them.
(3) Forgo the difficulty of outlining why anyone would want to build the kind of AI you have in mind. We’re not concerned with how practical AI is developed, after all.
(4) Make your AI as powerful as you can imagine. Since you are ignoring practical AI development and don’t bother with details, this should be no problem.
(4.1) If someone questions the power of your AI, just outline how humans can intelligently design stuff that monkeys don’t understand. Therefore humans can design stuff that humans don’t understand, which will then itself start to design even more incomprehensible stuff.
(5) Outline how, as soon as you plug a superhuman machine into the Internet, it will be everywhere moments later, deleting all your porn videos. Don’t worry if you have no idea how that’s supposed to work in practice, because your AI is conjectured to be much smarter than you are, so you are allowed to depict scenarios that you don’t understand at all.
(5.1) If someone asks how much smarter you expect the AI to be, just make up something like “1000 times smarter”. Don’t worry about what that means, because you never defined what intelligence is supposed to be in the first place.
(5.2) If someone calls bullshit on your doomsday scenario, just conjecture nanotechnology to make your AI even more powerful, because everyone knows from science fiction how nanotech can pretty much fuck up everything.
(6) If nothing else works, frame your concerns as a prediction of a worst-case scenario that needs to be taken seriously, even given a low probability of its occurrence, due to the scale of negative consequences associated with it. Portray yourself as a concerned albeit calm researcher who questions the mainstream opinion out of a strong commitment to our collective future. To dramatize the situation even further, you can depict the long-term consequences and conjecture the possibility of an intergalactic civilization that depends on us.
I understand you have an axe to grind with some things that MIRI believes, but what Katja posted was a request for ideas with an aim towards mapping out the space of possibilities, not an argument. Posting a numbered, point-by-point refutation makes no sense.
It was not meant as a “refutation”, just a helpless and mostly emotional response to the large number of, in my opinion, hopelessly naive comments in this thread.
I know how hard it must be to understand how I feel about this. Try to imagine coming across a forum where, in all seriousness, people ask how to colonize the stars and everyone responds with things like “Ah, that’s easy! I can imagine many ways to do that. The most probable way is by using wormholes.” or “We could just transmit copies of our brains and hope that the alien analog of SETI will collect the data!”
Anyway, I am sorry for the nuisance. I regretted posting it shortly afterwards. Move along, nothing to see here!
(5.2) If someone calls bullshit on your doomsday scenario, just conjecture nanotechnology to make your AI even more powerful, because everyone knows from science fiction how nanotech can pretty much fuck up everything.
The exercise specifically calls for avoiding advanced nanotechnology.