That strikes me as incredibly likely to backfire. Most obviously, a story running to more than half a million words is a little much to ask as an introductory work, especially with things like the War of the Three Armies (because Death Note wasn’t complicated enough!). Media where our heroes destroy a planet also tend to have issues with word of mouth when not a comedy or written by Tomino.
More subtly, there are some serious criticisms of the idea of the Singularity, and more generally of transhumanism, which rest on things that would be obviated in HPMoR by the nature of Harry Potter starting as a fantasy series for young teens, and by the genre conventions of fantasy, rather than by the strength of MIRI’s arguments. Many of these criticisms are not terribly strong. They are still shouted as if strong AI were Rumpelstiltskin, unable to stand the sound of an oddly formed name, and HPMoR would have to be twisted very hard to counter them.
A lot of people think of strong AI as being like C3PO from Star Wars. Science fiction has the power to give people mental models even when it isn’t realistic.
The magical environment of the Matrix movies shapes how people think about the simulation argument.
Very true. I’d recommend against using Star Wars as a setting for cautionary tales about the Singularity, as well. The Harry Potter setting is just particularly bad, because in it we’ve already seen methods for producing human-intelligence artificial constructs that think just like a human. If Rationalist!Harry ends up having the solar system wallpapered with smiley faces, it’s a lot less believable that he did it because The Machine Doesn’t Care when quite a number of other machines in that setting already do.
You’ll have to fight assumptions like metaphysical dualism, or assumptions about what limitations self-reinforcing processes might have, no matter what you do, because those mental models apply in fairly broad strokes, but it’s a lot easier to do so when the setting isn’t fighting you at the same time.
I don’t think that you have to fight assumptions of metaphysical dualism. The people who don’t believe in UFAI as a risk on that basis are not the dangerous ones, the ones who might actually develop an AGI.
That’s an appealing thought, but I’m not sure it’s a true one.
For one, if we’re talking about appealing to general audiences, many folk won’t be trying to develop an AGI but will still be relevant to our interests. Thinking AGIs cannot invent because they lack souls, or that AGIs will be friendly if annoying golden translation droids, may be inconsistent with writing evolutionary algorithms, but it is certainly not inconsistent with having investment or political capital.
At a deeper level, a lot of folk do hold such beliefs and simultaneously have inconsistent belief structures, which may still leave them dangerous. It is demonstrably possible to have incorrect beliefs about evolution and yet run a PCR, or to think it’s easy to preserve semantic significance and yet be a computer programmer. It’s tempting to dismiss people who hold irrational beliefs, since rationality strongly correlates with long-term success, but from an absolute safety perspective that gets increasingly risky.
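To make the evolutionary-algorithms point concrete, here is a minimal sketch in Python of the sort of blind mutate-and-select loop such a programmer might write; the target string and mutation rate are arbitrary placeholders, purely for illustration. Nothing in the loop models the problem beyond a fitness score, yet it still converges on a string nobody hand-wrote.

```python
# A minimal (1+1) evolutionary algorithm: random mutation plus selection.
# TARGET, ALPHABET, and the mutation rate are arbitrary illustrative choices.
import random

TARGET = "METHODS OF RATIONALITY"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "


def fitness(candidate: str) -> int:
    """Number of positions where the candidate already matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))


def mutate(candidate: str, rate: float = 0.05) -> str:
    """Independently replace each character with a random one at the given rate."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )


random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(parent) < len(TARGET):
    child = mutate(parent)
    # Keep the child whenever it scores at least as well as the parent.
    if fitness(child) >= fitness(parent):
        parent = child
    generations += 1

print(f"Matched {TARGET!r} after {generations} generations of blind mutation.")
```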
You need a bit more to develop an AGI than the ability to run a PCR that someone else invented. I don’t think you can develop an AGI when you think AGI is impossible due to metaphysical dualism.
You can believe that humans have souls and still design AGIs that have minds but no souls, but you won’t get far at developing an AGI with something like a mind if you think that task is impossible.