Very true. I’d recommend against using Star Wars as a setting for cautionary tales about the Singularity, as well. The Harry Potter setting is just particularly bad, because we’ve already encountered methods for producing human-intelligence artificial constructs that think just like a human. If Rationalist!Harry ends up having the solar system wallpapered with smiley faces, it’s a lot less believable that he did it because The Machine Doesn’t Care when quite a number of other machines already have.
You’ll have to fight assumptions like metaphysical dualism, or beliefs about what limitations self-reinforcing processes might have, no matter what you do, because those mental models apply in fairly broad strokes; but it’s a lot easier to do so when the setting isn’t fighting you at the same time.
I don’t think that you have to fight assumptions of metaphysical dualism. I think the people who don’t believe in UFAI as a risk on that basis are not the dangerous ones who might develop an AGI.
That’s an appealing thought, but I’m not sure it’s a true one.
For one, if we’re talking about appealing to general audiences, many folk won’t be trying to develop an AGI but will still be relevant to our interests. Thinking that AGI cannot invent because they lack souls, or that AGI will be friendly if annoying golden translation droids, may be inconsistent with writing evolutionary algorithms, but it is not necessarily inconsistent with holding investment or political capital.
At a deeper level, a lot of folk do hold such beliefs and simultaneously have inconsistent belief structures, which may still leave them dangerous. It is demonstrably possible to have incorrect beliefs about evolution yet run a PCR, or to think it’s easy to preserve semantic significance but also be a computer programmer. It’s tempting to dismiss people who hold irrational beliefs, since rationality strongly correlates with long-term success, but from an absolute-safety perspective that dismissal gets increasingly risky.
You need a bit more to develop an AGI than the ability to run a PCR that someone else invented. I don’t think you can develop an AGI when you think AGI is impossible due to metaphysical dualism.
You can believe that humans have souls and still design an AGI that has a mind but no soul, but you won’t get far at developing an AGI with something like a mind if you think that task is impossible.