I’ve been meaning to post this for a while, not because it is a novel idea, but just so it is recorded somewhere.
I think there is a good chance that the story finishes with a superintelligence (SI) of sorts. Furthermore, I think that if an SI is actually brought into the story, there is at least a 50% chance that it will be an SI built/cast with good intentions which nonetheless destroys (in some sense) humanity and/or the universe.
Doubtful. I think people would complain about HPMoR becoming too transparently an Author Tract in that case.
Keep in mind that this is already one of the more common criticisms of the story.
I guess by “too transparently” I mean the following. The worst kind of author tract is when an author shoehorns in some point they want to make in a way that has nothing to do with the rest of the story and distracts from it. We already know that HPMoR is about rationality and whatnot—it’s exactly what it says on the tin in that respect—but nothing in the story so far suggests the later appearance of a superintelligence, and if one were to appear, it would feel shoehorned in.
Eliezer specifically and publicly said that this will not happen. There will be no superintelligent AI in HPMoR. I see no reason to doubt Eliezer’s word on the matter.
I’m pretty sure that what he said was that nothing was intended as an allegory—or maybe a metaphor, or something of the sort—for an artificial superintelligence.
Somebody has the link, I expect.
Fair enough.
What is Magic besides some form of superintelligence, or at least the remnants of superintelligence? The strongest evidence is that magic-users and even the creators of spells don’t really have to understand how the spells actually work in order to use them. There is information entering the system from somewhere, and it’s enough information to accurately interpret the vague wand movements and sounds of humans and do sufficiently amazing things without too many chaotic side-effects. Even the chaotic side-effects are usually improbably harmless. It’s like an almost-Friendly, or perhaps a broken previously-Friendly, AI. Possibly the result of some ancient Singularity that is no longer explicitly remembered.
You don’t need to know how muscles work in order to use them to move.
You also don’t need to know how algorithms work in order to use them, or even to implement them. I don’t really understand why Ukkonen’s algorithm works, but I’ve implemented it. You haven’t seen magic until you’ve seen the suffixes of a string sorted in linear time.
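To make that concrete, here is a minimal suffix-sorting sketch in Python (mine, not from Ukkonen). It is the brute-force version, roughly O(n^2 log n), not the linear-time suffix-tree construction; the gap between the two is exactly the point about using something without understanding it.

    def suffix_array(s):
        """Indices of the suffixes of s, in sorted order.

        Brute force: compare whole suffixes and sort, roughly
        O(n^2 log n) in the worst case. Ukkonen's suffix-tree
        construction gets the same ordering in O(n).
        """
        return sorted(range(len(s)), key=lambda i: s[i:])

    # Example: the sorted suffixes of "banana".
    order = suffix_array("banana")
    print(order)                          # [5, 3, 1, 0, 4, 2]
    print(["banana"[i:] for i in order])  # ['a', 'ana', 'anana', 'banana', 'nana', 'na']

The one-liner requires no insight at all; the linear-time version takes suffix links and implicit nodes even to state, and you can still transcribe it faithfully without grasping why it is correct.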
Here’s another, roughly isomorphic statement:

What is Gravity besides some form of superintelligence, or at least the remnants of superintelligence? The strongest evidence is that engineers and even physicists don’t really have to understand how gravity actually works in order to use it. There is information entering the system from somewhere, and it’s enough information to accurately detect when an object is unsupported or structurally unstable. And the chaotic side-effects tend to be improbably harmful. It’s like an almost-Friendly, or perhaps a broken previously-Friendly, AI. Possibly the result of some ancient Singularity that is no longer explicitly remembered.
Never mind, I see your point, although I still disagree with your conclusion on the grounds of narrative plausibility and good writing.