Consider that in canon, Aberforth is alive, and Dumbledore spews folk wisdom on love.
Having read this chapter, I now believe that Eliezer intended Dumbledore to represent a failed attempt at creating Friendly AI.
Nothing in this story so far represents either FAI or UFAI. Consider it Word of God.
Thank all that is amoral. It is well and good that this is made clear.
Wait...
Damn it all.
And I have no intention at this time to do it later, but don’t want to make it a blanket prohibition. Who knows, I could run into a possible metaphor that would, improbably, not suck like a giant black hole.
No intention?
So the story isn’t planned to end with Harry creating a godlike super entity operating outside of time to ensure eternal life for all thinking beings, to download all consciousnesses throughout history at their point of death, to lay the foundations of magic, and in that way to all at once resolve questions regarding Atlantis, the afterlife, and how to write the happiest, most clearly drunk-on-wish-fulfillment, most indulgent-of-the-‘bargaining’-stage-of-grief/acceptance ending possible?
Surely not, if the story is nearer its end than its beginning, given its pacing so far. Given Eliezer’s beliefs about FAI, and that the story is not supposed to lie, Harry attempting to create a godlike AI without years of careful research should result in a Bad End.
I admit I had not considered that when making my ridiculous proposal.
However, EY has suggested that a good end and a bad end are already written. The bad end of Three Worlds Collide was ‘happily ever after,’ so the fact that my ridiculous proposal fails your entirely reasonable criteria for a good end, by teaching a poor moral, doesn’t make it any less valid as a bad end.
The Atlantis thing was proposed in a chapter titled “Hold Off On Proposing Solutions”.
That doesn’t mean, “Don’t Propose a Solution.”
Seems kind of like retreading ground already covered by Spider Robinson, to me.
Oh, that kills my secret hypothesis that the “Source of Magic” was a failed attempt at making an FAI (well, I guess I wasn’t the first to think about it anyway). Thanks for dispelling the doubt.
Actually, the Word of Eliezer hasn’t killed this hypothesis yet. He only said that nothing in the story represents either FAI or UFAI in the allegorical sense. But a real FAI/UFAI could still be there.
But this comment of EY’s effectively kills it. If an FAI were the source of magic, it would have to be ‘represented’ in one of the later arcs to explain it.
Thank you.
I’ve thought that some of Harry’s earlier exploits put Dumbledore and the teachers in the position of trying to deal with an AI, but I suppose that Harry isn’t self-improving enough to be more than a lightweight sketch of the early stages.
How’s that? He doesn’t seem very, you know, apocalyptic.
Eliezer would never portray a failed creation of an FAI as someone so impotent and comparatively benign.
Maybe he wouldn’t, but that is a fact about him, not about AI. There’s a narrow slice of concept space, containing some UFAIs, that is almost benign. Not that I think it’s likely we could intentionally build such an entity. And we shouldn’t want to, for basically the same reasons we shouldn’t want to build UFAI generally.
Yes. I asserted a fact about Eliezer, not about AI or green cheese. No ‘but’ is required.