Whoah, whoah. Wait. Back up.

What the heck?

If not all of them, at least Harry does.

No he doesn’t. He’s just an AI creator. If he were representing an AI, the entire story would have been over at Chapter 1.
Eliezer would not write a parable in which an AI made a slow journey to power, on the way learning valuable moral lessons through social interaction with the people he cares about. That completely undermines all that he stands for.
Therefore, for Harry to do anything other than destroy everything that we, the readers, hold dear, would go against everything Eliezer has written about AI.
Harry destroying the world would make a reasonable conclusion, albeit for different reasons than Harry being an AI. That said, the lesson to be learned is too complicated for fanfic readers to be expected to learn from just a story. It is hard to convey that Harry made the correct decision in trying to save the world even though he failed and destroyed everything, because the alternative was certain failure and at least his chances were better than that.
Which reminds me: I hope there is a chapter in which Harry (or someone else) loses something dramatically but realizes (or has Harry explain to them) that they made the right choice anyway. And the converse in which a decision is wrong even though it turned out well.
> Which reminds me: I hope there is a chapter in which Harry (or someone else) loses something dramatically but realizes (or has Harry explain to them) that they made the right choice anyway. And the converse in which a decision is wrong even though it turned out well.

Agreed.
Harry is not an AI creator. That would require him to create an AI. There is no AI in the story, and no hints that there will be any AI. Harry is on his way to achieving personal power, without any technological intermediary.
> Eliezer would not write a parable in which an AI made a slow journey to power, on the way learning valuable moral lessons through social interaction with the people he cares about. That completely undermines all that he stands for.
Why? Do you think Eliezer believes AIs are incapable of learning valuable moral lessons through social interaction with the people they care about? Presumably, even within Eliezer’s model, an AI can be moral and kind and loving to humans for some time period. If you think that “unfriendly AI” will necessarily be amoral and incapable of love—well, that’s just silly. The reasons for believing that are exactly the same as the reasons for believing that computers can never be intelligent.
As to “slow”, that’s just one of the things that scales differently because of the analogy being used.
> Harry is not an AI creator. That would require him to create an AI.
What the? I’m downgrading from your “Harry represents an AI” to “Harry represents an AI creator” and now you want to get technical and say “there is no AI in the story”?
> Why? Do you think Eliezer believes AIs are incapable of learning valuable moral lessons through social interaction with the people they care about? Presumably, even within Eliezer’s model, an AI can be moral and kind and loving to humans for some time period. If you think that “unfriendly AI” will necessarily be amoral and incapable of love—well, that’s just silly.
I believe Eliezer would rather dip his… arm… in a vat of acid than suggest to his readers that an AI will act like a clever, well-intentioned child that learns valuable moral lessons through social interactions with those he loves, over a long period of a slow, rather inefficient quest for power.
> I believe Eliezer would rather dip his… arm… in a vat of acid than suggest to his readers that an AI will act like a clever, well-intentioned child that learns valuable moral lessons through social interactions with those he loves, over a long period of a slow, rather inefficient quest for power.
Please cool it down a bit.
Just two chapters ago, Harry was compared to an all-powerful summoned entity that knew nothing about how humans worked and so had to be taught not to eat people and stuff. Hermione accepted that comparison as somewhat fitting. So the comparison of Harry to non-human intelligences is MoR!Canonical, even though it was a huge exaggeration even within the context of the story, expressed by people genuinely scared of Harry—the Harry who seemed more powerful than a Nameless Horror and might need the magical nations of the world to ally against him in order to seal off his incursion into our reality.
My hyperbole detector is going off like crazy over here. You should probably cease this conversation before it becomes even more heated. Both of you.
I don’t think you get it. If you think you can substitute “write something slightly at odds with his beliefs into a fanfiction” even remotely appropriately into that context then you do not have enough knowledge about Eliezer’s publicly expressed goals to be making the kind of moral judgement you presume to make.
I wrote my claim because it sounds like hyperbole but isn’t. For the right value of ‘acid’ my claim is a literal counterfactual prediction. And I don’t mean a “dipping his arm in vinegar” technicality. I’d estimate the approximate value of acid required to make the preference stand is something that would fall short of leaving permanent scars but would give him significant pain for a week. Maybe that is a little light… perhaps leaving some scarring while preserving full function and sensitivity in the hand, so he can still type. Just so long as some of that typing constitutes a massive disclaimer and retraction to undo some of the damage his toxic propaganda would have done.
I said Harry represents an AI. You said Harry is an AI creator. I was just taking you at your word. It didn’t occur to me that you meant Harry represents an AI creator, because that wouldn’t make any more sense. Harry does not represent an AI creator unless he creates something that is in some way analogous to an AI. There is no hint he will do that. Harry is not creating an AI, and is not creating anything analogous to an AI.
Whereas Harry has properties analogous to a recursively self-improving AI on the way to having godlike power. And the danger that Harry will pose once he attains these godlike powers is one of the central themes of the fanfic, and is highlighted throughout the entire fanfic, over and over again.
> I believe Eliezer would rather dip his… arm… in a vat of acid than suggest to his readers that an AI will act like a clever, well-intentioned child that learns valuable moral lessons through social interactions with those he loves, over a long period of a slow, rather inefficient quest for power.
The slowness is due to humans, who represent the AIs, operating only at human speeds, and does not detract from my point.
I still have the impression that you think Eliezer’s view implies that an AI cannot be well-intentioned, or learn valuable moral lessons through social interactions with those he loves. Why? That is exactly as absurd as saying that computers can never be intelligent. (I am ignoring questions of whether a computer program can subjectively experience love, since “love” used as a verb refers to the action rather than to the subjective experience. But, still, anyone who follows Eliezer at all should be a materialist and therefore believe that a computer program can in principle love as well as a person can.)
> I said Harry represents an AI. You said Harry is an AI creator. I was just taking you at your word.
Your interpretation is technically grammatically valid but not remotely reasonable.
> I still have the impression that you think Eliezer’s view implies that an AI cannot be well-intentioned, or learn valuable moral lessons through social interactions with those he loves. Why? That is exactly as absurd as saying that computers can never be intelligent.
This impression, however, is not even a technically valid interpretation.