I agree with the frustration. Wolfram was being deliberately obtuse. Eliezer summarised it well toward the end, something like "I am telling you that the forest is on fire and you are telling me that we first need to define what we mean by fire". I understand that we need definitions for things like "agency", or technology "wanting" something, or even what we mean by a "human" in the year 2070. But Wolfram went a bit too far: a naive genius who did not want to play along in the conversation. Smart teenagers talk like that.
Another issue with this conversation was that, even though they were listening to each other, Wolfram was too keen to return to his current pet ideas. Eliezer's argument is (I'm not sure) independent of whether we think AIs will fall under computational "irreducibility", but Wolfram kept going back to this over and over.
I blame the ineffective exchange primarily on Wolfram in this case. Eliezer is also somewhat responsible for the useless rabbit holes in this conversation. He explains his ideas vividly and clearly. But there is something about his rhetorical style that does not persuade those who have not spent time engaging with his ideas beforehand, even someone as impressive as Wolfram. He also dwells too long on some detail or contrived example rather than ensuring that his interlocutor is on the same epistemological plane.
Anyway, a fun thing to listen to.