The “strongest” foot I could put forward is my response to “On current AI not being self-improving:”, where I’m pretty sure you’re just wrong.
You straightforwardly completely misunderstood what I was trying to say on the Bankless podcast: I was saying that GPT-4 does not get smarter each time an instance of it is run in inference mode.
And that’s that, I guess.
I’ll admit it straight up did not occur to me that you could possibly be analogizing between a human’s lifelong, online learning process, and a single inference run of an already trained model. Those are just completely different things in my ontology.
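For concreteness, here’s a minimal PyTorch sketch of the distinction I mean. The toy `nn.Linear` model, input sizes, and loss are placeholders I’m assuming purely for illustration (GPT-4’s actual architecture and weights aren’t public): running a trained model in inference mode leaves its weights untouched no matter how many times it executes, whereas a training step is what actually changes them.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained model, for illustration only.
model = nn.Linear(16, 16)

# Inference: weights are frozen, no gradients flow, no learning happens.
# Running this a million times leaves the model exactly as it was.
model.eval()
with torch.no_grad():
    before = model.weight.clone()
    _ = model(torch.randn(1, 16))
    assert torch.equal(model.weight, before)  # nothing was updated

# Training: a gradient step actually changes the weights. This is the
# (separate, offline) process that made the model capable in the first
# place; it does not occur during an inference run.
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(1, 16)).pow(2).mean()
loss.backward()
optimizer.step()
assert not torch.equal(model.weight, before)  # weights moved
```

The point of the sketch is just that the two code paths are mechanically different operations, which is why equating a single inference run with a human’s ongoing learning didn’t occur to me.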
Anyways, thank you for your response. I actually do think it helped clarify your perspective for me.
Edit: I have now included Yudkowsky’s correction of his intent in the post, as well as an explanation of why I think his corrected argument is still wrong.