I stand by pretty much everything I wrote in Objections, with the partial exception of the stuff about strawberry alignment, which I should probably rewrite at some point.
Also, Yudkowsky explained exactly how he’d prefer someone to engage with his position: “To grapple with the intellectual content of my ideas, consider picking one item from ‘A List of Lethalities’ and engaging with that.” I pointed out that I’d previously done exactly that, in a post that literally quotes one point from the List of Lethalities and explains why it’s wrong. I’ve gotten no response from him on that post, so it seems clear that Yudkowsky isn’t running an optimal ‘good discourse promoting’ engagement policy.
I don’t hold that against him, though. I personally hate arguing with people on this site.
Unless I’m greatly misremembering, you did pick out what you said was your strongest item from Lethalities, separately from this, and I responded to it. You’d just straightforwardly misunderstood my argument in that case, so it wasn’t a long response, but I responded. Asking for a second try is one thing, but I don’t think it’s cool to act like you never picked out any one item or I never responded to it.
EDIT: I’m misremembering, it was Quintin’s strongest point about the Bankless podcast. https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky?commentId=cr54ivfjndn6dxraD
I’m kind of ambivalent about this. On the one hand, when there is a misunderstanding but he claims his argument still goes through once the misunderstanding is corrected, it seems like you should also address that corrected form. On the other hand, Quintin Pope’s correction does seem very silly, at least by my analysis:
“Similarly, the reason that ‘GPT-4 does not get smarter each time an instance of it is run in inference mode’ is because it’s not programmed to do that. OpenAI could continuously train its models on the inputs you give it, such that the model adapts to your particular interaction style and content, even during the course of a single conversation, similar to the approach suggested in this paper. Doing so would be significantly more expensive and complicated on the backend, and it would also open GPT-4 up to data poisoning attacks.”
This approach considers only the things OpenAI could do with their current ChatGPT setup, and yes, it’s correct that there’s not much online learning opportunity in this. But that’s precisely why you’d expect GPT+DPO not to be the future of AI: Quintin Pope has clearly identified a capabilities bottleneck that prevents it from staying fully competitive. (Note that humans can keep learning even when a fraction of people share intentionally malicious information, because unlike GPT and DPO, humans don’t believe everything they’re told.)
A more autonomous AI could collect actionable information at much greater scale, since it wouldn’t depend on trusting its users to decide what information to update on, and it would have much more information about what’s going on than chat-based I/O provides.
This sure does look to me like a huge bottleneck blocking current AI methods, analogous to the evolutionary bottleneck: the full power of the AI cannot be used to accumulate orders of magnitude more information to further improve the power of the AI.
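To make the quoted proposal concrete, here is a minimal sketch of what per-conversation online updating could look like, assuming a Hugging Face causal LM (gpt2 as a stand-in) and one gradient step per user turn; the model name and hyperparameters are illustrative, not a description of OpenAI’s actual backend. It also makes the data-poisoning concern obvious: whatever the user typed goes straight into the loss.

```python
# Minimal sketch of per-conversation online updating: generate a reply, then
# take one gradient step on the text the user has supplied so far.
# "gpt2" is a stand-in for a much larger chat model; all settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def respond_and_update(conversation_so_far: str, max_new_tokens: int = 50) -> str:
    """Generate a reply, then fine-tune the model on the conversation so far."""
    inputs = tokenizer(conversation_so_far, return_tensors="pt")

    # Inference: produce the next reply.
    model.eval()
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:])

    # Online update: one gradient step on the user-supplied text. This is the
    # step that opens the door to data poisoning, since whatever the user typed
    # is treated directly as training data.
    model.train()
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return reply
```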