Why would it move toward Paul? He made almost no arguments, and Eliezer made lots. When Paul entered the chat, the conversation was focused on describing what each of them believes in order to find a bet, not on communicating why they believe it.
I think I was expecting somewhat better from EY: more solid, well-explained arguments/rebuttals to Paul’s points from “Takeoff Speeds.” EY also seemed angry and uncharitable, as opposed to calm and rational. I was imagining an audience that mostly already agrees with Paul encountering this and thinking “Yeah, this confirms what we already thought.”
FWIW, “yeah this confirms what we already thought” makes no sense to me. I heard someone say this the other day, and I was a bit floored. Who knew that Eliezer would respond with a long list of examples that didn’t look like continuous progress at the time, and said so more than 3 days ago?
I feel like I got a much better sense of Eliezer’s perspective from reading this. One key element is whether AI progress is surprising: it often is, because even though you can construct trend-line arguments after the fact, people basically don’t make them in advance, and when they do they often get them wrong. (Here’s an example of Dario Amodei + Danny Hernandez finding a trend in AI that apparently stopped holding as soon as they reported it.) There are also lots of details about what the chimps-to-humans transition shows, and various other points (like regulation preventing most AI progress from showing up in GDP).
I do think I could’ve gotten a lot of this understanding earlier by reading IEM more carefully, and now that I’m rereading it I get it much better. But as far as I can see, nobody has engaged with the arguments in it and tried to connect them to Paul’s post. Perhaps someone did, and I’d be pretty interested to read that now with the benefit of hindsight.
Who knew that Eliezer would respond with a long list of examples that didn’t look like continuous progress at the time, and said so more than 3 days ago?

What examples are you thinking of here? I see (1) humans and chimps, (2) nukes, (3) AlphaGo, (4) invention of airplanes by the Wright brothers, (5) AlphaFold 2, (6) Transformers, (7) TPUs, and (8) GPT-3.
I’ve explicitly seen 1, 2, and probably 4 in arguments before. (1 and 2 are in “Takeoff Speeds.”) The remainder seem like they plausibly did look like continuous progress* at the time. (Paul explicitly challenged 3, 6, and 7, and I feel like 5 and 8 are also debatable, though 8 is a more complicated story.) I also think I’ve seen some of 3, 5, 6, 7, and 8 claimed on Eliezer’s Twitter as evidence for Eliezer over Hanson in the foom debate, though I don’t remember which off the top of my head.
I did not know that Eliezer would respond with this list of examples, but that’s mostly because I expected him to have different arguments: e.g., more emphasis on a core of intelligence that current systems don’t have and future systems will have, or more emphasis on aspects of recursive self-improvement, or some unknown argument (I hadn’t talked to Eliezer nor seen a rebuttal from him, so it seemed quite plausible he had points I hadn’t considered). The list of examples itself was not all that novel to me.
(Eliezer of course also has other arguments in this post; I’m just confused about the emphasis on a “long list of examples” in the parent comment.)
* Note that “continuous progress” here is a stand-in for the-strategy-Paul-uses-to-predict, which as I understand it is more like “form beliefs about how outputs scale with effort in this domain using past examples / trend lines, then see how much effort is being added now relative to the past, and use that to make a prediction”.
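A minimal sketch of that strategy, as I understand it (my own toy construction, not Paul’s actual method; the log-linear form and all the numbers are assumptions for illustration):

```python
import numpy as np

# Toy sketch of the footnote's strategy (my construction, not Paul's
# actual method): fit how log(output) scales with log(effort) on past
# data, then extrapolate to the current effort level.

def fit_effort_curve(efforts, outputs):
    """Fit log(output) = a * log(effort) + b to historical data."""
    a, b = np.polyfit(np.log(efforts), np.log(outputs), deg=1)
    return a, b

def predict_output(effort, a, b):
    """Extrapolate the fitted trend to a new effort level."""
    return float(np.exp(a * np.log(effort) + b))

# Entirely hypothetical (effort, output) history.
past_efforts = np.array([1.0, 2.0, 4.0, 8.0])
past_outputs = np.array([1.0, 1.9, 4.2, 7.8])

a, b = fit_effort_curve(past_efforts, past_outputs)

# "See how much effort is being added now relative to the past":
# suppose effort has jumped to 32 units, beyond the historical range.
print(predict_output(32.0, a, b))
```

The disagreement, on this framing, is about whether the fitted curve keeps holding when effort (or the underlying technology) moves well outside the historical range.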
That’s helpful, thanks!
To be clear, I think that if EY put more effort into it (and perhaps had some help from other people as RAs) he could write a book or sequence rebutting Paul & Katja much more thoroughly and convincingly than this post did. [ETA: I.e., I’m much more on Team Yud than Team Paul here.] What was said here felt to me like a rehashing of material from IEM and the Hanson-Yudkowsky AI foom debate. [ETA: Lots of these points were good! Just not surprising to me, and not presented as succinctly and compellingly (to an audience of me) as they could have been.]
Also, it’s plausible that a lot of what’s happening here is that I’m mistaking my own cruxes and confusions for The Big Points EY Objectively Should Have Covered To Be More Convincing. :)
ETA: And the fact that people updated towards EY on average, and significantly so, definitely updates me more towards this hypothesis!
This is my take: if I had been very epistemically self-aware, and had carefully distinguished my own impressions/models from my all-things-considered beliefs before I started reading, then this would’ve updated my models towards Eliezer (because hey, I heard new not-entirely-uncompelling arguments) but my all-things-considered beliefs away from Eliezer (because I would have expected it to be even more convincing).
I’m not that surprised by the survey results. Most people don’t obey conservation of expected evidence, because they don’t take into account arguments they haven’t heard / don’t think carefully enough about how deferring to others works. People will predictably update toward a thesis after reading a book that argues for it, not have a 50/50 chance of updating positively or negatively on it.
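For reference, conservation of expected evidence is just the law of total probability applied to one’s future beliefs: the prior must equal the probability-weighted average of the possible posteriors,

$$P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E).$$

Strictly speaking, what it forbids is a nonzero *expected* update, not an uneven chance of direction: a reader can be nearly certain to update a little toward a book’s thesis, balanced by a small chance of updating a lot away from it.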
I didn’t move significantly towards either party, but it seemed like Eliezer was avoiding bets and, in my humble opinion, making his theory unfalsifiable rather than showing where its true weak points are. That doesn’t seem like what a confidently correct person would do (but it was already mostly what I expected, so I didn’t update much on his theory’s truth value).
ETA: After re-reading my comment, I feel I may have come off too strong. I’ll completely unendorse my language and comment if people think this sort of thing is not conducive to productive discourse. Also, I greatly appreciate both parties for doing this.
I find it valuable to know what impressions other people had themselves; it only becomes tone-policing when you worry loudly about what impressions other people ‘might’ have. (If one is worried about how it looks to say so publicly, one could always just DM me (though I might not respond).)
FWIW I also don’t like the phrasing of my comment very much either. I came back thinking to remove it but saw you’d already replied :P