My sense is that neither of us has been very persuaded by those conversations, and I claim that’s not very surprising, in a way that’s epistemically defensible for both of us. I’ve spent literal years working through the topic myself in great detail, so it would be very surprising if my view were easily swayed by a short comment chain—and I expect that the same is true of you: you’ve spent much more time thinking about this and have much more detailed thoughts than are easy to represent in a simple comment chain.
I’ve thought about this claim more over the last year. I now disagree. I think that this explanation makes us feel good but ultimately isn’t true.
I can point to several times where I have quickly changed my mind on issues that I have spent months or years considering:
1. In early 2022, I discarded my entire alignment worldview over the course of two weeks due to Quintin Pope’s arguments. Most of the evidence which changed my mind was communicated over Google Doc comment threads. I had formed my worldview over the course of four years of thought, and it crumbled pretty quickly.
2. In mid-2022, realizing that reward is not the optimization target took me about 10 minutes, even though I had spent four years and thousands of hours thinking about optimal policies. It happened while reading an RL paper that said “agents are trained to maximize reward”: I reflexively asked myself what evidence existed for that claim, and came up mostly blank. So that’s not quite a comment thread, but it still seems like the same low-bandwidth medium.
3. In early 2023, a basic RL result came out opposite to what shard theory predicted. I went on a walk and thought about how maybe shard theory was all wrong and maybe I didn’t know what I was talking about. I didn’t need someone to beat me over the head with days of arguments and experimental results. In the end, I came back from my walk and realized I’d plotted the data incorrectly (the predicted outcome did in fact occur).
I think I’ve probably also changed my mind on a range of smaller issues (closer to the size of the deceptive alignment case) but have forgotten about them. Example (1) above particularly suggests that there were other fast, Google-Doc-mediated updates: where I remember one example, I have probably forgotten several more.
To conclude, I think people in comment sections do in fact spend lots of effort trying to avoid looking dumb, wrong, or refuted, and forget that they’re supposed to be seeking truth.
It seems to me that often people rehearse fancy and cool-sounding reasons for believing roughly the same things they always believed, and comment threads don’t often change important beliefs. Feels more like people defensively explaining why they aren’t idiots, or why they don’t have to change their mind. I mean, if so—I get it, sometimes I feel that way too. But it sucks and I think it happens a lot.
In part, I think, because the site makes truth-seeking harder by spotlighting monkey-brain social-agreement elements.