This exchange has given me the feeling of pushing on a string, so instead of pretending that I feel like engaging on the object level will be productive, I’m going to try to explain why I don’t feel that way.
It seems to me like you’re trying to find an angle where our disagreement disappears. This is useful for papering over disagreements or pushing them off, which can be valuable when that reallocates attention from zero-sum conflict to shared production or trade relations. But that’s not the sort of thing I’d hope for on a rationalist forum. What I’d expect there is something more like double-cruxing, trying to find the angle at which our core disagreement becomes most visible and salient.
Sentences like this seem like a strong tell to me:
I do think that a more continuous model is accurate here, though I share at least a bit of your sense (or at least what I perceive to be your sense) of there being some discrete shift between the two different modes of thinking.
While “I think you’re partly wrong, but also partly right” is a position I often hold about someone I’m arguing with, it doesn’t clarify things any more than “let’s agree to disagree” unless it sets the frame for a specific effort to articulate what exactly I think is wrong, and under what circumstances. What I would have hoped to see from you would have been more like:
If you don’t see why I care about pointing out this distinction, you could just ask me why you should care.
If you think you know why I care but disagree, you could explain what you think I’m missing.
If you’re unsure whether you have a good sense of the disagreement, you could try explaining how you think our points of view differ.
Thanks for popping up a meta-level. Seems reasonable in this circumstance.
I agree with you that that one paragraph is mostly doing the “I think you’re partly wrong, but also partly right” thing, but the rest of my comment doesn’t really do that, so I am a bit sad/annoyed that you perceived that to be my primary intention (or at least that’s what I read into your above comment).
I also think that paragraph is doing some other important work that isn’t only about the “let’s avoid a zero-sum conflict situation”, but I don’t really want to go into that too much, since I expect it to be less valuable than the other conversations we could be having.
The rest of my comment points out some relatively concrete reasons to doubt the things you are saying. I have a model in my head of where you are coming from, and I can see how that model contradicts other parts of reality that seem a lot more robust than the justifications I think underlie it.
I don’t yet have a sense that you see those parts of reality that make me think your models are unlikely to be correct, so I was primarily trying to point them out to you, in the hope that you would either explain how you have actually integrated them, or change your mind.
I think this mostly overlaps with your second suggested frame, so I guess we can just continue from there. I think I know why you care, and can probably give at least an approximate model of where you are coming from. I tried to explain what I think you are missing: concretely, the concerns around bounded computation and the relatively universal need for people to coordinate with each other, which seem to me to contradict some of the things you are saying.
Also happy to give a summary of where I think you are coming from, and what my best guess of your current model is. While I see some contradictions in your model (or my best guess of it), it seems important to point out that I’ve found value in thinking about it and am interested in seeing it fleshed out further (and as such am interested in continuing this conversation).
This could happen either...
by you trying to more explicitly summarize what you think my current model is missing (probably by summarizing my model first),
or by me summarizing your model and asking some clarifying questions,
or by you responding to my concrete objections in an analytic way,
or by me responding to your latest comment (though I don’t really know how to do that, since something about the expected frame of that reply feels off).
I don’t really have any super strong preference among these, but will likely not respond for a day. After that, I will try summarizing your perspective a bit more explicitly, and then either ask some follow-up questions or more explicitly point out the contradictions I currently see in it.
I don’t understand the relevance of your responses to my stated model. I’d like it if you tried to explain why your responses are relevant, in a way that characterizes what you think I’m saying more explicitly.
My other most recent comment tries to show what your perspective looks like to me, and what I think it’s missing.
I think this is the most helpful encapsulation I’ve gotten of your preferred meta-frame.
I think I mostly just agree with it now that it’s spelled out a bit better. (I think I have some disagreements about how exactly rationalist forums should relate to this, and what moods are useful. But in this case I basically agree that the actions you suggest at the end are the right move, and it seems better to focus on that.)