This comment is not only about this post, but is also a response to Scott’s model of Duncan’s beliefs about how epistemic communities work, and a couple of Duncan’s recent Facebook posts. It is also a mostly unedited rant. Sorry.
I grant that overconfidence is in a similar reference class to saying false things. (I think there is still a distinction worth making, similar to the difference between lying directly and trying to mislead by saying true things, but I am not really talking about that distinction here.)
I think society needs to be robust to people saying false things, and thus to have mechanisms that prevent those false things from becoming widely believed. I think that as little as possible of that responsibility should be placed on the person saying the false things, in order to make it more strategy-proof. (I think that it is also useful for the speaker to help by trying not to say false things, but I am putting more of the responsibility on the listener.)
I think there should be pockets of society (e.g. collections of people, specific contexts, or events) that can collect true beliefs and reliably and significantly decrease the extent to which they put trust in the claims of people who say false things. Call such contexts “rigorous.”
I think that it is important that people look to the output of these rigorous contexts when e.g. deciding on COVID policy.
I think it is extremely important that the rigorous pockets of society are not “everyone in all contexts.”
I think that society is very much lacking reliable rigorous pockets.
I have this model where, in a healthy society, there can be contexts where people generate all sorts of false beliefs, but also sometimes generate gold (e.g. new ontologies that can vastly improve the collective map). If such contexts are generating a sufficient supply of gold, you DO NOT go in and punish their false beliefs. Instead, you quarantine them. You put up a bunch of signs that point to them and say e.g. “80% boring true beliefs, 19% crap, 1% gold,” then you have your rigorous pockets watch them, and try to learn how to efficiently distinguish between the gold and the crap, and maybe see if they can generate the gold without the crap. However, sometimes they will fail and will just have to keep digging through the crap to find the gold.
One might look at lesswrong, and say “We are trying to be rigorous here. Let’s push stronger on the gradient of throwing out all the crap.” I can see that. I want to be able to say that. I look at the world, and I see all the crap, and I want there to be a good pocket that can be about “true=good”, “false=bad”, and there isn’t one. Science can’t do it, and maybe lesswrong can.
Unfortunately, I also look at the world and see a bunch of boring processes that are never going to find gold. Science can’t do it, and maybe lesswrong can.
And, maybe there is no tradeoff here. Maybe it can do both. Maybe at our current level of skill, we find more gold in the long run by being better at throwing out the crap.
I don’t know what I believe about how much tradeoff there is. I am writing this, and I am not trying to evaluate the claims. I am imagining inhabiting the world where there is a huge tradeoff. Imagining the world where lesswrong is the closest thing we have to a rigorous pocket of society, but we have to compromise, because we need a generative pocket of society even more. I am overconfidently imagining lesswrong as better than it is at both tasks, so that the tradeoff feels more real, and I am imagining the world failing to pick up the slack of whichever one it lets slide. I am crying a little bit.
And I am afraid. I am afraid of being the person who overconfidently says “We need less rigor,” and sends everyone down the wrong path. I am also afraid of being the person who overconfidently says “We need less rigor,” and gets flagged as a person who says false things. I am not afraid of saying “We need more rigor.” The fact that I am not afraid of saying “We need more rigor” scares me. I think it makes me feel that if I look too closely, I will conclude that “We need more rigor” is true. Specifically, I am afraid of concluding that and being wrong.
In my own head, I have a part of me that is inhabiting the world where there is a large tradeoff, and we need less rigor. I have another part that is trying to believe true things. The second part is making space for the first part, and letting it be as overconfident as it wants. But it is also quarantining the first part. It is not making the claim that we need more space and less rigor. This quarantine action has two positive effects. It helps the second part have good beliefs, but it also protects the first part from having to engage with the hammer of truth until it has grown.
I conjecture that to the extent that I am good at generating ideas, it is partially because I quarantine, but do not squash, my crazy ideas. (Where ignoring the crazy ideas counts as squashing them.) I conjecture further that an ideal society needs to do similar motions at the group level, not just the individual level. I said at the beginning that you need to put the responsibility for distinguishing on the listener for strategy-proofness. This was not the complete story. I conjecture that you need to put the responsibility in the hands of the listener, because you need to have generators that are not worried about accidentally having false/overconfident beliefs. You are not supposed to put policy decisions in the hands of the people/contexts that are not worried about having false beliefs, but you are supposed to keep giving them attention, as long as they keep occasionally generating gold.
Personal Note: If you have the attention for it, I ask that anyone who sometimes listens to me keep (at least) two separate buckets: one for “Does Scott sometimes say false things?” and one for “Does Scott sometimes generate good ideas?”, and decide whether to give me attention based on these two separate scores. If you don’t have the attention for that, I’d rather you just keep the second bucket; I concede the first bucket (for now), and think my comparative advantage is to be judged according to the second one, and never be trusted as epistemically sound. (I don’t think I am horrible at being epistemically sound, at least in some domains, but if I only get a one-dimensional score, I’d rather relinquish the right to be epistemically trusted, in order to absolve myself of the responsibility to not share false beliefs, so my generative parts can share more freely.)
I’m feeling demoralized by Ben and Scott’s comments (and Christian’s), which I interpret as being primarily framed as “in opposition to the OP and the worldview that generated it,” and which seem to me to be not at all in opposition to the OP, but rather to something like preexisting schemas that had the misfortune to be triggered by it.
Both Scott’s and Ben’s thoughts ring to me as almost entirely true, and also separately valuable, and I have far, far more agreement with them than disagreement, and they are the sort of thoughts I would usually love to sit down and wrestle with and try to collaborate on. I am strong upvoting them both.
But I feel caught in this unpleasant bind where I am telling myself that I first have to go back and separate out the three conversations—where I have to prove that they’re three separate conversations, rather than it being clear that I said “X” and Ben said “By the way, I have a lot of thoughts about W and Y, which are (obviously) quite close to X” and Scott said “And I have a lot of thoughts about X’ and X″.”
Like, from my perspective it seems that there are a bunch of valid concerns being raised that are not downstream of my assertions and my proposals, and I don’t want to have to defend against them, but feel like if I don’t, they will in fact go down as points against those assertions and proposals. People will take them as unanswered rebuttals, without noticing that approximately everything they’re specifically arguing against, I also agree is bad. Those bad things might very well be downstream of e.g. what would happen, pragmatically speaking, if you tried to adopt the policies suggested, but there’s a difference between “what I assert Policy X will degenerate to, given [a, b, c] about the human condition” and “Policy X.”
(Jim made this distinction, and I appreciated it, and strong upvoted that, too.)
And for some reason, I have a very hard time mustering any enthusiasm at all for both Ben and Scott’s proposed conversations while they seem to me to be masquerading as my conversation. Like, as long as they are registering as direct responses, when they seem to me to be riffs.
I think I would deeply enjoy engaging with them, if it were common knowledge that they are riffs. I reiterate that they seem, to me, to contain large amounts of useful insight.
I think that I would even deeply enjoy engaging with them right here. They’re certainly on topic in a not-even-particularly-broad-sense.
But I am extremely tired of what-feels-to-me like riffs being put on [my idea’s tab], and of the effort involved in separating out the threads. And I do not think it is a result of e.g. a personal failure to be clear in my own claims, such that if I wrote better or differently this would stop happening to me. I keep looking for a context where, if I say A and it makes people think of B and C, we can talk about A and B and C, and not immediately lose track of the distinctions between them.
EDIT: I should be more fair to Scott, who did indeed start his post out with a frame pretty close to the one I’m requesting. I think I would take that more meaningfully if I were less tired to start with. But also it being “a response to Scott’s model of Duncan’s beliefs about how epistemic communities work, and a couple of Duncan’s recent Facebook posts” just kind of bumps the question back one level; I feel fairly confident that the same sort of slippery rounding-off is going on there, too (since, again, I almost entirely agree with his commentary, and yet still wrote this very essay). Our disagreement is not where (I think) Ben and Scott think that it lies.
I don’t know what to do about any of that, so I wrote this comment here. Epistemic status: exhausted.
I believe that I could not pass your ITT. I believe I am projecting some views onto you, in order to engage with them in my head (and publicly, so you can engage if you want). I guess I have a Duncan-model that I am responding to here, but I am not treating that Duncan-model as particularly truth-tracking. It is close enough that it makes sense (to me) to call it a Duncan-model, but its primary purpose in me is not for predicting Duncan, but rather for being there to engage with on various topics.
I suspect that being a better model would help it serve this purpose, and I would like to make it better, but I am not requesting that.
I notice that I used different words in my header, “Scott’s model of Duncan’s beliefs.” I think that this reveals something, though it certainly isn’t clear: “beliefs” are for true things; “models” are toys for generating things.
I think that in my culture, having a not-that-truth-tracking Duncan-model that I want to engage my ideas with is a sign of respect. I think I don’t do that with that many people (more than 10, but less than 50, I think). I also do it with a bunch of concepts, like “Simic,” or “Logical Induction.” The best models according to me are not the ones that are the most accurate, as much as the ones that are most generally applicable. Rounding off the model makes it fit in more places.
However, I can imagine that maybe in your culture it is something like objectification, which causes you to not be taken seriously. Is this true?
If you are curious about what kind of things my Duncan-model says, I might be able to help you build a (Scott’s-Duncan-Model)-Model. In one short phrase, I think I often round you off as an avatar of “respect,” but even my bad model has more nuance than just the word “respect”.
I imagine that you are imagining my comment as a minor libel about you, by contributing to a shared narrative in which you are something that you are not. I am sad to the extent that it has that effect. I am not sure what to do about that. (I could send things like this in private messages; that might help.)
However, I want to point out that I am often not asking people to update from my claims. That is often an unfortunate side effect. I want to play with my Duncan-model. I want you to see what I build with it, and point out where it is not correctly tracking what Duncan would actually say (if that is something you want). I also want to do this in a social context. I want my model to be correct, so that I can learn more from it, but I want to relinquish any responsibility for it being correct. (I am up for being convinced that I should take on that responsibility, either as a general principle, or as a cooperative action towards you.)
Feel free to engage or not.
PS: The above is very much responding to my Duncan-model, rather than what you are actually saying. I reread your above comment, and my comment, and it seems like I am not responding to you at all. I still wanted to share the above text with you.
Anyway, my reaction to the actual post is:
“Yep, Overconfidence is Deceit. Deceit is bad.”
However, reading your post made me think about how maybe your right to not be deceived is trumped by my right to be incorrect.
And I mean the word “maybe” in the above sentence. I am saying the sentence not to express any disagreement, but to play with a conjecture that I am curious about.
For the record, I was planning a reply to Scott saying something like “This seems true, and seemed compatible with my interpretation of the OP, which I think went out of its way to be pretty well caveated.”
I haven’t ended up writing that comment yet, in part because I did feel something like “something going on in Scott’s post feels relevant to The Other FB Discussion”, and wanted to acknowledge that, but that seemed to be going down a conversational path that I expected to be exhausted by, and then I wasn’t sure what to do and bounced off.
Yep, I totally agree that it is a riff. I think that I would have put it in response to the poll about how important it is for karma to track truth, if not for the fact that I don’t like to post on Facebook.