So in one sense this feels a lot like...causal modeling? Like, this seems to be what people tend to talk about when they mean models, in general, I think? It’s about having a lot of little things that interact with one another, and you know the behavior of the little things, so you can predict the big stuff?
At some point, though, doesn’t every good model need to bottom out to some causal justification?
(EX: I believe that posting on Facebook at about 8 pm is when I can get the most interaction. I know this because this has happened often in the past and also people tend to be done with dinner and active online. If these two things hold, I can be reasonably certain 8 pm will bring with it many comments.)
Also, the “plausibly either way” is definitely a good sign that something’s broken, like when certain adages like “birds of a feather flock together” and “opposites attract” can both be seen as plausible.
(I think Kahneman actually ran a study w/ those two adages and found that people rated the one they were told had scientific backing as more plausible than the other one? But that’s straying a little from the point...)
If the two adages both seemed plausible, then that seems to be a statement not about the world, clearly, but about your models of humans. If you really query your internal model, the question to ask yourself might be, “Do you see two quiet people having just as good of a time as one quiet person and one loud person?”
[…]this seems to be what people tend to talk about when they mean models, in general, I think? It’s about having a lot of little things that interact with one another, and you know the behavior of the little things, so you can predict the big stuff?
I think I might be missing your meaning here. The arithmetic student and I both have models of how the addition algorithm works that make all the same predictions. But my model has more Gears than the student’s does. The difference is that my sense of what the algorithm could even be is much more constrained than the student’s is.
Also, the student has a cause for their belief. It’s just not a Gears-like cause.
(EX: I believe that posting on Facebook at about 8 pm is when I can get the most interaction. I know this because this has happened often in the past and also people tend to be done with dinner and active online. If these two things hold, I can be reasonably certain 8 pm will bring with it many comments.)
Well, okay. I want to factor apart two different things here.
First, it happened a lot before, so you expect it to happen again.

Test #2: How Earth-shattering would it be if you were to post to Facebook at about 8pm and not get many comments?

Test #1: If you don’t get many comments, what does this demand about the world?

Test #3: If you were to forget that people tend to interact on Facebook around 8pm, how clearly would you rederive that fact sans additional data?

I think that on its own, noticing a correlation basically doesn’t give you any Gears. You have to add something that connects the two things you’re correlating.
…and you do offer a connection, right? “[P]eople tend to be done with dinner and active online [at about 8pm].” Cool. This is a proposed Gear. E.g., as per test #1, if people don’t reply much to your 8pm Facebook post, you start to wonder if maybe people aren’t done with dinner or aren’t active online for some other reason.
Also, the “plausibly either way” is definitely a good sign that something’s broken[…]
I agree with what I imagine you to mean here. In the spirit of “hard on work, soft on people”, I want to pick at the language.
I think the “plausibly either way” test (#2) is a reasonably accurate test of how Gears-like a model is. And Gears tend to be epistemically very useful.
I worry about equating “Gears-like” with “good” or “missing Gears” with “broken” or “bad”. I think perspectives are subject to easy distortion when they aren’t made of Gears, and that it’s epistemically powerful to track this factor. I want to be careful that this property of models doesn’t get conflated with, say, the value of the person who is using the model. (E.g., I worry about thought threads that go something like, “You’re an idiot. You don’t even notice your explanation doesn’t constrain your expectations.”)
Otherwise, I agree with you! As long as we’re very careful to create common knowledge about what we mean when we say things like “good” and “broken” and “wrong”, then I’m fine with statements like “These tests can help you notice when something is going wrong in your thinking.”
How do I create links when the URL has close-parentheses in them?
E.g., I can’t seem to link properly to the Wikipedia article on common knowledge in logic. I could hack around this by creating a TinyURL for this, but surely there’s a nicer way of doing this within Less Wrong?
Yep! I noticed that. I know what to do to avoid this problem in HTML. I just didn’t know what the escape character was in the markup.
I actually miss when the main posts were markup too. It made keeping the posts in a consistent format a lot easier, and I like something about the aesthetic of them all being the same type. C’est la vie!
At some point, though, doesn’t every good model need to bottom out to some causal justification?
Is your claim that a model without a justification isn’t a model? I don’t have any problem with conceptualizing a poorly justified or even unjustified model.
Hm, okay. I think it’s totally possible for people to have models that aren’t actually based on justifications. I do think that good models are based on justifications, though.
Backslash-escape special characters. Test: Common knowledge) — done by adding a ‘\’ before the ‘)’, i.e. writing logic‘\’) without the quotes (otherwise the backslash itself disappears when rendered).
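Concretely, assuming the comment markup follows standard Markdown (where the link destination otherwise ends at the first close-paren), the two versions of the link source look like this:

```markdown
Broken: the parser treats the first ")" as the end of the URL, so the link
points to ".../Common_knowledge_(logic" and a stray ")" is left in the text:

[common knowledge](https://en.wikipedia.org/wiki/Common_knowledge_(logic))

Escaped: the backslash marks the ")" as part of the URL:

[common knowledge](https://en.wikipedia.org/wiki/Common_knowledge_(logic\))
```

Percent-encoding the parentheses in the URL as %28 and %29 is another standard workaround, and it doesn’t depend on the parser supporting backslash escapes.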
Thanks! Fixed.
In addition to what RomeoStevens said, while comments on LW use markup formatting, the main post uses html formatting.