I didn’t understand the connection he was drawing between causal modelling and flow.
It sounded like he was really down on learning mere correlations, but in nature knowing correlations seems pretty good for being able to make predictions about the world. If you know that purple berries are more likely to be poisonous than red berries, you can start extracting value without needing to understand what the causal connection between being purple and being poisonous is.
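To make that concrete with made-up numbers: raw co-occurrence counts already support a useful berry-picking policy, with no causal story anywhere in sight.

```python
# Made-up counts purely for illustration: 50 purple berries, 50 red.
observations = {
    ("purple", "poisonous"): 40,
    ("purple", "safe"):      10,
    ("red",    "poisonous"):  5,
    ("red",    "safe"):      45,
}

def p_poisonous(colour):
    """P(poisonous | colour), estimated from raw co-occurrence counts."""
    bad = observations[(colour, "poisonous")]
    return bad / (bad + observations[(colour, "safe")])

for colour in ("purple", "red"):
    print(f"P(poisonous | {colour}) = {p_poisonous(colour):.2f}")
# P(poisonous | purple) = 0.80, P(poisonous | red) = 0.10 --
# enough to prefer red berries without any idea of *why* purple is risky.
```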
I didn’t understand why he thought his conditions for flow (clear information, quick feedback, errors matter) were specifically conducive to making causal models, or distinguishing correlation from causation. Did anyone understand this? He didn’t elaborate at all.
> It sounded like he was really down on learning mere correlations, but in nature knowing correlations seems pretty good for being able to make predictions about the world.
This also shows up in Pearl; I think humans are in a weird situation where they have very simple intuitive machinery for thinking about causation, and very simple formal machinery for thinking about correlation, and so the constant struggle when talking about them is keeping the two distinct.
Like, there’s a correlation between purple berries and feeling ill, and there’s also a correlation between vomiting and feeling ill. Intuitive causal reasoning is the thing that makes you think about “berries → illness” instead of “vomiting ↔ illness”.
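A minimal sketch of that distinction, with invented numbers: in observational data, vomiting predicts illness even better than berries do, but only the upstream cause survives an intervention (the do() of Pearl's do-calculus).

```python
import random

random.seed(0)
N = 100_000

def world(eat_berry=None, force_vomit=None):
    """One sample from a toy model: berry -> illness -> vomiting.
    Passing eat_berry / force_vomit overrides that variable (Pearl's do()).
    All probabilities are invented for the illustration."""
    berry = (random.random() < 0.5) if eat_berry is None else eat_berry
    ill = random.random() < (0.8 if berry else 0.1)
    vomit = (random.random() < (0.9 if ill else 0.05)) if force_vomit is None else force_vomit
    return berry, ill, vomit

days = [world() for _ in range(N)]

def p_ill_given(index, value):
    """Observational P(ill | variable at `index` == value)."""
    hits = [d for d in days if d[index] == value]
    return sum(d[1] for d in hits) / len(hits)

def p_ill_do(**intervention):
    """Interventional P(ill) under do(intervention)."""
    return sum(world(**intervention)[1] for _ in range(N)) / N

# Both correlations are real, and the downstream one is even stronger:
print(p_ill_given(0, True) - p_ill_given(0, False))  # berry vs illness, ~0.70
print(p_ill_given(2, True) - p_ill_given(2, False))  # vomit vs illness, ~0.86
# But only the upstream variable survives an intervention:
print(p_ill_do(eat_berry=True) - p_ill_do(eat_berry=False))      # still ~0.70
print(p_ill_do(force_vomit=True) - p_ill_do(force_vomit=False))  # ~0.00
```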
> Did anyone understand this? He didn’t elaborate at all.
Try flipping each of the conditions.
Information that is obscure or noisy instead of clear makes it harder to determine causes, because the similarities and differences between things are obscured. If the berries are black and white, it’s very easy to notice relationships; if the berries are #f5429e and #f54242, you might misclassify a bunch of the berries, polluting your dataset.
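A quick sketch of that pollution effect (all rates invented): the harder the colours are to distinguish, the weaker the berry-illness relationship looks in the data you actually record.

```python
import random

random.seed(1)

def apparent_gap(misclassify_rate, n=100_000):
    """Estimated P(ill | seen-black) - P(ill | seen-white) when the observer
    mislabels a berry's colour with probability misclassify_rate.
    The true rates (0.8 vs 0.1) are invented for the illustration."""
    counts = {"black": [0, 0], "white": [0, 0]}  # [ill, total] per *perceived* colour
    for _ in range(n):
        is_black = random.random() < 0.5
        ill = random.random() < (0.8 if is_black else 0.1)
        true_colour = "black" if is_black else "white"
        other = "white" if is_black else "black"
        seen = other if random.random() < misclassify_rate else true_colour
        counts[seen][0] += ill
        counts[seen][1] += 1
    return counts["black"][0] / counts["black"][1] - counts["white"][0] / counts["white"][1]

for rate in (0.0, 0.1, 0.3, 0.5):
    print(f"misclassify {rate:.0%}: apparent gap ~ {apparent_gap(rate):.2f}")
# 0% -> ~0.70, 10% -> ~0.56, 30% -> ~0.28, 50% -> ~0.00:
# the relationship is still out there, but noise washes it out of your dataset.
```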
Feedback that’s slow means you can’t easily confirm or disconfirm hypotheses. If eating one black berry makes you immediately ill, then once you come across that hypothesis you can do a few simple checks. If eating one black berry makes you ill 8-48 hours later, then it’ll be hard to tell whether it was the black berry or something else you ate over that window. If you ate a dozen different things, you now have to run a dozen different (long!) experiments.
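Here’s a toy sketch of that blow-up (numbers invented): with same-day feedback, each illness implicates only that day’s menu; with a multi-day lag, it implicates everything eaten in the window, so the suspect list shrinks far more slowly.

```python
import random

random.seed(2)

FOODS = list(range(12))  # a dozen candidate foods; food 0 is the culprit
CULPRIT = 0

def days_to_identify(window):
    """Days of eating until illness events alone pin the culprit down, when
    symptoms appear 0..window-1 days after eating it. Toy model, not from the talk."""
    suspects = set(FOODS)
    menus, pending, day = [], set(), 0
    while len(suspects) > 1:
        menus.append(set(random.sample(FOODS, 4)))  # eat 4 random foods today
        if CULPRIT in menus[-1]:
            pending.add(day + random.randrange(window))  # symptoms arrive later
        if day in pending:
            # All you learn: the culprit is somewhere in the last `window` menus.
            recent = set().union(*menus[max(0, day - window + 1):])
            suspects &= recent
        day += 1
    return day

for window in (1, 4):
    runs = [days_to_identify(window) for _ in range(200)]
    print(f"feedback window {window} day(s): culprit found after ~{sum(runs)/len(runs):.0f} days")
# Same-day feedback isolates the culprit within about a week; a multi-day lag
# multiplies the number of observations you need several times over.
```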
If errors are irrelevant, then you’re just going to ignore the information and not end up making any models related to it. The more relevant the errors are, the more of your mental energy you can recruit to modeling the situation.
Why those three, and not others? Idk, this is probably just directly sourced from the literature on flow, which likely has experiments that vary these conditions and try out others.
I was thinking that there were grounds to think that flow is an experience of lots of implicit learning, but I was much more lost on why flow would be conducive to more of it. Like, if I have a proof streak, then there is going to be more fodder for more and more proofs, but most of that is going to be irrelevant calculation and dead ends that don’t lead to theorems. And there is no guarantee of success. At some point, whatever is getting me the results and enabling them is going to run out. Success doesn’t by itself generate success.