This depends on what you mean by “learn” and what objective you want to achieve by learning. I don’t believe in having a “flawed but simple understanding” of a math topic: people who say such things usually mean that they can recite some rehearsed explanations, but cannot solve even simple problems on the topic. Solving problems should come first, and intuitive explanations should come later.
Imagine you live in the middle ages and decide to study alchemy. So you start digging in, and after your first few lessons you happily decide to write an “intuitive introduction to alchemy techniques” so your successors can pass the initial phase more easily. I claim that this indicates a flawed mindset. If you cannot notice (“cannot be expected to see”, as you charmingly put it) that the whole subject doesn’t frigging work, isn’t your effort misplaced? How on Earth can you be satisfied with an “intuitive” understanding of something that you don’t even know works?
I apologize if my comments here sound rude or offensive. I’m honestly trying to attack what I see as a flawed approach you have adopted, not you personally. And I honestly think that the proper attitude to decision theory is to treat it like alchemy: a pre-paradigmatic field where you can hope to salvage some useful insights from your predecessors, but most existing work is almost certainly going to get scrapped.
No need to worry about being rude or offensive—I’m happy to talk about issues rather than people and I never thought we were doing anything different. However, I wonder if a better comparison is with someone studying “Ways of discovering the secrets of the universe.” If they studied alchemy and then looked at ways it failed that might be a useful way of seeing what a better theory of “discovering secrets” will need to avoid.
That’s my intention. Study CDT and see where it falls down, so that we have a better sense of what a decision theory needs to do before exploring other approaches. You might do the same with alchemy: explain its flaws. But first you have to explain what alchemy is before you can point out the issues with it. That’s what this post is doing—explaining how causal decision theory is commonly understood before we look at the problems with that understanding.
To look at alchemy’s flaws, you first need to know what alchemy is. Even if you can see it’s flawed from the start, that doesn’t mean a step by step process can’t be useful.
Or that’s how I feel. Further disagreement is welcome.
Sorry for deleting my comment—on reflection it sounded too harsh.
Maybe it’s just me, but I don’t think you’re promoting the greater good when you write an intuitive tutorial on a confused topic without screaming in confusion yourself. What’s the hurry, anyway? Why not make some little bits perfectly clear for yourself, and then write?
Here’s an example of an intuitive explanation (of an active research topic, no less) written by someone whose thinking is crystal clear: Cosma Shalizi on causal models. One document like that is worth a thousand “monad tutorials” written by Haskell newbies.
I don’t think you’ve sounded harsh. You obviously disagree with me but I think you’ve done so politely.
I guess my feeling is that different people learn differently, and I’m not as convinced as you seem to be that this is the wrong way for all people to learn (as opposed to the wrong way for some people to learn). I grant that I could be wrong on this, but I feel that I, at the very least, would gain something from this sort of tutorial. I’m open to being proven wrong if there’s a chorus of dissenters.
Obviously, I could write a better explanation of decision theory if I had researched the area for years and had a better grasp of it. However, that’s not the case, so I’m left to decide what I should do given the experience I do have.
I am writing this hoping that doing so will benefit some people.
And doing so doesn’t stop me writing a better tutorial when I do understand the topic better. I can still do that when that time occurs and yet create something that hopefully has positive value for now.
Thx for the Shalizi link. I’m currently slogging my way through Pearl, and Shalizi clarifies things.
At first I thought that AdamBell had invented Evidential Decision Theory from whole cloth, but I discover by Googling that it really exists. Presumably it makes sense for different problems—it certainly did not for the baby-kissing story as presented.
As far as I know, there’s still no non-trivial formalization of the baby-kissing problem (aka Smoking Lesion). I’d be happy to be proved wrong on that.
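For anyone following the EDT-vs-CDT tangent above, here is a toy numerical sketch of the Smoking Lesion showing why the two theories give opposite answers. All probabilities and utilities are invented purely for illustration (this is not a formalization of the kind discussed above, just arithmetic): a lesion causes both the urge to smoke and cancer, while smoking itself is causally inert.

```python
# Toy Smoking Lesion model. All numbers are made up for illustration.
P_LESION = 0.1
P_SMOKE_GIVEN = {True: 0.8, False: 0.2}    # P(smoke | lesion?)
P_CANCER_GIVEN = {True: 0.9, False: 0.01}  # P(cancer | lesion?); smoking is causally inert
U_SMOKE, U_CANCER = 10.0, -100.0           # enjoying smoking vs. getting cancer

def p_lesion_given_smoke(smoke: bool) -> float:
    """Bayes: how much evidence does the choice to smoke provide about the lesion?"""
    def p_choice(lesion: bool) -> float:
        p = P_SMOKE_GIVEN[lesion]
        return p if smoke else 1 - p
    num = p_choice(True) * P_LESION
    return num / (num + p_choice(False) * (1 - P_LESION))

def edt_value(smoke: bool) -> float:
    # EDT conditions on the act, so choosing to smoke is evidence of the lesion.
    pl = p_lesion_given_smoke(smoke)
    p_cancer = pl * P_CANCER_GIVEN[True] + (1 - pl) * P_CANCER_GIVEN[False]
    return (U_SMOKE if smoke else 0.0) + U_CANCER * p_cancer

def cdt_value(smoke: bool) -> float:
    # CDT intervenes: do(smoke) cannot change whether you already have the lesion,
    # so the cancer probability is the same for both acts.
    p_cancer = P_LESION * P_CANCER_GIVEN[True] + (1 - P_LESION) * P_CANCER_GIVEN[False]
    return (U_SMOKE if smoke else 0.0) + U_CANCER * p_cancer

print("EDT:", edt_value(True), "vs", edt_value(False))  # EDT prefers not smoking
print("CDT:", cdt_value(True), "vs", cdt_value(False))  # CDT prefers smoking
```

With these made-up numbers EDT refuses to smoke (the act is bad news about the lesion), while CDT smokes (the act changes nothing causally and smoking has positive utility)—which is exactly why the story reads as a counterexample to EDT as presented.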
Maybe there should be a top-level post on how causal decision theory is like burritos?
I can’t believe you just wrote that. The whole burrito thing is just going to confuse people, when it’s really a very straightforward topic.
Just think of decision theory as if it were cricket...
At least that would be a change from treating decision theory as if it were all about prison.