More generally, if I don’t want to optimize X, but merely want to satisfy some threshold T for X, then I don’t really care what the optimal way of doing X is in general; I care what way of doing X gets me across T most cheaply. If getting across T using process P1 costs effort E1 from where I am now, and P2 costs E2, and E2 > E1, and I don’t care about anything else, I should choose P1.
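To make that rule concrete, here is a minimal sketch in Python; the process names and effort figures are invented purely for illustration, and in practice the hard part is estimating them honestly.

```python
# Toy satisficing rule: pick whichever process is expected to get me across
# the threshold T most cheaply, ignoring everything else about the processes.
# The processes and effort figures below are made up for illustration.

def cheapest_process(effort_to_cross_T):
    """effort_to_cross_T: dict mapping process name -> estimated effort from here."""
    return min(effort_to_cross_T, key=effort_to_cross_T.get)

estimated_effort = {
    "P1 (what I already do)": 6.0,  # E1: effort from where I am now
    "P2 (the alternative)": 9.0,    # E2: includes the cost of learning P2
}

print(cheapest_process(estimated_effort))  # -> "P1 (what I already do)"
```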
The catch is, like a lot of humans, I also have a tendency to overestimate both the effectiveness of whatever I’m used to doing and the costs of changing to something else. So it’s very easy for me to dismiss P2 on the grounds of an argument like the above even in situations where E1 > E2, or where it turns out that I do care about other things, or both.
There are some techniques that help with countering that tendency. For example, it sometimes helps to ask myself from time to time whether, if I were starting from scratch, I would choose P1 or P2. (E.g. “if I were learning to type for the first time, would I learn Dvorak or Qwerty?”). Asking myself that question lets me at least consider which process I think is superior for my purposes, even if I subsequently turn around and ignore that judgment due to status-quo bias.
That isn’t great, but it’s better than failing to consider which process I think is better.
Well said. In considering your response I notice that the cost E of a process P has room to include the cost of learning the process, if necessary, which was something that had been concerning me.
I am now considering a more complicated case.
You are in a team of people of which you are not the team leader. Some of the team are scientists, some are magical thinkers, you are the only Bayesian.
Given an arbitrary task which can be better optimised using Bayesian thinking, is there a way of applying a “Bayes patch” to the work of your teammates so that they can benefit from the fruits of your Bayesian thinking without knowing it themselves?
I suppose I am trying to ask how easily or how well Bayes can be applied to undirected work by non-Bayesian operators. If I were a scientist in a group full of magical thinkers, all of us with a task, I do not know what they would come up with, but I reckon I would be able to make some scientific use of the information they generate. Is the same the case for Bayes?
I expect it depends rather a lot on the nature of the problem, and on just what exactly we mean by “science,” “magical thinking,” and “Bayes”.
I find, thinking about your question, that I’m not really sure what you mean by these terms. Can you give me a more concrete example of what you have in mind? That is, OK, there’s a team comprising A, B, and C. What would lead me to conclude that A is a “magical thinker”, B is a “Bayesian,” and C is a “scientist”?
For my own part, I would say that the primary difference has to do with how evidence is evaluated.
For example, I would expect A, in practice, to examine the evidence holistically and arrive at intuitive conclusions about it, whereas I would expect B and C to examine the evidence more systematically. In a situation where the reality is highly intuitive, I would therefore expect A to arrive at the correct conclusion with confidence quickly, and B and C to confirm it eventually. In a situation where the reality is highly counterintuitive, I would expect A to arrive at the wrong conclusion with confidence quickly, while B and C become (correctly) confused.
For example, I would expect B and C, in practice, to try to set up experimental conditions under which all observable factors but two (F1 and F2) are held fixed, F1 is varied, F2 is measured, and the correlation between F1 and F2 is calculated. In a situation where such conditions can be set up, and strong correlations are observed between certain factors, I would expect C to arrive at correct conclusions about causal links with confidence slowly, and B to confirm them even more slowly. In a situation where such conditions cannot be set up, or where no strong correlations are observed between the evaluated factors, I would expect C to arrive at no positive conclusions about causal links, and B to arrive at weak positive conclusions about causal links.
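A toy illustration of that last contrast, with the prior, likelihood ratio, and decision threshold all invented for illustration: a Bayesian update shifts B’s credence a little on weak evidence, while the kind of stringent threshold C demands reports no conclusion at all.

```python
# Toy contrast between B and C on weak, uncontrolled evidence.
# The prior, likelihood ratio, and threshold are made-up numbers.

def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.5              # B starts agnostic about the causal link
weak_evidence_lr = 1.3   # the evidence only slightly favours the link

p = posterior(prior, weak_evidence_lr)
print(f"B's credence after the evidence: {p:.2f}")  # ~0.57: a weak positive conclusion

# C, demanding strong evidence before concluding anything:
if weak_evidence_lr >= 20:   # arbitrary stand-in for a stringent test
    print("C: causal link supported")
else:
    print("C: no positive conclusion")
```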
Are these expectations consistent with what you mean by the terms?
I agree with the terms. For the sake of explanation, by “magical thinker” I was thinking along the lines of young, non-science-trained children, or people who have either no knowledge of, or no interest in, the scientific method. Ancient Greek philosophers could come under this label if they never experimented to test their ideas. The essence is that they theorise without testing their theories.
In terms of the task, my first idea was the marshmallow test from a TED talk: “make the highest tower you can that will support a marshmallow on top from dry spaghetti, a yard of string, and a yard of tape.”
Essentially, it is a situation where the results are clearly comparable, but the way to get the best result is hard to prove. So far triangles are the way to go, but there may be a better way that nobody has tried yet. If the task has a time limit, is it worth using scientific or Bayesian principles to design the tower, or is it better to just start taping some pasta?
At the risk of repeating myself… it depends on the properties of the marshmallow-test task. If it is such that my intuitions about it predict the actual task pretty well, then I should just start taping pasta. If it is such that my intuitions about it predict the task poorly, I might do better to study the system… although if there’s a time limit, that might not be a good idea either, depending on the time limit and how quickly I study.
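One way to make that trade-off concrete is a back-of-the-envelope expected-value comparison; every number below is invented for illustration, and in practice these estimates are exactly the intuitions in question.

```python
# Back-of-the-envelope comparison: just start building vs. study the system
# first. All times and success probabilities are made up for illustration.

time_limit_min = 18.0

options = {
    "just start taping": {"prep": 0.0, "build": 15.0, "p_stands": 0.5},
    "study, then build": {"prep": 8.0, "build": 12.0, "p_stands": 0.8},
}

for name, o in options.items():
    finishes = (o["prep"] + o["build"]) <= time_limit_min
    p_success = o["p_stands"] if finishes else 0.0  # an unfinished tower scores zero
    print(f"{name}: P(standing tower) = {p_success:.2f}")

# With these numbers, studying first does not fit inside the time limit,
# so "just start taping" wins despite its lower per-attempt odds.
```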
Sure.