Well said. In considering your response I notice that a process P as part of its cost E has room to include the cost of learning the process if necessary, something that was concerning me.
I am now considering a more complicated case.
You are in a team of people of which you are not the team leader. Some of the team are scientists, some are magical thinkers, you are the only Bayesian.
Given an arbitrary task which can be better optimised using Bayesian thinking, is there a way of applying a “Bayes patch” to the work of your teammates so that they can benefit from the fruits of your Bayesian thinking without knowing it themselves?
I suppose I am trying to ask how easily or well Bayes can be applied to undirected work by non-Bayesian operators. If I were a scientist in a group full of magical thinkers, all of us with a task, I do not know what they would come up with, but I reckon I would be able to make some scientific use of the information they generate. Is the same true for Bayes?
I expect it depends rather a lot on the nature of the problem, and on just what exactly we mean by “science,” “magical thinking,” and “Bayes”.
I find, thinking about your question, that I’m not really sure what you mean by these terms. Can you give me a more concrete example of what you have in mind? That is, OK, there’s a team comprising A, B, and C. What would lead me to conclude that A is a “magical thinker”, B is a “Bayesian,” and C is a “scientist”?
For my own part, I would say that the primary difference has to do with how evidence is evaluated.
For example, I would expect A, in practice, to examine the evidence holistically and arrive at intuitive conclusions about it, whereas I would expect B and C to examine the evidence more systematically. In a situation where the reality is highly intuitive, I would therefore expect A to arrive at the correct conclusion with confidence quickly, and B and C to confirm it eventually. In a situation where the reality is highly counterintuitive, I would expect A to arrive at the wrong conclusion with confidence quickly, while B and C become (correctly) confused.
For example, I would expect B and C, in practice, to try to set up experimental conditions under which all observable factors but two (F1 and F2) are held fixed, F1 is varied, F2 is measured, and the correlation between F1 and F2 is calculated. In a situation where such conditions can be set up, and strong correlations are observed between certain factors, I would expect C to arrive at correct conclusions about causal links with confidence slowly, and B to confirm them even more slowly. In a situation where such conditions cannot be set up, or where no strong correlations are observed between evaluated factors, I would expect C to arrive at no positive conclusions about causal links, and B to arrive at weak positive conclusions about causal links.
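The contrast drawn here between C's all-or-nothing conclusions and B's graded ones can be sketched as a toy simulation. Everything below is an illustrative assumption on my part (the factor names, the noise model, the 0.5 significance threshold), not anything established in the discussion:

```python
import random
import statistics


def run_experiment(n_trials=50, true_effect=0.3, noise=1.0, seed=0):
    """Vary F1 while holding all other factors fixed; measure F2 with noise."""
    rng = random.Random(seed)
    f1 = [rng.uniform(0, 1) for _ in range(n_trials)]
    f2 = [true_effect * x + rng.gauss(0, noise) for x in f1]
    return f1, f2


def correlation(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def scientist_conclusion(r, threshold=0.5):
    """C reports a positive conclusion only if the correlation is strong."""
    return "F1 causes F2" if abs(r) >= threshold else "no conclusion"


def bayesian_conclusion(r):
    """B always reports a graded degree of belief, even on weak evidence."""
    return f"belief in a causal link shifted in proportion to r = {r:+.2f}"


f1, f2 = run_experiment()
r = correlation(f1, f2)
print(scientist_conclusion(r))
print(bayesian_conclusion(r))
```

The point of the sketch is only the asymmetry in the last two functions: with a weak correlation, C returns nothing while B still updates a little.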
Are these expectations consistent with what you mean by the terms?
I agree with the terms. For the sake of explanation, by “magical thinker” I was thinking along the lines of young, non-science-trained children, or people who have either no knowledge of or no interest in the scientific method. Ancient Greek philosophers could come under this label if they never experimented to test their ideas. The essence is that they theorise without testing their theories.
In terms of the task, my first idea was the marshmallow test from a TED talk: “make the highest tower you can that will support a marshmallow on top from dry spaghetti, a yard of string, and a yard of tape.”
Essentially a situation where the results are clearly comparable, but the way to get the best result is hard to prove. So far triangles are the way to go, but there may be a better way that nobody has tried yet. If the task has a time limit, is it worth using scientific or Bayesian principles to design the tower, or is it better to just start taping some pasta?
At the risk of repeating myself… it depends on the properties of the marshmallow-test task. If it is such that my intuitions about it predict the actual task pretty well, then I should just start taping pasta. If it is such that my intuitions about it predict the task poorly, I might do better to study the system… although if there’s a time limit, that might not be a good idea either, depending on the time limit and how quickly I study.
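The “study the system vs. just start taping” trade-off reads as an expected-value comparison: time spent studying raises skill but shrinks build time. A minimal sketch, where every number and the linear skill model are assumed placeholders rather than anything about the real marshmallow task:

```python
def expected_score(build_time, skill):
    """Tower quality grows with build time, scaled by current skill.
    A stand-in model; real returns would not be this clean."""
    return skill * build_time


def best_plan(time_limit, base_skill, study_rate):
    """Compare acting immediately against each possible study duration,
    and return the best split as (minutes studying, expected score)."""
    best = (0, expected_score(time_limit, base_skill))
    for study in range(1, time_limit):
        skill = base_skill + study_rate * study
        score = expected_score(time_limit - study, skill)
        if score > best[1]:
            best = (study, score)
    return best


# With fast learning, some study pays off; with slow learning it does not.
print(best_plan(time_limit=18, base_skill=1.0, study_rate=0.2))
print(best_plan(time_limit=18, base_skill=1.0, study_rate=0.01))
```

This matches the verbal conclusion above: whether studying is worth it depends jointly on the time limit and on how quickly study converts into skill.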