Would the hurricane have happened if not for the butterfly?
You are talking about counterfactuals, and those are a difficult problem to solve when there is only one deterministic or probabilistic world and nothing else. A better question is “Does a model where ‘a hurricane would not have happened as it had, if not for the butterfly’ make useful and accurate predictions about the parts of the world we have not yet observed?” If so, then it’s useful to talk about a butterfly causing a hurricane; if not, then it’s a bad model. This question is answerable, and as someone with expertise in “complexity science,” whatever that might be, you are probably well qualified to answer it. It seems that your answer is “the impact of the butterfly’s wings will typically not rise above the persistent stochastic inputs affecting the Earth,” meaning that the model where a butterfly caused the hurricane is not a useful one. In that clearly defined sense, you have answered the question you posed.
Yes!! Very cool—going even one meta level up. I agree that the usefulness of a proposed model is certainly the ultimate judge of whether it’s “good” or not. To make this even more concrete, we could try to construct a game and compare the mean performance of two agents holding the two models we want to compare… I wonder if anyone’s tried that… As far as I know, the counterfactual approach is “state of the art” for understanding causality these days—and it is a bit lacking for the reason you say. This could be a cool paper to write!
The counterfactual approach is indeed very popular, despite its obvious limitations. You can see a number of posts from Chris Leung here on the topic, for example. As for comparing performance of different agents, I wrote a post about it some years ago, not sure if that is what you meant, or if it even makes sense to you.
hmm, so what I was thinking is whether we could give an improved definition of causality based on something like “A causes B iff the model [A causes B] performs superior to other models in some (all?) games / environments”—which may have a funny dependence on the game or environment we choose.
Though as hard as the counterfactual definition is to work with in practice, this may be even harder…
Your post may be related to this, though not the same, I think. I guess what I’m suggesting isn’t directly about decision theory.
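A minimal sketch of the kind of game I have in mind, with everything in it (the world, the probabilities, the agents) made up for illustration: a toy world where doing A genuinely raises the chance of B, one agent whose model is “A causes B” and so always does A, and another whose model says the two are independent and so acts at random:

```python
import random

random.seed(0)

# Toy world (made up for illustration): taking action A really does
# raise the chance of outcome B, which is worth reward 1.
def world(action_a: bool) -> int:
    p_b = 0.8 if action_a else 0.2   # A genuinely causes B here
    return 1 if random.random() < p_b else 0

# Agent whose model is "A causes B": it therefore always does A.
def causal_agent() -> bool:
    return True

# Agent whose model is "A and B are independent": A looks useless
# to it, so it picks A or not-A indifferently.
def independence_agent() -> bool:
    return random.random() < 0.5

def mean_score(agent, runs=100_000) -> float:
    return sum(world(agent()) for _ in range(runs)) / runs

print("model [A causes B]:      ", mean_score(causal_agent))       # ~0.8
print("model [A independent B]: ", mean_score(independence_agent)) # ~0.5
```

The causal model wins this particular game by about 0.3 in mean score, but a differently built environment could just as easily reverse the ranking, which is the funny dependence I mentioned.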
A causes B iff the model [A causes B] performs superior to other models in some (all?) games / environments
There are two parts that go into this: the rules of the game, and its initial state. You can fix one or both, and you can vary one or both. By “vary” I mean “come up with a distribution, draw an instance at random to use for a particular run,” then see which runs cause what. For example, in physics you could start with general relativity and vary the gravitational constant, the cosmological constant, the initial expansion rate, the homogeneity levels, etc. Your conclusion might be something like “given this range of parameters, the inhomogeneities cause the galaxies to form around them; given another range of parameters, the universe might collapse or blow up without any galaxies forming.” So, yes, as you said,
“A causes B” … has a funny dependence on the game or environment we choose
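To make the procedure concrete, here is a toy sketch; the outcome function and every number in it are invented purely for illustration, standing in for a real simulation:

```python
import random
from collections import Counter

random.seed(1)

# Toy stand-in for "vary the rules and the initial state"; the outcome
# function and all the thresholds are invented for illustration.
def outcome(coupling, inhomogeneity):
    if coupling > 2.0:
        return "collapse"   # too much attraction: recollapse, no galaxies
    if coupling < 0.5:
        return "blowup"     # too little: everything flies apart
    return "galaxies" if inhomogeneity > 0.01 else "smooth"

# Draw the "rule" (coupling) and the "initial state" (inhomogeneity level)
# from distributions, one instance per run.
runs = [(random.uniform(0.0, 3.0), random.uniform(0.0, 0.1))
        for _ in range(10_000)]
print(Counter(outcome(c, h) for c, h in runs))

# Within the friendly range of rules, inhomogeneities are what separate
# "galaxies" from "smooth"; outside it, they make no difference at all.
friendly = [outcome(c, h) for c, h in runs if 0.5 <= c <= 2.0]
hostile  = [outcome(c, h) for c, h in runs if not 0.5 <= c <= 2.0]
print("friendly rules:", Counter(friendly))
print("hostile rules: ", Counter(hostile))
```

The same draw-and-run loop works whether you vary the rules, the initial state, or both at once.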
In the Game of Life, given a certain setup, a glider can hit a stable block, causing its destruction. This setup could be unique, or stable to a range of perturbations or even large changes, and it would still make sense to use the cause/effect concept.
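Since the Game of Life is cheap to simulate, this one can actually be run. A sketch (grid size and placements arbitrary): simulate the same world with and without the glider and check whether the block’s cells survive. I have not fixed the collision’s outcome in advance; the run decides it, and shifting the glider by a cell can change the answer:

```python
import numpy as np

def step(g):
    # count live neighbours on a toroidal grid
    n = sum(np.roll(np.roll(g, dr, 0), dc, 1)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0))
    return (n == 3) | (g & (n == 2))   # standard birth/survival rule

def make_world(with_glider):
    g = np.zeros((30, 30), dtype=bool)
    g[12:14, 12:14] = True              # the block (a still life) -- our "B"
    if with_glider:                      # the glider -- the candidate cause "A"
        g[0:3, 0:3] = np.array([[0, 1, 0],
                                [0, 0, 1],
                                [1, 1, 1]], dtype=bool)  # heads south-east
    return g

def block_survives(g, steps=80):
    for _ in range(steps):
        g = step(g)
    return bool(g[12:14, 12:14].all())

print("block survives without glider:", block_survives(make_world(False)))
print("block survives with glider:   ", block_survives(make_world(True)))
# Shift the glider by a cell or two and rerun: the collision's outcome can
# change, which is the stability-to-perturbations question in miniature.
```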
The counterfactuals in all those cases would be in the way we set up a particular instance of the universe: the laws and the initial conditions. They are counterfactual because in our world we only have the one run, and all others are imagined, not “real”. However, if one can set up a model of our world where a certain (undetectable) variation leads to a stable outcome, then those variations would be the counterfactuals. The condition that the variations are undetectable given the available resolution is essential, otherwise it would not look like the same world to us. I had a post about that, too.
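That condition can be checked directly in a toy model; the dynamics and the resolution value below are invented for illustration. Perturb the initial state by less than the resolution at which the world is observed, and see whether the macroscopic outcome stays put:

```python
import random

random.seed(2)

RESOLUTION = 1e-3   # variations below this are "undetectable" (made-up value)

# Toy dynamics, invented for illustration: the state rolls downhill into one
# of two basins, and the "macroscopic" outcome is only which basin it ends in.
def macro_outcome(x0, steps=1000, dt=0.01):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)   # attractors at x = -1 and x = +1
    return "right basin" if x > 0 else "left basin"

# Counterfactual check: perturb the initial condition by less than the
# available resolution and see whether the macroscopic outcome is stable.
x_actual = 0.2
outcomes = {macro_outcome(x_actual + random.uniform(-RESOLUTION, RESOLUTION))
            for _ in range(1000)}
print(outcomes)   # {'right basin'}: every undetectable variation agrees

# Start near x_actual = 0.0 instead and the same undetectable variations
# split between the basins: no stable counterfactual to speak of there.
```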
An example of this “low-res” causing an apparent counterfactual is the classic
If Lee Harvey Oswald hadn’t shot John F. Kennedy, someone else would have
If you can set up a simulation with varying initial conditions that includes, as Eliezer suggests, a conspiracy to kill JFK, but varies in whether Oswald was a good/available tool for it, then, presumably, in many of those runs JFK would have been shot within a time frame not too different from our particular realization. In some others, JFK would have been killed, but by poison or a knife rather than a gunshot, and so Lee Harvey Oswald would not be the butterfly you are describing. In the models where there is no conspiracy, Oswald would have been the butterfly, again, as Eliezer describes. There are many other possible butterflies and non-butterflies in this setup, of course, from gusts of wind at the wrong time to someone discovering the conspiracy early.
Note that some of those imagined worlds are probably impossible physically, as in, when extrapolated into the past, they would imply macroscopic effects that are incompatible with observations. For example, Oswald missing his shot may have resulted from the rifle being of poor quality, which would have been incompatible with the known quality-control procedures in place when it was made.
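For what it’s worth, the whole setup fits in a few lines of toy Monte Carlo; every probability below is invented, and the “defective rifle” filter is just a stand-in for the consistency check. Sample worlds with and without a conspiracy, discard the implausible ones, and compare P(JFK shot) across worlds that differ only in whether Oswald was available:

```python
import random

random.seed(3)

# Toy version of the thought experiment; every probability here is invented.
def sample_world(conspiracy):
    oswald_available = random.random() < 0.5
    rifle_defective = random.random() < 0.05
    # Consistency filter: a defective rifle would contradict the known
    # quality-control procedures, so such worlds are discarded.
    if rifle_defective:
        return None
    if conspiracy:
        # A conspiracy finds some other tool in most worlds without Oswald.
        shot = True if oswald_available else random.random() < 0.9
    else:
        shot = oswald_available  # no conspiracy: it takes Oswald to do it
    return oswald_available, shot

def shot_probability(conspiracy, oswald_in_world, n=100_000):
    worlds = [w for w in (sample_world(conspiracy) for _ in range(n))
              if w is not None]
    shots = [shot for available, shot in worlds if available == oswald_in_world]
    return sum(shots) / len(shots)

for conspiracy in (True, False):
    print(f"conspiracy={conspiracy}: "
          f"P(shot)={shot_probability(conspiracy, True):.2f} with Oswald, "
          f"{shot_probability(conspiracy, False):.2f} without")
```

In the conspiracy worlds, removing Oswald barely moves the probability; in the no-conspiracy worlds, it moves it from one to zero. That is the butterfly/non-butterfly split, made quantitative.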
Hope some of this makes sense.