I’m not saying anything about MCMC. I’m saying random noise is not what I care about, the MCMC example is not capturing what I’m trying to get at when I talk about causal closure.
I don’t disagree with anything you’ve said in this comment, and I’m quite confused about how we’re able to talk past each other to this degree.
Yeah duh I know you’re not talking about MCMC. :) But MCMC is a simpler example to ensure that we’re on the same page on the general topic of how randomness can be involved in algorithms. Are we 100% on the same page about the role of randomness in MCMC? Is everything I said about MCMC super duper obvious from your perspective? If not, then I think we’re not yet ready to move on to the far-less-conceptually-straightforward topic of brains and consciousness.
I’m trying to get at what you mean by:
But imagine instead that (for sake of argument) it turned out that high-resolution details of temperature fluctuations throughout the brain had a causal effect on the execution of the algorithm such that the algorithm doesn’t do what it’s meant to do if you just take the average of those fluctuations.
I don’t understand what you mean here. For example:
If I run MCMC with a PRNG given random seed 1, it outputs 7.98 ± 0.03. If I use a random seed of 2, then the MCMC spits out a final answer of 8.01 ± 0.03. My question is: does the random seed entering MCMC “have a causal effect on the execution of the algorithm”, in whatever sense you mean by the phrase “have a causal effect on the execution of the algorithm”?
My MCMC code uses a PRNG that returns random floats between 0 and 1. If I replace that PRNG with return 0.5, i.e. the average of the 0-to-1 interval, then the MCMC now returns a wildly-wrong answer of 942. Is that replacement the kind of thing you have in mind when you say “just take the average of those fluctuations”? If so, how do you reconcile the fact that “just take the average of those fluctuations” gives the wrong answer, with your description of that scenario as “what it’s meant to do”? Or if not, then what would “just take the average of those fluctuations” mean in this MCMC context?
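For concreteness, here’s a toy sketch of the kind of setup I have in mind. It’s not the code behind the numbers I quoted above; the target density, step size, and seeds are all invented for illustration, and the exact outputs will differ, but the qualitative behavior is the point:

```python
import math
import random

def log_target(x):
    # Made-up target for illustration: an unnormalized Gaussian with mean 8.
    return -0.5 * (x - 8.0) ** 2

def mcmc_mean(uniform, n_steps=100_000, x0=0.0):
    """Toy random-walk Metropolis estimate of the target's mean.
    `uniform` is any callable returning floats in [0, 1)."""
    x, total = x0, 0.0
    for _ in range(n_steps):
        proposal = x + (2.0 * uniform() - 1.0)  # symmetric proposal step
        if log_target(proposal) - log_target(x) > math.log(max(uniform(), 1e-300)):
            x = proposal  # accept the proposed move
        total += x
    return total / n_steps

# Different seeds give slightly different estimates, both close to the true mean of 8.
print(mcmc_mean(random.Random(1).random))
print(mcmc_mean(random.Random(2).random))

# Replace the PRNG with its average value, 0.5: the proposal step is always zero,
# the chain never moves, and the "estimate" is just the starting point, which is
# wildly wrong.
print(mcmc_mean(lambda: 0.5))  # 0.0, nowhere near 8
```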
MCMC is a simpler example to ensure that we’re on the same page on the general topic of how randomness can be involved in algorithms.
Thanks for clarifying :)
Are we 100% on the same page about the role of randomness in MCMC? Is everything I said about MCMC super duper obvious from your perspective?
Yes.
If I run MCMC with a PRNG given random seed 1, it outputs 7.98 ± 0.03. If I use a random seed of 2, then the MCMC spits out a final answer of 8.01 ± 0.03. My question is: does the random seed entering MCMC “have a causal effect on the execution of the algorithm”, in whatever sense you mean by the phrase “have a causal effect on the execution of the algorithm”?
Yes, the seed has a causal effect on the execution of the algorithm by my definition. As was talked about in the comments of the original post, causal closure comes in degrees, and in this case the MCMC algorithm is somewhat causally closed to the seed. An abstract description of the MCMC system that excludes the value of the seed is still a useful abstract description of that system—you can reason about what the algorithm is doing, predict the output within the error bars, etc.
In contrast, the algorithm is not very causally closed to, say, some function f() that is called a bunch of times on each iteration of the MCMC. If we leave f() out of our abstract description of the MCMC system, we don’t have a very good description of that system: we can’t work out much about what the output would be given an input.
If the ‘mental software’ I talk about is as causally closed to some biophysics as the MCMC is causally closed to the seed, then my argument in that post is weak. If, however, it’s only as causally closed to biophysics as our program is to f(), then it’s not very causally closed, and my argument in that post is stronger.
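To make the degrees-of-closure point concrete, here’s a toy sketch in the same spirit as your example (the sampler and the particular targets are invented for illustration, and I’m taking f() to be, say, the target log-density that gets called on every iteration):

```python
import math
import random

def run_mcmc(log_f, seed, n_steps=50_000, x0=0.0):
    """Toy random-walk Metropolis; log_f is called on every iteration."""
    rng = random.Random(seed)
    x, total = x0, 0.0
    for _ in range(n_steps):
        proposal = x + rng.uniform(-1.0, 1.0)
        if log_f(proposal) - log_f(x) > math.log(max(rng.random(), 1e-300)):
            x = proposal
        total += x
    return total / n_steps

gauss8 = lambda x: -0.5 * (x - 8.0) ** 2  # target with mean 8

# Leave the seed out of the description: the outputs barely depend on it, so
# "returns roughly the target's mean, give or take small noise" is still a
# useful abstract description of the system.
print([round(run_mcmc(gauss8, seed), 2) for seed in range(5)])  # all near 8.0

# Leave f() out of the description: the output depends heavily on which f() is
# called, so without it we can't say much about what the program will return.
gauss3 = lambda x: -0.5 * (x - 3.0) ** 2           # mean 3
expo1 = lambda x: -x if x > 0 else float("-inf")   # Exp(1), mean 1
print(round(run_mcmc(gauss3, 0), 2), round(run_mcmc(expo1, 0), 2))  # very different
```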
My MCMC code uses a PRNG that returns random floats between 0 and 1. If I replace that PRNG with return 0.5, i.e. the average of the 0-to-1 interval, then the MCMC now returns a wildly-wrong answer of 942. Is that replacement the kind of thing you have in mind when you say “just take the average of those fluctuations”?
Hmm, yeah, this is a good counterexample to my limited “just take the average of those fluctuations” claim.
If what my algorithm needs is just a pseudorandom float between 0 and 1, and I don’t have access to the particular PRNG that the algorithm calls, I can replace it with a different PRNG in my abstract description of the MCMC. It won’t run exactly the same, but it will still do MCMC and give a correct answer.
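As a sketch of what I mean (the LCG below is just a stand-in with textbook parameters, not a claim about which generator would actually be used):

```python
# A deliberately different PRNG: a basic linear congruential generator.
def make_lcg(seed=12345):
    state = seed
    def uniform():
        nonlocal state
        state = (1664525 * state + 1013904223) % 2**32
        return state / 2**32
    return uniform

# Passed into the toy sampler from earlier in the thread in place of
# random.Random(...).random, it produces a different stream of numbers but the
# same abstract behaviour: run MCMC on the target and return roughly its mean.
# print(mcmc_mean(make_lcg()))  # still close to 8
```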
To connect it to the brain stuff: say I have a candidate abstraction of the brain that I hope explains the mind, and say temperatures fluctuate in the brain between 38°C and 39°C. Here are three possibilities for how this might affect the abstraction (there’s a code sketch of all three at the end of this comment):
Maybe in the simulation we can just set the temperature to 38.5°C, and the simulation still correctly predicts the important features of the output. In this case, I consider the abstraction causally closed to the details of the temperature fluctuations.
Or maybe temperature is an important source of randomness for the mind algorithm. In the simulation, we need to set the temperature to 38+x°C, where I just generate x as a pseudorandom number between 0 and 1. In this case, I still consider the abstraction causally closed to the details of the temperature fluctuations.
Or maybe even the 38+x°C replacement makes the simulation totally wrong, so it just doesn’t do the functions it’s meant to do. The mind algorithm doesn’t just need randomness; it needs systematic patterns that are encoded in the temperature fluctuations. In that case, to simulate the mind, we need to constantly call a function temp() which simulates the finer details of the currents of heat etc. throughout the brain. In my parlance, I’d say the abstraction is not causally closed to the temperature fluctuations.
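In schematic code, the three possibilities look something like this (step(), temp(), N_STEPS and everything else here are hypothetical placeholders, not claims about how a real brain simulation would work):

```python
import random

N_STEPS = 1_000  # placeholder

def step(state, temperature):
    """Placeholder for one update of the candidate mind-abstraction."""
    return state  # a real abstraction would actually use `temperature` here

# Possibility 1: a fixed average temperature is good enough.
# The abstraction is causally closed to the details of the fluctuations.
def simulate_v1(state):
    for _ in range(N_STEPS):
        state = step(state, temperature=38.5)
    return state

# Possibility 2: the fluctuations matter only as a source of randomness, so any
# pseudorandom stand-in will do. Still causally closed to the details.
def simulate_v2(state, seed=0):
    rng = random.Random(seed)
    for _ in range(N_STEPS):
        state = step(state, temperature=38.0 + rng.random())
    return state

# Possibility 3: the algorithm depends on the particular patterns in the
# fluctuations, so the simulation has to call a fine-grained heat-flow model.
# Not causally closed to the fluctuations.
def simulate_v3(state, temp):
    # `temp` is a hypothetical function simulating the actual currents of heat.
    for _ in range(N_STEPS):
        state = step(state, temperature=temp())
    return state
```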