The actual relationship is: A causes B. Furthermore, there is no noise in the process. A is varying randomly, but B is deterministically caused by A and nothing else, and not by a complex process either.
[. . .] It does not matter what smooth waveform the signal generator puts out, it will have zero correlation with the current that it is the sole cause of.
This equivocates between the entire waveform A and the values of A at single points in time. The random value of the entire waveform A is a sole cause of the entire value of the waveform B. The random value of A at a single point in time is not a sole cause of the random value of B at that point in time. What would be a sole cause of the value of B at any point in time is the value of A at that time together with any one of three other variables: the value of a hidden low-pass-filtered white noise at that time, the value of A at an immediately preceding time in the continuum limit, or, if this is a second-order system, the value of B at an immediately preceding time in the continuum limit.
Considered as entire waveforms, the random value of A is perfectly correlated with the random value of B (up to the rank of the covariance of B), because B is a deterministic linear transformation of A. Considered as values at single points in time, the random value of A is uncorrelated with the random value of B.
So, marginalizing out the equivocation: either A, taken as a whole waveform, is a sole deterministic cause of B, and A and B are perfectly correlated (though correlation is not logically necessary even for deterministic causation; see the Wikipedia example below), or A and B, taken at a single time, have zero correlation, and A is not a sole deterministic cause of B.
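To make the pointwise half of this concrete, here is a minimal numerical sketch (mine, not from the article; the moving-average filter and the derivative stand in for whatever smooth dynamics the circuit has): B is computed deterministically from the whole trajectory of A, yet A and B sampled at the same instants are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

# A: a smooth random waveform (white noise through a crude moving-average
# low-pass filter). The smoothing is what makes A differentiable in practice.
t = np.linspace(0.0, 1000.0, 1_000_000)
dt = t[1] - t[0]
white = rng.standard_normal(t.size)
kernel = np.ones(2001) / 2001
A = np.convolve(white, kernel, mode="same")

# B is deterministically caused by the trajectory of A: B(t) = dA/dt.
B = np.gradient(A, dt)

# Correlation of A and B pooled over single points in time: near zero,
# even though B is a deterministic function of (the whole waveform) A.
print(np.corrcoef(A, B)[0, 1])
```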
. . . Spirtes et al.’s Causation, Prediction, and Search. One of the axioms used in the last-mentioned is the Faithfulness Axiom. See the book for the precise formulation; informally put, it amounts to saying that if two variables are uncorrelated, then they are causally independent. . . . The purpose of this article is to argue that this is not the case.
Emphasis added here and below.
Causation, Prediction, and Search, page 31:
Faithfulness Condition: Let G be a causal graph and P a probability distribution generated by G. <G, P> satisfies the Faithfulness Condition if and only if every conditional independence relation true in P is entailed by the Causal Markov Condition applied to G.
Wikipedia on correlation:

If the variables are independent then the correlation is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. Here is an example: Suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X². Then Y is completely determined by X, so that X and Y are dependent, but their correlation is zero; they are uncorrelated.
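The zero correlation in that example is a one-line symmetry computation (my gloss, not part of the quote): for X uniform on [−1, 1], E[X] = 0 and E[X³] = 0, so

$$
\operatorname{Cov}(X, Y) = \operatorname{Cov}(X, X^2) = \mathbb{E}[X^3] - \mathbb{E}[X]\,\mathbb{E}[X^2] = 0 - 0 \cdot \tfrac{1}{3} = 0 .
$$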
Spirtes’s example on page 71 looks like a linear Gaussian causal system. In a linear Gaussian causal system, zero correlation is equivalent to simple marginal independence, and it can imply complete conditional independence.
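The fact being used here is standard; sketched for the zero-mean bivariate case, a jointly Gaussian density with ρ = 0 factorizes, so uncorrelated jointly Gaussian variables are outright independent:

$$
f_{X,Y}(x, y)\,\Big|_{\rho = 0}
= \frac{1}{2\pi\,\sigma_X \sigma_Y}
\exp\!\left(-\frac{x^2}{2\sigma_X^2} - \frac{y^2}{2\sigma_Y^2}\right)
= f_X(x)\,f_Y(y).
$$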
Theorem: In the long run, a bounded, differentiable real function has zero correlation with its first derivative. . . .
Notice that unlike the case that Spirtes considers, where the causal connections between two variables just happen to have multiple effects that exactly cancel, the lack of correlation between A and B is robust.
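For contrast with that robustness claim, here is a hypothetical minimal version of the canceling case (my own sketch; I am assuming Spirtes’s page-71 example has this two-path linear Gaussian structure): X affects Z through Y and also directly, with the coefficients tuned to cancel exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two causal paths from X to Z that exactly cancel:
#   X -> Y -> Z   contributes a * b
#   X ------> Z   contributes c = -a * b
a, b = 0.8, 0.5
c = -a * b

X = rng.standard_normal(n)
Y = a * X + rng.standard_normal(n)
Z = b * Y + c * X + rng.standard_normal(n)

print(np.corrcoef(X, Z)[0, 1])   # ~0 although X causes Z

# The cancellation is fragile: nudge the direct coefficient and the
# correlation reappears, unlike the robust zero correlation above.
Z2 = b * Y + (c + 0.1) * X + rng.standard_normal(n)
print(np.corrcoef(X, Z2)[0, 1])  # clearly nonzero
```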
Yes, I think the quoted theorem is true for values of a function and its derivative sampled at single uniformly random times (in a limiting sense of “uniform”: say, uniform on [0, T] as T → ∞).
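On that reading (“in the long run” = correlation under a uniformly random sample time on [0, T] as T → ∞), the theorem is a fundamental-theorem-of-calculus computation:

$$
\frac{1}{T}\int_0^T f(t)\,f'(t)\,dt = \frac{f(T)^2 - f(0)^2}{2T} \to 0,
\qquad
\frac{1}{T}\int_0^T f'(t)\,dt = \frac{f(T) - f(0)}{T} \to 0,
$$

so boundedness of f forces the covariance of f and f′ to vanish in the limit, and the correlation vanishes too whenever the variances of f and f′ stay bounded away from zero. A quick numerical check, with f = sin as a stand-in for any bounded differentiable function:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample single uniformly random times on [0, T] for large T.
T = 10_000.0
t = rng.uniform(0.0, T, size=1_000_000)

f = np.sin(t)       # bounded, differentiable
fprime = np.cos(t)  # its first derivative

print(np.corrcoef(f, fprime)[0, 1])  # ~0, as the theorem predicts
```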