Caveat: the epistemic status of all of this is somewhat tentative. But even if you assign, say, only 70% confidence to each claim (which seems reasonable) and take a 50% hit to the overall reasoning out of sheer skepticism, naively multiplying it out as if the claims were independent still leaves roughly a 12% chance that your brain is doing this to you, which is large enough that it seems worth spending a few cycles thinking about it and trying to ameliorate the situation.
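(Spelling out the arithmetic, with the caveat that the number of claims is my own assumption rather than anything stated above: the 12% figure comes out if the story decomposes into about four such claims,

$$0.7^4 \times 0.5 \approx 0.24 \times 0.5 \approx 0.12,$$

i.e. roughly a 12% chance even under that fairly pessimistic discounting.)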
Fwiw, my (not-that-well-sourced-but-not-completely-made-up) impression is that the overall story is a small extrapolation of probably-mainstream neuroscience, and also consistent with the way AI algorithms work, so I’d put significantly higher probability on it (hard to give an actual number without being clearer about the exact claim).
(If you want to actually check the sources, I believe you'd want to read Peter Dayan's work.)
(I’m not expressing confidence in specific details like e.g. turning sensory data into implicit causal models that produce binary signals.)