Crisis of Faith case study: beyond reductionism?
What follows are some notes to self that I took while reading the first 7 pages of Stuart A. Kauffman’s book Reinventing the Sacred. This was over 6 years ago, in January 2017.
I read those pages multiple times, pausing whenever objections arose, and journalling about the process. I ended up having a bit of a Crisis of Faith with it, and something loosened, but I didn’t actually return to the book until years later. The objections were largely coming up based on what I’d read in the Sequences and learned elsewhere from rationalists. I remember thinking at the time that it would make a good case study for LessWrong, but I didn’t get around to sharing it until now.
It’s kinda funny/awkward pasting this in—I’ve changed a lot in the past years, both in terms of how I write and how I relate to all of this stuff. But since the whole thing is about my perspective at the time, there’s no point in trying to change any of that, so I’ve basically only edited it for clarity, grammar, etc. I did take out a couple tangents. I wrote the section titles now, with the benefit of hindsight.
If you get bored towards the start, you might skim til it picks up; I think it gets more interesting as it goes on, but it seemed important to include my whole process of course.
📖 Reading notes
↳ 2017-01-17 evening
First pass: skepticism, committing to the virtues of rationality
I got a few pages into this book and it was already quite challenging, in the sense of “challenging to take the author seriously as having something worth saying”. He struck me as being the kind of person that caused Eliezer to rant about “emergence”.
Main issues so far are:
his lack of distinguishing between “current physics” and “coherent extrapolated physics”
relatedly, saying ~”half of people believe in god. that isn’t just going to get explained away by science”
I was like “well a century or two ago, that fraction was substantially higher, so it seems to be going down”
but then I actually looked into it, and that does not appear to be the case!
I’m still not particularly into his case though; I think that peoples’ reasons for converting (or not de-converting) may be about other things
also part of why those numbers are increasing is that religious people are having more children
which has nothing to do with the quality of the ideas
I do see the case for saying “well, for whatever reason, there are gonna be a fuckton of religious people. we might as well attempt to develop a coherent interface with them” but that does nooootttt seem to be his case
seeming handwaviness around ontology & so on
which is sort of to be expected by someone who is making the claim that various non-physical things are ontologically basic
but then, let’s break down the concept of ontology!
ontology = “the study of the nature of being”
it’s not beingness itself. it could be that in a hypothetical world of beingness-itself, there is no ontologically basic “desire” or “values”, but that since we are subjective agents with desires and values, any attempt that we make to understand the world will be our attempts to understand the worlds that we construct in our heads, which *do* contain desires and values
(this is my best steelman so far)
although if we do actually think about this not just using psych statistics but using a serious model of the brain, like Perceptual Control Theory, then we encounter that… hey! from a different perspective, our values and desires are totally reducible to basic physical phenomena (they just don’t feel like it from the inside)
there’s something here about the nature of being subject to (one’s values, perspective etc)
noticed myself thinking something akin to “he’s not exactly doing a good job of convincing me he’s not nuts”
but… maybe that’s not his job?
and I can separately ask “why might this be worth reading even if he is?”
...basically PO the whole thing.
(Edit: I was inspired here by having recently read about Edward de Bono’s idea of “po”, which is about sitting with an idea even if it clearly won’t work as-stated, to see if it inspires something else that might be useful, rather than just rejecting it off-hand)
also finding myself thinking “is [transcending and including reductionism] oxymoronic?” it seems like it might be. have just asked this on fb. we’ll see what people say
...just read a couple more pages. Am now on page 7. Still very challenging.
I’m noting that I maybe have some stale beliefs here though!
It feels important to separate “what beliefs LW taught me” and “how LW taught me to think”. In particular,
I think that “how LW taught me to think” is not a complete manual for correct thinking, but is roughly-speaking lacking in error
but
(a) it seems likely on priors that some of the beliefs I’ve picked up from the rationalist memesphere are false
(b) I have already come to a somewhat post-rational(ist) perspective on a bunch of this stuff, from reading meaningness and other sources
so, considering virtue #0:
> “Every step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.
> If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.
> How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.”
I don’t just want to find myself defending reductionism using Eliezer’s thoughts. Nor does he! (Except inasmuch as he might believe it to be a waste of my time to be engaging with this.)
So what *do* I believe (about reductionism), that I think Kauffman doesn’t?
(and why?)
(and how is eg godel’s incompleteness theorem relevant?)
(and how are simulated universes relevant?)
I think that my (semi-cached) beliefs are something like
In principle, everything could be understood by reducing it to its smallest parts and seeing how they combined.
In practice, this is often totally impossible, due to the Square Law of Computation, as I think Weinberg calls it in An Introduction to General Systems Thinking
It may in fact furthermore be that this is only even possible in principle to an agent who is outside of the universe, running it as a simulation (or maybe some similar circumstance that I can’t now imagine).
For example, consider “game of life”. Since it’s turing complete, it could implement an agent that builds world-models and makes predictions and exhibits “intentional” behaviors. But I thiiiiink it’s basically the case that it’s “physically” impossible for that agent to have the appropriate access to information that would let it fully model its world as deterministic. Maybe I’m confusing levels of abstraction though. And maybe this would depend on how the agent is implemented; certainly, there are many many ways that that could happen.
I think my main line of thinking here is about the entangledness of reality and information about reality. The agent can’t poke parts of reality at the cells-level in order to find out what they are, because in the process they would totally change and destroy them. We can stand outside the GoL universe and see its deterministicness (although good lord is it still unpredictable to someone not literally applying the full rules every step of the way!), but this is not something that an agent within it can do, and they can’t apply all the rules to determine what will happen
so then, one question is “could a GoL-based agent have a reductionistic world-model that is the true cell-based deterministic physics, even if they can’t actually compute what will happen using it?”
There is a “territory” that behaves in an essentially deterministic way and everything else is made out of that territory.
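The Game-of-Life setup above is easy to make concrete. Here’s a minimal sketch (my own, not from any source I was reading): the entire “physics” is a few lines of fully deterministic code, yet in general the only way to find out what a pattern does is to actually run it, step by step.

```python
from collections import Counter

def step(live):
    """One Game-of-Life generation. `live` is a set of (x, y) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step iff it has 3 neighbors,
    # or has 2 neighbors and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose entire future is fixed by the rule above,
# but which we only come to "know" by applying the rule repeatedly.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 steps the glider has reproduced itself one cell down-right.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Note that an agent implemented inside such a grid couldn’t probe individual cells without disturbing the very patterns it’s made of, which is the entangledness point above.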
What do I think that Kauffman believes differently than this?
He states “[complex systems] are beyond physics and cannot be reduced to it”.
And it seems… maybe my qualm with that depends on whether by “cannot” he means… hm. Before I would have said something about “in principle” vs “in practice”
I think that I actually basically agree with what he’s saying on some level? My face is making a weird expression right now.
(One thing I’m noting as I think about this is the extent to which my beliefs about this stuff come not just from Eliezer but from conversations with Nate about:
agents with world models & representations
many-worlds
One thing to remember is that a lot of the reason that rationalists are suspicious of people who use words like emergence and who make arguments for elements of consciousness being ontologically basic is that these people are typically engaging in some form of mind projection fallacy, plus motivated cognition of various kinds.
(I of course am undoubtedly engaging in a whole host of motivated cognitions, both being somewhat motivated to appreciate Kauffman’s perspective because of wanting to align with Jean, and also being motivated to reject it because of wanting to align with my rationalist identity and that group of people)
(maybe also something about not getting duped/tricked?)
But this mind projection fallacy is a really big thing. One might imagine: if I were a robot attempting to make sense of the universe, would I possibly converge on this model?
This thought experiment points at both:
why anti-mind-projection-fallacy lines of reasoning came naturally to Eliezer
why he thinks they’re so dangerous (because of the assumptions people tend to make about how robots *would* make sense of the universe)
...which is prompting me to reflect that I don’t actually think that that question is very practical for a vast majority of people, who would simply anthropomorphize the robot and conclude that *of course* it would converge on this model.
Part of why I trust Chapman’s thinking on this is that he’s actually worked with AIs, and theorized on AI ontology (I believe).
(Aside: I think my original intention to finish Behavior: the Control of Perception then read Reinventing the Sacred made a fair bit of sense, in abstract. It certainly has important, relevant things to say. I haven’t totally finished B:CP, but I could imagine wanting to finish it before I do go much further in RtS)
I just took a snack break, during which I reflected on this stuff while cutting up some grapefruit, and then talked with MA a bit about chess.
One thing I found myself reflecting was that it seems that it’s less that I literally disagree with what Kauffman has written, and more that it is the kind of thing that [someone who would say all sorts of ridiculous claims] would say. Or at least, I maybe have no disagreement with what he’s written on the first 7 pages :-/
I’m re-reading some posts on LW which talk about reductionism now.
http://lesswrong.com/lw/tv/excluding_the_supernatural/
> “Ultimately, reductionism is just disbelief in fundamentally complicated things. If “fundamentally complicated” sounds like an oxymoron… well, that’s why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren’t. You would be wise to be wary, if you find yourself supposing such things.”
follow-up: http://lesswrong.com/lw/tw/psychic_powers/
refers to naturalism, which is a view of reality excluding supernatural phenomena
it seems really really really worth distinguishing naturalism from reductionism. It seems that Kauffman and I are both naturalist, which is a very very very good start :P
> “The hope of psychic powers arises from treating beliefs and desires as sufficiently fundamental objects that they can have unmediated connections to reality.”
I don’t think Kauffman is exhibiting this sort of error.
Then there’s the dimension of determinism. I think I’m in principle a determinist, although… hm.
somewhat non-sequitoriously (in response to reading https://en.wikipedia.org/wiki/Reductionism), it seems worth noting that it’s very easy to set up a system with emergent behavior which is also fully predictable. So these two aren’t coupled.
from wikipedia > “Sven Erik Jorgensen, an ecologist, lays out both theoretical and practical arguments for a holistic approach in certain areas of science, especially ecology. He argues that many systems are so complex that it will not ever be possible to describe all their details. Drawing an analogy to the Heisenberg uncertainty principle in physics, he argues that many interesting and relevant ecological phenomena cannot be replicated in laboratory conditions, and thus cannot be measured or observed without influencing and changing the system in some way. He also points to the importance of interconnectedness in biological systems. His viewpoint is that science can only progress by outlining what questions are unanswerable and by using models that do not attempt to explain everything in terms of smaller hierarchical levels of organization, but instead model them on the scale of the system itself, taking into account some (but not all) factors from levels both higher and lower in the hierarchy.”
I really like that line of thinking (modeling things on the scale of the system itself, rather than explaining everything via lower levels). It’s related to the distinction between classic sociology and ethnomethodology: things that are highly context-sensitive must be studied in-context.
http://lesswrong.com/lw/qx/timeless_identity/
I found my way to https://www.edge.org/q2008/q08_5.html#smolin
Lee Smolin is a Perimeter Institute physicist who has a quote on the back of RtS
On which note...
I think I can say “enough” for this line of thinking. I think my conclusions can be summarized as “My worldview is ontologically reductionistic but absolutely *not* methodologically reductionistic. (In particular, I reject mental agents as ontologically basic). Kauffman appears to confuse these two kinds of reductionism, which is perhaps somewhat annoying but he may still have much to teach me about methodological reductionism.”
Second pass: why is reductionism important to me?
I’m going to start at the beginning of the book again, with all of this in mind, and see how I feel reading it :P
...re-reading the first few pages, it’s clear that my summary above was very relevant but not enough to get me into smooth sailing through this book! Having personally clarified the distinction between ontological and methodological reductionism, it’s very clear that he is aware of both and is rejecting both as limited. On page 3, which I clearly didn’t read closely enough the first time:
> “It turns out that biological evolution by Darwin’s heritable variation and natural selection cannot be “reduced” to physics alone. It is emergent in two senses. The first is epistemological, meaning that we cannot from physics deduce upwards to the evolution of the biosphere. The second is ontological, concerning what entities are real in the universe. For the reductionist, only particles in motion are ontologically real entities. Everything else is to be explained by different complexities of particles in motion, hence are not real in their own ontological right.”
But okay, let’s back up a bit: why is reductionism important to me? How would I fail if I allowed non-reducible realities?
[Note from 2023: That question ↑ seems to me to be central to how I was able to make progress here. I shifted from defending my view to focusing on what criteria I was looking for in whatever view I might want to adopt. I shifted from “reductionism is correct” to “[my] reductionism doesn’t make these particular errors”.]
When I think about this, it becomes relatively clear that a better description for the whole thing is more like:
many people taking non-reductionistic approaches seem to consistently make certain kinds of thinking errors
the largest class of these errors in the ontological domain could be described as supernaturalism
there are probably others. “phlogiston”, for instance, might fall in this sort of category
there’s also animism and panpsychism and...
so “reductionism” as I’ve understood it is primarily a way of thinking that avoids these errors. But we might then ask “might there be other, better ways of thinking, that avoid these errors and *also* avoid whatever errors reductionism may make?”
and from here I have rederived an appetite for RtS, which purports to present exactly this
Third pass: discovering curiosity as I flip around the pages
(I still don’t quite even know what it *means* for “a couple in love on the banks of the Seine” to be more than whatever it is they are made of. but maybe my concept of “is” is still reductionistic on some level? and maybe there’s another way to conceive of “is”?)
holy shit, I was literally on the other side of this like 20 minutes ago, thinking “I’m gonna start reading again, but (*ugh*) it might require me to fucking redefine terms like ‘real’.” And I was pretty pissed about that. Whereas as I wrote the paragraph above I was feeling pretty creative and curious and open.
But then I read the paragraph at the top of p4, which said “agency has emerged in evolution and cannot be deduced by physics” and I’m like hnnnngh.
I think a lot of this is due to my understandings of simple white-box systems that produce statistical phenomena equivalent to many of the complex systems he’s describing
but I guess there’s a difference between being able to white-box economic statistics and being able to white-box full on economic trends and inventions and so on.
...the page 4-5 spread is still pretty hard to read.
I’m noting though that part of this seems actually less frame-based and more belief-based. he’s making claims about things science/physics can/can’t do, and I apparently believe that it can. but why?
ohhhhh the glimpse of insight, looking at p3 at the bottom
it seems so obvious having glimpsed it. he’s basically saying “we have to model these things as being ‘real’ on their own, because we can’t understand them reductively, and they’re sure as hell not not part of reality”
I feel like I may have disagreements about what in fact is reducible, but I can see what he’s pointing at on an abstract level, sure.
*Malcolm looks at p3 again*
Right! And this shift is actually thus necessary, because we *do* want to understand and explain what is, and thus if there are things we can’t understand in terms of their parts, we need a different way to think about them such that they’re part of what is and therefore part of the domain of what is to be understood
from this perspective, it’s almost like it’s reductionism that looks anti-scientific or something
...one of the anti-reductionist errors (a particular un-favorite of Eliezer’s) might be called the Emergence Fallacy: the tendency to hide explanations under “emergence”.
Kauffman is not doing that, and in fact is pointing out that the presence of emergence means that we should be much less sure we know what’s going on and what will happen.
(another error would be some sort of symbolism → causality failure)
astrology makes this one big time
reductionism solves it by saying “well, there’s no meaningful way for those stars to exert any relevant causal influence on you except via your own beliefs about them”
I think that post-reductionism would solve it in a pretty similar way
[Edit: by “post-reductionism” I mean “some view that doesn’t make the errors reductionism doesn’t make, and also doesn’t make errors reductionism makes”.]
AH! the laws of physics may not allow us to accurately predict what complex systems will do, but they CAN allow us to predict
the types of statistical behaviors complex systems might exhibit
the types of phenomena that will NOT be observed, either in complex or simple systems. and one of these is causality through non-physical means (which need to be specified for this to be a meaningful sentence… and which could of course themselves be wrong or misunderstood (eg dark matter maybe); but complex systems don’t *break* this on their own)
the behavior of complex systems may not be able to be productively explained or predicted on the level of particles, waves, quantum configurations, but it is still happening there
relatedly/equivalently: things may be composed of a nonlinear combination/relationship of their parts, but they are still made of those parts and not other things (I think!?)
yeah, they don’t *break* physical laws;
rather, attempting to apply such a law *to something at that level of complexity* is something equivalent to a type error
this is related to the psychics follow-up post on LW (linked above I think)
let’s start at the beginning again, having had that insight. even though I didn’t make it to p7 again. I think maybe I’ll only keep reading when I stop feeling stuck on these pages. It feels cool that I’m relating to that stuckness excitedly, as a sign of learning/understanding available, etc.
Fourth pass: this makes sense now. I can start to listen without feeling like it’s going to make me crazy
(aside: it is certainly NOT helping matters, from my perspective, that Kauffman is bringing up the science vs religion debate here. it seems very very very clear to me that religion is an incidental part of human history)
[Edit: the opposite now seems clear to me, even though I don’t have any “faith” in any religion or religious creed. It seems like a very important part of understanding how humans are human. See eg The Snake Cult of Consciousness for an exploration of how the exit from the Garden of Eden and other creation myths can be understood as telling the story of humans inventing recursion—the knowledge of one’s own choices in relation to societal norms = the proverbial “knowledge of good and evil”.]
p3: “a central implication of this new worldview is that we are co-creators of a universe, biosphere, and culture of endlessly novel complexity”
I got stuck a bit on this sentence before, but if I think about it from an ethnomethodological stance, it makes sense. but yeah, that sort of does represent a post-reductionist frame, because it’s saying that X and Y which are co-creating each other cannot more simply be understood as “no more than” some underlying phenomenon (of which they would be epiphenomena, if I’ve got my terms right)
p4: “agency has emerged in evolution and cannot be deduced by physics”
I guess the fact that I could program a simple agent doesn’t refute this, if we’re thinking that an agent creating another agent is unremarkable in this respect. because we’re still left with “whence agency”
we could attempt to answer this by making a white-box simulation of an evolutionary environment… and oh, huh. it *does* seem to me that such an environment could be designed where an organism could evolve that would:
a) reasonably be considered an agent
b) be reducible
that being possible would imply that *some* agency is reducible
> “with agency come meaning and value”
hm. well, sure, if “meaning” is “a cause and effect model” (as Simler hazarded) and “value” is “that which is to be optimized for” then that still feels like the above thing could be true
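For what it’s worth, the white-box evolutionary environment I was imagining above can be sketched in a few lines (this is my own toy, not anything from Kauffman): selection plus mutation over bitstrings reliably produces goal-directed-looking climbing toward a target, while every step remains trivially reducible to the mechanism below.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # the "environment"

def fitness(genome):
    # How well this organism matches its environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of 50 bitstring "organisms".
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
initial_best = max(fitness(g) for g in pop)

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:25]                                  # selection
    children = [mutate(random.choice(survivors)) for _ in range(25)]
    pop = survivors + children                            # elitist replacement

final_best = max(fitness(g) for g in pop)
# Elitism guarantees the best genome never gets worse across generations.
assert final_best >= initial_best
```

Of course, whether anything evolved this way counts as an “agent” (point a) is exactly the contested question; point (b), reducibility, is trivially true here since the whole mechanism is on the page.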
[I need to head to bed, even though I still haven’t made it past p7]
wanting to note 2 things:
post-reductionism isn’t about not being able to decompose complex systems into parts, but rather about not being able to put them back together again (although there may also be systems that are entangled in such a way that part-ness is impossible to reasonably define)
we might want to distinguish between “ontologically basic” and something else. “ontologically non-reducible”, maybe
the point being that you still need some sort of model that says *that* the phenomenon is made of other, more basic parts, even if no model can say *how*
this prevents errors like the supernaturalist ones, and maybe also others
like just because “agency” or “life” or something is a thing in one’s ontology, doesn’t mean you can have it for free without some complex system that is yet made of atoms
something about construction feels relevant here
Reflection, 6+ years later
It’s hard to say exactly what impact the experience I documented above had, but it seems likely that even though I didn’t read further in Reinventing the Sacred at the time, the shift from guardedness and defensiveness towards curiosity helped me appreciate post-reductionist insights over the following years, from other books and from just looking at the world.
I no longer consider myself a reductionist, and I don’t think I’m making any particular anti-reductionistic errors. I definitely don’t think of myself as a post-reductionist, although I suppose that label would apply. It seems to me to be inherently counterproductive to consider myself to be anything that ends in “-ist”, except for guitarist and pianist.
I would also no longer say that I think that ” ‘how LW taught me to think’ [...] is roughly-speaking lacking in error.” 😆 I feel slightly embarrassed to have written that, to be honest. Literally “Less Wrong”, not “we have it all figured out”. But there has at times been a “we have it all figured out” vibe around here, even though the official policy is more fallible. My guess is that basically I felt that I had to say that, even in my private notes, in order to feel free to question Eliezer on reductionism.
I’m not entirely sure why I waited 6 years to post this—I had a lot on my plate, and this did take a couple hours to edit out of workflowy bullets… but it seems likely that one of the reasons is that at the time I was probably a bit nervous: reductionism still seemed cool, and I didn’t want to subject myself to a ton of public criticism. Now it feels like there’s a large fraction of LWers and other friends or peers of mine who’ve gone through a similar shift, whether from reading David Chapman or from other life experiences. Not to say you can’t still be cool if you consider yourself a reductionist! But I will say that I experience myself as substantially saner and more integrated on the other side of that shift, so I do recommend it.
Anyway, the juiciest thing about this case study, from my current perspective, is that it’s a brilliant example of something I didn’t come to understand until years later, about trust and distrust. Essentially, Kauffman tripped a few warning signs for me pretty early on that put me in a skeptical stance towards him. Both skeptical towards what he had to say and skeptical towards the idea of it being worth continuing to read the book at all. Guarded, rather than listening.
It’s common for rationalists (and, I think, STEM-folk more broadly) to flinch-dismiss things that pattern-match to something they think they’ve rejected already, without recognizing the pre-post fallacy (the stage beyond yours often at first looks like the stage before yours). I now see this urge to dismiss reading something as a waste of time as… not a very good heuristic. The recognition that it would be a waste of time to debate someone about something? Great. But if reading or watching something intrigues me, I take that as a sign that there’s something there that my system wants to understand.
Anyway, in this case, what allowed me to listen was when I owned my distrust as coming from something I cared about (not making certain anti-reductionistic errors) and then sought to read the book while keeping that in mind. I shifted from making a positive/assertive claim “reductionism is correct” (therefore this piece saying “reductionism is wrong” is wrong) to something more like “well I’m not going to trade reductionism for something worse” (and this is currently failing to prove that it’s not worse, but if it were better I would want it).
This resonates with something Logan wrote about courage & defensiveness:
> “What is defensiveness trying to accomplish?” According to my working model, defensiveness is trying to preserve access to valuable information you can see from your current perspective. Defensiveness knows that sometimes, when you accidentally fall toward another vantage point, you might get stuck there, forgetting what you used to see and know and care about. Defensiveness would like for you to only gain information, and never lose it.
In 2021 I quote-tweeted someone who said “Books should be a canvas on which you paint your reaction” and mentioned my experience of RtS:
> In 2017 I spent 2h re-reading and taking notes on the first 7 pages of a book because I found it so disturbing to my ontology and was trying to integrate it.
> (Rereading it last year I now think it’s way unnecessarily belligerent but basically correct in its content.)
Re-reading it today in 2023 I didn’t even find it particularly belligerent, although it still seems to me like not the best gentle on-ramp for reductionists.
Incidentally: the central point (I think?) of the book
In 2020, I picked up the book again, started once more at the beginning, and—seeing an early reference to Chapter 10: Breaking the Galilean Spell—I decided to jump to the middle of the book. There I came to understand his thesis quite directly, which is basically that while the laws of physics absolutely constrain what is possible on higher levels (ie no supernatural stuff) they can’t predict those higher levels for reasons relating to combinatorics & evolution—there are too many possibilities, each of which depends on unexpected remixes of other possibilities. He writes in chapter 10:
> If a scientific law is (as the physicist Murray Gell-Mann concisely put it) a compact statement, available beforehand, of the regularities of a process, then the evolution of the biosphere is partially beyond scientific law.
Here’s a recent piece of writing from an astrobiologist & theoretical physicist outlining Assembly Theory, which (it seems to me) operationalizes one part of this quite precisely. They can measure, in any sample of matter, whether molecules are present whose complexity is above a threshold that can only be reached via some life-like process. This is a simple way in which not all the explanations point downward—some point upward. Why is there a ton of this molecule here? Because it was useful for something bigger, whether a nematode or a pharmaceutical company. There is no other way to account for this particular molecule existing in such quantities given, again, combinatorics.
> These solid samples of stone, bone, flesh and other forms of matter were dissolved in a solvent and then analysed with a high-resolution mass spectrometer that can identify the structure and properties of molecules. We found that only living systems produce abundant molecules with an assembly index above an experimentally determined value of 15 steps. The cut-off between 13 and 15 is sharp, meaning that molecules made by random processes cannot get beyond 13 steps. We think this is indicative of a phase transition where the physics of evolution and selection must take over from other forms of physics to explain how a molecule was formed.
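The paper’s assembly index is defined over molecular bond graphs, but the same idea can be illustrated with strings (my own toy sketch, not the authors’ algorithm): the index is the minimum number of joining steps needed to build a target, where anything already built may be reused for free.

```python
from itertools import product

def assembly_index(target):
    """Minimum number of join operations to build `target` from its basic
    symbols, where any previously built piece may be reused for free."""
    start = frozenset(target)          # the basic building blocks
    if target in start:                # single-symbol targets need 0 joins
        return 0
    frontier = {start}                 # BFS over sets of built pieces
    steps = 0
    while frontier:
        steps += 1
        next_frontier = set()
        for state in frontier:
            for a, b in product(state, repeat=2):
                joined = a + b
                if joined == target:
                    return steps
                # Only substrings of the target can appear in an optimal
                # construction, so prune everything else.
                if joined in target and joined not in state:
                    next_frontier.add(state | {joined})
        frontier = next_frontier

# Reuse makes repetition cheap: "AAAA" takes only 2 joins (A+A, then AA+AA).
assert assembly_index("AAAA") == 2
assert assembly_index("ABAB") == 2
```

The reuse of already-built pieces is why copying processes (life, factories) rack up high indices so cheaply while random joining can’t; the brute-force search above is exponential, which is itself a small taste of the combinatorics argument.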
I’ve become increasingly fascinated with applications of universal law to biology (what can we say about life that we could confidently know would be true on all planets?) which often comes from physicists who have also studied complexity theory and emergence (another great example being Scale by Geoffrey West, which looks at geometry and scaling laws and finds fascinating invariances). But it’s still just constraints; it can’t say what will happen, only what won’t.
I feel like in the years since 2017, I’ve had some key insights about the evolutionary qualities of learning (and what you could call belief-updating) and I’ve come to appreciate what appear to be some of its invariances. One of them is that it’s both necessary to surface any defensiveness/distrust AND to find a way to simultaneously be curious, in order to have deep paradigm-shifting learning, as I happened to do here without fully understanding that. If you don’t surface any existing defensiveness/distrust, then even if you do learn, the new learning will be compartmentalized from the old one and at odds with it (though this can maybe be integrated later). If you surface the distrust but don’t find a way to honor it, you won’t be able to listen at all. Something about this seems deeply true to me in both a logical sense and in that it matches a lot of my experiences quite precisely, but I’d love to be able to formalize and verify it even further. This is building on, among other things, Unlocking the Emotional Brain. I’ve written a bit more about my perspective on it here.
Kauffman says the reason “evolution” displaced “God” is that it was an idea worthy of displacing God. The more I’ve come to understand about evolution and how it applies on pre-biological scales and memetic scales (David Deutsch is a great source there, who makes a solid rationalist case for post-reductionism), the more I’ve come to appreciate that evolution is worthy of displacing God not just in terms of explanatory power but in terms of potential for sacred devotion as well, without naive or ignorant faith!
Comments?
I’d love to hear what’s on your mind. Some prompts you could respond to:
have you ever had a deliberate Crisis-of-Faith-type experience like this? with a book or something else?
if you used to have a reductionist viewpoint, and that shifted, what did it for you? how did you integrate the wisdom of reductionism into your new view?
if you consider yourself a reductionist, how do you relate to post-reductionistic models? or what do you make of this post?
Reductionist: “Apples are made of atoms.”
Post-reductionist: “I agree. However, it often makes sense to think about ‘apples’ instead of ‘huge sets of atoms’, when we are saying things such as ‘if you plant an apple seed in the ground, an apple tree will grow out of it’. Trying to express this in terms of individual atoms would require insane computational capacity. The patterns at the high level repeat sufficiently regularly to make ‘apple’ a useful thing to talk about. Most of the time, it allows us to make predictions about what happens at the high level, without mentioning the atoms.”
Reductionist: “Yes, I know. I think this is called ‘a map and the territory’; the territory is made of atoms, but apples are a useful concept.”
Post-reductionist: “From your perspective, apples are merely a useful fiction. From my perspective, atoms are real, but apples are real too.”
Reductionist: “Let’s taboo ‘real’. I already admitted that apples are a useful abstraction in everyday situations. There are also situations where the abstraction would break. For example, if we asked ‘how many DNA bases can we change before the apple stops being an apple?’, there is no exact answer, because the boundary of the concept is fuzzy; the objects currently existing in our world may be easy to classify, but it is possible to create new objects that would be controversial. By the way, talking about the ‘apple DNA’ already brings the atomic level back into the debate.

“On the other hand, a hypothetical superintelligent being with insane computing capacity (it would probably need to exist in a different universe) might be able to describe the apple in terms of atoms, something like ‘an apple is a set of atoms that has this super-complicated mathematical property’; it might also be able to describe in terms of atoms how apple trees grow. If it did many computations regarding apples, it might develop an abstraction for ‘apple’, similar to how humans have an abstraction for ‘prime numbers’. However, we are not such beings, and I am definitely not pretending to be one. Just saying that it is possible in principle.”
Post-reductionist: …well, I don’t know what to write here to pass the ITT.
I can’t speak for the post-reductionist view in general. But I can name one angle:
Atoms aren’t any more real than apples. What you’re observing is that in theory the map using atoms can derive apples, but not the other way around. Which is to say, the world you build out of atoms (plus other stuff) is a strictly richer ontology — in theory.
But in a subtle way, even that claim about richer ontology is false.
In an important way, atoms are made of apples (plus other stuff). We use metaphors to extend our intuitions about everyday objects in order to think about… something. It’s not that atoms are really there. It’s that we use this abstraction of “atom” in order to orient to a quirky thing reality does. And we build that abstraction out of our embodied interactions with things like apples.
Said more simply: infants & toddlers don’t encounter atoms. They encounter apples. That’s the kind of thing they use to build maps of atoms later on.
This is why quantum mechanics comes across as “weird”. Reality doesn’t actually behave like objects that are interacting when you look closely at it. That’s more like an interface human minds construct. The interface breaks down upon inspection, kind of like how icons on a screen dissolve if you look at them through a microscope.
(This confusion is embedded in our language so it’s hard to say this point about mental interfaces accurately. E.g., what are “human minds” if objects aren’t actually in the territory? It’s a fake question but I’ve never found a way to dispel it with accurate language. Language assumes thing-ness in order to grammar. It’s like we can only ever talk about maps, where “territory” is a map of… something.)
So really, we already have examples of constructing atoms out of apples (and other things). That’s how we’re able to talk about atoms in the first place! It’s actually the inverse that’s maybe impossible in practice.
(…as you point out! “…a hypothetical superintelligent being with an insane computing capacity (probably would need to exist in a different universe) might be able to describe the apple in the terms of atoms….”)
I think the core issue here is that the standard reductionist view is that maps can be accurate. Whereas my understanding of post-reductionism basically says that “accurate” is a type error arising from a faulty assumption. You can compare two maps and notice if they’re consistent. And you can tell whether using a map helps you navigate. But things get very confusing at a basic philosophical level when you start talking about whether a map accurately describes the territory.
What are the new predictions that post-reductionism makes?
I don’t know if it does. It’s not that kind of shift AFAICT. It strikes me as more like the shift from epicycles to heliocentrism. If I recall right, the point at the time wasn’t that heliocentrism made better predictions; it might have made exactly the same predictions at first. The real impact was something more like the mythic reframe on humanity’s role in the cosmos. It just turned out to generalize better too.
Post-reductionism (as I understand it) is an invitation to not be locked in the paradigm of reductionism. To view reductionism as a tool instead of as a truth. This invites wider perceptions which might, in turn, result in different predictions. Hard to say. But the mythic impact on humanity is still potentially quite large: the current reductionist model lends itself to nihilism, but reality might turn out to be vastly larger than strict reductionism (or maybe any fixed paradigm) can fully handle.
This sounds to me like: “I believe the same things as you do, but on top of that I also believe that you are wrong (but I am not).”
Which in turn sounds like: “I do not want to be associated with you, regardless of how much our specific beliefs match”.
Yeah, that’s similar to my impression as well. Frankly, I think that “real” is a terribly confusing category, to which we owe dozens of fruitless philosophical arguments. And as soon as we switch to the much more useful map-territory distinction, they mostly dissolve.
I skimmed this a bit, but found it a little hard to follow your notes without knowing what those first seven pages of the book were actually saying. I felt that in order to understand your thought process, I would first need to read your notes and reconstruct the argument the book was making from your responses to it, and then read the post a second time once I had some sense of what it was a reaction to. But that seems pretty error-prone; it would help a lot to have a summary of those seven pages before your notes.
I’m under the impression that the difference between reductionism and post-reductionism (or rationalism and post-rationalism, for that matter) is not in the realm of ideas but more like political affiliations and aesthetics.
The things you’ve said about the usefulness of multi-level models aren’t something reductionists disagree with. There is no crux here, so I don’t really see an opportunity for a genuine crisis of faith. Could you explicitly state a couple of object-level beliefs you had previously, when you identified as a reductionist, and the corresponding ones that you have now as a post-reductionist, so that I could see the difference?
The procedure where you keep steelmanning and sanewashing someone’s argument to the point where it starts making sense is a bit questionable. I can see potential use cases, like when you exist in adversarial epistemic conditions with highly polarized social bubbles, and every argument of the other side arrives to you distorted to be weak and crazy. Then doing your best to find a reasonable-sounding version of the argument, and trying to check whether it was the original form from someone who actually holds this view, is a good idea. But what you seem to have done isn’t that.
The way I see it, you’ve distorted the original ontological argument, turning it into a methodological one. The new version of the argument makes sense; it can be a valid critique of some behaviour patterns inside the community. But this isn’t what the author of the argument originally meant! You try to find some common ground, and when you can’t, you keep misrepresenting his views to yourself until you can find common ground with this misrepresented version. I guess as a result, you are reading a much more interesting and nuanced book than the author wrote. And most credit for that should go to you, not the author.
I am quite sure I am missing your point, but just in case...
You might be vaguely gesturing at weak emergence? (Strong emergence is not a thing.) When you are not Laplace’s demon, but a bounded embedded agent trying to make accurate predictions, weak emergence can be a useful way to abstract your observations to simplify that job. If so, emergence is fully compatible with reductionism.