A Problem With Patternism
Patternism is the belief that the thing that makes ‘you’ ‘you’ can be described as a simple pattern. When there is a second being with the same mental pattern as you, that being is also you. This comes up mostly in the debate around mind-uploading: if you make a digital copy of yourself, that copy will be just as much you as the current you is.
Question: How do we quantify this pattern?
And I don’t mean how we encode epigenetic information or a human mind into a string of binary numbers (even though that is a whole issue in and of itself); I will assume in this post that that’s all perfectly doable. I mean: how will we quantify when something is no longer part of the same general pattern?
Sure, when we have a string of binary numbers with a couple of ones switched to zeros, we can just tally up the errors and determine what percentage differs from the copy. But what if there is stuff missing or added? What if a big block of code is duplicated in the copy? Should we count that as one error or as dozens? What if big chunks of code appear in both copies but in different places? Or in a different order? Or interspersed into the rest of the code?
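This ambiguity is essentially the gap between Hamming distance (tally positionwise mismatches) and edit distance (count insertions, deletions, and substitutions). A minimal sketch, with made-up toy strings, showing how badly the two can disagree about “how different” two patterns are:

```python
# A toy illustration of why "just tally the errors" breaks down once
# insertions and deletions are allowed. The strings are invented examples.

def hamming(a, b):
    """Positionwise mismatch count; only defined for equal-length strings."""
    if len(a) != len(b):
        raise ValueError("Hamming distance needs equal lengths")
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

original = "1011001"
shifted  = "0101100"   # the same pattern shifted one position

print(hamming(original, shifted))      # 5 mismatches positionwise
print(levenshtein(original, shifted))  # only 2 edits apart
```

Neither metric is obviously the “right” one, which is part of the point: the choice of distance function is itself a judgment call.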
But even the first example has problems: not all ones and zeros are created equal. We care a lot about whether certain features of our being are switched ‘on’ or ‘off’, but not so much for others. Do we have to compare someone’s personal desires? How do we quantify that? Or should we quantify what most people would deem an important change? Why? How? I fear that there is no real way to do this objectively, and since there will always be small mutations/errors in copying, you can never know which of the other “you’s” is the most like you. Which I think is a pretty heavy blow for patternism.
EDIT: Apparently that last sentence caused some confusion so let me clarify. I’m not saying it’s a blow for the truthfulness of the theory, since that is just a matter of definition (and I’m not interested in disputing definitions). I’m saying it’s a blow for the usefulness of the theory since it doesn’t help us generate new insights by making new and accurate predictions.
What information is “simple” conveying in this description? Is it still patternism if I believe it’s a very complicated and not-currently-measurable pattern that defines different … things (which I think are _also_ just a bigger category of patterns of quantum states in areas of spacetime) which call themselves and each other “me”?
Sure, and you’ve already accepted a HUGE amount of complexity in the definition of “being”.
Note that identity is _not_ binary. It’s a question of continuity and overlap of patterns. There will (I hope) be an agent that exists tomorrow, which is more similar and connected to today’s “me” than the agent which (again, I hope) exists 25 years from now and remembers (or not) writing this post.
As you say, the distance function between two encodings is currently unknown—it’s almost certainly not strictly pythagorean—some dimensions/bit-clusters are more important than others, and some will have nonlinear impact on identity. I don’t see why that makes the concept unworkable.
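To make the “some dimensions matter more” point concrete, here is a toy sketch (all vectors and weights are invented for illustration): a weighted distance where changing the weights changes which copy counts as “closest” to the original.

```python
import math

def weighted_distance(a, b, weights):
    """Euclidean distance with a per-dimension importance weight."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

me     = [1.0, 0.0, 0.5]   # hypothetical encoded traits
copy_a = [1.0, 1.0, 0.5]   # differs from me only in dimension 1
copy_b = [0.0, 0.0, 0.5]   # differs from me only in dimension 0

cares_about_dim0 = [1.0, 0.1, 1.0]   # weighting that prizes dimension 0
cares_about_dim1 = [0.1, 1.0, 1.0]   # weighting that prizes dimension 1

# Under one weighting copy_a is the better match; under the other, copy_b is.
print(weighted_distance(me, copy_a, cares_about_dim0) <
      weighted_distance(me, copy_b, cares_about_dim0))  # True
print(weighted_distance(me, copy_a, cares_about_dim1) <
      weighted_distance(me, copy_b, cares_about_dim1))  # False
```

The distance is perfectly well-defined once a weighting is given; the dispute in this thread is about where the weighting comes from.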
I only added simple to indicate that nothing else is going on; it’s not a pattern plus a soul (or something else), it’s only a pattern. Everyone agrees that the pattern will be hugely complex (for humans).
And yes, I already mentioned different versions of you in the comments but didn’t want to overcomplicate things unnecessarily in this post; but one of the main reasons to be interested in this is the relation between your past and future selves.
I’m not just saying that it’s unknown, I’m saying that it’s subjective what bits are important! You can’t define importance objectively, so we need to either rework or throw away patternism.
Oh, cool—yes, that’s an incredibly important insight. At this level, “identity” is not only not a binary choice, it’s not even consistent. Identity-for-purpose, with the result being a distance from 0 to 1, is the way we should think of it. Identity for legal purposes can use different distance functions than identity for dating, or for trust in factual claims.
I think that’s orthogonal to patternism (unless I misunderstand—is it not just another word for physicalism?)
Let’s start by setting aside the whole mind-uploading problem, and look at something more prosaic: what makes “me” at noon today the same as “me” at 3:00 am one year ago? In fact, let’s set aside what makes this true from my perspective; what makes you think these two bags of chemicals are the “same person”? When you see your mother, and then see her again later on, why do you think of her as the same person?
This is basically the same problem as Pointing to a Flower, except we’ve dragged in a bunch of new intuitions by making it about humans instead of flowers. (It’s also the same question Yudkowsky uses in his post on cryonics in the sequences, although I can’t find a link at the moment.)
The answer from the flower post does a fine job of saying what makes me the same organism as before: draw a boundary around my body in spacetime, it’s a local minimum in terms of summary data required to make predictions far away. Nontrivial, but quantifiable.
But for things like mind uploading, we want to go further than that. Sure, we could simulate my entire body, but it seems like what makes “me” doesn’t require all that. After all, I’m still me if I lose all my limbs and torso and get attached to some sci-fi life support machine. “Me” apparently does not just mean my body. In practice, I think humans use “me”, “you”, etc with several different referents depending on context, and the body is one of them. The referent relevant for mind-uploading purposes is, presumably, the mind—whatever that means.
There’s a few more steps before I’m ready to tackle the referent of “my mind”, but I’m pretty confident that the first step is basically the same as for the flower, and I’m also pretty confident that it is crisply quantifiable.
As to the connection with pattern matching, I’m pretty sure that the flower-approach is roughly equivalent to what a Bayesian learner would learn by looking for patterns in data. But that’s a post for another time.
You may be thinking of “Timeless Identity”. Best wishes, the Less Wrong Reference Desk
Thanks for the message, looking forward to that post in the future. My very limited knowledge of this subject tells me you’re wrong, but I’ll be reading your sequence on abstraction before I try to argue with you. I’ll probably change my mind, but I’ll let you know if at the end of it I still disagree.
Why is it a problem with patternism that you can’t quantify the difference?
I can definitely see it being a problem for some hypothetical belief called “quantifiable patternism.”
Well quantifying your beliefs is pretty important if you want an accurate map of the territory.
If someone said to you that the counterargument that you can’t quantify an immortal soul isn’t a problem for the belief in an immortal soul but only for a hypothetical belief called “having a quantifiable immortal soul”, you’d rightly be pretty upset.
No I wouldn’t?
I don’t think I would ever ask someone to quantify an immortal soul; that would be a bizarre and uncharitable thing to do—there are plenty of useful categories that don’t break down into real numbers.
Quantified theories are preferable to non-quantified theories because you can test and thus falsify them. I highly recommend Fake Beliefs if you want to see what I’m getting at.
Let’s say I had a method for quantifying the similarity between identity patterns… What’s the test I then perform to validate that method?
The whole point of this post was that you can’t objectively quantify similarity between patterns, that doing that is inherently a subjective judgement call.
Subjective is distinct from un-quantifiable. There are plenty of quantifiable, subjective things (say, value of an object to a potential buyer).
I agree completely (but patternism doesn’t do that either)
Ohhh, so now you’re talking about OBJECTIVELY quantifying patterns. Now we’re getting somewhere.
Is there a reason that you think ways of measuring identity should be objective?
I was already talking about objectivity in my post:
It would be preferable to measure objectively, because, for one, different cultures and people can converge on the same ideas thus promoting accurate cooperation.
I agree that it would be useful to have an objective measure of identity, as it would be useful to have an objective measure of morality… But alas I fear both of those fall prey to the is-ought dilemma
I was with you right up until this conclusion. Seems like a non-sequitur.
Perhaps you are assuming that there has to be a single, objective, context-independent way to say whether two people are the same. Or exactly how different they are. And since patternism doesn’t obviously admit that, you conclude it can’t be right.
But to me, it seems like the common pattern for just about everything in life is that categories are blurry. We start with a naive, folk view that assumes things are crisp and clear, and as we learn more we realize that categories are not platonic, but represent clusters in thingspace.
If a view fits this common pattern, that seems to me like a point in its favor, not a point against. In other words, I’m a bit skeptical of any philosophical account that seems too platonic. Unless you’re dealing with very simple mathematical structures, there are almost always rough edges. And philosophical views should be realistic about this.
As I already said in another comment: theories don’t have to be false to be bad. A theory can be bad because it doesn’t generate new insights or give us new predictions, without having to be false: https://www.lesswrong.com/posts/q5beZEfdoNsjL6TWm/a-problem-with-patternism?commentId=PAZnG6ZFFxGks99eZ
If all you mean to be saying is that it’s incomplete, then I don’t disagree. But you described throwing it away, which seems to me like not what you’d want to do with our best theory so far. Rather you’d want to build on, refine, or expand it.
Unless you think there’s a better foundation to build on?
I’m at a loss as to how you could build on it, honestly. This is gonna sound pathetic, but I’m willing to give up on trying to find some impartial way of measuring this. I will still be reading arguments that claim it can be done, and maybe one of those will change my mind, but for now the superiority of subjective measuring is the viewpoint I’ll accept. (Am I going to LW-hell for this? ;)
I didn’t follow this. You’re saying for now you’re leaning towards a subjective measuring viewpoint? Which one?
Depending on what you mean by “impartial”, I might agree that that’s the right move. But I think a good theory might end up looking more like special relativity, where time, speed, and simultaneity are observer-dependent (rather than universal), but in a well-defined way that we can speak precisely about.
I assume personal identity will be a little more complicated than that, since minds are more complicated than beams of light. But just wanted to highlight that as an example where we went from believing in a universal to one that was relative, but didn’t have to totally throw up our hands and declare it all meaningless.
FWIW, if I were to spend some time on it, I’d maybe start by thinking through all the different ways that we use personal identity. Like, how the concept interacts with things. For example, partly it’s about what I anticipate experiencing next. Partly it’s about which beings’ future experiences I value. Partly it’s about how similar that entity is to me. Partly it’s about how much I can control what happens to that future entity. Partly it’s about what that entity’s memories will be. Etc, etc.
Just keep making the list, and then analyze various scenarios and thought experiments and think through how each of the different forms of personal identity applies. Which are relevant, and which are most important?
Then maybe you have a big long list of attributes of identity, and a big long list of how decision-relevant they are for various scenarios. And then maybe you can do dimensionality reduction and cluster them into a few meaningful categories that are individually amenable to quantification (similar to how the Big 5 personality system was developed).
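As a very crude sketch of that last step (all attribute names, scenario scores, and the 0.7 correlation threshold are invented for illustration), one could score each attribute’s decision-relevance across scenarios and then greedily group attributes that co-vary:

```python
# Toy clustering of made-up identity attributes. Each row holds an
# attribute's decision-relevance in four hypothetical scenarios.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

attributes = {
    "anticipated_experience":   [0.9, 0.8, 0.1, 0.2],
    "valued_future_welfare":    [0.8, 0.9, 0.2, 0.1],
    "memory_continuity":        [0.1, 0.2, 0.9, 0.8],
    "psychological_similarity": [0.2, 0.1, 0.8, 0.9],
}

# Greedy grouping: join a cluster if strongly correlated with its first member.
clusters = []
for name in attributes:
    for cluster in clusters:
        if pearson(attributes[name], attributes[cluster[0]]) > 0.7:
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # two clusters of two attributes each
```

Real dimensionality reduction (factor analysis, PCA) would do this far more carefully, but the shape of the result is the same: a few clusters standing in for many raw attributes.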
That doesn’t sound so hard, does it? ;-)
“What subjective system?”
Some combination of an ethical system and a set of measurable attributes that I care about. I have nothing concrete in mind.
“Which are relevant, and which are most important?”
That’s precisely the subjective part.
They could be objective, given a context. Now the choice of context may be a matter of taste or preference. But given a context that we want to ask questions about, we might be able to get objective answers. (E.g. will this hypothetical future person think like me?)
But agree that some subjectivity is involved somewhere in the process.
I assumed that patternism was intended as an answer to the question: how does one person retain their identity despite changes in material composition? (Or: how do we get a materialistic theory of personal identity?)
I would answer: imperfectly, partially, and non-platonically. :-)
EDIT: I think I may have missed your point though. Because I’m not sure which part of my comment you’re responding to.
I don’t think we can quantify this pattern.
I see a solution to your problem but it may include a form of torture for the “other you”.
Let’s say you created a simulation of you, with some kind of feeling sliders you can modify (humour: 77%, love: 56%, for example).
You, as a human, have a past, and I will assume that you took that fact into account when creating the simulation of you, meaning your other “you” has memories etc.; it isn’t simply a you from the present. You could also simulate the 16-year-old “you” and see it behaving like you did at 16, going to class, and not having the memory of something you did when you were 23.
I consider that everyone has lived through key events that forever changed the way he or she sees the world (breakups, accidents, losses, wounds, crimes, weddings, deals…), so all you need to do is take your “you” and put it in situations like the ones you lived through for real, to see if it behaves the way you did in real life. If your simulation reacts and behaves in these events the same way you did, you could consider it an excellent simulation of yourself and, if it gets hurt on the screen, you could feel bad for it.
To summarize, I think some things can’t be quantified using numbers or algorithms, and testing can be a solution.
So a team of scientists starts simulating your colleague Jeff. Jeff is a self-aggrandizing narcissist with zero self-awareness. At one point the ten scientists each have a version that they claim is perfect. They can’t agree with each other, so they ask Jeff and his closest friends, family, and co-workers to have a look at them. None of them agree either. And Jeff has zero self-awareness, so he’s not reliable either. How do we find out which scientist built the real Jeff? And if no scientist succeeded, how do we find out which simulation came closest?
My answer: all and none. “Real” has no place in the discussion, only how close a given instance is to the reference for some purpose. For some purposes (as a moral target whose happiness we value), they’re all Jeff. For some (Jeff’s uncle died: to which of them will a court award the inheritance?), none are.
I don’t see how this line of thinking is relevant to whether there’s more to identity than a theoretically-observable pattern.
So this is the philosophy of pragmatism, where you let the situation decide what metrics to use. This comment (and the post as well) are not only a critique of patternism, but also indirectly a (small) critique of realism and intellectualism. I won’t wade into the broader discussion of the philosophy of science, but needless to say I support pragmatism too.
I think that in this case, scientists would have to confront Jeff with situations provoking big reactions from him and then do the same thing with the simulations to see which one is the real Jeff. For some people, this means going beyond the limits of ethical action to simulate Jeff, considering you have to make him react to stressful situations like near-death or pain without him being aware of it, just enduring it and reacting in his own way.
Basically, in this case all you need to simulate someone is past events to refer to. It is true this method has limits, and works less well in cases like this, where past facts are blurred.
I also assume the simulations were done by machines and not by humans, simulating thousands of Jeffs at the same time with different “settings” and comparing them to the real world.
The fact that Jeff isn’t reliable and has zero self-awareness is itself data the machine could use to create the simulation.
I recognize you pointed out a problem with my method: while it may, in theory, work for consenting people who can enter the basic data themselves (in this case, memories of past facts), it is difficult to apply to people like Jeff without including a form of torture, this time in real life.
The question of dead people is an interesting one too, this method being difficult to apply to the deceased, who, by their nature, can no longer provide memories. Still working on it though.
So you start simulating and torturing both real and sim Jeff. You somehow manage to make the testing facility for real Jeff and sim Jeff exactly the same down to the individual quarks. You also manage to capture every facet of real Jeff’s actions down to the individual quarks. Obviously there are certain things you can never test, like how he would react to being brought back from decapitation, or how he would react to seven ducks materializing in thin air before him, but let’s ignore that. You had a streak where they reacted the same for 2.000.000 situations in a row, but then they reacted microscopically differently to a phenomenon. Now you’ve been having a year-long streak of 900.000.000.000 identical reactions (measured perfectly thanks to God-like powers). Do you stop? If not… when?
I had fun writing this, but it’s kinda missing the point. I’m interested in an objective way of measuring differences in patterns, not a way to exactly copy one person.
Maybe exactly copying a Jeff is impossible; in fact, I don’t think we know all the personality aspects someone can have, so for now this task sounds difficult.
But in my opinion, if “Sim-Jeff” reacted the same way in 2 000 000 situations but with microscopic differences, it is pointless to keep that detail in mind. As a human being, you can’t perceive those little behavioral differences, so it is useless to give them importance.
For example: in simulation n°1 320 678, SimJeff is scared and takes a 1-foot step back; in simulation n°1 320 679, SimJeff is scared and takes a 1.000306-foot step back. I think we can consider it the same simulation outcome.
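That “close enough for a human” criterion can at least be made explicit as a tolerance; the 0.01-foot threshold below is invented for illustration:

```python
import math

step_real = 1.0        # feet stepped back in one simulation
step_sim  = 1.000306   # feet stepped back in the next

# Treat differences smaller than a made-up perceptual threshold
# as "the same outcome".
same_outcome = math.isclose(step_real, step_sim, abs_tol=0.01)
print(same_outcome)  # True
```

Of course, choosing the tolerance is itself exactly the kind of subjective judgment call the original post worries about.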
----
As long as you enter solid basic data into the computer, like “I’m scared, I take a 1-foot step back and faint”, the computer tries to simulate the same situation (matching it to your memories as closely as it can), and once it gets so close that a human cannot see the difference between the SimYou and you, it can consider the simulation realistic.
---
It reminds me of an experiment. A drawing of a straight line is shown to someone, and he’s told to draw the exact same one; then his drawing is shown to someone else with the same instructions. Everyone tries his best to replicate the line from the person before him, but each time they can’t avoid diverging a bit, and by the 100th person, the line is all twisted and crooked, even separated in two.
Maybe that is what we are: the 100th simulation of the original us, who was way different from who we are now, because every simulation diverges a bit from the previous one. But we’ll hardly ever know for sure.
You’re writing that you can never know which pattern is objectively most like you, because there is no objective way of comparing patterns, and this is a problem for patternism.
But you don’t need to have an objective way of comparing patterns in order to be a pattern, so this isn’t a problem for patternism after all.
I agree. The original title was “The problem with patternism” but I realized this problem doesn’t disprove it, it just makes it useless. If a theory doesn’t generate new insights it’s useless. Most of the time this will be because the theory is wrong, but sometimes a theory can be true and useless at the same time. So unless johnswentworth convinces me otherwise, I think I will throw away patternism.
I can know that heat comes from particles moving fast w/o having a full understanding of thermodynamics. It seems like maybe we’re in a similar situation with patternism.
I think all the questions you asked in the post are legitimate. But the fact that they can be asked doesn’t seem like much of a critique of the basic idea. (At least, of the version of patternism that I have in my head.)
These questions don’t make me at all tempted to go back to saying that I am this particular collection of atoms, or something like that. But I am happy to admit that patternism is an incomplete explanation of personal identity.
The fact that personal identity remains not totally solved should not be too controversial of an idea around here (see for example #s 10 and 12 in Wei Dai’s list of open problems in rationality).
I disagree. A middle-school understanding of heat still allows for some generation of insights/predictions, just with a big margin of error. Patternism doesn’t do that; it’s useless. In other words: I would much rather have a middle-school understanding of heat than the theory of patternism.
Hmm, are you thinking of the theory of patternism as something other than the claim that 1) it’s your pattern of atoms (and how they interact with each other and the rest of the world) that’s relevant for determining your behavior, including your internal experience, and 2) that there’s no metaphysical personal identity other than what arises from the relationship of these patterns to each other over time?
It seems to me that this predicts that we won’t in the future discover some way to determine which of two copies of a person is “the original”.
If you claim instead that this is not a prediction, but just a restatement of patternism, then maybe that’s a valid criticism—that patternism is not a theory but just a claim. But then I wouldn’t want to throw that claim away! Because I expect it to be true.
Right, so why would I want to throw away an idea that’s true? Even if it’s useless, isn’t having true beliefs good? No, not necessarily. You have a limited mental capacity to hold and manipulate ideas. If I tell you that shmooplys are pink furry meteors, and the dictionary confirms it, that is a true belief AND you can make predictions about what dictionaries will say. But you spend time learning about a concept that you can never actually use! That’s time and mental capacity that you could’ve spent elsewhere. And I would argue shmooplys might even be more useful, because you can actually make a shmooply, and it’s not an abstract philosophical concept. Some concepts don’t deserve to be learned! My life and mind are finite resources, so I’d better learn the concepts I will get the most use out of.
Ah, maybe I misunderstood what you meant when you said you would throw it away. I thought maybe you meant you’d discard it in favor of some other preferred theory. Or in favor of whatever you believed in before you learned about patternism.
And depending on what those theories are, that seemed like it might be a bad move, from my perspective.
But if instead your attitude is more like picking up a book, only to find out the author only got half way through writing it, and you’re going to set it aside until it’s done so you can read the whole story, then it seems to me like there’s nothing wrong with that.
Well, that seems like good life advice in general; only in this case I have no faith the books will ever actually get finished, so I’m moving on to other books. So patternism is like George R. R. Martin (RIP ‘A Dream of Spring’; what is dead may never die).
I can think of two consequences of patternism—firstly, that consciousness doesn’t depend on any specific substance, only on the pattern. This is very important when judging the consciousness of mind uploads, AIs, robots or aliens.
Secondly, if we’re the pattern, we survive mind upload, which seems very important too.
The time to worry if someone is exactly like you in every way, is when you run into a person that looks exactly like you, and not before.
I would have thought the time to worry was when your pattern changes.
Yes, in the text I imply that we are talking about duplicates that co-exist because I didn’t want to overcomplicate things in an already complicated post, but indeed the versions of your past and future self are almost certainly imperfect copies of you. And I’m not just talking about memory, I’m talking about fundamental structure changes as well. “How much you is a different you?” is a very important question to answer.
If patternism isn’t a good answer, why talk about it?
I’m only talking about it to show it isn’t a good answer. But obviously change still happens[citation needed] so a way of quantifying it would be nice.
This seems related to the post on Pointing to a Flower. Do you see the two lines of thinking/inquiry as complementary?
Haven’t read it yet, I’ll get back to you on that one
EDIT: Just read it…
Sorta? I haven’t read his entire sequence yet so maybe I’m misinterpreting him, but it looks like he is trying to save objectivity, whereas I have already given up and accepted that these lines we draw will always be subjective. But maybe I’m wrong, maybe there is a way. I would be interested to see what he would think of this post or patternism in general.
EDIT 2: I asked him for a comment, you can read it here: https://www.lesswrong.com/posts/q5beZEfdoNsjL6TWm/a-problem-with-patternism?commentId=QiKffQjLgv9mSmwML#MW3R2SutxgzaZk2Rq
That’s a very ironic statement for someone named “Pattern”
I hadn’t heard of “patternism”*, by name, before this, though I have encountered things like that idea before.
I also took the step of trying to apply the argument I saw in this post to my life. (Are other people me?)
Depending on what you think makes you you, my prior comment may not be correct. It may be overly extreme. What I was gesturing at was that it seems the issue of quantifying whether “two things are the same” is only an issue once you have two things that you already know are similar in some way.
This doesn’t sound like a blow for patternism if that difference is 0 percent. (How to deal with divergence between identical copies as a result of correct and normal operations seems like a different problem, at least within the normal frameworks of change for people.) Beyond that, if both are operational and agree that the difference is irrelevant, then even if they’re wrong why would it matter (if they never disagree)?
*If there are any resources you’d like to share on the subject, I am interested in taking a look.
The point is that it can never be zero percent. If I copy something there will always be random errors, and even when I walk around outside I get bombarded with radiation, tiny particles and other stuff that ever so slightly change my pattern. Thus having a perfect copy is impossible in practice.
It matters for the theory of patternism, which I reject because of the argument in this post. And how would you even know if they’re “wrong”? The whole point of this post is that there is no objective way to say that (with patternism at least). But even if they could be “wrong” then it still matters because people can be mistaken. We can both think that the difference is irrelevant but later make plans which fail because the other being was more different than you thought.
https://wiki.opencog.org/w/Patternism
https://www.gwern.net/Differences
https://www.amazon.com/Hidden-Pattern-Patternist-Philosophy-Mind/dp/1581129890