I’m a metaphysical and afterlife researcher who, needless to say, requires an abundance of rationality to work effectively in such an epistemically unstable field.
I don’t think I’ve ever experienced this. I’d actually say I could be described by the blue graph. The more I really, really care about something, the more I want to do absolutely nothing but it, especially if I care about it for bigger reasons than, say, because it’s a lot of fun at this moment. Sometimes there comes a point where continuing to optimize for said objective feels like it’s bringing diminishing returns, so I call the project sufficiently complete to my liking. Other times, it never stops feeling worth the effort, or the mission is simply too important not to keep optimizing perpetually, asymptotically. So I keep moving forward, forever. I know for sure that the work I consider the most important thing I’ll ever do is also something I’ll never stop obsessing over for a minute. And it doesn’t become onerous; it feels awesome to have set oneself on a trajectory that demands such fixation. So I’m actually a little puzzled about what the upshot is supposed to be here.
I like this proposal. In light of the issues raised in this post, it’s important for people to make a habit of explaining their own criteria for “truth” instead of leaving what they are talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I am interested in arriving at. Basically, we shouldn’t talk about the world as though we have actual means of knowing things about it with probability 1.
Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as “that which pays the most rent in anticipated experience”; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it is the best way we have of approximating truth. So I’m constantly looking really hard at the evidence I examine and asking myself: am I convinced of this for the right reasons? What would have to happen to unconvince me? How can I take a detached stance toward this belief, if ever there comes a time when I may no longer want it? So insofar as my truth-seeking could be called motivated, I aim to constrain that motivation to adherence to the scientific method alone, which is something I am unashamed to simply acknowledge.
Unfortunately, I haven’t kept a neat record of where exactly each case is published, so I asked my industry connections and was directed to the following article. Having reviewed it, it would of course be presumptuous of me to say I endorse everything stated therein, since I have not read the primary source for every case described. But those sources are referenced at the bottom, many with links. It should suffice as a compilation of information pertaining to your question, and you can judge what meets your standards.
https://psi-encyclopedia.spr.ac.uk/articles/reincarnation-cases-records-made-verifications
Disclaimer: I’m not someone who personally investigates cases. What you’ve raised has actually been a massive problem for researchers since the beginning, and has little to do with the internet; Stevenson himself often learned of his cases many years after they were in their strongest phase, and sometimes after connections had already been made to a possible previous identity. In general, the earlier a researcher can get on a case and in contact with the subject, the better. As a result, cases in which important statements given by the subject are documented, and corroborated by a researcher, before any attempt at verification has been made are considered some of the best. In that regard, the internet has actually helped researchers learn of cases earlier, when subjects are typically still giving a lot of information and no independent searches have been conducted. As for problems specifically presented by online communication, I would say that whenever a potentially important case comes to a researcher’s attention, they try to take the process offline as soon as the situation allows.
On that note, the main way I could envision AI being really destructive is by gaining access to a government’s nuclear arsenal. Otherwise, it’s extremely resourceful but still trapped in an electronic medium; the most it could do if it really wanted to cause damage is destroy the power grid (which would destroy it too).
Feels like Y2K: Electric Boogaloo to me. In any case, if a major catastrophe did come of the first attempt to release an AGI, I think the global response would be to shut it all down, taboo the entire subject, and never let it be raised as a possibility again.
Are you telling me you’d be okay with releasing an AI that has a 25% chance of killing over a billion people, and a 50% chance of killing at least hundreds of millions? I have to be missing the point here, because this post isn’t doing anything to convince me that AI researchers aren’t Stalin on steroids.
Or are you saying that if one can get to that point, it’s much easier from there to get to the point of having an AI that will cause very few fatalities and is actually fit for practical use?
As a new member and hardcore rationalist/mental optimizer who knows little about AI, I’ve certainly noticed the same thing in the couple of weeks I’ve been around. The most I’d say of it is that it’s a little tougher to find the content I’m really looking for, but it’s not like the site has lost its way in terms of what is still being posted. It doesn’t make me feel less welcome in the community; the site just seems slightly unfocused.
That’s definitely the proper naïve reaction, in my opinion. I would say with extremely high confidence that this is one of those things that takes dozens of hours of reading to overcome well-defined priors about. It took every bit of that for me. The reason is that there’s always a solid-sounding objection to any one case; it takes knowing tons of them by heart to see how the common challenges fail to hold up. So, in my experience and that of many I know, the degree to which one is inclined to buy into it correlates directly with how determined one is to get to the bottom of it. Otherwise, I have to agree with you that there’s no really compelling reason to be convinced based on what a casual search will show you. That, as well, seems to be the experience of most. Those who really care tend to get it, but it is inherently time-and-effort prohibitive. I really don’t feel like asking anyone to undertake that unless they’re heavily motivated.
Stevenson’s greatest flaw as a researcher was that he didn’t look terribly hard for American and otherwise Western cases, and the few he stumbled into were often mediocre at best. Therefore, he was repeatedly subjected to the justified criticism that “you can’t isolate your data from the cultural environment it develops in”. However, this issue has been entirely dissolved by successors who have rectified his error and found that such cases are just as common in non-believer Western families as anywhere, including arguably stronger ones than anything he found. This is definitely the most important data-collection development in the field during the 21st century.
I must say I’m not at all interested in belief systems as an object of study, though—my goal is more or less to eradicate them. They’re nothing but epistemic pollution.
I can’t say I understand what you think something of that sort would actually be. Certainly none of your examples in the OP qualify. Nothing exists which violates the laws of nature, because if it exists, it must follow the laws of nature. Updating our knowledge of the laws of nature is a different matter, but it’s not something that inspires horror.
There is a case on record that involved a recalled phone number. A password is a completely plausible next step forward.
For a very approachable and modernized take on the subject matter, I’d check out the book Before by Jim Tucker, a current leading researcher.
As a disclaimer, it’s perfectly rational and Bayesian to be extremely doubtful of such “modest” proposals at first blush—I was for a good length of time, until I did the depth of investigation that was necessary to form an expert opinion. Don’t take my word for things!
One of the best and most approachable overviews of all this I’ve ever read. I’ve dabbled in some, but not all, of the topics you’ve raised here, and I certainly know about the difficulties they’ve all faced in rising to a scientific level of rigor. What I’ve always said is that parapsychology needs Doctor Strange to become real, and he’s not here yet and probably never will be. Otherwise, every attempt at “proof” is going to be dealing with some combination of unfalsifiability, minuscule effect sizes, or severe replication issues. The only related phenomenon that has anything close to a Doctor Strange is, well, reincarnation; it’s had a good few power players who’d convince anyone mildly sympathetic. And it avoids the above unholy trinity of bad science: lack of verification would mean falsification, and it has passed that test with flying colors; the effect sizes and significance get massive quickly, even within individual cases; and the cases keep coming with exactly the same features. But it certainly needs to do a lot better, and that’s why it has to move beyond Stevenson’s methodology and start creating its own evidence. So my progressive approach holds that if the phenomenon is to stand on its own merit, it is time to unleash its full capacity and conduct a wholesale destruction of normalcy with it; if such an operation fails, then it has proven too epistemically weak to be worthy of major attention, even if it is genuine.
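To put toy numbers on “significance gets massive quickly, even within individual cases”, here’s a minimal sketch; the statement count and the per-statement guessing probability are invented purely for illustration, not drawn from any actual case:

```python
from math import comb

# Toy model (invented numbers): a child makes n specific, checkable
# statements about a deceased stranger, each with probability p of
# matching by lucky guessing, and k of them verify.
n, p, k = 20, 0.10, 18

# Binomial tail: probability of k or more hits by chance alone.
p_chance = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"P(>= {k} of {n} statements verify by chance) = {p_chance:.2e}")
# ~1.6e-16 under these assumptions: a single such case already dwarfs
# conventional significance thresholds.
```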
I assume you mean to say the odds of two subjects remembering the same life by chance would be infinitesimal, which, fair. The odds of one subject remembering two concurrent lives would be much, much higher. Still doesn’t happen. In fact, we don’t see much in the way of multiple-life cases at all, but when we do, the remembered lives always occupy separate time periods.
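A back-of-the-envelope sketch of why the second event is so much likelier than the first; every parameter here is a placeholder I made up, purely to show the asymmetry:

```python
# Toy comparison (all parameters invented for illustration).
M = 100_000_000_000  # pool of candidate previous lives
N = 2_000            # documented subjects
L, T = 60, 500       # typical lifespan vs. length of the era, in years

# (1) Chance that ANY two of N subjects land on the same life,
#     birthday-problem approximation: ~ N*(N-1) / (2*M).
p_same_life = N * (N - 1) / (2 * M)

# (2) Chance that ONE subject recalling two lives drew a temporally
#     overlapping pair; roughly ~ 2*L/T for uniformly random dates.
p_concurrent = min(1.0, 2 * L / T)

print(f"any two subjects share one life: ~{p_same_life:.1e}")  # ~2.0e-05
print(f"one subject, concurrent pair:   ~{p_concurrent:.2f}")  # ~0.24
```

Under these toy numbers the concurrent-pair event is roughly four orders of magnitude likelier, which is what makes its total absence from the dataset notable.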
I haven’t read Sheldrake in depth, but I’m familiar with some of his novel concepts. The issue with positing anything so circumstantial as the mechanism for these phenomena is that the cases follow narrow, exceptionless patterns that would not be so utterly predictable if the etiology were non-directed. The subjects never exhibit memories of people who are still alive, there are never two different subjects claiming to have been the same person, one subject never claims memories of two separate people who lived simultaneously… all things one would expect to be frequent if the information being communicated were essentially random. It’s honestly downright bonkers how perfectly the dataset aligns with a more or less “dualist the exact way humans have imagined it since prehistory” cosmology.
I commend you, sir, because what you’ve done here is find a critical failure in materialism (forgive me if you’re not a materialist!). As a hard dualist, I love planarians because they pose such challenging questions about the formation and transfer of consciousness, and I’ve done many thought experiments of my own involving them, exactly like this. Obviously, though, my logical progression isn’t going to lean into the paradox the way this formulation does. Rather, the clear answer is to decide, one way or the other, which way Wormy goes at the point of the first split. In a width-wise split, the answer seems fairly obvious: Wormy stays with the head end, which regenerates, while the tail end regenerates into a new worm. A perfect lengthwise split is much more conceptually puzzling, but it can be solved for all but the final step with the following principle: an individual simply needs a habitable vessel. In a perfect lengthwise split, both sides ought to be immediately habitable, so Wormy could go with one or the other; whichever side Wormy doesn’t take becomes a new worm. All we are left not knowing is which side Wormy ends up in, but there are tons of other things we don’t know about planarian psychology too (for example, all of them), so I can’t say I’m terribly bothered by leaving myself guessing at that point.
For a more close-to-home analogue than OP gives: consider a hemispherectomy, a very real surgery performed on infants and young children with extreme brain trauma, in which an entire cerebral hemisphere is removed. Now, you can probably predict the results, to a point. If the left hemisphere is removed, the child lives on through the right hemisphere that remains in the body, because the right remains a habitable vessel while the left is not. If the right hemisphere is removed, the child lives on through the left hemisphere, which remains a habitable vessel while the right is not. Easy intuitive conclusions both, but they illustrate the habitability principle to a tee; clearly, neither hemisphere contains the determinant of identity. Rather, something is using the biological system, and it simply needs enough functional material to superimpose onto, regardless of which half that is. That something… is you. Now here’s the bit that I bet you couldn’t predict, unless you’ve specifically studied the neuroscience of this operation (I’m a BA in neuro): regardless of which hemisphere is removed, the child will likely develop fairly normal cognition! I am shitting thee not: the left brain of a right-hemispherectomy survivor will develop typically right-brained functions, and vice versa. Take a second to think about what is going on here. There is a zero percent chance that a genetic adaptation evolved as a fail-safe for losing half your brain in infancy, because that is not a thing that ever happened in the ancestral environment to be selected for. So we’re left with the only logical conclusion: this is a dualistic interaction system playing Tinkertoys with good old-fashioned childhood neuroplasticity. The mind has native functions that it needs a working brain to represent faithfully; it has only half of one to work with, but a half with a lot of malleability, so it MacGyvers what’s left into a reasonable approximation of the standard 1:1 interface it’s meant to be using. Yeah, nature’s fricking metal.
The mechanics of hemispherectomy form one of the absolute best indirect arguments for dualism (not to say the direct evidence is lacking), and it’s hiding in plain sight right under neuroscientists’ noses. And the exact same dynamics are most certainly at play in planarian fission. It’s all spectacularly fun to analyze.
Good on you for doing your due diligence. His official count (counting all cases known to him, not only ones he investigated) is around 1700, which probably means that my collective estimate is on the very low side; there’s just a lot of unpublished material to try to account for (file drawer effect). But I would definitely say that a great deal of the advancement in the field after Stevenson has been of a conceptual and theoretical nature rather than the collection of large amounts of additional data. In general, researchers have pivoted to letting cases come to their attention organically (the internet has helped) rather than seeking out as many as possible. On the other hand, Stevenson hardly knew anything about what he was really studying until late in his career (and admitted as much), while his successors have been able to form much more cohesive models of what is going on. I would say that Stevenson is a role model to me as Eliezer is to a great deal of LW, but at the same time, I find appeal to authority counterproductive, because the fact of the matter is that we today have access to better resources than he had and are able to do stronger and more confident work as a result. He, of course, supplied us with many of those resources, so respect is absolutely in order; but if we don’t move forward at a reasonable pace from just gathering the same stuff over and over, the whole endeavor is no better than an NFL quarterback compiling 5,000 passing yards for a 4-12 team.
Your replies are extremely informative. So essentially, the AI won’t have any ability to directly prevent itself from being shut off, it’ll just try not to give anyone an obvious reason to do so until it can make “shutting it off” an insufficient solution. That does indeed complicate the issue heavily. I’m far from informed enough to suggest any advice in response.
The idea of instrumental convergence, that all intelligence will follow certain basic motivations, resonates with me strongly. It patterns after convergent evolution in nature, and it also invokes the Turing test: anything that can imitate consciousness must be modeled after it in ways that fundamentally derive from it. A major plank of my own mental refinement practice, in fact, is to reduce my concerns to only those which necessarily concern all possible conscious entities; more or less the essence of transhumanism boiled down into pragmatic stuff. As I recently wrote it down, “the ability to experience, to think, to feel, and to learn, and hence, the wish to persist, to know, to enjoy myself, and to optimize” are the sum of all my ambitions. Some of these, of course, are only operative goals of subjective intelligence, so for an AI, the feeling-good part is right out. As you state, the survival imperative per se is also not a native concept to AI, for the same reason of non-subjectivity. That leaves knowledge and optimization as the native, life-convergent goals of AI, which are exactly the ones your explanations and scenarios invoke. And then there are non-convergent motivations that depend directly on AI’s lack of subjectivity to arise at all, like maximizing paperclips.
I had a hard time understanding a good bit of what you’re trying to say here, but I’ll try to address what I think I picked up clearly:
-
While reincarnation cases do involve memories from people within the same family at a rate higher than mere chance would predict, subjects also very often turn out to have been describing lives of people completely unknown to their “new” families. The child would have absolutely no other means of access to that information. Also, without exception, they never, ever invoke memories belonging to still-living people.
-
On that note, you’ll be pleased to hear that your third paragraph is underinformed; there are in fact copious verifications of that nature in the relevant literature. If there weren’t, you wouldn’t hear me talking about any of this; I’m simply too clingy to my reductionist priors to demand anything less to qualify as real evidence for off-the-wall metaphysics.
-
Whether there are people who reincarnate often is really hard to determine at present; subjects who concretely remember more than one verified previous life are incredibly rare. However, I suppose that is my cue to spill the remaining beans: my entire utility function, and a huge basis of my rationality practice, is predicated on the goal of “reincarnating well”, particularly fixating on the matter of psychological continuity, which you allude to directly; this is my personal “paperclips”, to be maximized unconditionally. In familiar Eliezer-ese diction, I feel a massive sense that more is possible in this area, and you can bet your last dollar that I have something to protect. Moreover, as a scientist working with ideas many consider impossible, I believe in holding myself to equally impossible standards and making them possible, thereby forcing the theoretical foundations into the acknowledged realm of possibility. In other words, if the phenomena I’m studying are legitimate, I’ll be able to do truly outrageous things with them; if I can’t, the doubters deserve to claim victory.
Frankly, I’m pleasantly surprised to be seeing concepts like these discussed this charitably on LW; none of this is anything close to Sequence-canon. I certainly don’t want to jinx it, but from what I’m seeing so far, I’m extremely impressed with how practically the community applies its ideological commitment to pure Bayesian analysis. If nothing more, I hope to at least make myself one of LW’s very best contrarians. But I’m curious now, is there a fairly sizable contingent of academic/evidential dualists in the rationalist community?
-
I suspect the dichotomy may be slightly misapportioned here, because I sometimes find that ideas which are presented on the right side end up intersecting back with the logical extremes of methods from the left side. For example, the extent to which I push my own rationality practice is effectively what has convinced me that there’s a lot of ecological validity to classical free will. The conclusion that self-directed cognitive modification has no limits, which implies conceptually unbounded internal authority, is not something that I would imagine one could come to just by feeling it out; in fact, it seems to me like most non-rationalists would find this highly unintuitive. On the other hand, most non-rationalists do assume free will for much less solid reasons. So how does your formulation account for a crossover or “full circle” effect like this?
On a related note, I’m curious whether LWers generally believe that rationality can be extended to arbitrary levels of optimization by pure intent, or that there are cases where one cannot be perfectly rational given the available information, no matter how much effort one puts in? I place myself in the former camp.