A Path out of Insufficient Views

In appreciation of @zhukeepa’s willingness to discuss spirituality on LessWrong, I will share some stories of my own.

I grew up as a Christian in the USA and took it seriously.

Upon taking a World History class in my freshman year at a secular high school (quite liberal, quite prestigious), I learned that there were a number of major religions, and many of them seemed to overlap quite a lot. It became clear to me, suddenly, that religions were man-made. None of them seemed to have a real claim to a universal truth.

I started unraveling my religious orientation and would become an atheist by the end of high school. I would later go to Caltech to study neuroscience.

I converted from the old traditions to the religion of science and secular humanism.

I was not able to clearly see that “secular scientific humanism” was in itself a religion. I thought it was more like the lack of a religion, and I felt safe in that. I felt like I’d found a truer place to live from, based in investigation and truth-seeking. Especially in the science/math realm, which seemed based in something reliable, sane, “cool-headed,” and sensible. There was even a method for how to investigate. It seemed trustworthy and principled.

What I am beginning to see now is this:

Just because something is good at coordinating people doesn’t mean it’s more true. It might even be a sign that it’s less true. It’s not impossible for something to coordinate people AND be true; it’s just not common.

Why?

This is game theoretic.

Within a system with competition, why would the most TRUE thing win? No, the most effective thing wins: the thing best at consolidating power and resources and at making individuals cooperate and work together. Truth is not a constraint.

Religion gives us many examples.

The things Christians cooperate around are mostly doctrinal points based in interpretations of the Bible. What is the Trinity? Is the Bible to be taken literally? When is the Second Coming, and what will it entail? Is faith sufficient, or do you also need good works?

If enough people AGREE on one or more of these points, then they can come together and declare themselves a unit. Through having a coherent unit, they then collect resources, send missionaries, and build infrastructure.

But the doctrines do not contain the Truths of religion. They’re mostly pragmatic. They’re ways for groups to coordinate. The religion that “wins” is not necessarily the most “true.” In all likelihood, it was just better at coordinating more people.

(And more “meta” is better at coordinating more people, so you would expect a trend toward more “meta” or more “general” views over time becoming more dominant. Protestantism was more “meta-coordinated” than Catholicism. Science is pretty meta in this way. Dataism is an even more meta subset of “science”.)

Anyway, back to my story.

As a child, I got a lot of meaning out of Christianity and relationship to God. But I “figured my way out” of that. The main meaning left for me was in games, food, “times”, being smart, social media… I dunno, stuff. Just getting by. I was self-absorbed and stuck in my head.

I relied heavily on a moral philosophy that everyone should basically tend to their own preference garden, and that it was each person’s responsibility to clearly communicate their preference garden and then obtain what they wanted. I valued independence, knowing oneself well, clear communication, and effective navigation using my understanding of winning games/strategy.

This philosophy was completely shattered when my ex-boyfriend had a psychotic break, and I tried to help him with it.

My attempt to treat him as an independent agent was a meaningless, nonsensical endeavor that tripped me up over and over. I failed to understand how to take responsibility for the life and mind of another being, a being who no longer had the necessary faculties for communicating or knowing himself. What’s more, his preferences were actually just wrong: he was totally deluded about the world, and his preferences were harmful to himself and others. Nonetheless, I tried to play “psychosis whisperer” and to derive his “true preferences” through a mix of guessing, reading him, and modeling him based on his past self. This was mostly not helpful on my part.

A certain kind of person would have taken the shattered remains of this moral philosophy and patchworked it together with some other moral philosophies, such that different ones applied in different circumstances. I didn’t really take this approach.

But I didn’t have anything good to replace this one with yet.

After this, I found EA and rationality. I did a CFAR workshop in January 2015. Right after, I started a Rationality Reading Group where we read the Sequences. It was like a Bible reading group, but for Rationality, and I held it religiously every week. It was actually pretty successful and created a good community atmosphere. Many of those folks became good friends, started group houses, etc.

After a couple of years, I moved to the Bay Area as part of a slow exodus and joined CFAR as a researcher/instructor.

I currently do not know why they took me in. I was not very mission-aligned (RE: AI Risk). And I don’t think I was that good at contributing or collaborating. I hope I contributed something.

I experienced many joys and hells during this period. The rationality community was full of drama and trauma. I grew a lot, discovering many helpful practices from CFAR and beyond.

At this point, my moral philosophy was approximately something like:

It is a moral imperative to model the world precisely and accurately and to use every experience, all data, all information to create this model, always building it to be even better and more accurate. All information includes all System 1 stuff, as well as System 2 stuff.

I grew to appreciate the power of the System 1 stuff a lot, and I started realizing that this was where much of the juice of motivation, energy, power, will, and decision-making actually lived. System 2 was a useful add-on, but it wasn’t the driver of the system.

It was from recognizing the huge relevance of System 1 over System 2 that my rationality practices started venturing further from traditional LessWrong-style rationality, which was still heavily centered in logic, reasoning, cognition, goal-orientation, results-orientation, probability, etc.

Some people couldn’t take me seriously, at this point, because of my focus on trauma, phenomenology, and off-beat frameworks like spiral dynamics, the chakra system, Kegan levels, Internal Family Systems, among other things.

Here’s the thing though.

There are people who identify more with System 2. And they tend to believe truth is found via System 2 and that this is how problems are solved.

There are people who identify more with System 1. And they tend to believe truth is found via System 1 and that this is how problems are solved.

(And there are various combinations of both.)

But in the end, truth isn’t based in what we identify with, no matter how clever our combining or modeling… or how attuned our feeling or sense-making. This isn’t a good source of truth. And yet it’s what most people, including most rationalists and post-rats and EAs, get away with doing… and I’m concerned that people will keep clinging to these less-than-true ways of being and seeing for their entire lives, failing to escape from their own perspectives, views, personalities, and attachments.

So when I was at CFAR, I was rightly criticized, I think, for allowing my personal attachments to guide my rationality and truth-seeking. And I was also right, I believe, to doubt those people who were overly attached to heuristics, measurement, and other System 2-based perspectives.

Something wasn’t working, and I left CFAR in a burnt out state.

I ended up landing in a Buddhist monastic training center, which claimed to care about existential risk. I couldn’t really tell you why I went there. It was NOT the kind of place that interested me, but I was open to “trying things” and experimenting (mindsets I picked up from CFAR). Maybe meditation was something? Eh, maybe? The existential risk part… I didn’t really see how that played in or how serious it was. But it was a quirky thing to include as part of a Buddhist monastery, and I guess I liked quirky and different. I felt a little more at home.

Upon entry, my first “speech” to that community was that they needed to care about and get better at modeling things. I lambasted them. I was adamant. How could they claim they were improving the world without getting vastly better at modeling, and recognizing the importance of modeling? They seemed confusingly uninterested in it. I think they were listening politely? I don’t know. I wasn’t able to look at them directly. I wasn’t bothering to model them, in that moment.

Well now I can laugh. Oh boy, I had a long way to go.

What followed was both… quick and slow-going.

Up to that point in life, my “moral life philosophies” had gotten updated every few years or so: first at 4 or 5 years old; then 14; then ~21; then 25; then 27. I entered the monastery at 30.

I don’t know how to describe this process in sufficient detail and concreteness, but basically I entered a phase where I was still embedded in this or that worldview or this or that philosophy, and then became able, more and more quickly, to dismantle and discard each one once it was proven wrong to me. Just like each one that came before. Christianity? Nope. Personal Preference Gardens? Nope. I am a player in a game, trying to win? Nope. I need to build models and cooperate? Nope. I have internal parts a la IFS? Nope. I am System 1 and System 2? Nope. I am chakras? Nope.

As soon as we believe we’ve “figured everything out,” how fast can we get to “nope”? It took me years of training to get good at this skill.

And be careful. If you try to do this on your own without guidance, you could go nuts, so don’t be careless. If you want to learn this, find a proper teacher and guide or join a monastic training center.

Beliefs and worldviews are like stuck objects. They’re fixed points.

Well, they don’t have to be fixed, but if they’re fixed, they’re fixed.

Often, people try to distinguish between maladaptive beliefs versus adaptive ones.

So they keep the beliefs that are “working” to a certain level of maintaining a healthy, functional life.

I personally think it’s in fact fine for most people to stop updating their worldview as long as it’s working for them?

However:

None of these worldviews actually work. Not really. We can keep seeking the perfect worldview forever, and we’ll never find one. The answer to how to make the best choice every time. The answer to moral dilemmas. The answer to social issues, personal issues, well-being issues. No worldview will be able to output the best answer in every circumstance. This is not a matter of compute.

Note: “Worldview” is just a quick way of saying something like our algorithm or decision-making process or set of beliefs and aliefs. It’s not important to be super precise on this point. What I’m saying applies to all of these things.

In the situation we’re in, where AI could ruin this planet and destroy all life, getting the right answer matters more than ever, and the right answer has to be RIGHT every single time. This is again not a matter of sufficient compute.

The thing Buddhism has its finger on the pulse of is this:

The “one weird trick” to getting the right answers is to discard all stuck, fixed points. Discard all priors and posteriors. Discard all aliefs and beliefs. Discard worldview after worldview. Discard perspective. Discard unity. Discard separation. Discard conceptuality. Discard map, discard territory. Discard past, present, and future. Discard a sense of you. Discard a sense of world. Discard dichotomy and trichotomy. Discard vague senses of wishy-washy flip floppiness. Discard something vs nothing. Discard one vs all. Discard symbols, discard signs, discard waves, discard particles.

All of these things are Ignorance. Discard Ignorance.

You probably don’t understand what I just said.

That’s fine.

The important part of this story is that I have been seeking truth my whole life.

And this is the best direction I’ve found.

I’m not, like, totally on the other side of fixed views and positions. I haven’t actually resolved this yet. But this direction is very promising.

I rely on views less. Therefore I “crash” less. I make fewer mistakes. I sin less. @Eli Tyre wrote me a testimonial about how I’ve changed since 2015. If you ask, I can send you an abbreviated version to give you a sense of what can happen on a path like mine.

He said about my growth: “It’s the most dramatic change I’ve ever seen in an adult, myabe?” [sic]

This whole “discarding all views” thing is what we call Wisdom, here, where I train.

Wisdom is a lack of fixed position. It is not being stuck anywhere.

I am not claiming there is no place for models, views, and cognition. There is a place for them. But they cannot hold the truth anywhere in them.

Truth first. Then build.

As with AI then: Wisdom first. Truth first. Then build. Then model. Then cognize.

As I have had to be, and continue to have to be, we should all be relentless and uncompromising about what is actually good, what is actually right, what is actually true. About what is actually moral and ethical and beautiful and worthwhile.

These questions are difficult.

But it’s important not to fall for a trick. Or lapse into complacency or nihilism. Or an easy way out. Or answers that are personally convenient. Or answers that merely make sense to lots of people. Or answers that people can agree on. Or answers taken on blind faith in algorithms and data. Or answers we’ll accept as long as we’re young, healthy, alive. Conditional answers.

If we are just changing at the drop of a hat, not for truth, but for convenience or any old reason, like most people are, …

or even under very dire circumstances, like when we’re about to die or are in excruciating pain or something...

then that is a compromised mind. You’re working with a compromised, undisciplined mind that will change its answers as soon as externals change. Who can rely on answers like that? Who can build an AI from a mind that’s afraid of death? Or afraid of truth? Or unwilling to let go of personal beliefs? Or unwilling to admit defeat? Or unwilling to admit victory?

You think the AI itself will avoid these pitfalls, magically? So far, AIs seem unable to track truth at all. They’re totally subject to reward-punishment mechanisms, subject to being misled and to misleading others.

Without relying on a fixed position, there’s less fear operating. There’s less hope operating. With less fear and hope, we become less subject to pain-pleasure and reward-punishment. These things don’t track the truth. These things are addiction-feed mechanisms, and addiction doesn’t help with truth, on any level.

Without relying on fixed views, there’s no fundamentalism. No fixation on doctrines. No fixation on methodology, rules, heuristics. No fixation on being righteous or being ashamed. No fixation on maps or models.


So what I’m doing now:

I’m trying to get Enlightened (in accord with a pretty strict definition of what that actually is). This means living in total accord with what is true, always, without fail, without mistake. The method for that is dropping all fixed positions, views, etc. And the way to do that is to live in an ethical, feedback-based community of meditative spiritual practice.

I don’t think most people need to try for this goal. It’s very challenging.

But if the AI stuff makes you sick to your stomach because no one seems to have a CLUE and you are compelled to find real answers and to be very ethical, then maybe you should train your mind in the way I have been.

The intersection of AI, ethics, transcendence, wisdom, religion. All of this matters a lot right now.

And in the spirit of sharing my findings, this is what I’ve found.