Breaking beliefs about saving the world

Goal = Save the world.


See Ending Ignorance | Solving Ignorance[1] & We can Survive for greater context. (Includes definitions of terms like System, Functional, functional reality, functional system, functional systems psychology, High-level, tactical, conscious, subconscious, Reward, Punishment, Satisfaction, and Suffering.)

Use the table of contents on the left.

Read footnotes or suffer in confusion

Read links if you need additional context

Easier to read format

Tech problem[2]

End human suffering

Ever felt like shit?

Ok, of course you have, bad question. Do you remember how it felt?

Do you remember the actions you took as a result of it?

When you were suffering, you didn’t give a s*** about EA.

The only thing that mattered to you… was you.

You were one selfish bastard.

And that’s how everyone else is as well.

If you think from a Maslow’s hierarchy of needs perspective, you’ll note that there is actually a hierarchy of human priorities that affect decision-making in significant ways.

China has experienced a technological growth rate unparalleled by any nation on earth at any point in time.

The first generation was one of farmers, the second of factory workers, and the third is one of specialized intellectual workers like you and me.

The result of this rapid change is a different cultural norm.

Unlike in the U.S./Europe, altruism is not an obvious, undebated ideal for the layman to pursue.

I know what it's like to not have enough food to eat. I've eaten 0-1 meals/day for a year and have gone 9 days without food.

Your mind curbs and twists your “rationality” into whatever will get you food.

Your unmet expectations, desires, lack of energy, and the gap between the pain you're experiencing right now and the pleasure you believe you should feel are major barriers to getting anything done. And the fact that you're in that state cripples your actual ability and reinforces the actions & environments that keep you in that state.

I don't think any of us would dispute that basic needs like food and shelter must be met if we want greater worldwide participation in EA as a concept.

But I am arguing that we’re thinking about the problem wrong.

And I argue this because those 9 days of eating nothing were the most productive 9 days of my life.

I argue this because that year of 0-1 meals/day is my life right now, and I'm still making progress towards my goals.

By using my own reward/punishment system as a test dummy, I've discovered important aspects of some of the real psychological systems in our minds that matter for reward, punishment, and EA.

After a lot of testing and suffering, I've found the necessary prerequisites for EA motivation:

  1. Suffering & satisfaction.

  2. Social influences | centrist theory | Early adopters vs the layman

  3. A specific system within the brain[3]

Suffering & Satisfaction (The 7 Premises for this being 1 of 3 prerequisites for saving the world.)

  1. By giving people the intellectual tools necessary to be satisfied in their daily lives, they will gain goodwill towards EA as a concept, increase their overall competency at a more rapid rate, be more giving, and naturally gravitate toward EA activity, provided they have the foundational values/ideals of rationality & selflessness. (Or rationality & selfish status gain.)

  2. By creating a system where participation in EA leads to a satisfactory reward chain of activities, consistency of activity will be incentivized and therefore prevalent.

  3. By bringing attention[4] to tactical/emotionally pulling patterns of suffering, people will recognize them in their own lives, and we will create an unfulfilled desire that only we have the solution for.

  4. Fundamentally, everyone will suffer no matter what. The deeper your understanding of Functional systems psychology, the more likely you are to come to this conclusion. In other words, once you reach this level of understanding, your perception of the world state with the highest reward for yourself becomes the positive AI future, and thus you will work towards it.

  5. We can make this "feel real" through manipulation of media/entertainment, values, and perceptions of the world.

  6. Personal concurrent life-satisfaction is possible in spite of punishment/suffering when punishment/suffering is perceived as a necessary sacrifice for an impending reward.

  7. Suffering can be significantly reduced without reducing psychological punishment through the management of expectations and the implementation of psychological toolkits.

Again, I have tested all of these claims on myself. Aside from the story about China, I'm not parroting things I've read/heard on the internet. I'm speaking from experience, so come at my argument from as many perspectives as you can and try to tear it apart. My argument is based on real systems & functions, so I'm confident that disagreements will primarily come from informational discrepancies between me and readers, and I will be happy to converse on clarifying those discrepancies so that I can add the clarifications to future docs for future readers.

Counterarguments:

1. Because of unknown unknowns and the lack of clarity with which we've determined EA to be an effective method of avoiding the next Fermi filter, EA may not be an effective way of avoiding it. As in, even if there were 1 million Eliezer Yudkowskys, nothing would change relative to the next Fermi filter.

  • In which case, we're all f***** because we are products of EA. And if the rule is that EA makes no functional difference to the outcome of the human species/intelligent life in the universe, we are also bound by that rule[5].

  • Our intuition tells us otherwise, and we are the smartest humankind has to offer, so we're going to have to act as if we're correct. (Because we actually are correct. If every human on earth were Elon Musk, the likelihood we pass the next filter is high.)

2. (Your comment)

Reader, give me your best counterargument and let's have a rational conversation!


Social influences[6] | centrist theory[7] | Early adopters vs the layman[8]


A specific unnamed system within the brain[9]

Create a community

  • As referenced earlier, the impact of social influences, centrist theory, and the concept of early adopters as opposed to the layman makes the successful creation of a community paramount to the virality of EA psychology.

  • I have a separate doc[10] on this.

Spread logical decision making

  • Even if people are incentivized to "do the right thing," we are still in trouble if the average human is as stupid as I am. From a marketing perspective, there is currently massive demand to learn. We just need to give people the tools they need to do so.

  • I will create the high-level and the structure, and I will depend on this community to help me encompass all relevant information, create a consumer-oriented experience, iron out my personal biases and logical errors, and flesh out the info from high-level concepts to situation-specific action items/valuable information sources.

World management problem (Incentives[11])

If we solve the tech problem, there's still the problem of the bureaucracy imploding. I am not focusing much on this problem right now, aside from subconsciously optimizing for a few concepts from a long-term positional perspective, so that when the time is right, the EA-optimized functional systems in the world are in an optimal position to tackle it.

Conclusion

I've read a few posts in this community, and I was glad to see a level of rationalism I couldn't find anywhere else. But from a human extinction perspective, the world needs more than rationalism.


It needs rationalism, functionalism, systems thinking, business/realism, and marketing.

Here is a story that I think you'll all enjoy.

In a galaxy far far away, there was a one-in-a-trillion planet. As an astronomical anomaly, it was a planet just like ours. There were viruses, bacteria, insects, reptiles, flora, and mammals. There was even a species of monkey that used rocks as tools. On this planet, there was an ant colony. This ant colony was one of millions in vicious competition with each other. And right when it was on its last leg, something happened. The ants' simple brains started forming parts of a complex neurological structure that resembled a level of agentic intelligence similar to humans'. The ants started manipulating other ants' pheromones to trick and defeat them. They started not just destroying, but consuming the resources of other ant colonies, much like imperialist nations have done in the past. They assassinated queens and placed their own queens in charge, and through pheromone mimicry the other ant colonies never figured it out. The queen then starved the old ants, and when she laid eggs and created new ants, they fed on the old colony's carcasses. With its superior intellect, the ant colony quickly grew and took over other colonies, increasing its intellectual capacity. But then something happened. Like the pattern present in all ants, the colony started waging war with other colonies of the same species that had different queens, over resource scarcity!

So we can assume that the rule of Moloch must be applied here, as the ants' reward modeling system would evolve to optimize towards whatever leads to the growth/benefit of the colony. Any colony that doesn't do this would be defeated by colonies that do. The same is true of technological progress. This is one way that this theoretical ant colony and modern society are identical. Additionally, the universe tends towards simplicity, so it is overwhelmingly likely that the reward system would be built around progress-adjacent models as opposed to directly optimized models (as evidenced in human brain reward systems & AI reward systems). However, in spite of all these similarities present in all instances of agency as a universal concept, there is one principle that makes humans different from ant colonies: the sheer lack of individualism in the individual ants that make up the colony-wide neural network, in contrast to present society. Individualism is the ultimate enemy of Moloch. If nuclear fallout is a Fermi filter, then Elon Musk's efforts to get a self-sustaining colony on Mars are a direct challenge to that filter. Elon Musk could not exist in an ant colony, or in any form of collaborative-intelligence society, and Moloch would be absolute.

I tell this story to illustrate the fact that, given the nature of biological evolution and power, it is probably far more likely that a species of collaborative intelligence, rather than one of individuals, would reach world dominance.

And when this happens, in 999,999,999,999,999/1,000,000,000,000,000 cases Moloch definitely wins. Moloch is winning even in our present society, where individualism is a significant psychological value for a large portion of the population.
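To make the selection pressure concrete, here is a minimal replicator-dynamics sketch, assuming purely hypothetical strategy names, growth rates, and a fixed resource cap: under scarcity, whichever strategy grows fastest eventually dominates, no matter how rare it starts.

```python
# Minimal sketch of the "rule of Moloch" as replicator dynamics.
# All numbers are hypothetical, chosen only for illustration.

GROWTH_RATE = {"growth-optimizer": 1.10, "non-optimizer": 1.05}
pop = {"growth-optimizer": 1.0, "non-optimizer": 99.0}  # optimizers start rare

for _ in range(200):  # 200 generations
    # Each strategy grows at its own rate...
    pop = {s: p * GROWTH_RATE[s] for s, p in pop.items()}
    # ...but total resources are capped, so growth is zero-sum.
    total = sum(pop.values())
    pop = {s: 100.0 * p / total for s, p in pop.items()}

print(f"Growth-optimizer share: {pop['growth-optimizer']:.1f}%")
# Prints ~99%: the strategy that optimizes for growth takes over,
# even though it started at 1% of the population.
```

Any colony (or society) that opts out of the race simply vanishes from the sample; that is all "Moloch wins" means here.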

Furthermore, the human species has an opportunity that it has never had before, because the intellectual capital present in the world is greater than ever. The thing is, no one is realizing this or utilizing it, because humans can't think on long time horizons. We only think about right now, and the systems currently in place within our society are legacy systems. They were created at a time when people couldn't read, write, or do basic arithmetic, and those were values to aspire to. And like the stupid, socially-reward-optimized creatures we are[12], we somehow convince ourselves that operating under these crippling systems is ok.

Here is a second story that I think you’ll enjoy.

In a galaxy far far away, there was a planet just like ours. In fact, it was exactly like ours. All of the same people existed, and they were all in the exact same places. Except there was one difference: Stanislav Petrov wasn't promoted to Lieutenant Colonel until 1984. So he wasn't the lieutenant colonel on duty on September 26th, 1983, when the Soviets received a false alarm of a U.S. nuclear missile attack, the day our world's Petrov chose not to alert his superiors. Given that we live in a conformist society where the dominant intellectual agents value social rewards, status, authority, and lack of responsibility as opposed to individualism, it is a miracle that Stanislav Petrov chose not to alert his superiors and possibly start a nuclear war. In this alternate world where Stanislav wasn't there, if we use rationalist principles as opposed to anecdotal evidence, you can guess what would've happened.

On another 99,999/100,000 planets exactly like ours, Britain won the American Revolution. So libertarianism didn't spread. So the average nation today operates like China. (Which strengthens Moloch.)

And finally, I want to introduce the concepts of luck, perspective, and anecdotal history, and their relevance to human extinction.

Think for a second. How likely is it that you exist right now, and that you're reading this? Almost infinitesimal, right? But how can that be? How can an infinitesimally likely occurrence actually happen? It's impossible to win the lottery, right? Well, think from another perspective. For people who've already won the lottery, it is 100% certain that the lottery was won. In other words, just because we've gotten lucky as a species thus far does not mean human extinction was not almost infinitely likely in the past. My intellectual foundations are rooted in the existence of the internet, and the profound intellectuals who chose to share their beliefs about the world are prerequisites for my existence, from a functional perspective.
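This survivorship effect is easy to demonstrate. Here is a minimal simulation sketch, with a purely hypothetical number of worlds, filter count, and per-filter survival probability: every surviving world's historians look back on an unbroken record of survival, even when survival was astronomically unlikely a priori.

```python
import random

random.seed(0)

N_WORLDS = 1_000_000  # hypothetical ensemble of planets like ours
N_FILTERS = 10        # hypothetical number of filter events per history
P_PASS = 0.3          # hypothetical chance of surviving each event

# Count the worlds that happen to pass every single filter.
survivors = sum(
    1 for _ in range(N_WORLDS)
    if all(random.random() < P_PASS for _ in range(N_FILTERS))
)

print(f"A priori survival rate: {survivors / N_WORLDS:.6%}")  # ~0.0006%
# Yet 100% of observers live on surviving worlds, so an unbroken record
# of survival tells you nothing about how dangerous the past really was.
```

The numbers are arbitrary; the point is structural: conditioning on our own existence erases the evidence of past risk, which is exactly why near-misses like Petrov's matter as data.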

So come on, rationalists, let's be rational. Anecdotal evidence of what almost happened is just real-world evidence proving this theory's basis in reality.[13]

And if we won't be rational, no one else will. And we will just be one of the many world-states that fails. We've got to act now, and we've got to move fast, or our children will never grow up.

To finish this off, here is a third story that I think you’ll enjoy.

In the milky way galaxy, on a planet called earth, there was a small child.

It was his first day at school, and he decided to read with the other children. He picked out a book, sat next to another child, and said, "Let's read together!" They did so, but as soon as they started, the child noticed something was off. The boy next to him was reading very slowly: pausing after every syllable, enunciating every letter, and speaking without comprehension of the words he spoke.

The child was upset, so he stood up, walked to another child, and said, “Let’s read together!”

The same thing happened. And when the day was over, the child, confused, asked his mother a question. “Why? Why can’t they read?” And the mother didn’t have an answer. Nor did the teacher, or his father, or anyone else the child had access to. So the child dismissed the thought.

Instead, from that day onward, the child read alone.

The child learned alone.

The child researched alone.

And the child’s intellectual capacity sprouted relative to the context of what was possible, instead of what the system of the child’s environment wanted to mold them into.

That child was me. And that child was you as well.

We are the early adopters of EA. We have goodwill relative to the concept of helping other people, and we have the intellect to grasp high-level concepts and see them brought to fruition in the real world.

Each of our impacts needs to be substantial, because we are all society has.

Please help me.

I can’t do this alone.

Asking for collaboration

This post is meant for me to receive feedback, as opposed to making a long-term commitment in isolation from perspectives outside my own.

Tear it apart, be as critical as possible, and presume that I'm an idiot and everything I've written was done out of complete ignorance of the ways of the world.

Nothing you could type on a digital screen will ever offend me, since my perception of pain is anchored to things like not eating anything for 9 days.

  1. ^

    Currently in early stages. Forgive me for linking to a group of unfinished thoughts.

  2. ^

    Proposed Solutions to the tech problem:

    1. End human suffering
    2. Create a community
    3. Spread logical decision making

  3. ^

    (A person's perception of the world state with the most psychological reward or satisfaction for themselves) = the decision calculus for this system. (I haven't figured out if it's reward or satisfaction, and I haven't thought of a name for this system.)

  4. ^

    Everyone suffers; we are just often not aware of our patterns of suffering.

  5. ^

    This is a far-reaching imaginary rule for purely theoretical play. As it has no basis in reality, it is unlikely to be true.

    And anyone who uses this to take a defeatist perspective on EA simply does so because they have personal incentives to avoid EA and want to not feel guilty about it.

  6. ^

    (For context, read Ending Ignorance.)

    Most people receive a significant amount of their psychological reward from social circles.

    In other words, their focus, attention, reward, actions, etc. are relative to the perceived responses of other people around them.

    By providing the much desired, needed, and unfulfilled community (as evidenced by the demand for social media, prevalent loneliness, etc.), we can make the layman receive psychological reward as a consequence of acting as part of EA.

  7. ^

    (A political/sociological/logical-realist theory of human psychology.)

    Laymen gravitate towards where they perceive the center to be.

    From a universal/physics/reality perspective, humanity has no fixed center; the perceived center is relative to one's environment.

    For example, many U.S. Republicans think LGBTQ policy/culture is madness (the gender spectrum, trans women in women's sports, early gender education; I'm aware these can be seen as strawmen, I'm making a point).

    From a universal perspective, family values, conservatism as a concept, and religiousness are just as wild as any values expressed in LGBTQ communities.

    Internet communities lean right, so conservative values are considered more centrist there.

    While in academia/government/bureaucracy, leftist culture is considered more centrist.

    To reiterate, I don't care about politics. I use these examples to prove a point about centrism. Our political views are influenced more by our environment than by any of our core personal qualities, because most personal qualities are malleable relative to what we perceive to be our most-optimal course of action in pursuit of our own self-interest.

  8. ^

    (Startup/technical/SaaS/business-world jargon for a subset of a population that has goodwill to spend and is willing to take a desired action even without the perception of short-term selfish gain.)

  9. ^

    I haven't thought of a name for it yet, but there is a system in the brain that people know as "the logical mind," "the conscious," "that voice in my head," etc.

    As opposed to other reward systems within the brain, this system takes a long-term approach to reward optimization.

    What it believes you "should" do is dictated by your subconscious perception of the situation with maximal future reward. (This refers to another system in the brain covered in functional systems psychology.)

    By using a functional systems psychology understanding of reward, we can use currently prevalent opportunity vehicles like media to change the layman’s perception of what world state is most optimal for their selfish reward maximization into one that is optimal for EA.

  10. ^

    In early phases. Forgive me if it's unreadable...

  11. ^

    Media
    Values
    Perception of the world

    Systemic reward & punishment
    Parenting
    School systems
    Power process
    Perception of future | Macro problems
    Extracurricular opportunities

    +

    Role of Moloch vs intention in shaping society at high levels.

    Probably fixable by using systems/power to shift incentives

  12. ^

    As explained earlier in social influences, centrist theory, and early adopters vs the layman

  13. ^

    This is another concept covered in "Ending Ignorance" that has to do with realism, the dysfunctionality of modern theories, and the human overestimation of the capability of future prediction.