We can survive
We subconsciously optimize for “Business as usual”[1]
But in the back of our minds, we all know we’re f*****.
If AI doesn’t kill us all, it will be biotech, nanotech, nuclear fallout, or a random science experiment that creates a substance or reaction not found in nature and happens to wipe us all out. In the minuscule chance technology doesn’t accidentally wipe out all of humanity, there is the problem of world governance, ignorance, and incentives at the top of society. In other words, we will intentionally wipe ourselves out, or do so because a bureaucratic fool holds the super-AI world-destruction button. From my understanding, we currently live in an age of international anarchy.[2] Inherent competition between states, and security incentives that drive ever more resource allocation toward security, create an unstoppable prisoner’s dilemma in which Moloch’s army marches us all into extinction.
Additionally, nations around the world are trending towards totalitarian governance, likely because the state incentive for power acquisition and control is inherent to human nature itself. As nations grow more authoritarian, innovation and individualism will decrease. In other words, even in the unlikely event that human society makes it past the Fermi paradox’s technology filter, humanity will still implode as a consequence of bureaucratic patterns caused by the nature of operational complexity[3] and middle-management paradoxes.
So the Overton window[4] for the innovation necessary to bring humanity past this filter is quickly closing.
Relative to human extinction, I take an optimist’s perspective.
I think all humans are stupid, and the butterfly effect[5] is one of many heavily underestimated facets of reality.
As a society, we over-optimize for short-term results and routinely fall prey to simple psychological fallacies and macro-directional inaccuracies[6] that could easily be corrected with simple awareness of the relevant information.
I believe that humanity is operating at less than 1/10,000th of its realistic capacity.
A systematized[7] approach to decision making, learning, opportunity vehicles, intelligent collaboration, self-discipline, and actionable approaches to EA and achievement of one’s personal goals would exponentially increase society’s productive output.
I believe that intelligent people are the correct system for saving the world, provided they are given the intellectual tools necessary to rapidly improve and be efficient in their actions.
I am not native to this community. I found it a week ago, after years of isolation from people who think anything like I do. I’m making this post because I believe in the results of collaboration between intelligent people, and I believe building in public is a much more efficient vehicle for proper systems creation: it allows high-level[8] systems to be altered in real time from an early adopter’s[9] experience[10] perspective, which can be somewhat generalized to the broader market[11] by optimizing for estimated discrepancies in goodwill[12] & expectations[13].
I have a marketing background, and from a marketing perspective it is very possible to take the layman from his current general nonchalance and ignorance to a simple but accurate understanding of the problems currently facing society. With my current understanding of human psychology and sociology, it is even possible to make laypeople care as much as, if not more than, we do over a 40-year time horizon. As unknown unknowns[14] reveal themselves, I estimate that this conservative time horizon could shrink to something more relevant to humanity’s current needs.
Words are cheap, and it is easy to talk in summaries, theories, and ideals.
I am presenting a high-level theory on the functional realities[15] associated with the problems we’re facing: what will solve them, what can be done from a skills perspective, a learning[16] perspective, and an unknown-unknowns perspective, and what actions I believe I can take to facilitate the greater movement that needs to happen if we want our children to ever grow up.
Tear it apart as much as you can. I intend to spend the next few years bringing these ambitions to fruition.
If my premise is flawed, my efforts would be meaningless to the EA community.
Breaking Beliefs About Saving the World
The theory that society will not change drastically within the next few decades as a consequence of technology. (More a psychological bias than a theory.)
The theory that morality from an international perspective differs from morality from a domestic/national perspective because of the inherent lack of accountability, state security incentives, ambiguity of resources & intentions, cultural discrepancies, and the reign of Moloch.
The concept describing the increasing difficulty of managing large-scale systems/operations.
As things grow, founder intentions become diluted & the impact of unknown unknowns becomes unmanageable.
Additionally, the ability of high-level management to control things like hiring practices, systems implementation, low-level worker incentives, environment, etc. decreases.
The fading period of time before a change becomes impossible
(Leverage, compounding returns, scale, branding)
Ex 1. A YouTube post is an example of a compounding return that operates over a long time horizon. On the first day you post a video, you will have 0 views. By the 100th day of posting, your first video will have 10k views and your 100th video will have 1k views.
The people you’ve influenced with your YouTube post now make every decision/action with the information you’ve given them in the back of their minds. Additionally, if your post is good, they will tell their friends about it and YouTube will recommend the video to more people, which can take the video from 10k to 200k views. And if your video was compelling from a functional systems psychology perspective of the “long-term mind,” then in 3 years you may have influenced 3 people to start posting YouTube videos exactly as you are today.
As a contrasting example, you could talk to some random person and say everything the same as in the YouTube video, and instead of changing 10k people’s worldviews, you change 1 person’s worldview.
Wasted efforts from a high-level perspective. Typically caused by ignorance of the concept of opportunity cost[17] and not knowing what you don’t know/what exists.
Example: An elite hacker decides to quit coding and become a watercolor painter.
System = A thing that is a combination of smaller parts that serves a function different than the smaller parts themselves.
Systematized = Turned into a process/organized structure that is repeatable/efficient.
Terming that makes broad generalizations with the goal of directing energy within the given domain.
A subset of a population that has goodwill to spend and is willing to take a desired action even without perception of short-term selfish gain. (Typically used in a business context.)
Experience in this context refers to consumer convenience, utility, and value as opposed to the original creators of a thing. (A concept illustrated in UX design fields)
The public (In a given context) | The target audience
From a functional systems psychology perspective:
Goodwill refers to the psychological currency that results in a state of being willing to give more than you get until the currency expires.
Goodwill is owing someone | wanting to give | wanting to contribute
Goodwill can be quantified by the degree to which a person is willing to sacrifice for a choice that does not result in (typically short-term) selfish gain.
Functional systems psychology perspective:
Expectations refer to a system within the brain that estimates the future reward of a given action, and to how that system, in collaboration with other systems, creates distortions in your perception relative to satisfaction & suffering.
(See Ending Ignorance for deeper context on reward, perception, and satisfaction & suffering.)
What we don’t know about what we don’t know exists.
Ex. Cavemen did not have the context with which to conceptualize aliens, because their focus was on material needs and they were unaware that outer space existed.
Important functions that are part of a system or concept | 80⁄20 rule for directional accuracy/efficiency | Typically expressed from a high-level perspective in this context.
High-level term for how the brain changes. Includes acquisition of knowledge, skills, beliefs, traits, tendencies, intellectual capacities (think from an agency perspective, e.g. “processing power”), etc.
The concept that for any action you take or don’t take, you are losing something as well as gaining something. This theory implies that prioritization is undervalued in society and that limited resources (i.e., time, attention, energy, capital) should be allocated efficiently. Just because something is a good opportunity does not mean it is the opportunity you should choose.
(Believe it or not, most people don’t think this way and are oblivious to the concept of opportunity cost)
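The point that a good opportunity is not necessarily the right one can be made numerical. In this sketch the option names and values are entirely hypothetical; the only idea illustrated is that an option’s opportunity cost is the value of the best alternative it forecloses:

```python
# Hypothetical options competing for the same fixed block of time.
# Names and values are invented for illustration only.
options = {"freelance gig": 50, "side project": 80, "skill-building": 120}

for name, value in options.items():
    # Opportunity cost = value of the best alternative you forgo.
    cost = max(v for n, v in options.items() if n != name)
    print(f"{name}: value {value}, opportunity cost {cost}, net {value - cost}")

best = max(options, key=options.get)
print(f"Best choice: {best}")
```

Every option here is “good” in isolation (positive value), yet all but one have negative net value once the forgone alternative is counted, which is exactly why prioritization matters.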