[Question] Better name for "Heavy-tailedness of the world?"

There is an important variable (or cluster of correlated variables) that I need a better name for. I would also appreciate feedback on whether this variable is even a thing, and if so, how I should characterize it. I have two candidate names and two attempts at explaining it so far.
Name 1: “Heavy-tailedness of the world.”
Name 2: “Great Man Theory vs. Psychohistory”
Attempt 1: Sometimes history hinges on the deliberate actions of small groups, or even individuals. Other times the course of history cannot be altered by anything any small group might do. Relatedly, sometimes the potential impact of an individual or group follows a heavy-tailed distribution, and other times it doesn't.
Some examples of things which could make the world heavier-tailed in this sense:
Currently there are some domains in which humans are similar in effectiveness (e.g. manual labor, voting) and others in which the distribution is heavy-tailed, such that most of the total progress/influence comes from a small fraction of individuals (e.g. theoretical math, donating to political parties); the sketch after this list illustrates the contrast numerically. Perhaps in the future history will hinge more on what happens in the second sort of domain.
Transformative technologies, such that when, where, and how they appear matters a lot.
Such technologies being unknown to most people, governments, and corporations, such that competition over them is limited to the few who foresee their importance.
Wealth inequality and political inequality concentrating influence in fewer people.
Technologies such as brain-machine interfaces, genetic engineering, and wireheading increasing inequality in effectiveness and influence.
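To make the contrast in the first example concrete, here is a minimal simulation sketch in Python. All parameters are arbitrary choices of mine for illustration: a clipped normal distribution stands in for a thin-tailed domain, and a classical Pareto with the "80/20" shape parameter stands in for a heavy-tailed one. It compares what share of the total is contributed by the top 1% of individuals in each regime:

```python
import numpy as np

# Toy comparison of thin-tailed vs. heavy-tailed "impact" distributions.
# All parameters here are arbitrary choices for illustration.
rng = np.random.default_rng(0)
n = 1_000_000

# Thin-tailed domain (like manual labor or voting): individual impact
# is roughly normal, clipped at zero.
thin = np.clip(rng.normal(loc=100, scale=15, size=n), 0, None)

# Heavy-tailed domain (like theoretical math or political donations):
# classical Pareto with shape alpha ~ 1.16, the value behind the "80/20 rule".
heavy = rng.pareto(1.16, size=n) + 1

def top_share(impacts, frac=0.01):
    """Fraction of total impact contributed by the top `frac` of individuals."""
    cutoff = np.quantile(impacts, 1 - frac)
    return impacts[impacts >= cutoff].sum() / impacts.sum()

print(f"Thin-tailed:  top 1% contribute ~{top_share(thin):.0%} of the total")
print(f"Heavy-tailed: top 1% contribute ~{top_share(heavy):.0%} of the total")
# Typical output: roughly 1% vs. over 50% -- in the heavy-tailed regime,
# "most of the total" really does come from a tiny fraction of individuals.
```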
Attempt 2: Consider these three fictional worlds; I claim they form a spectrum, and it’s important for us to figure out where on this spectrum our world is:
World One: How well the future goes depends on how effectively world governments regulate advanced AI. The best plan is to contribute to the respected academic literature on what sorts of regulations would be helpful while also doing activism and lobbying to convince world governments to pay attention to the literature.
World Two: How well the future goes depends on whether the first corporation to build AGI obeys proper safety protocols in the first week or so after building it. Which safety protocols work is a hard problem that requires unusually smart people working for years to solve. Figuring out which corporation will build AGI, and when, is a complex forecasting task that only the right sort of person can do, and likewise for the task of convincing them to follow safety protocols. The best plan is to try to assemble the right community of people so that all of these tasks get done.
World Three: How well the future goes depends on whether AGI of architecture A or B is built first. By default, AGI-A will be built first. The best plan involves assembling a team of unusually rational geniuses, founding a startup that makes billions of dollars, fleeing to New Zealand before World War Three erupts, and using the money and geniuses to invent and build AGI-B in secret.
Help?
For some other discussion of (facets of) this variable and its implications, see this talk and this post.