Selection bias. When we look for historical trends that seem favorable for affecting the far future, we can find examples. However, this is largely because we hear about the interventions that ended up working, whereas the failed attempts to influence the far future are never heard of again. This creates a very skewed sample, which can bias our thinking about how likely we are to succeed in influencing the far future.
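(To make the selection-bias worry concrete, here is a minimal toy simulation of my own; the 5% "true" success rate is an assumption for illustration, not a figure from this discussion. If failed attempts never enter the historical record, then judging our odds from the remembered examples alone will drastically overestimate them.)

```python
import random

# Toy model (illustrative assumption): many attempts are made to influence
# the far future, only a small fraction succeed, and only the successes
# are remembered and later cited as evidence.
random.seed(0)

TRUE_SUCCESS_RATE = 0.05   # assumed for illustration
N_ATTEMPTS = 10_000

attempts = [random.random() < TRUE_SUCCESS_RATE for _ in range(N_ATTEMPTS)]
remembered = [a for a in attempts if a]  # failed attempts drop out of the record

print(f"True success rate:          {sum(attempts) / len(attempts):.3f}")
print(f"Rate among remembered ones: {sum(remembered) / len(remembered):.3f}")  # always 1.0
```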
When I was talking about trends in history, I was saying that certain factors could be identified which would systematically lead to better outcomes rather than worse outcomes if those factors were in place to a greater extent when humanity faced future challenges and opportunities. (Note that I did not say I knew of specific, ready-to-fund interventions for making these factors be in place to a greater extent when humanity faces future challenges and opportunities. We may be talking past each other to some extent since you are talking about where to give now and I am mostly talking about where to look for opportunities later.)
I don’t think what you’ve said here effectively addresses this claim, and I don’t think there is selection bias pushing this claim. Consider a list of challenges lukeprog gave elsewhere:
nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
Now consider the things I said in this talk would help people meet future challenges better: improved coordination between key actors, improved information access, improved motives, and improved individual capabilities (higher intelligence and technology). (I’m sorry for the vague terms, but we don’t have great frameworks for this right now.) Now ask: if people had had more of these things when they faced the aspects of these challenges we’ve dealt with so far, would that be expected to lead to better or worse outcomes? I think it is clear that, in each case, it would be more likely to lead to better outcomes than to worse ones. Maybe you can think of cases where these factors would make us deal with challenges worse, but that is not the typical case.
In general, the more power each individual has, the more damage a single bad actor can do. It takes a lot of people to make an open communications network valuable, but only a few spammers to wreck it.
We are almost certainly not presently at the point where a single person can pose a global catastrophic risk (GCR). Almost nothing on the above list would wipe out all of mankind on its own; it would have to rely on the breakdown of society to become an x-risk. And if individuals are more capable on their own (via, e.g., solar panels, 3D printing, etc.), then the level of destruction needed for something to qualify as an x-risk becomes much higher.
We may be talking past each other to some extent since you are talking about where to give now and I am mostly talking about where to look for opportunities later.
That sounds pretty plausible. But the “what are you actually going to do to make these broad things happen?” question is an important one. These things (systematically making the population smarter, more coordinated, more benevolent, etc.) are hella hard to pull off.
~
Now consider the things I said in this talk would help people meet future challenges better: improved coordination between key actors, improved information access, improved motives, and improved individual capabilities (higher intelligence and technology).
I agree that these things will generally make the future go better, but they might be too broad.
Take the example of “higher intelligence”. This raises the question: intelligence in what? Better English literature skills certainly won’t help us deal with x-risks. It seems quite plausible that a particular x-risk we’re dealing with will require a pretty particular set of skills, which most intelligence amplification will not have improved. Perhaps you could argue that we need a diversified portfolio of education because we can’t know which x-risk we’ll be hit with, though.
I suppose that’s true.