My view about global rationality is similar to John Baez's view about individual risk-aversion. An individual should typically be cautious because the maximum downside (destruction of your brain) is huge even for day-to-day actions like crossing the street. In the same way, we have only one habitable planet and one intelligent species. If we (accidentally) destroy either, we're boned. Especially when we don't know exactly what we're doing (as is the case with AI), caution should be the default approach, even if we were completely oblivious to the concept of a singularity.
that the most pressing issue is to increase our confidence in making decisions under extreme uncertainty, or to reduce the uncertainty itself.
I disagree; it's not the most pressing issue. In a sufficiently complex system there are always going to be vectors we poorly understand. The problem here is that we have a global society in which it becomes harder every year for a single part to fail independently of the rest. A disease or pathogen is sure to spread to all parts of the world, thanks to our infrastructure. A failure of the financial markets affects the entire world because the financial markets, too, are intertwined. Changes in the climate affect the entire globe, not just the countries that pollute. An unfriendly AI cannot be contained either. Everywhere you look there are now single points of failure. The more connected our world becomes, the more vulnerable we are to black swan events that rock the whole world, and therefore the more cautious we have to be. The strategy we used for the past 100,000 years (blindly charge forward) got us where we are today, but it isn't very good anymore. If we don't know exactly what we're doing, we should make absolutely sure that all worst-case scenarios affect only a small part of the world. If we can't make such guarantees, then we should probably be even more reluctant to act at all. We must learn to walk before we can run.
Under extreme uncertainty we can hardly err too far on the side of caution. We can reduce uncertainty somewhat (by improving our estimates), but there is no reason to assume we will take all significant factors into account. If you start out with a 0.001 probability of killing all of humanity, there is no amount of analysis that can rationally lead to the conclusion "eh, whatever, let's just try it and see what happens", because the noise in our confidence will exceed a few parts in a million at the very least, and that is already an unacceptable level of risk. It took billions of years for evolution to get us to this point. We can now mess it up in the next 1,000 years or so because we're in such a damn hurry. That'd be a shame.
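To make the arithmetic concrete, here is a minimal sketch in Python. The world population figure and the "analysis revises the estimate down a millionfold" scenario are my own illustrative assumptions; the 0.001 prior and the parts-per-million noise floor are the numbers from the paragraph above.

```python
# Illustrative expected-loss arithmetic; the numbers are assumptions,
# not real estimates.

prior = 1e-3               # the 0.001 starting estimate from the text
noise_floor = 1e-6         # "a few parts in a million" of irreducible
                           # noise in our confidence
population = 8e9           # rough current world population (assumption)

# Even if further analysis revises the point estimate far below the prior,
# the probability we should act on cannot drop below the noise floor.
analysed_estimate = prior / 1_000_000   # suppose analysis says "negligible"
effective_p = max(analysed_estimate, noise_floor)

expected_deaths = effective_p * population
print(f"effective probability: {effective_p:.0e}")                           # 1e-06
print(f"expected deaths, current generation only: {expected_deaths:,.0f}")   # 8,000
# Thousands of expected deaths even at the noise floor, before counting
# the future generations that extinction would also erase.
```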