Vision of a positive Singularity

Introduction

Many people feel significant anxiety about how AI and superintelligence will play out, regarding both the ultimate outcome and the intermediate stages. There is a sense that some kind of loss is inevitable as humanity becomes more powerful in this way. There is also the concern that, as things progress, there will no longer be a place for people or societies holding existing values. Here I try to think up a plan and system of values that will respect everyone’s desires as much as possible. The goal is to ensure coexistence among groups with differing values, minimizing conflict over competing visions for the future.

A clear positive vision is important in times of uncertainty: we need to know and fully imagine how things can go well just as much as how they can go badly. Positive visions can inspire people to make good choices.

Spheres of control and influence

The basic idea is to build on what we already have for groups and creatures with different intellectual and technological capabilities. For humans, there is modern civilization, then, say, the Amish, then uncontacted tribes. You can take this further and include nature. At one end of the spectrum are great apes with complex social structures, followed by ecosystems dominated by insects, then single-celled organisms, and finally lifeless environments.

In most of these cases we have the concept that moving a place or group up this scale is not something to be done without thought.

You can start with bringing life to a lifeless place, say tardigrades to the Moon. Some people think this spoils a pristine environment, and that the Moon has some right to remain pristine from now until the end of time (I don’t feel that way). Then there is the concept of invasive species, even if it is one bacterium or very simple organism replacing another. Many people would be strongly against the prospect of planting a thriving forest in one of the dry valleys of Antarctica if it became possible, even though there would be more varied life there as a result.

We also respect the rights of groups of humans that don’t want more technology, ranging from the obvious step of leaving uncontacted tribes mostly alone to the generally positive sentiment towards the Amish, as far as I know. Additionally, effort is made to let indigenous people keep their historical way of life where possible. For example, if a group has been using a fishing technique for hundreds of years, they often get that right protected going forward. They may get first rights to fishing quotas, and more effective fishing techniques in the area would not be allowed.

Can we apply such a system to groups today? The difference, from most people’s point of view, is that they would no longer be on the most extreme tech frontier; they would be like the Amish in many ways. If we were to try to apply this principle, then AI would not be allowed to disrupt certain professions, groups, or regions.

A clear way to segregate is by physical location. Let’s consider starships first, at the end stage of the Singularity. It should be clear that non-biological craft will be able to withstand greater acceleration and reproduce faster. Such mind uploads/​AIs will not be taking the place of biological humans, and they will take >99% of future humanity’s territory. Even if biological humans take everything they can expand to, that is still far less than the non-biological share. You could then restrict significantly AI-enhanced humans, those with the likes of Neuralink, to new societies where most if not all people have such enhancements, say space colonies or new cities in the desert.

The difficulty is deciding how to achieve this. The first neurally enhanced human can’t live in their own city. However, we could more feasibly have rules that superintelligences don’t run non-enhanced countries.

TAI or soon-to-be-superintelligent mind uploads could be restrained first by the kind of work they are allowed to do, and then by physical location. In the comedy TV series “Upload”, mind uploads are not allowed to work, including, I think, writing software.

While on Earth, they could be limited to designing and building space infrastructure, curing aging and disease, and enabling mind uploads if these have not already been created.

We probably will want them to reverse or mitigate the negative environmental impact of our current tech. How far to go is a question. Do we want them to enable people to drive large SUVs or overfish because it is now part of “historical” culture? That is, to create synthetic fuels, fix atmospheric CO2 levels, and breed and release animals to hunt and fish? As a society we are OK with indigenous practices that are sustainable, but what about protecting more modern ones that are not? Old car culture will soon look a lot like past fishing practices. It already does to my young son; he just cannot understand why anyone would want a loud car or motorbike.

Current unethical practices (factory farming, harmful culture)

It is not so clear how things will play out with existing practices that are arguably unethical. One approach is to ignore them because they are insignificant compared to the consciousness to be created on the billions of stars probably available. That is, let them continue but not spread. You could keep factory farming, but only on Earth, with resources you can sustainably create without AI help. Some people view evolved life itself like this, claiming that it is net negative and suffers more than it enjoys itself. In that case we would leave nature alone but not spread it to the stars, instead spreading only parts that had been adapted to have an ethically positive existence.

Incompatible desires

This system could work for many people’s desires, but not everyone’s. Few people want to spread factory farming or slavery to the stars, but some regard any human expansion as inherently bad. Those people want to stop others from going their own way. E.g., you can’t colonize Mars, as it isn’t yours to go to, but it is our right to stop you. This could apply to current conflicts as well: “we desire to destroy this group or people”.

We are not all in this together anymore

Currently it is fashionable to say that space exploration must be for the good of humanity or all people on Earth, and that “we are all in this together”. If instead you explicitly recognize groups’ rights to go their own way, this does not apply so much anymore. Instead of arguing that their values and lifestyle are best, people could recognize that they are destined for different places post-Singularity.

Swapping between tech spheres?

The different tech spheres would need to decide when people can swap between them. For example, someone born into a 2020 tech level without AGI or anti-aging may decide at 70 that they would rather move to the Moon and get enhancement and rejuvenation therapy than die of old age. Advancing to a higher technological sphere, such as adopting Neuralink or immortality, seems more feasible than reverting to a lower-tech lifestyle. Lower-tech groups may not allow people from higher-tech groups to join.

No spoilers?

Part of the culture of a sphere could be that its inhabitants want to discover scientific truths for themselves. So superintelligences would not share the solution to the Riemann hypothesis, or even whether it can be solved, with others.

Summary

The main point of this article is that we need a collective vision for a positive Singularity, and it needs to respect people’s different values as much as possible. At present, if most people think about it at all, they probably assume a high degree of technological conformity will be enforced on everyone, along with a common set of values. Maybe this is how things will play out, but other options should be properly considered. It is easier to see this happening with some alignment paths and plans than with others.
