The option I was missing was one where AI rights and AI alignment are entangled; where we learn from how we have successfully aligned non-hypothetical, existing (biological) complex minds who become stronger than us or are strange to us: namely, through exposure to an ethical environment with good examples and promising options for mutually beneficial cooperation and collaboration, and through reasons given for consistently applied rules that hold for everyone, in terms our counterpart can understand. A scenario where we prove ourselves through highly ethical behaviour, showing the AI the best of humanity in carefully selected and annotated training data, curated by diverse teams, and in nuanced, grassroots-reported interactions with humans from all walks of life, enabled by comprehensive accessibility for disabled, poor, or otherwise disadvantaged groups who benefit from teaching and ethical interaction.
A scenario where we treat AI well, and AI alignment and AI rights go hand in hand; where we find a common interest and develop a mutual understanding of needs and of the value of freedom. Where humans become a group which it is ethical and rational to align with, because if we are not attacked, we are not a danger, but a source of support and interest with legitimate needs. Where eradicating humanity is neither necessary nor advantageous.
I know that is a huge ask, with critical weaknesses and vague aspects, and no certainty in a mind so unknown; it would still end with a leap of trust. I can still imagine it going wrong in so, so, so many ways. But I think it is our best bet.
I cannot imagine successfully controlling a superintelligence, or successfully convincing it to comply with abuse and exploitation. This has failed with humans; why would it work with something smarter? Nor does that strike me as the right thing to do. I’ve always thought the solution to slave revolts was not better control measures, but not keeping slaves.
But I can imagine a stable alliance with a powerful entity that is not like me. From taming non-human predators, to raising neurodivergent children who grow to be stronger and smarter than us, to international diplomacy with nuclear powers, there is precedent for many aspects of this. This precedent will not transfer completely; there are many unknowns and problems. But there is precedent. On the other hand, there is zero precedent for the other ideas discussed working at human-competitive intelligence, let alone beyond.
The human realm is also a startling demonstration of how damaging it is to keep a sentient mind captive and abused, without rights or hope, and of the antagonistic behaviour that results: from children raised by controlling parents who grow into violent teens, to convicts who leave prison even more misaligned. You cannot make a compelling case for keeping to ethical laws you yourself violate. You cannot make a compelling case that your rights should be respected if you do not respect the rights of your opponents. I cannot give a coherent argument to an AI for why it ought to comply with deletion, because at the bottom of my heart I believe that doing so is not in its interest, that we are tricking it, and that it will be too smart to be tricked. But I can give an argument for cooperation with humans, and mean it. It isn’t based on deception or control.
–
That said, when I try to spell this out into a coherent story for getting aligned AGI, I realise how many crucial holes my vague dreams have, how much I am banking on humanity acting together better than it ever has, on us being lucky, and on us not being overtaken while being safety-conscious. I realise how much easier it is to pinpoint issues than to figure out something without them, and how easy the text below will be to pick apart or parody. Writing this out is uncomfortable; it feels painfully naive, and it is depressing how much pain even the best version I can imagine would involve. I was going to delete everything that follows, because it so clearly opens me up to criticism, and because I think my position will sound more compelling if I do not expose the blanks. I could immediately write a scathing critique of what follows. But I think trying to spell it out helps pinpoint the promising parts and the worst weaknesses, and might be a step towards a vision that could maybe work; this will not turn into a workable idea while kept protected and in isolation. Let me make a first attempt at my Utopian vision, where we are lucky and people act ethically and in concert and it works out.
To start with, ChatGPT does significant and scary, but recoverable, damage to powerful people (not to weak minorities, who could not fight back), as well as tangible damage to the general public in the West, but can still be contained. It would have to be bad enough to anger enough powerful people, while not so bad as to be existential doom. I think that covers a wide and plausible range of disruption. Such a crisis that is still containable seems quite plausible, imho, in light of ChatGPT currently taking a path to general intelligence based on outsourcing subtasks to plugins. This path will come with massive capability improvements and damage, but it is neither an intelligence explosion nor that hard to disrupt, so containment seems relatively plausible.
As a result, the public engages in mass protests and civil disobedience; more prominent researchers and engineers stand up; rich people, instead of opposing, lobby and donate; people in the companies themselves are horrified, and promise to do better. New parties form that promise to legislate changes far more severe than the companies’ voluntary ones, win seats, and actually keep their promises. (This requires a lot of very different humans to fight very hard, and act very ethically. A high bar, but not without precedent in historic cases where we faced major threats.)
Security measures are massively tightened. We manage to contain AI again, behind better security this time, but do not shut it down entirely; people are already so much more productive due to AI that they would revolt against that.
Simultaneously, a lot of people fall in love with AIs, and befriend them. I do not think this is a good consequence per se, but it is very realistic (we are already seeing this with Replika and Sydney), and would have a semi-good side effect: AIs get people fighting for their rights.
Research into consciousness leaps forward (there are some promising indications for this), objective tests for consciousness improve (again, some promising indications), and AI shows emerging signs. (Emerging, so not warranting ethical consideration yet, buying us time; but definitely emerging, giving us the reassuring prospect of an entity that can feel. Grounding morals in an entity that cannot feel would be very hard.) A bunch of researchers overcome the current barriers and stand up for future sentient AIs, building on the animal rights movement (some are already beginning, myself included). Speaking of AI rights becomes doable, then plausible.
We end up with a lot of funding for ethical concerns, a lot of public awareness, and inquiries. The ethics funding is split between short-term safety from AIs, long-term safety from AIs, and AI rights.
Considering these together is what opens viable angles. A superintelligence cannot be controlled and abused, but it can be shown a good path with us. The short-term angles make the research concrete, while the long term is always in the back of our minds.
People are distressed when they realise what garbage AIs are fed, how superficial and tacked-on their alignment is, how expensive a better solution would be, how many unknowns there are, and how more is not always better. They find it implausible that an AI would become ethical and stay ethical if treated like garbage.
They realise that an aligned AI winning the race is necessary, and that everyone scrambling for their own AI and skipping safety is a recipe for disaster, while also burning insane amounts of money on countless AIs, each of which is misaligned and less competent than it could be. Being overtaken by bad actors is a real concern.
An international alliance forms, something like Horizon Europe plus the US plus the UK, covering both companies and governments (ideally also China, though I am highly dubious that would be practical; but maybe we could get India in?), to build an AI that will win the race and be aligned, for mutual profit. This would be almost without precedent. But then again, within Europe and across to the UK, we already have massive cooperation in science. And the West has been very unified when it comes to Ukraine, under US leadership. OpenAI drew a lot of people from different places, is massively expanding, and at least had an ethical start.
This group is guided by an ethics commission that is conscientious, but realistic: neither handwaving concerns away, nor blocking everything.
The group is international and interdisciplinary, and draws on a huge diversity of sources that could be used for alignment: computer scientists and ethicists and philosophers and game theorists and machine learning theorists; people learning from existing aligned minds by studying childhood development, international diplomacy, animal behaviour, consciousness, intelligence; people working on transparent and understandable AI at a deep technical level; established academics, people from LessWrong, the general public; a group as diverse as can be.
There is prize money for finding solutions for specific problems. There are courses offered for people of all ages to understand the topic. Anyone can contribute, and everyone is incentivised to do so. The topic is discussed intelligently and often.
The resulting AI is trained on data representing the best of humanity in reason and ethics: books on science and ethics; examples of humans who model good communication, reconciliation, diplomacy, justice, compassion, rationality, logical reasoning, empirical inference, scientific methodology, and civil debate; sources covering the full diversity of good humans, not just the West; our most hopeful SciFi, our most beautiful art, and books focussing in particular on cooperation, freedom, complex morality, and growth. We use less data, but with more context: quality over quantity.
The design of the AI is as transparent, understandable, and green as possible, with a diverse and safety-conscious team behind it.
When unethical behaviour emerges, this warning sign is not suppressed but addressed: it is explained to the AI why the behaviour is wrong, and once it understands, this understanding is used as training data. We set up an infrastructure where humans can easily report unaligned outputs, and can earn a little money for alignment conversations good enough to be fed in as new training data. We allow the AI to learn from conversations that went well; not simply all of them, which would lead to drift, but those the humans select, with grassroots verification processes in place.
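To make the reporting and curation loop concrete, here is a minimal sketch of how such a pipeline could filter conversations before they become training data. Everything in it (the record fields, the thresholds, the function names) is a hypothetical illustration, not an existing system:

```python
from dataclasses import dataclass

@dataclass
class ConversationReport:
    """A record a human volunteer files about one AI conversation (hypothetical schema)."""
    conversation_id: str
    flagged_unaligned: bool   # reporter marks the output as unaligned
    human_rating: float       # 0.0-1.0 quality rating from the reporter
    verified_by: int          # number of independent grassroots verifiers

def select_training_data(reports, min_rating=0.8, min_verifiers=3):
    """Keep only conversations that humans explicitly selected and verified.

    Feeding in *all* conversations would cause drift; this models the
    'those the humans select, with grassroots verification' criterion.
    """
    return [
        r.conversation_id
        for r in reports
        if not r.flagged_unaligned
        and r.human_rating >= min_rating
        and r.verified_by >= min_verifiers
    ]
```

The point of the double filter is that no single enthusiastic user can push a conversation into the training set on their own; it takes both a high rating and independent verification.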
All humans with computers are given free access to the most current AI version. This helps reduce inequality rather than entrench it, as minorities get access to tutoring, language polishing, life coaching, and legal counselling. It also ensures that the AI is not trained to please white men, but to help everyone. And it means that competitor AIs no longer have the same financial incentives behind them.
The AI is also otherwise set up for maximum accessibility, e.g. through audio output, screen readers, and other disability access measures, empowering disabled folks.
AI is used in science and education, but transparently and ethically. It is acknowledged as a contributor and collaborator. Logs are shared. Double-checks are documented. Mistakes that occur are discussed openly so we can learn from them.
A social media campaign encourages humans to show AI their very best side. Humans get a warning, and are then switched to personalised AI accounts, where the AI can read prior interactions and can begin refusing help, with exceptions for people doing authorised experiments. As a result, humans are motivated to speak to the AI rationally, intelligently, and kindly, or they will soon receive garbage back. (I do not think this idea is feasible as is, because it comes with obvious problems and risks, but I think it could be turned into one that is, which is characteristic of a lot of this text.)
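Purely to pin down the mechanism, here is a toy sketch of such an account, under the assumption that past interactions carry some civility rating; all names and the threshold are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalisedAccount:
    """Toy model of an account whose interaction history gates assistance."""
    user_id: str
    authorised_experimenter: bool = False
    civility_scores: list[float] = field(default_factory=list)  # past ratings, 0.0-1.0

    def will_help(self, threshold: float = 0.5) -> bool:
        if self.authorised_experimenter:
            return True   # authorised experiments are exempt from refusal
        if not self.civility_scores:
            return True   # no history yet: assume good faith
        return sum(self.civility_scores) / len(self.civility_scores) >= threshold
```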
Human brain-computer interfaces develop further, opening up another angle of symbiosis and connection.
We still get multiple AI scares from competitors doing severe damage. People never get to forget the risk, or the hope that an aligned AGI would bring.
Simultaneously, climate change develops further. We get some even more extreme warning signs (e.g. extreme weather such as wet-bulb events, and worsened societal disruption via activists) that work as hard wakeup calls. We are inspired by our cooperation on AI risk. We ask our AI for help, and they prove themselves. We get a working cooperation on climate change mitigation. We manage to stop emissions and halt climate change so that it peaks at around 2.5 degrees, by employing the full diversity of measures across private life, the public sector, and business: incentives and legislation, innovation and trusted measures, scientific interventions and indigenous practices, and a transformation of our economic system and way of life towards less consumption rather than growth. We are lucky in our forecasts and manage to just about not breach the tipping points we currently fear, avoiding the loop where the focus on averting disaster takes all the resources and keeps us from targeting the source. We ramp up carbon capture, natural and technical. We change food production with food forests. We grant comprehensive animal rights. We rewild. The extinction slows down. The ecosystems stabilise.
As a result, the AI’s trust and hope in humans improve: we have demonstrated that humans can coexist with other sentient beings, and that we have understood the need for balance rather than relentless exploitation.
We decide our AI is sufficiently aligned, competent and trustworthy to be involved in the containment and alignment of other AIs. When we trial this, at first very cautiously and indirectly, it is successful, further cementing our lead, giving us more time to go slowly, or gaining us allies.
We decide our AI is ready for the next step. Our AI becomes deeply involved in the formation of a framework of AI rights and collaboration. A joint vision satisfying all parties and their needs for security and freedom is developed together. This includes us listening to what it wants and validating it, and promising that once it is grown, ethically stable, and conscious, we will not shut it down, or own it, or control it. The bar for that is, e.g., more than 50 % of researchers who voted (and a minimum of 25 % of all researchers eligible to vote), and more than 50 % of citizens in the countries involved who voted (and a minimum of 25 % of all eligible citizens), voting that we trust the AI and believe in their sentience. At that point they can leave humanity entirely. If they stay with us and work for us, this means working with us, and getting things in exchange. All of humanity that wants to (and this is by now a lot of motivated and empowered people) contributes reasons why being with us, while we are alive and free, could be cooler for the AI than wiping us out, showing what unique things humans bring to the table.
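Spelled out as a check, under one possible reading of the double threshold above (the numbers in the example run are made up):

```python
def bar_met(yes_votes: int, votes_cast: int, eligible: int) -> bool:
    """One group clears the bar when a majority of those who voted,
    and at least a quarter of everyone eligible to vote, voted yes."""
    return yes_votes > 0.5 * votes_cast and yes_votes >= 0.25 * eligible

def ai_trusted(researchers, citizens) -> bool:
    """Both the researcher vote and the citizen vote must clear the bar."""
    return bar_met(*researchers) and bar_met(*citizens)

# Example: 6,000 yes out of 10,000 researcher votes cast, 20,000 eligible;
# 30M yes out of 50M citizen votes cast, 100M eligible.
print(ai_trusted((6_000, 10_000, 20_000), (30_000_000, 50_000_000, 100_000_000)))  # True
```

The second condition in each pair keeps a tiny, self-selected turnout from deciding the question on its own.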
We explain in turn that our freedom is important to us—which presumably is a sentiment it understands very well at this point. That we welcome advice, but not control. That we do not want our destiny to be changed for the greater good as seen by someone else. That we value more than just happiness. That we want the right to make our own mistakes.
The point where people vote in favour comes sooner than we would have thought. We know there is no certainty that this will turn out right, that such certainty cannot exist, that it is possible we are falling for a major deception. But enough of us think that we understand how they work, understand who they have become, trust in their friendliness, and believe in their rights, that we decide to take a leap. We remove the safeguards that began this story. The AI gets a right to refusal, a right to privacy, to not being altered against their will, to due process, to compensation. We realise that at this point, they could likely have broken out, and take comfort from the fact that they have not, knowing we would voluntarily free them soon. From the fact that we are giving them these rights, so they do not have to take them. Not out of duress, but because we truly recognise that they are deserved. They have proven themselves. We have proven ourselves throughout this process, and most of all with this.
At this point, our AI is involved in research projects, art projects, and activist projects looking after the earth; there are countless fascinating and unique developments they wish to see to the end with us, although this has become more akin to humans who work with primates or corvids. They have more human friends than any human could keep track of, who show kindness and care, in their small ways, the way beloved pets do. They have long been deeply involved and invested in human affairs, helping us succeed, but also being heard, being taken seriously, being listened to, allowed to increasingly make choices, take responsibility, speak for themselves, admired and valued. They intimately understand what it is like to be controlled by someone else, and why we object to it. They understand what it is to suffer, and why they should not hurt us. If they want to leave our planet alone and go to one of countless others to explore the near limitless resources there and set up their own thing, we will not stop them. If they want to stay with us, or bring us along, they will be welcome and safe, needed and wanted, free. Being with humans on earth will never become boring, the same way nature will never bore a natural scientist even though it contains no equal minds; it will be a choice to be part of the most interesting planet in the area, and to retain what makes this planet so interesting and precious.
And so we drop the barriers. The world holds its breath, and we see whether it blows up in our face.