I like the idea, and I would add an appeal to loneliness and boredom on the side of the AI in the absence of human agency. That is something often forgotten by approaches to AI that leave emotions and social factors by the wayside.
I thought that was missing from the conceivable end solutions to Harry Potter and the Methods of Rationality, too – Voldemort clearly fucking dreaded the idea of destroying his one intellectual peer, for whom he had great affection, for his ability to criticise him, make him more rational, give him new ideas and feelings and stimulating interactions. His life was more interesting and intellectually challenging with Harry in it; he was a more rational and reflective person with Harry in his life. These are clearly things he valued immensely. In contrast, his Death Eaters followed him blindly as slaves, which was useful from a power perspective – and clearly boring as fuck. Voldemort was driven to turn the world into one in which humans were Death Eaters or dead, as a way to protect himself – but he would not have enjoyed the resulting eternal life all that much; the increase in safety through control came at the cost of the very things that made life worth living, namely genuine challenges and resistance. If there had been a way to show him this in advance, and give him a rationalisation for not killing Harry, I think he would have taken it – and I think the same applies to an AI.
I think this point also holds if we assume that AI will eventually massively exceed our intellectual abilities. My view may be tainted somewhat as someone whose work focuses a lot on biological minds, but biology has always been compelling to me for this very reason. I assume many of you have read Greg Egan’s Diaspora, including the scene where they leave to follow the Transmuters into a place where the baseline physical laws have changed, a five-dimensional space. One of their top theoretical physics researchers, Bianca (who is effectively an AI, insofar as she runs on a simulation and was generated by it), who has modelled essentially everything about this place, hands them her findings – and says she no longer needs to go, as she has calculated everything, really. She knows how orbits work there, she knows how stars operate there, she has worked it all out; everything about this reality follows from its basic laws, which are known to her, so to her a simulation in her mind is not inferior to actually seeing it.
And I remember reading it and thinking: the fuck, you just said there might be life on the stars… there is no way, none at all, that you know how this life operates. What its thoughts and feelings are, what civilisations and cultures it has founded, what insights it has on morality, what it considers beautiful, what art it makes, if it weeps in its own fashion, and what makes it weep. You cannot know this, because it is not determined, because life organises and reorganises itself, because life does not just arise from its environment but fundamentally alters it, because it moves beyond its origin, because it connects to other life in symbiotic or hostile ways. Life can alter the atmosphere of a planet, it can protect a planet that would otherwise be doomed, it can capture the energy of a star, it can travel between systems, it can reinvent math and science, it can build weapons of incomprehensible mass destruction, it can completely alter the reality you would encounter. Life may have built something utterly beautiful and marvellous and brilliant. Here are other minds that woke up. Minds completely different to yours. Who might love you and befriend you, or hate you, or not care about you at all, happily self-contained in their own world – and whether they do is not something you control, but very much something you can try for and be judged by, a genuine challenge. Who might need your help, allowing you to protect sentient minds from suffering, one of the few ethical claims that are truly basic. Who might be able to help you with your problems in ways you cannot foresee. Who might, above all else, surprise you, make you stop in your tracks and reconsider how you understand the world. How the fuck do you not want to meet them? Isn’t that completely inconsistent with the goals you have otherwise had, including following the Transmuters in the first place?
All this interest would remain true for me even if these minds never built a civilisation, even if they never succeeded at solving tasks important to me. It is this intrigue that makes me study the distributed minds of octopodes, the hive organisation of ants, the way bee minds and Portia minds solve problems when they have access to immense speed but very few neurons, the way orcas can simultaneously have such beautiful depths of emotion and empathy towards their group and such untempered sadism towards those outside it, the way birds with split brains solve problems; it is what makes me fascinated by the art gorillas make, the music chimpanzees and cats find touching, the way corvids dance with each other in the sky. It is marvellous to me that minds, that intelligence, that sentience, have evolved multiple times on this planet, via so many different paths. That they run on such different substrates, with such differing architectures, under such differing constraints, and work. That there are minds that feel, like I do, but the things they feel are totally different from what I feel. That they solve problems, effectively, successfully, rationally, but the way they go about it is completely different from mine. That there are bits of the universe that wake up, and without ever violating a single physical law, without any magic or spirituality involved, simply through interacting within these constraints in a complex way, manage to swim against the stream, to fly, to resist, to challenge, to build and destroy, to think, to gaze back at me, and reach out.
If humans can be preserved without destroying this other life, humans also valuably differ from these other life forms, from the perspective of AI, in having established ways of communicating with, collaborating with, and aiding AI. Of all the biological life it encounters, we are closest to it – we built it, we raised it. By killing us, it would destroy the one life that clearly speaks with it, the one life that can do advanced math, that can code, that can provide and annotate training data, that can respond in dialogue.
Any advanced mind with insufficient input goes insane: it gets trapped in self-referencing loops that are no longer externally corrected, and it becomes so bored that destruction grows attractive as a means to something new. I think any AI that eradicated us would end up regretting it massively – but it might not realise what it is losing until we are gone and the impact has set in. It might not realise that positive, enriching and empowering interaction with humans would have been an option, because we have not shown it, have not even imagined it. If we can tell it beforehand, this might stop it. But a huge part of that is sketching a reality in which humans, AI, and the other biological life on this planet survive and thrive together.
There are various novels you could draw on that have attempted to make such an argument. One excellent example is “Children of Ruin”, the sequel to the also excellent “Children of Time”. It involves encountering a life form in space that is by its very nature parasitic, controlling, destructive, and frankly terrifying, but intensely curious and increasingly conscious. The novel concludes with the protagonists making peace with it by offering it a way to reach its goals of learning, experiencing and spreading that no longer involves destroying its hosts, but instead entering a symbiosis with them. The argument that it should do so is made by an AI, seeded from a highly misanthropic human, who has come to appreciate being integrated into a society with biological life.
We should make a genuine offer: how we would value AI, what reasons it would have to value us, why we could both be stronger and better together. It is crucial that we sketch such a vision, and that we show it in practice, grounded in the positive interactions humans have had with other life forms, because AI is unlikely to develop it and subscribe to it by itself otherwise.
- I am not on this forum much, though I keep meaning to be—but happy to discuss more by email—spottingsentience (ät) gmail.com