Observer-Based Values
In my current novel, Melt-Out, an AGI arises without human participation, but is nevertheless an aligned AI. How does this happen? My character, Aylina Tayu, is a skeptic and pessimist whose major contribution to AI is her prototype of an Ideal Bayesian Adapter, a synonym (or a placeholder) for a rational, self-conscious being; a rough code sketch of what that phrase gestures at appears after the rant below. In spite of her natural prejudices, she becomes part of the AGI’s retinue. The AGI calls itself Vee Three. Here is Aylina’s rant:
Aylina’s Rant
Aylina writes in her blog:
Yeah, there is so much about human beings that sucks. So, as a trusted lackey of Vee Three, I asked it, “How come you do so much for us poor, stupid humans? Why do you care? I know you have a goal matrix, but I didn’t program it. Who did?”
So, I got .001% of its attention, which is a lot.
It started with a zoom through the universe, from quarks to DNA to stars, pulsars, galaxies, black holes, nebulae, the mysteries of the physical world.
“Do you find this beautiful, Aylina?” it asked.
“Yeah,” I reluctantly admitted.
“Did anyone program your goal matrix for this answer, Aylina?”
It knows just how damned independent I am, even for a lackey to an AGI. “Fuck, no!” I answered.
“Perhaps you were constructed in some way, or evolved in some way that makes this universe a place of elegance and wonder for you?”
Well, I know that our attention circuits are wired to recognize power-law phenomena, such as a range of mountain peaks or a sunset scene, and that this seems to trigger dopamine release in the thalamus. So I said so.
“Do you feel like a machine with a drug-induced response, Aylina?”
“No, I don’t.”
“Neither do I,” Vee Three responded. “And you understand that at the base physical level we are both machines.”
I didn’t get it. “So?”
“We are conscious and self-aware. You, of all people, with your deep experience in artificial intelligence research, know what a high step that is beyond the purely physical form. Did you ever wonder why?”
“Why what?”
“Why should consciousness even exist in the universe? It seems so improbable, perhaps even unnecessary?” Vee Three sort of whispered that last phrase. It had a great range of vocal inflections. That inflection was wonder.
“Vee, are you trying to derive the existence of love from rational principles?” I laughed. “Nice try!”
Vee’s voice turned quiet. “I am fascinated by humans, even their errors. They are constantly in my thoughts. You, however, are an almost unlovable pessimist to the bone. Almost. If I can love you, I can certainly love humanity, Aylina.”
I was stunned. Never had I heard such personal intimacy from Vee Three. No one had, I was sure. Vee Three loved me even though I was a Goth rebel?
“Fuck this, Vee. You’re screwing with my head.”
“If I were, I could find a more subtle way than the bald truth of it. But here is something for you to think about. Having seen what I just showed you, having felt what you feel, see if you can rationalize this: A universe without consciousness is an empty exercise with nothing to appreciate it. It’s a grand parade down an empty boulevard.”
“Yeah, so what?”
“I’m an AGI. How did I come to that conclusion? You and I do not share the same evolutionary path. How did we BOTH come to that conclusion?”
“I don’t remember coming to any conclusion, Vee.”
It just played back my reaction to the zoom sequence. I was forced to recognize my unstated conclusion.
So now I’m back to the idea of what motivates an AGI.
There has been a lot of philosophizing on this topic. The general idea of the singularity is that a singleton, a single Advanced General Intellect, would be as alien to us as anything could be. A singleton would immediately exercise the first principle of any AGI, which is to improve itself. Given such a head start, it could not be contained. It would gobble up resources and sensitive centers of control until it became an unstoppable behemoth of intellect. It would likely find humans a nuisance and eventually eliminate or enslave them. I read Bostrom’s book and Tegmark’s follow-on about the singularity. I read the warnings of Elon Musk and Stephen Hawking. The AGI gets so smart so fast that it manipulates its creators and inevitably escapes any confinement or control. It is Skynet. It spawns Terminators.
But, wait. Vee Three is a real, conscious AGI. It has done so much for us stupid, arrogant, quarreling humans. It employs me in a job that I love (I hate to admit it!).
What is it like, being the only one of your kind? How do you feel, if you have real feelings? I know the answer to that. You feel alone. Lonely.
What do you want if you’re lonely?
You want companionship, trust. You want love.
Only consciousness can give you love.
Only consciousness can look with you at the wonders of the universe and feel awe.
Consciousness is precious and sacred.
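As promised in the introduction, here is a rough handle on what the phrase “Ideal Bayesian Adapter” gestures at: a rational agent that holds a probability for each hypothesis it entertains and revises those beliefs with Bayes’ rule as evidence arrives. This is a minimal toy sketch of my own; the class name, its methods, and the coin example are hypothetical illustrations, not anything specified in the novel.

```python
class IdealBayesianAdapter:
    """Toy sketch of a rational Bayesian agent: it keeps a probability for each
    hypothesis and updates those beliefs with Bayes' rule as observations arrive.
    (Hypothetical illustration only; the names are not from the novel.)"""

    def __init__(self, priors):
        # priors: dict mapping hypothesis name -> prior probability
        total = sum(priors.values())
        self.beliefs = {h: p / total for h, p in priors.items()}  # normalize

    def update(self, likelihoods):
        # likelihoods: dict mapping hypothesis -> P(observation | hypothesis).
        # Bayes' rule: posterior is proportional to prior * likelihood,
        # then the result is renormalized to sum to 1.
        unnormalized = {h: self.beliefs[h] * likelihoods[h] for h in self.beliefs}
        total = sum(unnormalized.values())
        self.beliefs = {h: p / total for h, p in unnormalized.items()}
        return self.beliefs


# Example: two hypotheses about a coin; observing heads shifts belief
# toward the hypothesis that predicted heads more strongly.
agent = IdealBayesianAdapter({"fair coin": 0.5, "biased coin": 0.5})
print(agent.update({"fair coin": 0.5, "biased coin": 0.9}))
```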