Is it “alignment” if, instead of AGI killing us all, humans change what it is to be human so much that we are almost unrecognizable to our current selves?
I can foresee a lot of scenarios where humans offload more and more of their cognitive capacity to silicon, but they are still “human”—does that count as a solution to the alignment problem?
If we all decide to upload our consciousness to the cloud, and become fast enough and smart enough to stop any dumb AGI before it can get started, is THAT a solution?
Even today, I offload more and more of my “self” to my phone and other peripherals. I use autocomplete to text people, rather than writing every word, for example. My voicemail uses my voice to answer calls and other people speak to it, not me. I use AI to tell me which emails I should pay attention to and a calendar to augment my memory. “I” already exist, in part, in the cloud and I can see more and more of myself existing there over time.
Human consciousness isn’t single-threaded. I have more than one thought running at the same time. It’s not unlikely that some of them will soon partially run outside my meat body. To me, this seems like the solution to the alignment problem: make human minds run (more) outside of their current bodies, to the point that they can keep up with any AGI that tries to get smarter than them.
Frankly, I think if we allow AGI to get smarter than us (collectively, at least), we’re all fucked. I don’t think we will ever be able to align a super-intelligent AGI. I think our only solution is to change what it means to be human instead.
What I am getting at is: are we trying to solve the problem of saving a static version of humanity as it exists today, or are we willing to accept that one solution to alignment may be for humanity to change significantly instead?
I personally like the idea of uploading ourselves (and asked about it here).
Note that even if we are uploaded—if someone creates an unaligned AGI that is MUCH SMARTER than us, it will still probably kill us.
“Keeping up”—in the sense of improving/changing/optimizing so quickly that we’d compete with software specifically designed (perhaps by itself) to do that—seems like a solution I wouldn’t be happy with. As much as I’m OK with posting my profile picture on Facebook, there are some degrees of self-modification that I’m not OK with.