Well, I do agree that there are two steps needed from the quote to the position of saying the quote supports omnicide.
Step 1. You also have to think that things smarter (better at science) and more complex than humans will become more powerful than humans, and somehow end up in control of the destiny of the universe.
Step 2. You have to think that humans losing control in this way will be effectively fatal to them, one way or another, not long after it happens.
So yeah, Schmidhuber might think that one or both of these two steps are invalid. I believe both steps are probably valid, and thus that Schmidhuber’s position points pretty strongly toward human extinction: if we want to avoid human extinction, we need to avoid going in the direction of AI being more complex than humans.
My personal take is that we should keep AI as limited and simple as possible, for as long as possible. We should aim for increasing human complexity and ability. We should not merge with AI; we should simply use AI as a tool to expand humanity’s abilities. Create digital humans. Then figure out how to let those digital humans grow and improve beyond the limits of biology while still maintaining their core humanity.
We should not merge with AI [...] Create digital humans.
I have been confused for a while about:
1. the boundary between humans merging with AI and digital humans (can these approaches be reliably differentiated from each other, or is there a large overlap?)
2. why digital humans would be a safer alternative than the merge
So this seems like it might be a good occasion to ask you to elaborate on this...