Tim, do we have any idea what is required for uploads? Do we have any idea what is required for AGI? How can you make those comparisons?
If we thin-section and scan a frozen brain, the result is an immense amount of data, but at least potentially it captures everything you need to know about that brain. This is a solvable technological problem. If we understand neurons well enough, we can simulate that mapped brain. Again, that’s just a matter of compute power. I’m sure there’s a huge distance from a simulated scan to a functional virtual human, but it doesn’t strike me as impossible. Are we really farther from doing that than from building a FriendlyAI from first principles?
Nick, what I’d like to see in order to take this FriendlyAI concept seriously is some scenario, even with a lot of hand-waving, of how it would work and what kind of results it would produce. All I’ve seen in a year of lurking on this board is very abstract and high level.
I don’t take FriendlyAI seriously because I think it’s the wrong idea, from start to finish. There is no common goal that we would agree on. Any high-level moral goal is going to be impossible to state with mathematical precision. Any implementation of an AI that tries to satisfy that goal will be too complex to prove correct. It’s a mirage.
Eliezer writes, “[FAI] computes a metamoral question, looking for reflective equilibria of your current inconsistent and unknowledgeable self; something along the lines of ‘What would you ask me to do if you knew what I know and thought as fast as I do?’” This strikes me as a clever dodge of the question. As I put it in my post, “I don’t know what I want or what the human race wants, but here I have a superintelligent AI. Let’s ask it!” It just adds another layer of opacity to the entire question.
If this is your metagoal, you are prepared to activate a possibly unfriendly AI with absolutely no notion of what it would actually do. What kind of “proof” could you possibly construct that would show this AI will act the way you want it to, when you don’t even know how you want it to act?
I fall back on the view that Eliezer has actually stated: that the space of all possible intelligences is much larger than the space of human intelligences, and that most “points” in that space would be incomprehensible or insane by human standards. And so I think the only solution is some kind of upload society, one that can be examined more effectively by ordinary humans, one that can work with us and gain trust: ordinary human minds in simulation, not self-modifying and not accelerated. Once we’ve gotten used to that, we can gradually introduce faster human minds or modified human minds.
This all-or-nothing approach to FriendlyAI strikes me as a dead end.
This idea of writing off the human race and assuming that some select team will just hit the button and change the world, like it or not, strikes me as morally bankrupt.