Does anybody think this will actually help with existential risk? I suspect the goal of “keeping up” or preventing irrelevance after the onset of AGI is pretty much a lost cause. But maybe if it makes people smarter it will help us solve the control problem in time.
It has been fairly standard LW wisdom for a long time that any kind of human augmentation is unhelpful for friendliness.
I think that we should be much less confident about this, and I welcome alternative efforts such as the neural lace.
Yes, I think the entire concept of the AI x-risk scary idea (e.g. Clippy) is predicated on machines being orders of magnitude smarter, in some ways, than their human builders. If instead there is a smooth transition to increasingly powerful augmented human intelligence, then the transformative power of AI becomes evolutionary rather than revolutionary. Existing power structures remain in effect as we move into a post-human future.
Of course there will be issues of access to augmentation technologies, bioethics panels, government regulation, etc. But these won’t be existential risks.
I can see it as part of a research program.
Imagine that we understand the brain. We can replicate it in silicon, and we can functionally decompose it into problem-solving and motivational sections. With a neural interface we could then connect an artificial problem-solving section to our own motivational section, giving ourselves an external lobe (this could perhaps also be done in a hacky, indirect way without a direct connection).
If this happens, then there are two benefits with respect to existential risk:
1) People will spend less money and time trying to create new agents.
2) We will be closer to parity in problem-solving capability with new agents when they do come about.
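The decomposition I have in mind could be sketched roughly as follows (a toy illustration only; all names are hypothetical, and the point is just that attaching an external lobe adds capability while leaving the original motivation system in charge):

```python
# Toy sketch of the functional decomposition described above: an agent is a
# motivational section that scores outcomes, plus one or more problem-solving
# "lobes" (biological or silicon) that propose plans. The "neural interface"
# is modeled as simply plugging an external lobe into an existing agent.
from dataclasses import dataclass, field
from typing import Callable, List

Plan = str

@dataclass
class MotivationSection:
    """Holds the agent's goals and scores candidate plans against them."""
    score: Callable[[Plan], float]

@dataclass
class ProblemSolvingLobe:
    """Generates candidate plans; could be native or an external augment."""
    propose: Callable[[], List[Plan]]

@dataclass
class AugmentedAgent:
    motivation: MotivationSection
    lobes: List[ProblemSolvingLobe] = field(default_factory=list)

    def attach(self, lobe: ProblemSolvingLobe) -> None:
        # The neural interface: new capability, same old motivations.
        self.lobes.append(lobe)

    def act(self) -> Plan:
        # Pool proposals from every lobe; the (unchanged) motivation
        # section picks the plan it likes best.
        candidates = [p for lobe in self.lobes for p in lobe.propose()]
        return max(candidates, key=self.motivation.score)

# A human lobe plus a stronger external lobe, sharing one motivation system.
human = AugmentedAgent(
    motivation=MotivationSection(score=len),  # stand-in goal: prefer longer plans
    lobes=[ProblemSolvingLobe(propose=lambda: ["walk", "ask a friend"])],
)
human.attach(ProblemSolvingLobe(propose=lambda: ["design a fusion reactor"]))
print(human.act())  # the external lobe's proposal wins under the same goals
```

The key design point is that `attach` only ever grows the set of problem-solvers; nothing about the interface, as sketched here, rewrites the motivation section.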
At which point powers-that-be specify what has to be in the motivation section and it’s game over, man.
It depends how hard it is to specify content in the motivation section. You can see all of the FAI work as suggesting it is pretty hard. I think the path of least resistance is augmenting known motivation systems.
But yes, that is a possible failure scenario. I think a small subset of humanity getting hold of the technology and monopolizing it to enhance only themselves is another, more likely failure scenario. It all depends on how the technology develops, which is probably still somewhat influenceable at the moment.
It is hard to specify motivation for a god-like entity. It's pretty easy to specify motivation for slaves: "You will love Big Brother, you will experience debilitating anxiety and disgust at any thought of resistance, you will consider the most important thing in life to be fulfilling your quota of growing turnips, and approval from your supervisor will be the most pleasurable thing you ever feel."
I also think this project will be on a fairly slow timeline. Maybe the AGI connections are functionally just marketing, and the real benefit of this org will come from more mundane medical applications.