If I’m wrong, and the core learning algorithm(s) of human beings are too complicated to write down in a handful of scientific papers, then superintelligence will not be built by 2121.
It is possible for evolution to have stumbled upon a really complicated algorithm for humans. Deep learning is fairly simple. AIXI is simple. Evolution is simple. If the human brain is incredibly complicated, even in its core learning algorithm, we could make something else. (Or possibly copy lots of data with little understanding.)
You could also likely build superintelligence by wiring up human brains with brain-computer interfaces, then using reinforcement learning to generate some pattern of synchronized activations and brain-to-brain communication that prompts the brains to collectively solve problems more effectively than a single brain can—a sort of AI-guided super-collaboration. That would bypass both the algorithmic complexity and the hardware issues.
The main constraints here are the bandwidths of brain computer interfaces (I saw a publication that derived a Moore’s law-like trend for this, but now can’t find it. If anyone knows where to find such a result, please let me know.) and the difficulty of human experiments.
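A Moore’s-law-like trend of this kind is just an exponential in time, so extrapolating one is straightforward. As a sketch (not the publication being asked about): the ~7.4-year doubling time below is one published estimate for the number of simultaneously recorded neurons (Stevenson & Kording, 2011), while the base year and base count are purely illustrative assumptions.

```python
import math

def recorded_neurons(year, base_year=2010.0, base_count=500.0, doubling_years=7.4):
    """Exponential extrapolation: recording capacity doubles every `doubling_years`."""
    return base_count * 2.0 ** ((year - base_year) / doubling_years)

def year_reaching(target, base_year=2010.0, base_count=500.0, doubling_years=7.4):
    """Invert the trend: the year at which `target` channels would be reached."""
    return base_year + doubling_years * math.log2(target / base_count)
```

Under these made-up base numbers, a 2x improvement lands 7.4 years out; the point is only that any such trend, once found, pins down when a given BCI bandwidth becomes plausible.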
The set of designs that look like “Human brains + BCI + Reinforcement learning” is large. There is almost certainly something superintelligent in that design space, and a lot of things that aren’t. Finding a superintelligence in this design space is not obviously much easier than finding a superintelligence in the space of all computer programs.
I am unsure how this bypasses algorithmic complexity and hardware issues. I would not expect human brains to be totally plug-and-play compatible. It may be that the results of wiring 100 human brains together (with little external compute) are no better than the same 100 people just talking. It may be that you need difficult algorithms and/or lots of hardware as well as BCIs.
I think using AI + BCI + human brains will be easier than straight AI for the same reason that it’s easier to finetune pretrained models for a specific task than it is to create a pretrained model. The brain must have pretty general information processing structure, and I expect it’s easier to learn the interface / input encoding for such structures than it is to build human level AI.
Part of that intuition comes from how adaptable the brain is to injury, new sensory modalities, controlling robotic limbs, etc. Another part of the intuition comes from how much success we’ve seen even with relatively unsophisticated efforts to manipulate brains, such as curing depression.
It’s easier to couple a cart to a horse than to build an internal combustion engine.
It’s easier to build a modern car than to cybernetically enhance a horse to be that fast and strong.
Humans plus BCI are not too hard. If keyboards count as crude BCIs, it’s easy. Making something substantially superhuman, though, is harder than building an ASI from scratch.
You can easily combine multiple horses into a “super-equine” transport system by arranging for fresh horses to be available periodically across the journey and pushing each horse to unsustainable speeds.
Also, I don’t think it’s very hard to reach somewhat superhuman performance with BCIs. The difference between keyboards and the BCIs I’m thinking of is that my BCIs can directly modify neurology to increase performance. E.g., modifying motivation/reward to make the brains really value learning about/accomplishing assigned tasks. Consider a company where every employee and manager is completely devoted to company success, fully trusts everyone else, and engages in very little internal politicking/empire building. Even without anything like brain-level, BCI-enabled parallel problem solving or direct intelligence augmentation, I’m pretty sure such a company would perform far better than any pure-human company of comparable size and resources.
Firstly, we already have humans working together. Secondly, do BCIs mean brainwashing for the good of the company? I think most people wouldn’t want to work for such a company. I mean, companies probably could substantially increase productivity with psychoactive substances, but that’s illegal and a good way to lose all your employees.
Also, something Moloch-like has a tendency to pop up in a lot of unexpected ways. I wouldn’t be surprised if you get direct brain-to-brain politicking.
Also this is less relevant for AI safety research, where there is already little empire building because most of the people working on it already really value success.
“… do BCIs mean brainwashing for the good of the company? I think most people wouldn’t want to work for such a company.”
I think this is a mistake lots of people make when considering potentially dystopian technology: that dangerous developments can only happen if they’re imposed on people by some outside force. Most people in the US carry tracking devices with them wherever they go, not because of government mandate, but simply because phones are very useful.
Adderall use is very common in tech companies, esports gaming, and other highly competitive environments. Directly manipulating reward/motivation circuits is almost certainly far more effective than Adderall. I expect the potential employees of the sort of company I discussed would already be using BCIs to enhance their own productivity, and it’s a relatively small step to enhancing collaborative efficiency with BCIs.
The subjective experience for workers using such BCIs is probably positive. Many of the straightforward ways to increase workers’ productivity seem fairly desirable. They’d be part of an organisation they completely trust and that completely trusts them. They’d find their work incredibly fulfilling and motivating. They’d have a great relationship with their co-workers, etc.
Brain-to-brain politicking is of course possible, depending on the implementation. The difference is that there’s an RL model directly influencing the prevalence of such behaviour. I expect most unproductive forms of politicking to be removed eventually.
Finally, such concerns are very relevant to AI safety. A group of humans coordinated via BCI with unaligned AI is not much more aligned than the standard paper-clipper AI. If such systems arise before superhuman pure AI, then I expect them to represent a large part of AI risk. I’m working on a draft timeline where this is the case.
This makes sense. I like that you brought the topic up.
I predict that brain-computer interfaces will advance too slowly to matter much in the race to superintelligence, but I’d be excited to be proven wrong. A world where brain-computer interfaces advance faster than AI would be extremely interesting.
I’m actually working on an AI progress timeline / alignment failure story where the big risk comes from BCI-enabled coordination tech (I’ve sent you the draft if you’re interested). I.e., instead of developing superintelligence, the timeline develops models that can manipulate mood/behavior through a BCI, initially as a cure for depression, then gradually spreading through society as a general mood booster / productivity enhancer, and finally being used to enhance coordination (e.g., make everyone super dedicated to improving company profits without destructive internal politics). The end result is that coordination models are trained via reinforcement learning to maximize profits or other simple metrics and gradually remove non-optimal behaviors in pursuit of those metrics.
This timeline makes the case that AI doesn’t need to be superhuman to pose a risk. The behavior modifying models manipulate brains through BCIs with far fewer electrodes than the brain has neurons and are much less generally capable than human brains. We already have a proof of concept that a similar approach can cure depression, so I think more complex modifications like loyalty/motivation enhancement are possible in the not too distant future.
You may also find the section of my timeline addressing progress in standard AI interesting:
My rough mental model for AI capabilities is that they depend on three inputs:
Compute per dollar. This increases at a somewhat sub-exponential rate. The time between 10x increases is increasing. We were initially at ~10x increase every four years, but recently slowed to ~10x increase every 10-16 years (source).
Algorithmic progress in AI. Each year, the compute required to reach a given performance level drops by a constant factor (so far, a factor of 2 every ~16 months) (source). I think improvements to training efficiency drive most of the current gains in AI capabilities, but they’ll eventually begin falling off as we exhaust the low-hanging fruit.
The money people are willing to invest in AI. This increases as the return on investment in AI increases. There was a time when money invested in AI rose exponentially and very fast, but it’s pretty much flattened off since GPT-3. My guess is this quantity follows a sort of stutter-stop pattern where it spikes as people realize algorithmic/hardware improvements make higher investments in AI more worthwhile, then flattens once the new investments exhaust whatever new opportunities progress in hardware/algorithms allowed.
When you combine these somewhat sub-exponentially increasing inputs with the power-law scaling laws so far discovered (see here), you probably get something roughly linear, but with occasional jumps in capability as willingness to invest jumps.
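The three-input model above can be sketched numerically. Every constant below is a made-up stand-in for the trends described (not a fitted value), but it shows the qualitative claim: sub-exponentially growing inputs fed through a power-law scaling law yield a roughly linear capability trajectory, with jumps when investment jumps.

```python
import math

def compute_per_dollar(year):
    """Sub-exponential hardware progress: 10x every 4 years up to 2010,
    then 10x every 12 years (a stand-in for the slowdown described above)."""
    if year <= 2010:
        return 10.0 ** ((year - 2000) / 4)
    return 10.0 ** (10 / 4) * 10.0 ** ((year - 2010) / 12)

def algorithmic_efficiency(year):
    """Effective compute per unit of raw compute doubles every ~16 months."""
    return 2.0 ** ((year - 2000) * 12 / 16)

def investment(year):
    """Stutter-stop investment: flat, with occasional 10x jumps
    (the jump years are arbitrary examples)."""
    return 10.0 ** sum(1 for jump in (2012, 2020) if year >= jump)

def capability(year, alpha=0.05):
    """With a power-law scaling law (loss ~ compute^-alpha), a log-loss-style
    capability measure grows like alpha * log(effective compute): roughly
    linear in time, with jumps when investment jumps."""
    effective = (compute_per_dollar(year)
                 * algorithmic_efficiency(year)
                 * investment(year))
    return alpha * math.log10(effective)

for year in (2000, 2010, 2020, 2030):
    print(year, round(capability(year), 2))
```

The exact numbers are meaningless; the shape (slow, steady growth punctuated by investment-driven steps) is the point.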
I think there’s a reasonable case that AI progress will continue at approximately the same trajectory as it has over the last ~50 years.
What metric would you use to capture the trajectory of AI progress over the last 50 years? And would such a metric be able to bridge the transition from GOFAI to deep learning?
My preferred algorithmic metric would be compute required to reach a certain performance level. This doesn’t really work for hand-crafted expert systems. However, I don’t think those are very informative of future AI trajectories.
“It is possible for evolution to have stumbled upon a really complicated algorithm for humans.”
The brain may also be excessively complicated to defend against parasites.
“The main constraints here are the bandwidths of brain computer interfaces (I saw a publication that derived a Moore’s law-like trend for this, but now can’t find it. If anyone knows where to find such a result, please let me know.)”
“Accelerating progress in brain recording tech”? One reason to be optimistic about brain imitation learning: we may just be in the knee of the curve, well before the curves cross.
Thanks! I’m pretty sure this isn’t the one I saw, but it works even better for my purposes.
Edit: I’m working on an AI timeline / risk scenario where BCIs and neuro-imitative AI play a big role. I’ve sent you the draft if you’re interested.