The article only touches on it briefly, but it suggests that faster AI takeoffs are worse; “fast”, however, is only relative to the fastest human minds.
Has there been much examination of the benefits of slow takeoff scenarios, or of takeoffs that happen after human enhancements become available? I vaguely recall a MIRI fundraiser saying that they would start putting marginal resources toward investigating a possible post-Age of Em takeoff, but I have no idea whether they reached that funding goal.
Personally, I don’t see brain-computer interfaces (BCIs) as useful for AI takeoffs, at least in the near term. We can type ~100 words per minute, yet writing a 40,000-word novel takes far more than the 400 minutes that pure typing speed would imply. So we aren’t actually I/O bound, contrary to what Elon believes; we’re limited by the number of neurons devoted to a given task.
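To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch; the typing speed and word count are the figures above, and the comparison to real-world writing time is just my rough framing:

```python
# Back-of-the-envelope check: if writing were I/O (typing-speed) bound,
# how long would a short novel take?
typing_speed_wpm = 100        # ~words per minute a practiced typist manages
novel_length_words = 40_000   # a short novel

pure_typing_minutes = novel_length_words / typing_speed_wpm
print(pure_typing_minutes / 60)   # ~6.7 hours of pure typing

# Real novels take weeks or months of work, not seven hours, so the
# bottleneck is the thinking, not the output channel.
```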
Early BCIs might make some tasks much faster, like long division, and since some other tasks really are I/O bound, they’d help somewhat with those. But we wouldn’t be able to fully keep up with AI unless we had full-fledged upgrades to our entire cognitive architecture.
So, is almost keeping up with AI likely to be useful, or are slow takeoffs just as bad? Are the odds of throwing together an FAI in the equivalent of a month any better than in a day? What percentage of those panicked emergency FAI activities could be sped up by better computer user interfaces and text editors, personal assistants, a device that zapped your brain every time it detected akrasia setting in, or a RAM upgrade to the brain’s working memory?
(sorry to spam. I’m separating questions out to keep the discussion tidy.)
Perhaps Elon doesn’t believe we are I/O bound, but that he is I/O bound. ;]
There’s a more serious problem which I’ve not seen most of the Neuralink-related articles talk about*: layering intelligence augmentations around an overclocked baboon brain will probably actually increase the risk of a non-friendly takeoff.
*I haven’t read the linked article through yet.
I think most people interested in IA (intelligence augmentation) want to arrange things so that a large number of humans use IA at once, taking off as a group and policing each other, so that in aggregate things go okay. It would be madness to rely on one human, or a small set of humans, to take off and rule us.
So the question becomes: is this scenario better or worse than having a single AI, using a goal system based on a highly abstract theorising of overclocked baboon brains, control the future?
For this tactic to be effectual, it requires that a society of augmented human brains will converge on a pattern of aggregate behaviours that maximizes some idea of humanity’s collective values, or at least doesn’t optimize anything that is counter to such an idea. If the degree to which human values can vary between _un_augmented brains reflects some difference between them that would be infeasible to change, then it’s not likely that a society of augmented minds would be any more coordinated in values than a society of unaugmented ones.
In one sense I do believe a designed AI is better: the theorems a human being devises can stand or fall independently of the person who devised them. The risk of a designed AI is inversely related to our ability to follow trustworthy inference procedures when reasoning about AI design; with brain augmentation, the risk is inversely related to our aggregate ability to resist the temptation of power. Humanity has produced many examples of great mathematicians. Trustworthy but powerful men are rarer.
We have been gradually getting more peaceful, even as our power has increased. So I think there is an argument that brain augmentation is like literacy, and so could reinforce that trend.
A lot depends on how hard a takeoff is possible.
I like maths. I like maths safely in the theoretical world, occasionally brought out to bear on select problems that have proven to be amenable to it. Also, I’ve worked with computers enough to know that maths is not enough: they are imperfectly modeled physical systems.
I really don’t like maths trying to be in charge of everything in the world, dealing with knotty problems of philosophy. Questions like what is a human, what is life, what are a human’s values: these do not seem the correct things for maths to be trying to tackle.
At the individual level? By what metric?
Is that a result of mathematics or of philosophy? :P
Knowledge and the ability to direct energy. There are a lot more people who could probably put together a half-decent fertilizer bomb nowadays, but we are not in a continual state of trying to assassinate leaders and overthrow governments.
Privately manufactured bombs are common enough to be a problem, and there is a very plausible threat of life imprisonment (or possibly execution) for anyone who engages in such behaviour. That an augmented brain with the inclination to do something analogous would be effectively punishable is open to doubt; they may well find ways of either evading the law or of raising the cost of any attempted punishment to a prohibitive level.
I’d say it’s more useful to think of power in terms of things you can do with a reasonable chance of getting away with them, rather than just things you can do. Looking at the former class of things, there are many things people do that are harmful to others, which they do nevertheless because they can get away with them easily: littering, lying, petty theft, deliberately encouraging pathological interpersonal relationship dynamics, going on the internet and getting into an argument and trying to bully the other guy into feeling stupid… (no hint intended to be dropped here, just for clarity’s sake).
Many, in my estimation probably most, human beings do in fact have at least some consequence-free power over others and do choose to abuse that minute level of power.
The more equitably intelligence augmentation is spread, the less consequence-free power over others there is likely to be. Intelligence augmentation would allow you to collect more data and communicate with more people about the actions you see other people taking.
There are worlds where IA is a lot easier than standalone AI; I think that is what Elon is optimizing for. He has publicly stated he wants to spread it around when it is created (probably why he is investing in OpenAI as well).
This world feels more probable to me as well, currently. It conflicts somewhat with the need for secrecy in singleton AI scenarios.
That the equitable spread of intelligence augmentation would reduce consequence-free power over others is not apparent to me, though. It seems like it would lead to a MAD-style situation where no agent is able to take any action that might be construed as showing malign intent without being punished. Every agent would have to be suspicious of the motives of every other agent, since advanced agents may do a very good job of hiding their own malign intent, making any coordinated development very difficult. Some agents might reason that it is better to risk a chance of destruction for the chance of forming a singleton.
It seems to me very hard to reason about the behaviour of advanced agents without ultimately resorting to mathematics (e.g. situations involving mutual policing should be formalizable in game-theoretic terms).
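To gesture at what such a formalization might look like, here is a toy two-agent sketch; every payoff number is an assumption of mine, chosen purely to illustrate how a “mutual policing vs. unilateral grab” situation can be written down and analysed in game-theoretic terms:

```python
# Toy illustration of mutual policing among augmented agents as a normal-form game.
# All payoff numbers are made up for illustration only.
# Each of two agents chooses to POLICE (stay transparent, punish defectors)
# or GRAB (attempt a unilateral takeoff).

POLICE, GRAB = "police", "grab"

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    (POLICE, POLICE): 3,    # stable mutual policing
    (POLICE, GRAB):  -10,   # the other side forms a singleton at my expense
    (GRAB,   POLICE): 5,    # I risk punishment but might win everything
    (GRAB,   GRAB):  -5,    # arms race / mutual destruction
}

def best_response(their_move):
    """Return the move that maximises my payoff, given the other agent's move."""
    return max((POLICE, GRAB), key=lambda my_move: payoffs[(my_move, their_move)])

for their_move in (POLICE, GRAB):
    print(f"If they {their_move}, my best response is to {best_response(their_move)}")

# With these (assumed) numbers, GRAB is the best response to either move, which is
# exactly the instability the MAD worry points at; raise the punishment payoff for
# grabbing and mutual policing can become stable instead.
```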
I am unsure what properties of future tech you think will lead to more MAD-style situations than we have currently. Is it hard takeoff?
The key ingredient for a MAD situation, as far as I can tell, is some technology with a high destructive potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outlines: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.
I think there is a whole long discussion to be had about whether individuals or small numbers of brain augments can somehow hope to outsmart whole societies of brain augments that are all working together to improve their augmentations, and also about how much smarter pure AIs would be compared to normal augments.
Even the presupposition that societies of brain augments would all be working together is questionable. Mutual distrust and the associated risk might make cooperative development an exceptional scenario rather than the default one.