Knowledge and the ability to direct energy. There are a lot more people who could probably put together a half-decent fertilizer bomb nowadays, but we are not in a continual state of trying to assassinate leaders and overthrow governments.
Privately manufactured bombs are common enough to be a problem, and there is a very plausible threat of life imprisonment (or possibly execution) for anyone who engages in such behaviour. Whether an augmented brain inclined to do something analogous would be effectively punishable is open to doubt: they may well find ways of either evading the law or of raising the cost of any attempted punishment to a prohibitive level.
I’d say it’s more useful to think of power in terms of what you can do with a reasonable chance of getting away with it, rather than simply what you can do. Looking at the former class, there are many things people do that harm others, and that they do nevertheless because they can get away with them easily: littering, lying, petty theft, deliberately encouraging pathological interpersonal relationship dynamics, going on the internet and getting into an argument and trying to bully the other guy into feeling stupid… (no hint intended to be dropped here, just for clarity’s sake). Many human beings, in my estimation probably most, do in fact have at least some consequence-free power over others, and do choose to abuse even that minute level of power.
The more equitably intelligence augmentation is spread, the less consequence-free power anyone is likely to have over others. Intelligence augmentation would let you collect more data and communicate with more people about the actions you see others taking.
There are worlds where IA is a lot easier than standalone AI; I think that is what Elon Musk is optimizing for. He has publicly stated that he wants to spread the technology around once it is created (probably also why he is investing in OpenAI).
This world feels more probable to me as well, currently. It conflicts somewhat with the need for secrecy in singleton AI scenarios.
The more equitably intelligence augmentation is spread, the less consequence-free power anyone is likely to have over others.
That is not apparent to me, though. It seems like it would lead to a MAD-style situation in which no agent can take any action that might be construed as malicious without being punished. Every agent would have to be suspicious of the motives of every other agent, since advanced agents may do a very good job of hiding their own malintent, making any coordinated development very difficult. Some agents might reason that it is better to risk a chance of destruction for the chance of forming a singleton.
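The "risk destruction for a chance at a singleton" reasoning can be made concrete as a back-of-envelope expected-utility comparison. This is only an illustrative sketch; the function name and all payoff numbers are made-up assumptions, not anything stated in the thread.

```python
# Illustrative expected-utility sketch of the "risk destruction for a
# chance of forming a singleton" reasoning. All values are assumptions.

def should_attempt_singleton(p_success, v_singleton, v_destroyed, v_status_quo):
    """Attempt the takeover iff its expected value beats the status quo."""
    ev_attempt = p_success * v_singleton + (1 - p_success) * v_destroyed
    return ev_attempt > v_status_quo

# Even a small success chance can justify the gamble if the singleton
# payoff is large relative to the cost of destruction:
print(should_attempt_singleton(0.05, 1000, -10, 1))  # True:  0.05*1000 - 0.95*10 = 40.5
print(should_attempt_singleton(0.05, 100, -10, 1))   # False: 0.05*100  - 0.95*10 = -4.5
```

The point of the sketch is just that an agent's willingness to gamble depends on the ratio of singleton payoff to destruction cost, which other agents cannot directly observe.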
It seems to me very hard to reason about the behaviour of advanced agents without ultimately resorting to mathematics (e.g., situations involving mutual policing should be formalizable in game-theoretic terms).
I think I am unsure what properties of future tech you think will lead to more MAD-style situations than we have currently. Is it hard takeoff?

The key ingredient for a MAD situation, as far as I can tell, is a technology with high destructive potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outline: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.
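The mutual-policing claim can indeed be formalized in elementary game-theoretic terms. Below is a minimal sketch, with entirely assumed payoff numbers, of a two-player "strike or abstain" game: it enumerates pure-strategy Nash equilibria with and without assured retaliation, showing how retaliation makes mutual abstention stable.

```python
# Minimal two-player "strike or abstain" game. Payoff numbers are
# illustrative assumptions chosen only to exhibit the MAD structure.

from itertools import product

def payoff(my_action, their_action, retaliation):
    """Return my payoff; with assured retaliation, a striker is destroyed too."""
    if my_action == "strike" and their_action == "strike":
        return -10                        # mutual destruction
    if my_action == "strike":
        return -10 if retaliation else 5  # successful strike, unless they retaliate
    if their_action == "strike":
        return -10                        # destroyed by their strike
    return 1                              # mutual abstention: status quo

def nash_equilibria(retaliation):
    """Pure-strategy Nash equilibria: no player gains by deviating alone."""
    actions = ["strike", "abstain"]
    eqs = []
    for a, b in product(actions, actions):
        a_best = all(payoff(a, b, retaliation) >= payoff(x, b, retaliation)
                     for x in actions)
        b_best = all(payoff(b, a, retaliation) >= payoff(x, a, retaliation)
                     for x in actions)
        if a_best and b_best:
            eqs.append((a, b))
    return eqs

print(nash_equilibria(retaliation=False))  # mutual abstention is NOT an equilibrium
print(nash_equilibria(retaliation=True))   # mutual abstention IS an equilibrium
```

Note that even with retaliation, mutual destruction remains an equilibrium alongside mutual abstention, which is roughly the instability the nuclear-arsenal analogy is pointing at.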
I think there is a whole long discussion to be had about whether individuals or small numbers of brain augments can somehow hope to outsmart whole societies of brain augments that are all working together to improve their augmentations. And also about how much smarter pure AIs would be compared to normal augments.
societies of brain augments that are all working together
Even whether that presupposition holds is questionable. Mutual distrust and the associated risk might make cooperative development the exception rather than the default.