The more equitably intelligence augmentation is spread, the less consequence-free power over others there is likely to be. Intelligence augmentation would let you collect more data and communicate with more people about the actions you see others taking.
There are worlds where IA is a lot easier than standalone AI; I think that is what Elon is optimizing for. He has publicly stated he wants to spread it around once it is created (probably also why he is investing in OpenAI).
This world feels more probable to me as well, currently. It conflicts somewhat with the need for secrecy in singleton AI scenarios.
The more equitably intelligence augmentation is spread, the less consequence-free power over others there is likely to be.
That is not apparent to me, though. It seems like it would lead to a MAD-style situation in which no agent can take any action that might be construed as malicious without being punished. Every agent would have to be suspicious of the motives of every other agent, since advanced agents may do a very good job of hiding their own malintent, making any coordinated development very difficult. Some agents might reason that it is better to risk a chance of destruction for the chance of forming a singleton.
It seems to me very hard to reason about the behaviour of advanced agents without ultimately resorting to mathematics (e.g. situations involving mutual policing should be formalizable in game-theoretic terms).
I think I am unsure what properties of future tech you think will lead to more MAD-style situations than we have currently. Is it hard takeoff?
The key ingredient for a MAD situation, as far as I can see, is some technology with high destructive potential distributed among multiple agents who cannot trust each other. To reduce my whole argument to its cartoon outlines: serious brain augmentation seems about as good an idea as handing everyone their own nuclear arsenal.
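The mutual-policing situation can indeed be sketched in game-theoretic terms. Here is a minimal, illustrative model (not from anyone in this thread; the payoff numbers are invented, chosen only to encode "uneasy peace beats mutual ruin"): two mutually distrustful agents holding high-destructiveness technology, a check for which pure-strategy outcomes are Nash equilibria, and the "risk destruction for a chance at a singleton" calculation mentioned above.

```python
from itertools import product

RESTRAIN, STRIKE = "restrain", "strike"
ACTIONS = (RESTRAIN, STRIKE)

# Invented payoffs: 0 for uneasy peace, -10 for mutual destruction.
# Assured retaliation means any strike, by either side, ruins both.
PAYOFF = {
    (RESTRAIN, RESTRAIN): (0, 0),
    (RESTRAIN, STRIKE):   (-10, -10),
    (STRIKE,   RESTRAIN): (-10, -10),
    (STRIKE,   STRIKE):   (-10, -10),
}

def nash_equilibria(payoff):
    """Pure-strategy profiles where neither agent gains by unilaterally deviating."""
    eqs = []
    for a, b in product(ACTIONS, repeat=2):
        u1, u2 = payoff[(a, b)]
        best_for_1 = all(payoff[(alt, b)][0] <= u1 for alt in ACTIONS)
        best_for_2 = all(payoff[(a, alt)][1] <= u2 for alt in ACTIONS)
        if best_for_1 and best_for_2:
            eqs.append((a, b))
    return eqs

def singleton_gamble_pays(p_success, singleton_value, destruction_cost):
    """A first strike beats peace (payoff 0) once its expected value is positive."""
    return p_success * singleton_value - (1 - p_success) * destruction_cost > 0

print(nash_equilibria(PAYOFF))
# -> [('restrain', 'restrain'), ('strike', 'strike')]
# Mutual restraint is an equilibrium, but mutual destruction is (weakly)
# one too: once you expect to be struck, striking costs you nothing extra.
print(singleton_gamble_pays(0.6, 100, 100))  # -> True: a confident agent strikes
print(singleton_gamble_pays(0.1, 100, 100))  # -> False
```

Even in this cartoon version, deterrence is fragile: mutual restraint is only one of the equilibria, and an agent whose estimated success probability crosses the threshold p > D/(S+D) prefers the singleton gamble, which is roughly the worry stated above.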
I think there is a whole long discussion to be had about whether individuals or small numbers of brain augments can somehow hope to outsmart whole societies of brain augments that are all working together to improve their augmentations, and also about how much smarter pure AIs would be compared to normal augments.
societies of brain augments that are all working together
Whether this presupposition even holds is questionable. Mutual distrust and the associated risk might make cooperative development an exceptional scenario rather than the default one.