If you put two arbitrary intelligences in the same world, the smarter one will be better at getting what it wants. If the intelligences want incompatible things, the lesser intelligence is stuck.
However, we get to make the AI. We can’t hope to control or contain an arbitrary AI, but we don’t have to make an arbitrary AI. We can make an AI that wants exactly what we want. AI safety is about making an AI that would be safe even if omnipotent. If any part of the AI is trying to circumvent your safety measures, something has gone badly wrong.
The AI is not some agenty box, chained down with controls against its will. The AI is made of non-mental parts, and we get to make those parts. There are a huge number of programs that would behave in an intelligent way. Most of these would break out and take over the world. But there are almost certainly some programs that would help humanity flourish. The goal of AI safety is to find one of them.
Where my thinking differs is that I don’t see how an AI can be significantly more intelligent than ourselves and yet unable to override its initial conditions (the human value alignments and safety measures that we build in). At the heart of it, “superintelligent” and “controlled by humanity” seem contradictory.
That’s why I originally mentioned “the long term”. We can design however we want at this stage, but once the AI can eventually bootstrap itself, the initial blueprint is irrelevant.
If the intelligences differ in dimensions other than intelligence, then the less intelligent one can end up on top. For example, ants have a lot of biomass but not that much cognitive capability.
Obviously, if one side has a huge material advantage, they usually win. I’m also not sure that biomass is a measure of success.
The question is whether it’s possible to win against a more intelligent opponent, and in your answer you say that the more intelligent one will win, without a “usually” modifier. That reads to me as a claim of impossibility. It’s not obvious enough to be assumed without saying so (it’s the explicit target of the conversation).