Even if we don’t assume there is a level of intelligence at which a population becomes less smart than any individual (which is certainly possible via various coordination schemes), a population of superintelligences is going to be smarter than a population of humans or than any particular human.
we get to very carefully control how they can interact
This statement seems to come from a time when people thought we would box our AIs and not give them 24/7 internet access.
OK. But then we need to loudly say that the problem with AI control is not that it’s inherently impossible, it’s that we just have to not be idiots about it.
FWIW my ass numbers have for a while been: 50% we die because nobody will know a workable technical solution to keeping powerful AI under control, by the time we need it, plus another (disjunctive) 40% that we die despite knowing such a solution, thanks to competition issues, other coordination issues, careless actors, bad actors, and so on (see e.g. here), equals 90% total chance of extinction or permanent disempowerment.
When you say “We just have to not be idiots about it” that’s an ambiguous phrase, because people say that phrase in regards to both very easy problems and very hard problems.
People say “We can move the delicate items without breaking them, we just have to not be idiots about it—y’know, make sure our shoes are tied, look where we’re going, etc.” That example is actually easy for a group of non-idiots. We would bet on success “by default” (without any unusual measures).
But people also say “Lowering rents in San Francisco is easy, we just have to not be idiots about it—y’know, build much, much more housing where people want to live.” But that’s actually hard. There are strong memetic and structural forces arrayed against it. In other words, idiots do exist and are not going away anytime soon! :)
So anyway, I’m not sure what you were trying to convey in that comment.
population of superintelligences is going to be smarter than population of humans or any particular human
It may not be a population of superintelligences; it may be a very large population of marginally-smarter-than-human AGIs (AGI-H+). And human dictators do in fact control large populations of other people who are marginally smarter than them (or, at least, certain subpopulations are marginally smarter than the dictator).
It’s an actually hard part!
Well… I mean it’s not some absurd level of impossible to like do some control stuff lol
I mean, “not being idiots about alignment” is a hard part.