In that case, there would be severe principal-agent problems, given the disparity between the power/intelligence of the trainer/AI systems and the users. If I were someone who couldn’t directly control an AI using your scheme, I’d be very concerned about getting uneven trades or having my property expropriated outright by individual AIs or AI conspiracies, or just being ignored and left behind in the race to capture the cosmic commons. I would be really tempted to try another AI design that does purport to have the AI serve my interests directly, even if that scheme is not as “safe”.
Are these worse than the principal-agent problems that exist in any industrialized society? Most humans lack effective control over many important technologies, both in terms of economic productivity and especially military might. (They can’t understand the design of a car they use, they can’t understand the programs they use, they don’t understand what is actually going on with their investments...) It seems like the situation is quite analogous.
Moreover, even if we could build AI in a different way, that doesn’t seem to do anything to address the problem, since the resulting system is equally opaque to an end user who isn’t involved in the AI development process. Either way, end users are in some sense at the mercy of the AI developer. I guess this is probably the key point: I don’t understand the qualitative difference between being at the mercy of the software developer on the one hand, and being at the mercy of the software developer + the engineers who help the software run day-to-day on the other. There is a slightly different set of issues for monitoring/law enforcement/compliance/etc., but it doesn’t seem like a huge change.
(Probably the rest of this comment is irrelevant.)
To talk more concretely about mechanisms in a simple example, you might imagine a handful of companies who provide AI software. The people who use this software are essentially at the mercy of the software providers (since for all they know, the software they are using will subvert their interests in arbitrary ways, whether or not there is a human involved in the process). In the most extreme case an AI provider could effectively steal all of their users’ wealth. The provider would presumably then face legal consequences, which are not qualitatively changed by the development of AI if the AI control problem is solved. If anything, we should expect the legal system and government to better serve human interests.
We could talk about monitoring/enforcement/etc., but again I don’t see these issues as interestingly different from the current set of issues, or as interestingly dependent on the nature of our AI control techniques. The most interesting change is probably the irrelevance of human labor, which I think is a very interesting issue economically/politically/legally/etc.
I agree with the general point that as technology improves a singleton becomes more likely. I’m agnostic on whether the control mechanisms I describe would be used by a singleton or by a bunch of actors, and as far as I can tell the character of the control problem is essentially the same in either case.
I do think that a singleton is likely eventually. From the perspective of human observers, a singleton will probably be established relatively shortly after wages fall below subsistence (at the latest). This prediction is mostly based on my expectation that political change will accelerate alongside technological change.
I wonder: are you also relatively indifferent between a hard and a slow takeoff, given sufficient time before the takeoff to develop AI control theory?
(One of the reasons a hard takeoff seems scarier to me is that it is more likely to lead to a singleton, with a higher probability of locking in bad values.)