Humans act within shared social and physical worlds, but tend to treat the latter as more “real” than the former. A danger of anthropomorphizing AI is that we assume that it will have the same perceptions of reality, and that it needs to “escape” into the physical world to optimize its heuristics. This seems odd, since a superintelligent AI that we need to be concerned about would have its roots in social world heuristics.
In trying to avoid anthropomorphizing algorithms, we tend to underestimate how difficult movement and action in physical space are. Thought experiments about a “human in a box” already start from a being that has evolved to interact physically with the world and has spent its whole life tuning its hand-eye coordination and expectations. Yet in that same attempt to avoid anthropomorphizing AIs, we assume that an AI will surprise us by ignoring the social world and operating only by the rules of the physical world. It would be a very strange social problem whose optimal solution involves developing unpredictable ways of interacting with the physical world. That scenario only seems likely when your thought experiment is framed around “escaping a box” in the first place, but why would the AI need to do that? Why is it in a box? What goal is it trying to reach, and what heuristic is it maximizing?
I would assign a greater than 90% chance that if superintelligent AIs ever exist, the first generation will be corporations. We have legal precedent granting more and more individuality and legal standing to the corporation as an entity, and a corporation provides the broader body with which an AI can self-identify. We already have market optimization algorithms that are empowered not only to observe, orient, and decide, but also to act. We have optimization algorithms for logistics and manufacturing. We have markets within which corporations can act, normalizing interactions between human-run and AI-run corporations that compete for the same resources. More and more business-to-business and business-to-consumer interaction is performed electronically, through web services and other machine-understandable mechanisms. Soon AIs will be as involved in manufacturing and the creation of value as they currently are in market trading and arbitrage. Corporate optimization algorithms for different business functions will be merged until humans are no longer needed in the loop.
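To make the observe-orient-decide-act point concrete, here is a minimal Python sketch of a market algorithm that is empowered to act rather than merely observe. Everything in it (the ToyMarket stand-in, the crude fair-value model, the thresholds) is a hypothetical illustration, not any real trading system.

```python
from dataclasses import dataclass
import random

@dataclass
class ToyMarket:
    """A stand-in for a real market feed, so the loop below actually runs."""
    price: float = 100.0
    position: int = 0

    def tick(self) -> None:
        # Random walk standing in for real price movement.
        self.price += random.gauss(0, 1.0)

    def submit_order(self, side: str, qty: int) -> None:
        # The "act" step: the algorithm itself changes its position.
        self.position += qty if side == "buy" else -qty

def decide(edge: float, threshold: float = 1.0) -> str:
    """Decide: trade only when the modeled edge exceeds a threshold."""
    if edge > threshold:
        return "buy"
    if edge < -threshold:
        return "sell"
    return "hold"

fair_value = 100.0  # orient: the agent's (crude) model of what the asset is worth
market = ToyMarket()

for _ in range(50):
    market.tick()
    edge = fair_value - market.price      # observe + orient
    side = decide(edge)                   # decide
    if side != "hold":
        market.submit_order(side, qty=1)  # act: no human in the loop

print(f"final price {market.price:.2f}, final position {market.position}")
```

The point of the sketch is the last line of the loop: once an algorithm is wired to submit its own orders, the full observe-orient-decide-act cycle closes without a human.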
So what does this design space look like? Interaction is through web services and similar means. Initial interaction with humans is through sales of goods and services, and through marketing (automated A/B optimization is already standard in online advertising). Eventually, AIs take over employment decisions. The profit heuristic is maximized when the corporation creates things that people want. A great leap occurs when corporate AIs learn that they can change the rules through impact litigation and lobbying, applying their marketing algorithms to changing public perception of regulations rather than products. Some corporations will evolve to increase their bank balances through hacking and fraud. Global corporations will learn to modify their heuristics to maximize their ability to procure certain commodity bundles, and to manipulate money markets to sink competitors that are hard-coded to maximize holdings of specific currencies.
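As a concrete picture of the automated A/B optimization mentioned above, here is a minimal epsilon-greedy sketch: traffic is steadily routed toward whichever ad variant converts best, with a small fraction held back for exploration. The variant names and click-through rates are invented for illustration; real ad systems use more sophisticated bandit methods, but the feedback loop is the same.

```python
import random

# Hidden "true" click rates the optimizer must discover. Invented numbers.
true_rates = {"ad_A": 0.030, "ad_B": 0.045}
shown = {v: 0 for v in true_rates}
clicks = {v: 0 for v in true_rates}

def observed_rate(variant: str) -> float:
    """Empirical click-through rate observed so far for a variant."""
    return clicks[variant] / shown[variant] if shown[variant] else 0.0

epsilon = 0.1  # fraction of impressions still spent exploring
for impression in range(10_000):
    if random.random() < epsilon:
        variant = random.choice(list(true_rates))     # explore
    else:
        variant = max(true_rates, key=observed_rate)  # exploit the leader
    shown[variant] += 1
    if random.random() < true_rates[variant]:         # simulate a user click
        clicks[variant] += 1

for v in true_rates:
    print(f"{v}: shown {shown[v]}, observed CTR {observed_rate(v):.4f}")
```

Running this, nearly all impressions end up on the better-performing variant, with no human ever deciding which ad “won.”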
In other words, we already have socially apathetic entities. They already use optimization algorithms all over the place. They aren’t disembodied minds, so they don’t need to waste resources figuring out how to “escape the box.” They only need to figure out how to operate in the physical world once they’ve solved markets, and their progress is slowed by the fact that all economic value is rooted in human consumption. They are “friendly” as long as humans make economic decisions in their own self-interest, which depends both on the rules and enforcement defining the market environment and on human behavior and morality.