If those directions are big, complicated plans for something important, plans that you follow without really understanding why you're doing what you're doing (and this is where most of the benefits of working with an AGI will show up), then you could unknowingly take over the world by executing a sufficiently clever scheme.
It also helps that Google Maps does not have general intelligence, so it does not include the user's reactions to its output, the user's consequent actions in the real world, and so on as variables in its model. If it did, those variables would influence the quality of the solution, and therefore could (and should) be optimized where possible, within the constraints of the user's psychology.
In short: Google Maps does not manipulate you, because it does not see you.
A generally smart Google Maps might not manipulate you, because it has no motivation to do so.
It's hard to imagine how commercial services would work when powered by GAI. For example, if you asked a GAI version of Google Maps a question unrelated to maps, such as "What's a good recipe for cheesecake?", would it tell you to ask Google Search instead? Would it defer to Google Search and forward the answer to you? Would it just figure out the answer anyway, since it's generally intelligent? Or would the company Google simply collapse all its services into a single "Google" brand powered by a single GAI, rather than maintaining separate "Google Search", "Google Mail", "Google Maps", etc.? But let's stick to the topic at hand and assume there's a GAI named "Google Maps", and you're asking "How do I get to Albany?"
Given this use case, would the engineers who developed the Google Maps GAI more likely give it a utility like "Maximize the probability that your response is truthful", or is it more likely that the utility would be something closer to "Always respond with a set of directions which are legal in every jurisdiction they pass through and which, if followed by the user, would get the user to the destination while minimizing cost, time, and complexity (depending on the user's preferences)"?
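The contrast between the two candidate objectives can be sketched as toy utility functions. This is a minimal illustrative sketch, not anything resembling a real system; every name and structure here is hypothetical:

```python
# Hypothetical sketch contrasting the two candidate objectives discussed above.
# All names (Route, truthfulness_utility, route_utility) are made up for illustration.

from dataclasses import dataclass

@dataclass
class Route:
    legal: bool          # directions obey traffic law in every jurisdiction crossed
    reaches_goal: bool   # following them actually gets the user to the destination
    cost: float          # weighted time/money/complexity, per the user's preferences

def truthfulness_utility(response_is_truthful: bool) -> float:
    # Objective 1: "maximize the probability that your response is truthful".
    # Note that it says nothing about usefulness: a truthful non-answer scores 1.0.
    return 1.0 if response_is_truthful else 0.0

def route_utility(route: Route) -> float:
    # Objective 2: legality and reaching the destination are hard constraints;
    # cost is minimized only among admissible routes.
    if not (route.legal and route.reaches_goal):
        return float("-inf")  # inadmissible: never preferred over any admissible route
    return -route.cost        # lower cost -> higher utility

# A truthful but useless reply maximizes objective 1 yet is inadmissible under objective 2.
useless = Route(legal=True, reaches_goal=False, cost=0.0)
good = Route(legal=True, reaches_goal=True, cost=42.0)
assert route_utility(good) > route_utility(useless)
```

The point of the second formulation is that constraints and preferences are explicit: the agent is scored on the directions themselves, not on some open-ended property of its response.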