This profit motive is there, but the companies already spend a lot of effort making sure that AIs won’t draw nudes or reach politically incorrect conclusions. In some sense, that is probably also motivated by long-term profit-seeking, because political opposition could get their product banned or boycotted.
But that still means there are two different ways to tweak AIs to maximize profit: (1) random tweaking, keeping the modifications that increase the bottom line in the short term, and (2) reinforcement learning that removes anything someone politically important might object to.
The second one can easily be abused for political purposes… I mean, it already is, but it could be abused much more strongly. Imagine someone from China or Russia or Saudi Arabia investing a lot of money in AI development and in turn demanding that, in addition to censoring nudes or avoiding debates about statistics of race and crime, the AI also avoid mentioning Tiananmen Square, criticizing the special military operation, or criticizing the Prophet. (And of course, the American government will probably make a few demands, too. The First Amendment is nice in theory, but there are sources of funding that can be given or taken away depending on how much you voluntarily comply with the unofficial suggestions made by well-meaning people.)
So what will ultimately happen is some interplay between these two profit-maximizing strategies.
Yes, the profit motive also involves attempting to avoid the risks of bad press, a bad reputation, and getting sued/fined. In my experience, large tech companies vary in whether they’re focused primarily on the bad-press/bad-reputation side or the “don’t get sued/fined” side (I assume depending mostly on how much they have previously lost to being sued/fined).