There could be knock-on effects that increase demand for non-AI-generated analogues, increasing harm.
Xylitol
How long will it take until high-fidelity, AI-generated porn becomes an effective substitute for person-generated porn?
Here are some important factors: Is it ethical? Is it legal? Does the output look genuine? Is it cost-effective?
Possible benefits:
More Privacy. If significant markets still exist for porn images, the images taken of porn actors will be used as training data rather than consumed as-is, which means their identities can be shielded from the consumer.
More Boutique Offerings. If massive volumes of fairly derivative AI-generated pornography can be created basically for free, this may also drive demand for highly produced, well-compensated, commissioned pornography. Think handcrafted Etsy goods in the current age of globalized mass production, or DeviantArt commissioned hentai drawings, or OnlyFans.
More Variety, Lower Cost. From a consumer perspective, AI generation opens up infinite horizons and lowers the barrier to entry.
Less Human Trafficking. If the market splits into mass-produced AI-generated porn and boutique offerings from a select number of actors, this may reduce demand for people to do run-of-the-mill porn shoots.
Problems to look out for:
Illegal Material. Large crawls with little oversight will almost certainly pick up images of child pornography, violence, etc. That material will need to be cleaned out of the training data.
Copyright. These models will use tons of source images. How do you work out copyright and payment for use? This problem is similar to what GitHub’s Copilot is going through right now.
Less Compensation. With more competition from AI generation, many porn actors who aren’t popular enough for dedicated followings may be compensated less for their work.
More Human Trafficking. Maybe demand for training images becomes so high, and the going rate for producing them gets pushed so low that it’s unappealing to mainstream porn actors, that the gap gets filled through human trafficking?
A really unpleasant case:
Illegal Porn Without Illegal Training Data. What if a model is created that can generate child porn, snuff porn, or other horrific things without using the corresponding material as training data? The analogue here is that drawn or 3D-modeled hentai showing this kind of material is not considered illegal in many countries—will the same hold for photorealistic content?
I’m not sure how relevant the slowdown in falling compute prices is to this chart, since the chart starts in 2018 and the slowdown began 6-8 years ago; likewise, AlexNet, the breakout moment for deep learning, was 9 years ago. So if compute price were the primary rate-limiter, I’d expect it to have a more gradual, consistent effect as models get bigger and bigger. The slowdown may mean that models cost quite a lot to train, but huge companies like Nvidia and Microsoft clearly haven’t yet shied away from spending absurd amounts of money to keep growing their models.
I’d hesitate to make predictions based on the slowdown of GPT-3 to Megatron-Turing, for two reasons.
First, GPT-3 represents the fastest, largest increase in model size in this whole chart. If you only look at the models before GPT-3, the drawn trend line tracks well; note how far off that trend GPT-3 itself is (see the rough sketch after these two points).
Second, GPT-3 was released almost exactly when COVID became a serious concern in the world beyond China. I have to imagine that this slowed down model development, but it should be less of a factor going forward.
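To make the first point concrete, here’s a rough sketch of fitting the pre-GPT-3 trend and checking GPT-3 and Megatron-Turing against it. The parameter counts and release dates below are approximate figures from memory, not the exact values in the chart, so treat the output as illustrative only.

```python
# Rough sketch: fit a log-linear trend to pre-GPT-3 model sizes and see how far
# GPT-3 and Megatron-Turing sit from that trend. Parameter counts and release
# dates are approximate figures from memory, not the exact values in the chart.
import numpy as np

# (approximate release date, parameter count)
pre_gpt3 = [
    (2018.1, 94e6),    # ELMo
    (2018.8, 340e6),   # BERT-Large
    (2019.1, 1.5e9),   # GPT-2
    (2019.7, 8.3e9),   # Megatron-LM
    (2019.8, 11e9),    # T5-11B
    (2020.1, 17e9),    # Turing-NLG
]
later = [
    (2020.4, 175e9),   # GPT-3
    (2021.8, 530e9),   # Megatron-Turing NLG
]

x = np.array([t for t, _ in pre_gpt3]) - 2018.0   # years since 2018
y = np.log10([p for _, p in pre_gpt3])            # log10(parameter count)

# A straight line in log space corresponds to exponential growth in parameters.
slope, intercept = np.polyfit(x, y, 1)
print(f"pre-GPT-3 trend: roughly {10 ** slope:.0f}x more parameters per year")

for t, params in later:
    predicted = 10 ** (slope * (t - 2018.0) + intercept)
    print(f"{t}: actual {params:.2e} params, trend predicts {predicted:.2e} "
          f"({params / predicted:.2f}x the trend line)")
```

With these rough inputs, GPT-3 lands well above the fitted pre-2020 line and Megatron-Turing lands below it, which is the sense in which extrapolating from that single GPT-3-to-Megatron-Turing step seems risky.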
On your question about Hitler getting eugenic ideas from the US—yes, there’s some evidence that he did. Although I haven’t read it yet, the book “Hitler’s American Model: The United States and the Making of Nazi Race Law” looks like a readable introduction to this concept.
Yup, it’s a problem. As an American I’ve had an optometrist not want to give me my prescription!
Indeed! It wasn’t rare by any means. A great book about this is Illiberal Reformers.
That’s definitely fair, though it’s plausible that some benefits of education do not depend solely on increases in income or social connections. For example, a meta-analysis by Ritchie et al. suggests that education may itself improve intelligence. I do agree, however, that more fine-grained (and more difficult to measure) metrics than “number of years of education” would help sharpen the argument.
Can we model almost all money choices in our life as ethical offsetting problems?
Example 1: You do not give money to a homeless person on the street, or to a friend who’s struggling financially and maybe doesn’t show the best sense when it comes to money management. You give the money you save to a homeless shelter or to politicians promoting basic income or housing programs.
Example 2: You buy cheaper clothes from a company that probably treats its workers worse than other companies. You give the money you save to some organization that promotes ethical global supply chains or gives direct money aid to people in poverty.
(Note: In these examples, you might choose to give the money to some organization that you believe has a larger net positive impact than the direct-offset organization. So you might not give money to homeless people, and instead give it to the Against Malaria Foundation, etc. This is a modification of the offsetting problem that ignores questions of the fungibility of well-being among possible beneficiaries.)
The argument for: In the long term, you might promote systems that prevent these problems from happening in the first place.
The argument against: For example 1, social cohesion. You might suck as a friend, might get a reputation for sucking as a friend, and you might feel less safe in your community knowing that if everyone acted the same way you do, you wouldn’t get support. For example 2, the market mechanism might just be better: maybe you should vote directly with your money? It’s fuzzy, though, since paying less to companies that already pay their workers badly may just push wages down further. Some studies on this would be helpful.
Critical caveat: Are you actually shuttling the money you’re saving by doing the thing that’s probably negative into the thing that’s more probably positive? It’s very easy to do the bad thing, say you’re going to do the good thing, and then forget to do the good thing or otherwise rationalize it away.
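To put that caveat in toy numbers, here’s a minimal expected-value sketch of the offsetting framing. Every number in it is made up, and the “units” are arbitrary well-being units; the only point is that the case for offsetting collapses if your follow-through probability is low.

```python
# Toy expected-value sketch of ethical offsetting. All numbers are hypothetical;
# the follow_through parameter captures the caveat above: the offset only
# counts if you actually transfer the money you saved.

def net_offset_value(direct_harm, savings, offset_value_per_dollar, follow_through):
    """Net value (arbitrary well-being units) of doing the cheaper/worse thing
    and donating the savings, versus doing the pricier/better thing."""
    expected_offset = savings * offset_value_per_dollar * follow_through
    return expected_offset - direct_harm

# Example 2 above: buy $30-cheaper clothes from a worse supply chain.
# Assume (arbitrarily) that the purchase does 1 unit of marginal harm and that
# a dollar to an effective charity buys 0.1 units of benefit.
for p in (1.0, 0.5, 0.1):
    net = net_offset_value(direct_harm=1.0, savings=30.0,
                           offset_value_per_dollar=0.1, follow_through=p)
    print(f"follow-through {p:.0%}: net {net:+.1f} units vs. just buying the pricier clothes")
```

Under these made-up numbers, offsetting only beats the default once you actually send the money most of the time, which is the whole caveat.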