Access to AI: a human right?
Authors, lawyers, teachers, researchers, doctors, coders. These are just some of the people who’ve found that their work can be performed dramatically better with the right implementation of GPT-3 and other large transformer models. Because of this, I posit that access to the best AI models will become as fundamental to the productivity of most jobs as access to a computer and the internet is today.
Unlike computers, large transformers cost millions of dollars to train and run, so there is a far greater risk of centralisation, control, and surveillance. OpenAI has announced that it will closely monitor access and revoke it for anyone engaging in harmful usage. Yet this kind of power, to exclude somebody from a basic tool of productivity or to censor what they use the tool for, is far too risky for any private organisation to hold. When any human has such power, even with good intentions, they are susceptible to pressure, threats, and deception, and can end up causing harm or denying access to legitimate users.
If transformer-based AI is going to be a fundamental tool in most high-value jobs within the next couple of years, it will likely cause an unprecedented concentration of power in the hands of big tech companies, especially if they continue to enforce censorship rules. Imagine if Amazon, Google, or OpenAI could decide that you no longer get access to their transformer models because they don’t like what you tweeted. In many ways, this would be much worse than losing your job: without access to the AI, you might never be competitive in the job market again.
---xxx---
Imagine it is 2021 in a parallel universe where Apple is the only company manufacturing personal computers; let’s say it has exclusive control over mines of a rare-earth mineral essential for chip manufacturing. Its latest (mandatory) software update comes with an AI tool that can detect who is using the computer by monitoring their mouse movements and typing style (let’s assume these are very hard to fake). Now Apple has suddenly gained the power to effectively exclude anyone from the entire tech ecosystem.
You’re a journalist. The morning after publishing a hard-hitting piece criticising one of the world’s governments, you try to log on to see people’s reactions. Instead, you’re greeted with a message: “Apple has suspended your computer access due to a violation of our terms”. Short of a cancer diagnosis, this might be about the worst news you could get in this parallel universe. Without a computer, all the highly productive skills you’ve acquired since childhood are suddenly worthless. Other people won’t even let you use their devices, for fear of being banned themselves. From a high-value journalist, maybe you’re suddenly reduced to waiting tables until Apple decides to lift your ban.
This is just an illustration of the kind of power organisations would wield if they controlled our access to advanced computing. Scarily enough, the two paragraphs above would be equally terrifying if we simply replaced the words “personal computer” with “personal AI”. And while, thankfully, anybody with the funds can buy a computer today, the same may not be true of advanced AI tools.
---xxx---
For all these reasons, I feel that a decentralised AI, to which everyone has equal access at the same price, is the need of the hour. First, such an AI would drive the price of access down to the cost of the underlying GPUs, preventing large organisations from charging a 2x markup. Second, such an AI cannot be censored or shut down, and its users cannot be cancelled for any reason, allowing for true equality of opportunity in a world with advanced AI. I’m already working on this and will share more details shortly. Anyone interested in helping me code this up, please DM me [discord: dmtea#7497].
More likely, you would not notice anything unusual, except that no one seems to read your stories anymore and the old ones do not show up in search. That’s how it works now with shadowbans on Reddit and Twitter.
Or you’d just be reduced to regular online interactions, the way things already are now, and no one would bat an eye.
Not yet, though it might become a de facto essential service one day. But you can’t Marx your way there without breaking more than you fix, and definitely not through legislation. If you care about more universal access to AI tools, you work on creating more accessible tools.
In the long term, Moore’s law seems to be on your side.
How parallelizable is GPT-3? I honestly have no idea.
Also, how difficult would it be to obtain the data GPT-3 was trained on? (You could also use different data, but then I assume the users of the distributed AI would have to agree on the same set, so that the AI does not have to be trained separately for each of them. Or maybe you could have multiple shared datasets, with each user choosing which one they want to use.)
The data for GPT-2 has been replicated by the open-source OpenWebText project. GPT-3’s training mix was broader (filtered Common Crawl, WebText2, books, and Wikipedia), but it draws on similarly public sources, so obtaining comparable data should not be the main obstacle.
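For anyone who wants to poke at the corpus, here is a minimal sketch, assuming the Hugging Face `datasets` library and its `openwebtext` dataset ID (a hosted copy of the OpenWebText replication; the exact ID is worth verifying):

```python
# Minimal sketch: pulling the OpenWebText replication of GPT-2's
# training data via the Hugging Face `datasets` library.
from datasets import load_dataset

# Downloads and caches the full corpus locally (tens of GB).
dataset = load_dataset("openwebtext", split="train")

# Each record is a single scraped web document with one "text" field.
print(dataset[0]["text"][:200])
```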
The parallelizability of GPT-3 is something I’ve been looking into. The current implementation of ZeRO-2 (in DeepSpeed) seems like the most memory-efficient way to train a 170B-parameter transformer model.
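For concreteness, here is a minimal sketch of what enabling ZeRO stage 2 looks like with DeepSpeed. The toy model and hyperparameters are placeholders, not a GPT-3-scale recipe, and it assumes a multi-GPU node launched via the `deepspeed` launcher:

```python
# Hypothetical sketch: enabling ZeRO stage 2 with DeepSpeed.
# The model and hyperparameters below are placeholders, not a
# working GPT-3-scale configuration.
import deepspeed
import torch.nn as nn

# Stand-in for a large decoder-only language model.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=1024, nhead=16), num_layers=24
)

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                    # partition optimizer state + gradients across ranks
        "overlap_comm": True,          # overlap gradient communication with compute
        "contiguous_gradients": True,  # reduce memory fragmentation
    },
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wires up the optimizer, fp16, and ZeRO partitioning.
# (Older DeepSpeed versions name the keyword `config_params` instead of `config`.)
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```

The point of stage 2 is that optimizer states and gradients are sharded across data-parallel workers instead of being replicated, which is what makes training at the 100B+ scale memory-feasible.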