I try not to assume that I am smarter than everybody if I can help it, and when there’s a clear cluster of really smart people making these noises, I at least want to investigate and see whether I’m mistaken in my presuppositions.
To me, “democratize AI” makes as much sense as “democratize smallpox”, but it would be good to find out that I’m wrong.
Isn’t “democratizing smallpox” a fairly widespread practice, starting from the 18th century or so, and one with rather large utility benefits, all things considered? (Or are you laboring under the misapprehension that the kinds of ‘AIs’ being developed by Google or Facebook are actually dangerous? Because that’s quite ridiculous, TBH. It’s the sort of thing for which EY and Less Wrong get a bad name in machine-learning circles, popularly known as ‘AI’ circles.)
Not under any usual definition of “democratize”. Making smallpox accessible to everyone is no one’s objective. I wouldn’t refer to making smallpox available to highly specialized and vetted labs as “democratizing” it.
Google and/or DeepMind explicitly intend to build exactly the type of AI that I would consider dangerous, regardless of whether you consider them to have already done so.
Links to the noises?
It’s mainly an OpenAI noise, but it’s been parroted in many places recently. I’ve definitely seen it in OpenAI materials, and I may have even heard Musk repeat the phrase, but I can’t find links. Also:
Our long-term goal is to democratize AI. We want to level the playing field for startups to ensure that innovation doesn’t get locked up in large companies like Google or Facebook. If you’re starting an AI company, we want to help you succeed.
YCombinator.
which is pretty close to “we don’t want only Google and Facebook to have control over smallpox”.
Microsoft in context of partnership with OpenAI.
This is an even more nonstandard interpretation of “democratize”. I suppose by this logic, Henry Ford democratized cars?
Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.
Microsoft means that they want Cortana/Siri/Alexa/Assistant/etc. on every machine and in every home. That’s just marketing speak.
Neither expression has anything to do with democracy, of course.
There are other ways that AI research can become a monopoly without any patents or purchases of competitors. For example, a fair bit of research can only be done with heavy computing infrastructure. In some sense places like Google will have an advantage no matter how much of their code is open-sourced (and a lot of it already is). Another issue is data, which is a type of capital, though quite unlike money: the amount of value you can extract from it is limited by your computing resources. These are barriers that I think probably can’t be lowered even in principle.
Having advantages in the field of AI research and having a monopoly are very different things.
a fair bit of research can only be done with heavy computing infrastructure
That’s not self-evident to me. A fair number of practical applications (e.g. Siri/Cortana) require a lot of infrastructure. What kind of research can’t you do with a few terabytes of storage and a couple dozen GPUs? What would a research university be unable to do?
Another issue is data
Data is an interesting issue. But first, the difference between research and practical applications is relevant again, and second, data control is mostly fought over at the legal/government level.
It’s still the case that a lot of problems in AI and data analysis can be broken down into parallel tasks, and so benefit massively from having plenty of CPUs/GPUs available. In addition, a lot of the research work at major companies like Google has gone into making sure that this infrastructure advantage is used to the maximum extent possible. I will grant you that this may not amount to an actual monopoly on anything (except perhaps search); hardware is still easily available to those who can afford it. But in the context of “democratizing AI”, we should expect the firms with the most resources to have significant advantages over small AI startups without much capital. If I have a bunch of data I need analyzed, will I give the job to a new, untested player who, depending on how much data I have, may not even have the infrastructure for it, or to someone established who I know has the capability and resources?
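To make the “broken down into parallel tasks” point concrete, here is a minimal, hypothetical sketch (the `analyze` function and the records are invented for illustration, not anyone’s actual pipeline): when the per-record work is independent, the same code simply goes faster the more cores you can throw at it.

```python
from multiprocessing import Pool

def analyze(record):
    # Stand-in for an independent per-record computation
    # (feature extraction, model scoring, etc.).
    return sum(x * x for x in record)

if __name__ == "__main__":
    records = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    with Pool() as pool:
        # Each record is handled by a separate worker process; for
        # CPU-bound work, wall-clock time shrinks roughly with core count.
        results = pool.map(analyze, records)
    print(results)  # [14, 77, 194]
```

The shape of the workload (a pure function mapped over many shards) is exactly why a large compute fleet is an advantage: the code doesn’t change, only the size of the pool does.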
The issue with data isn’t so much control or privacy; it’s mainly that if you give me a truckload of a thousand 2 TB hard drives, each containing potentially useful information, there’s really not much I can do with it. If I happened to have a massive server farm, that would be a different situation. For certain goods there’s a pretty big gulf in value depending on one’s ability to make use of them, and I think data is a good example of that kind of good.
So how is this different from, say, manufacturing? Or pretty much any business for the last few centuries?
I think I would update my position here to say that AI is different from manufacturing in that you can have small-scale manufacturing operations (like 3D printing, as username2 mentioned) that satisfy some niche market, whereas I sort of doubt that there are any niche markets in AI.
I’ve noticed this a lot with “data science” and AI startups: in what way is their product unique? Usually it’s not. It’s usually a team of highly talented AI researchers and engineers who need to showcase their skills until they get acqui-hired, or they develop a tool that gets really popular for a while and then it, too, gets bought. You really just don’t see “disruption” (in the sense that Peter Thiel defines it) in the AI vertical. And you don’t see niches.
I sort of doubt that there are any niche markets in AI
Hold on. Are you talking about niche markets, or are we talking about the capability to do some sort of AI at small-to-medium scale (say, startup to university size)?
You really just don’t see “disruption” (in the sense that Peter Thiel defines it) in the AI vertical. And you don’t see niches.
Um, I don’t think the AI vertical exists. And what do you mean by niches? Wouldn’t, I dunno, analysis of X-rays be a niche? High-frequency trading another niche? Forecasting of fashion trends another? Etc., etc.
Well, niche markets in AI aren’t usually referred to as such; they’re usually just companies that do task X with the help of statistics and machine learning. In that sense nearly all technology and finance companies could be considered AI companies.
AI in the generalist sense is rare (Numenta, Vicarious, DeepMind) and usually gets absorbed by the bigger companies. In the specialist sense, if task X is already well known or identified, you still have to go up against the established players, who have more data and who have people who have been working on only that problem for decades.
Thinking more about what YC meant in their “democratize AI” article, it seems they were referring to startups that want to use ML to solve problems that haven’t traditionally been solved with ML. Or, more generally, they want to help tech companies enter markets that usually aren’t served by tech companies. That’s fine. But I also get the feeling they really mean helping market certain companies by riding the AI/ML hype train, even if those companies don’t, strictly speaking, use AI to solve a given task. A lot of “AI” startups just do basic statistical analysis but put a really fancy GUI on top of it.
Well, I don’t think it is. If someone said “let’s democratize manufacturing” in the same sense as YC, would that sound silly to you?
Generally speaking, yes, silly, but I can imagine contexts where the word “democratize” is still unfortunate but points to an actual underlying issue—monopoly and/or excessive power of some company (or e.g. a cartel) over the entire industry.
No, it would sound like a 3D printing startup (and perfectly reasonable).