What is the steelmanned, not-nonsensical interpretation of the phrase “democratize AI”?
One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.
s/AI/capital/
Now, where have I heard this before..?
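(For anyone who doesn’t read sed notation: apply the substitution to the parent comment and you get the familiar slogan. A minimal illustration in Python; the regex and variable names are just for this example:)

```python
import re

parent = ("Ensure that the benefits of AI accrue to everyone generally, "
          "rather than exclusively to the teeny-tiny fraction of humanity "
          "who happen to own their own AI business.")

# s/AI/capital/ : replace every whole-word "AI" with "capital", sed-style
print(re.sub(r"\bAI\b", "capital", parent))
# -> "Ensure that the benefits of capital accrue to everyone generally, ..."
```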
And your point is...?
From my point of view, the main problem with “making the benefits of capital accrue to everyone generally” is that… well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn’t stolen is used very inefficiently.
But on a smaller scale… companies paying taxes, and those taxes being used to build roads or pay for universal healthcare… that is an example of providing the benefits of capital to everyone. Just not all of the capital; and beyond the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).
Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers a thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting each other, it should be able to not fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so the whole system soon ran on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave something for humans to do (quite unlikely IMHO, but let’s assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.
If the choice is between giving each human 1⁄7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don’t think anyone would be able to get any job in a world where a scalable superintelligence is their direct competitor), the former option seems better to me, and I think even Elon Musk wouldn’t mind… especially considering that going for the former option would make people much more willing to cooperate with him.
Is it really that difficult to discern?
So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?
Capital is not just money. You tax, basically, production (= creation of value), and production is not a “benefit of capital”.
In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.
Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?
You mean this one?
For the obvious reasons I don’t think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI… sure.
Although calling that “communism” is about as central an example as calling the paperclip maximizer scenario “capitalism”.
Capital is a factor in production, often a very important one.
Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And “as always” does not seem like a good argument for Singularity scenarios.
Depends on whether you consider the possibility of superintelligent AI to be “realistic”.
That too :-) I am a big fan of this approach.
But conditional on finding selfless and competent rulers (note that I’m not talking about the rest of the population), you think that communism will work? In particular, the economy will work?
Aaaaand let me quote you yourself from just a sentence back:
“Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete.”
One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly?
Besides, I thought that when the Rapture comes… err… I mean, when the Singularity happens, humans will not decide anything any more—the AI will take over and will make the right decisions for them—isn’t that so?
If we’re talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it’s hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy.
(If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you’re probably right that that wouldn’t suffice.)
Actually, no, we’re (at least, I am) talking about pre-Singularity situations where you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things.
Viliam said that the main problem with communism is that the people at the top are (a) incompetent; and (b) corrupt. I don’t think that’s true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason why a communist economy functions so poorly.
I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.
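(A toy sketch of that feedback loop, with all curves and constants invented for illustration: let the price respond to excess demand and the shortage disappears; freeze the price, as a planner would, and the shortage persists indefinitely.)

```python
# Toy linear market: price either adjusts to excess demand or is fixed by fiat.
# All numbers are invented for illustration.

def demand(p):            # quantity buyers want at price p
    return max(0.0, 100.0 - 2.0 * p)

def supply(p):            # quantity producers offer at price p
    return max(0.0, 3.0 * p - 20.0)

def simulate(p, price_feedback):
    for _ in range(200):
        shortage = demand(p) - supply(p)
        if price_feedback:            # the "autonomic nervous system"
            p += 0.05 * shortage      # price rises while goods are scarce
    return round(p, 1), round(demand(p) - supply(p), 1)

print(simulate(5.0, price_feedback=True))   # -> (24.0, 0.0): market clears
print(simulate(5.0, price_feedback=False))  # -> (5.0, 90.0): permanent shortage
```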
Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario.
(I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)
I have no idea what this means.
It seems you agree with Viliam: see the second paragraph below.
“For the obvious reasons I don’t think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI… sure.”
“Although calling that ‘communism’ is about as central an example as calling the paperclip maximizer scenario ‘capitalism’.”
Right, but I am specifically interested in Viliam’s views about the scenario where there is no AI, but we do have honest and competent rulers.
That is completely irrelevant to debates about AI.
But anyway, I object to the premise being realistic. Humans run on “corrupted hardware”, so even if they start out honest, competent, rational, and well-meaning, that usually changes very quickly. In the long term, they also get old and die, so what you would actually need is an honest and competent elite group, able to raise and filter a next generation that is at least equally honest, competent, rational, well-meaning, and skilled at raising and filtering the next generation for the same qualities.
In other words, you would need a group of rulers enlightened enough that they could impartially and precisely judge whether their competitors are equally good or somewhat better on the relevant criteria, and in such a case would voluntarily transfer their power to those competitors. -- Which goes completely against what evolution teaches us: that if your opponent is better than you, you should use your power to crush him, preferably immediately, while you still have the advantage of power, and before other tribe members notice his superiority and start offering to ally with him against you.
Oh, and this perfect group would also need to be able to overthrow the current power structures and get themselves into positions of power, without losing any of its qualities in the process. That is, it has to be competent enough to overthrow an opponent with orders of magnitude more power (imagine someone who owns the media and police and army and secret service, and can also use illegal methods to kidnap your members, torture them to extract their secrets, and kill them afterwards), without having to compromise on its values. So, in addition, the members of this elite group must have perfect mental resistance against torture and blackmail, and be numerous enough that they can easily replace their fallen brethren and continue with the original plan.
Well… there doesn’t seem to be a law of physics that would literally prevent this, it just seems very unlikely.
With a less elite group, there are many things that can possibly go wrong, and evolutionary pressures in favor of things going wrong as quickly as possible.
Fair enough; I just wanted to make it explicit that that question has basically nothing to do with anything else in the thread. I mean, Viliam was saying “so it might be a good idea to do such-and-such about superhumanly capable AI” and you came in and said “aha, that kinda pattern-matches to communism. Are you defending communism?” and then said oh, by the way, I’m only interested in communism in the case where there is no superhumanly capable AI.
But, well, trolls gonna troll, and you’ve already said trolling is your preferred mode of political debate.
Well, the kinda-sorta OP phrased the issue this way:
“If the choice is between giving each human 1⁄7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve”
...and that set the tone for the entire subthread :-P
String substitution isn’t truth-preserving; there are some analogies and some disanalogies there.
Sure, but capital is a rather vacuous word. It basically means “stuff that might be useful for something”. So yes, talking about democratizing AI is a whole lot more meaningful than just saying “y’know, it would be nice if everyone could have more useful stuff that might help em achieve their goals. Man, that’s so deeeep… puff”, which is what your variant ultimately amounts to!
Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it’s not capital. The $20 bill in your wallet isn’t capital either.
While capital is resources needed for production of value, it’s a bit misleading to imply that that’s how it’s “well-defined” “in economics”, since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value.
* And sometimes “entrepreneurship”, but that’s always struck me as a pretty bogus “factor of production” — as economists tacitly admit by omitting it as a variable from their production functions, even though it’s as free to vary as labour.
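(For concreteness, this is the standard two-factor form, e.g. Cobb–Douglas, in which entrepreneurship never appears as a variable:)

```latex
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1
```

where Y is output, K capital, L labour, and A total factor productivity.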
Sure, but that’s all Econ 101 territory and LW isn’t really a good place to get some education in economics :-/
The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.
None the less, “capital” and “AI” are extremely different in scope and I see no particular reason to think that if “let’s do X with capital” turns out to be a bad idea then we can rely on “let’s do X with AI” also being a bad idea.
In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I’m not sure it’s entirely clear), but that hypothetical future is also one so different from the past that past failures of “let’s do X with capital” aren’t necessarily a good indication of similar future failure.
And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency—that makes it “capital” from a strictly individual perspective (indeed, such claims are often called “financial capital”), although it’s indeed not real “capital” in an economy-wide sense (because any such claim must be offset by a corresponding liability).
Sigh. You can, of course, define any word any way you like, but I have my doubts about the usefulness of such endeavours. Go read.
I feel like it’s rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).
Why do you think one exists?
I try not to assume that I am smarter than everybody if I can help it, and when there’s a clear cluster of really smart people making these noises, I at least want to investigate and see whether I’m mistaken in my presuppositions.
To me, “democratize AI” makes as much sense as “democratize smallpox”, but it would be good to find out that I’m wrong.
Isn’t “democratizing smallpox” a fairly widespread practice, starting from the 18th century or so—and one with rather large utility benefits, all things considered? (Or are you laboring under the misapprehension that the kinds of ‘AIs’ being developed by Google or Facebook are actually dangerous? Because that’s quite ridiculous, TBH. It’s the sort of thing for which EY and Less Wrong get a bad name in machine-learning [popularly known as ‘AI’] circles.)
Not under any usual definition of “democratize”. Making smallpox accessible to everyone is no one’s objective. I wouldn’t refer to making smallpox available to highly specialized and vetted labs as “democratizing” it.
Google and/or DeepMind explicitly intend to build exactly the type of AI that I would consider dangerous, regardless of whether or not you would consider them to have already done so.
Links to the noises?
It’s mainly OpenAI noise, but it’s been parroted in many places recently. I’ve definitely seen it in OpenAI materials, and I may have even heard Musk repeat the phrase, but I can’t find links. Also:
“Our long-term goal is to democratize AI. We want to level the playing field for startups to ensure that innovation doesn’t get locked up in large companies like Google or Facebook. If you’re starting an AI company, we want to help you succeed.” (YCombinator)
which is pretty close to “we don’t want only Google and Facebook to have control over smallpox”.
Microsoft, in the context of its partnership with OpenAI.
This is a much more nonstandard interpretation of “democratize”. I suppose by this logic, Henry Ford democratized cars?
Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.
Microsoft means that they want Cortana/Siri/Alexa/Assistant/etc. on every machine and in every home. That’s just marketing speak.
Both expressions have nothing to do with democracy, of course.
There are other ways that AI research can become a monopoly without any use of patents or purchases of competitors. For example, a fair bit of research can only be done with heavy computing infrastructure. In some sense places like Google will have an advantage no matter how much of their code is open-sourced (and a lot of it is open source already). Another issue is data, which is a type of capital—though much unlike money—in that there is a limit to how much value you can extract from it, and that limit depends on your computing resources. These are barriers that I think probably can’t be lowered even in principle.
Having advantages in the field of AI research and having a monopoly are very different things.
That’s not self-evident to me. A fair number of practical applications (e.g. Siri/Cortana) require a lot of infrastructure. What kind of research can’t you do if you have a few terabytes of storage and a couple dozen GPUs? What would a research university be unable to do?
Data is an interesting issue. But first, the difference between research and practical applications is relevant again, and second, data control is mostly fought over at the legal/government level.
It’s still the case that a lot of problems in AI and data analysis can be broken down into parallel tasks and massively benefit from just having plenty of CPUs/GPUs available. In addition, a lot of the research work at major companies like Google has gone into making sure that the infrastructure advantage is used to the maximum extent possible. But I will grant you that this may not represent an actual monopoly on anything (except perhaps search). Hardware is still easily available to those who can afford it. But in the context of “democratizing AI”, I think we should expect the firms with the most resources to have significant advantages over small startups in the AI space without much capital. If I have a bunch of data I need analyzed, will I want to give that job to a new, untested player who (depending on how much data I have) may not even have the infrastructure, or to someone established who I know has the capability and resources?
The issue with data isn’t so much about control / privacy; it’s mainly the fact that if you give me a truckload of a thousand 2 TB hard drives, each containing potentially useful information, there’s really not much I can do with it. Now if I happened to have a massive server farm, that would be a different situation. There’s a pretty big gulf in the value of certain objects depending on my ability to make use of them, and I think data is a good example of that kind of object.
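(Back-of-the-envelope version of that gulf; the throughput figures below are my own rough assumptions, not anything from the thread:)

```python
# A truckload of a thousand 2 TB drives = 2 PB of raw data.
# Throughput numbers are rough assumptions for illustration.

total_bytes = 1000 * 2 * 10**12                      # 2 PB

single_read_bps = 150 * 10**6                        # one machine, ~150 MB/s sequential
single_days = total_bytes / single_read_bps / 86400
print(f"single machine, one full scan: {single_days:.0f} days")   # ~154 days

farm_nodes = 1000                                    # server farm, drives read in parallel
farm_hours = total_bytes / (single_read_bps * farm_nodes) / 3600
print(f"1000-node farm, one full scan: {farm_hours:.1f} hours")   # ~3.7 hours
```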
So how is this different from, say, manufacturing? Or pretty much any business for the last few centuries?
I think I would update my position here to say that AI is different from manufacturing, in that you can have small-scale manufacturing operations (like 3D printing, as username2 mentioned) that satisfy some niche market, whereas I sort of doubt that there are any niche markets in AI.
I’ve noticed this a lot with “data science” and AI startups—in what way is their product unique? Usually it’s not. It’s usually a team of highly talented AI researchers and engineers who need to showcase their skills until they get acqui-hired, or they develop a tool that gets really popular for a while and then it also gets bought. You really just don’t see “disruption” (in the sense that Peter Thiel defines it) in the AI vertical. And you don’t see niches.
Hold on. Are you talking about niche markets, or are we talking about the capability to do some sort of AI at small-to-medium scale (say, startup to university size)?
Um. I don’t think the AI vertical exists. And what do you mean about niches? Wouldn’t, I dunno, analysis of X-rays be a niche? high-frequency trading another niche? forecasting of fashion trends another niche? etc. etc.
Well, niche markets in AI aren’t usually referred to as such; they’re usually just companies that do task X with the help of statistics and machine learning. In that sense nearly all technology and finance companies could be considered AI companies.
AI in the generalist sense is rare (Numenta, Vicarious, DeepMind), and usually gets absorbed by the bigger companies. In the specialist sense, if task X is already well-known or identified, you still have to go against the established players who have more data and have people who have been working on only that problem for decades.
Thinking more about what YC meant in their “democratize AI” article, it seems they were referring to startups that want to use ML to solve problems that haven’t traditionally been solved using ML yet. Or more generally, they want to help tech companies enter markets that usually aren’t served by a tech company. That’s fine. But I also get the feeling they really mean helping market certain companies by using the AI / ML hype train even if they don’t, strictly speaking, use AI to solve a given task. A lot of “AI” startups just do basic statistical analysis but have a really fancy GUI on top of it.
Well, I don’t think it is. If someone said “let’s democratize manufacturing” in the same sense as YC, would that sound silly to you?
Generally speaking, yes, silly, but I can imagine contexts where the word “democratize” is still unfortunate but points to an actual underlying issue—monopoly and/or excessive power of some company (or e.g. a cartel) over the entire industry.
No, it would sound like a 3D printing startup (and perfectly reasonable).
Open sourcing all significant advancements in AI and releasing all code under GNU GPL.
Tiling the whole universe with small copies of GNU GPL, because each nanobot is legally required to contain the full copy. :D
*GNU AGPL, preferably
“Make multiple AIs that can restrain one another instead of one tyrannical MCP”?