One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.
s/AI/capital/
Now, where have I heard this before..?
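Taken literally, "s/AI/capital/" is a sed-style substitution. A minimal sketch of what that edit does to the quoted sentence (my own illustration, using Python's `re.sub` as a stand-in for sed):

```python
import re

claim = ("Ensure that the benefits of AI accrue to everyone generally, "
         "rather than exclusively to the teeny-tiny fraction of humanity "
         "who happen to own their own AI business.")

# Apply s/AI/capital/ globally; \b keeps the match to the whole word "AI".
rewritten = re.sub(r"\bAI\b", "capital", claim)
print(rewritten)
```

Note that the output ends with "own their own capital business", which is not quite the same claim: a hint that the substitution is an analogy rather than an identity.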
And your point is...?
From my point of view, the main problem with “making the benefits of capital accrue to everyone generally” is that… well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn’t stolen is used very inefficiently.
But on a smaller scale… companies paying taxes, and those taxes being used to build roads or pay for universal healthcare… is an example of providing the benefits of capital to everyone. Just not all of the capital; and beyond the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).
Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting against each other, it should be able to not fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so the whole system soon ran on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave something for humans to do (quite unlikely IMHO, but let’s assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.
If the choice is between giving each human a 1⁄7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don’t think anyone would be able to get any job in a world where the scalable superintelligence is your direct competitor), the former option seems better to me, and I think even Elon Musk wouldn’t mind… especially considering that going for the former option will make people much more willing to cooperate with him.
Is it really that difficult to discern?
So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?
Capital is not just money. What you tax is, basically, production (= creation of value), and production is not a “benefit of capital”.
In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.
Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?
You mean this one?
For the obvious reasons I don’t think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI… sure.
Although calling that “communism” is about as much of a central example as calling the paperclip maximizer scenario “capitalism”.
Capital is a factor in production, often a very important one.
Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And “as always” does not seem like a good argument for Singularity scenarios.
Depends on whether you consider the possibility of superintelligent AI to be “realistic”.
That too :-) I am a big fan of this approach.
But conditional on finding selfless and competent rulers (note that I’m not talking about the rest of the population), you think that communism will work? In particular, the economy will work?
Aaaaand let me quote you yourself from just a sentence back:
“Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete.”
One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly?
Besides, I thought that when the Rapture comes… err… I mean, when the Singularity happens, humans will not decide anything any more—the AI will take over and make the right decisions for them—isn’t that so?
If we’re talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it’s hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy.
(If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you’re probably right that that wouldn’t suffice.)
Actually, no, we’re (at least, I am) talking about pre-Singularity situations where you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things.
Viliam said that the main problem with communism is that the people at the top are (a) incompetent; and (b) corrupt. I don’t think that’s true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason why communist economy isn’t well-functioning.
I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.
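The missing-feedback-loop point can be made concrete with a toy model (entirely my own illustration; the linear demand and supply curves and all numbers are made up): a price that adjusts in proportion to excess demand steers the market toward equilibrium, while a price fixed by fiat leaves a persistent shortage that nothing in the system signals or corrects.

```python
# Toy model: price as a dynamic feedback signal for resource allocation.
# Demand falls and supply rises with price; the numbers are arbitrary.

def demand(p):
    return max(0.0, 100 - 2 * p)  # quantity buyers want at price p

def supply(p):
    return max(0.0, 10 + p)       # quantity producers offer at price p

def market_price(p0=5.0, steps=200, rate=0.05):
    """Nudge the price in proportion to excess demand (tatonnement)."""
    p = p0
    for _ in range(steps):
        p += rate * (demand(p) - supply(p))
    return p

p_eq = market_price()
planned_shortage = demand(10.0) - supply(10.0)

print(round(p_eq, 2))    # the feedback loop converges to the equilibrium price, 30.0
print(planned_shortage)  # a price pinned at 10 leaves 60.0 units of unmet demand
```

Deleting the adjustment rule (pinning the price at 10) is the analogue of the severed autonomic nervous system: the 60-unit gap persists no matter how honest or competent the planner setting the price is.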
Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario.
(I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)
I have no idea what this means.
It seems you agree with Viliam: see the second paragraph of his reply above (the “central example” one).
Right, but I am specifically interested in Viliam’s views about the scenario where there is no AI, but we do have honest and competent rulers.
That is completely irrelevant to debates about AI.
But anyway, I object to the premise being realistic. Humans run on “corrupted hardware”, so even if they start out honest, competent, rational, and well-meaning, that usually changes very quickly. In the long term they also get old and die, so what you would actually need is an honest and competent elite group, able to raise and filter a next generation at least equally honest, competent, rational, well-meaning, and skilled at raising and filtering the next generation for the same qualities.
In other words, you would need a group of rulers enlightened enough that they are able to impartially and precisely judge whether their competitors are equally good or somewhat better on the relevant criteria, and in such a case would voluntarily transfer their power to those competitors. Which goes completely against what evolution teaches us: that if your opponent is better than you, you should use your power to crush him, preferably immediately, while you still have the advantage of power, and before other tribe members notice his superiority and start offering to ally with him against you.
Oh, and this perfect group would also need to be able to overthrow the current power structures and get itself into positions of power without losing any of its qualities in the process. That is, it has to be competent enough to overthrow an opponent with orders of magnitude more power (imagine someone who owns the media and police and army and secret service, and can also use illegal methods to kidnap your members, torture them to extract their secrets, and kill them afterwards), without having to compromise on its values. So, in addition, the members of this elite group must have perfect mental resistance against torture and blackmail, and be numerous enough to easily replace their fallen brethren and continue with the original plan.
Well… there doesn’t seem to be a law of physics that would literally prevent this, it just seems very unlikely.
With a less elite group, there are many things that can possibly go wrong, and evolutionary pressures in favor of things going wrong as quickly as possible.
Fair enough; I just wanted to make it explicit that that question has basically nothing to do with anything else in the thread. I mean, Viliam was saying “so it might be a good idea to do such-and-such about superhumanly capable AI” and you came in and said “aha, that kinda pattern-matches to communism. Are you defending communism?” and then said oh, by the way, I’m only interested in communism in the case where there is no superhumanly capable AI.
But, well, trolls gonna troll, and you’ve already said trolling is your preferred mode of political debate.
Well, the kinda-sorta OP phrased the issue this way:
“If the choice is between giving each human a 1⁄7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve”
...and that set the tone for the entire subthread :-P
String substitution isn’t truth-preserving; there are some analogies and some disanalogies there.
Sure, but capital is a rather vacuous word. It basically means “stuff that might be useful for something”. So yes, talking about democratizing AI is a whole lot more meaningful than just saying “y’know, it would be nice if everyone could have more useful stuff that might help ’em achieve their goals. Man, that’s so deeeep… puff”, which is what your variant ultimately amounts to!
Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it’s not capital. The $20 bill in your wallet isn’t capital either.
While capital is resources needed for production of value, it’s a bit misleading to imply that that’s how it’s “well-defined” “in economics”, since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value.
* And sometimes “entrepreneurship”, but that’s always struck me as a pretty bogus “factor of production” — as economists tacitly admit by omitting it as a variable from their production functions, even though it’s as free to vary as labour.
Sure, but that’s all Econ 101 territory and LW isn’t really a good place to get some education in economics :-/
The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.
Nonetheless, “capital” and “AI” are extremely different in scope, and I see no particular reason to think that if “let’s do X with capital” turns out to be a bad idea then we can rely on “let’s do X with AI” also being a bad idea.
In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I’m not sure it’s entirely clear), but that hypothetical future is also one so different from the past that past failures of “let’s do X with capital” aren’t necessarily a good indication of similar future failure.
And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency—that makes it “capital” from a strictly individual perspective (indeed, such claims are often called “financial capital”), although it’s indeed not real “capital” in an economy-wide sense (because any such claim must be offset by a corresponding liability).
Sigh. You can, of course, define any word any way you like, but I have my doubts about the usefulness of such endeavours. Go read.
I feel like it’s rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).