I think you’re missing a major constraint there:
Living in a society with as little power as the average human citizen has in a current human society.
Or in other words, something like modern, Western liberal meta-morality will pop out if you make an arbitrary agent live in a modern, Western liberal society, because that meta-moral code is designed for value-divergent agents (aka: people of radically different religions and ideologies) to get along with each other productively when nobody has enough power to declare himself king and optimize everyone else for his values.
The nasty part is that AI agents could pretty easily get way, waaaay out of that power-level. Not just by going FOOM, but simply by, say, making a lot of money and purchasing huge sums of computing resources to run multiple copies of themselves which now have more money-making power and as many votes for Parliament as there are copies, and so on. This is roughly the path taken by power-hungry humans already, and look how that keeps turning out.
The other thorn on the problem is that if you manage to get your hands on a provably Friendly AI agent, you want to hand it large amounts of power. A Friendly AI with no more power than the average citizen can maybe help with your chores around the house and balance your investments for you. A Friendly AI with large amounts of scientific and technological resources can start spitting out utopian advancements (pop really good art, pop abundance economy, pop immortality, pop space travel, pop whole nonliving planets converted into fun-theoretic wonderlands) on a regular basis.
No, it is not.
The path taken by power-hungry humans generally goes along the lines of
(1) get some resources and allies
(2) kill/suppress some competitors/enemies/non-allies
(3) Go to 1.
Power-hungry humans don’t start by trying to make lots of money or by trying to make lots of children.
Really? Because in the current day, the most powerful humans appear to be those with the most money, and across history, the most influential humans were those who managed to create the most biological and ideological copies of themselves.
Ezra the Scribe wasn’t exactly a warlord, but he was one of the most influential men in history, since he consolidated the literature that became known as Judaism, thus shaping the entire family of Abrahamic religions as we know them.
“Power == warlording” is, in my opinion, an overly simplistic answer.
Every one may begin a war at his pleasure, but cannot so finish it. A prince, therefore, before engaging in any enterprise should well measure his strength, and govern himself accordingly; and he must be very careful not to deceive himself in the estimate of his strength, which he will assuredly do if he measures it by his money, or by the situation of his country, or the good disposition of his people, unless he has at the same time an armed force of his own. For although the above things will increase his strength, yet they will not give it to him, and of themselves are nothing, and will be of no use without a devoted army. Neither abundance of money nor natural strength of the country will suffice, nor will the loyalty and good will of his subjects endure, for these cannot remain faithful to a prince who is incapable of defending them. Neither mountains nor lakes nor inaccessible places will present any difficulties to an enemy where there is a lack of brave defenders. And money alone, so far from being a means of defence, will only render a prince the more liable to being plundered. There cannot, therefore, be a more erroneous opinion than that money is the sinews of war. This was said by Quintus Curtius in the war between Antipater of Macedon and the king of Sparta, when he tells that want of money obliged the king of Sparta to come to battle, and that he was routed; whilst, if he could have delayed the battle a few days, the news of the death of Alexander would have reached Greece, and in that case he would have remained victor without fighting. But lacking money, and fearing the defection of his army, who were unpaid, he was obliged to try the fortune of battle, and was defeated; and in consequence of this, Quintus Curtius affirms money to be the sinews of war.
This opinion is constantly quoted, and is acted upon by princes who are unwise enough to follow it; for relying upon it, they believe that plenty of money is all they require for their defence, never thinking that, if treasure were sufficient to insure victory, Darius would have vanquished Alexander, and the Greeks would have triumphed over the Romans; and, in our day, Duke Charles the Bold would have beaten the Swiss; and, quite recently, the Pope and the Florentines together would have had no difficulty in defeating Francesco Maria, nephew of Pope Julius II., in the war of Urbino. All that we have named were vanquished by those who regarded good troops, and not money, as the sinews of war. Amongst other objects of interest which Crœsus, king of Lydia, showed to Solon of Athens, was his countless treasure; and to the question as to what he thought of his power, Solon replied, “that he did not consider him powerful on that account, because war was made with iron, and not with gold, and that some one might come who had more iron than he, and would take his gold from him.” When after the death of Alexander the Great an immense swarm of Gauls descended into Greece, and thence into Asia, they sent ambassadors to the king of Macedon to treat with him for peace. The king, by way of showing his power, and to dazzle them, displayed before them great quantities of gold and silver; whereupon the ambassadors of the Gauls, who had already as good as signed the treaty, broke off all further negotiations, excited by the intense desire to possess themselves of all this gold; and thus the very treasure which the king had accumulated for his defence brought about his spoliation. The Venetians, a few years ago, having also their treasury full, lost their entire state without their money availing them in the least in their defence.
-- Niccolò Machiavelli
Certainly doesn’t look like that to me. Obama, Putin, the Chinese Politburo—none of them are amongst the richest people in the world.
Influential (especially historically) and powerful are very different things.
It’s not an answer, it’s a definition. Remember, we are talking about “power-hungry humans” whose attempts to achieve power tend to end badly. These power-hungry humans do not want to be remembered by history as “influential”, they want POWER—the ability to directly affect and mold things around them right now, within their lifetime.
Putin is easily one of the richest in Russia, as are the Chinese Politburo in their country. Obama, frankly, is not a very powerful man at all, but rather the public-facing servant of the powerful class (note that I said “class”, not “men”; there is no Conspiracy of the Malfoys in a neoliberal capitalist state, and there needn’t be one).
Historical influence? Yeah, ok. Right-now influence versus right-now power? I don’t see the difference.
I don’t think so. “Rich” is defined as having property rights in valuable assets. I don’t think Putin has a great deal of such property rights (granted, he’s not middle-class either). Instead, he can get whatever he wants and that’s not a characteristic of a rich person, it’s a characteristic of a powerful person.
To take an extreme example, was Stalin rich?
But let’s take a look at the five currently-richest men (according to Forbes): Carlos Slim, Bill Gates, Amancio Ortega, Warren Buffett, and Larry Ellison. Are these the most *powerful* men in the world? Color me doubtful.
Well, Carlos Slim seems to have the NYT in his pocket. That’s nothing to sneeze at.
A lot of the money of rich people is hidden in complex offshore accounts and not easily visible to a company like Forbes. Especially for someone like Putin, it’s very hard to know how much money he has. Don’t assume that it’s easy to see power structures by reading newspapers.
Bill Gates might control a smaller amount of resources than Obama, but he can do whatever he wants with them. Obama is dependent on a lot of people inside his cabinet.
Not according to Bloomberg:
The descendants of Communist China’s so-called Eight Immortals have spawned a new elite class known as the princelings, who are able to amass wealth and exploit opportunities unavailable to most Chinese.
“amass wealth and exploit opportunities unavailable to most Chinese” is not at all the same thing as “amongst the richest people in the world”
You are reading a text that’s carefully written not to make statements that could get it sued for defamation in the UK. It’s the kind of story that inspires cyberattacks on a newspaper.
The context of such an article provides information about how to read such a sentence.
In this case, I believe that money and copies are, in fact, resources and allies. Resources are things of value, of which money is one; and allies are people who support you (perhaps because they think similarly to you). Politicians try to recruit people to their way of thought, which is sort of a partial copy (installing their own ideology, or a version of it, inside someone else’s head), and acquire resources such as television airtime and whatever they need (which requires money).
It isn’t an exact one-to-one correspondence, but I believe that the adverb “roughly” should indicate some degree of tolerance for inaccuracy.
You can, of course, climb the abstraction tree high enough to make this fit. I don’t think it’s a useful exercise, though.
Power-hungry humans do NOT operate by “making a lot of money and purchasing … resources”. They generally spread certain memes and use force. At least those power-hungry humans implied by the “look how that keeps turning out” part.
Well, it’s a list of four then, not a list of three. It’s still much simpler than “morality is everything humans value”.
You seem to be making the tacit assumption that no one really values morality, and just plays along (in egalitarian societies) because they have to.
Can’t that be done by Oracle AIs?
Let me clarify. My assumption is that “Western liberal meta-morality” is not the morality most people actually believe in, it’s the code of rules used to keep the peace between people who are expected to disagree on moral matters.
For instance, many people believe, for religious reasons or pure Squick or otherwise, that you shouldn’t eat insects, and shouldn’t have multiple sexual partners. These restrictions are explicitly not encoded in law, because they’re matters of expected moral disagreement.
I expect people to really behave according to their own morality, and I also expect that people are trainable, via culture, to adhere to liberal meta-morality as a way of maintaining moral diversity in a real society, since previous experiments in societies run entirely according to a unitary moral code (for instance, societies governed by religious law) have been very low-utility compared to liberal societies.
In short, humans play along with the liberal-democratic social contract because, for us, doing so has far more benefits than drawbacks, from all but the most fundamentalist standpoints. When the established social contract begins to result in low-utility life-states (for example, during an interminable economic depression in which the elite of society shows that it considers the masses morally deficient for having less wealth), the social contract itself frays and people start reverting to their underlying but more conflicting moral codes (ie: people turn to various radical movements offering to enact a unitary moral code over all of society).
Note that all of this also relies upon the fact that human beings have a biased preference towards productive cooperation when compared with hypothetical rational utility-maximizing agents.
None of this, unfortunately, applies to AIs, because AIs won’t have the same underlying moral codes or the same game-theoretic equilibrium policies or the human bias towards cooperation or the same levels of power and influence as human beings.
When dealing with AI, it’s much safer to program in some kind of meta-moral or meta-ethical code directly at the core, thus ensuring that the AI wants to, at the very least, abide by the rules of human society, and at best, give humans everything we want (up to and including AI Pals Who Are Fun To Be With, thank you Sirius Cybernetics Corporation).
I haven’t heard the term. Might I guess that it means an AI in a “glass box”, such that it can see the real world but not actually affect anything outside its box?
Yes, a friendly Oracle AI could spit out blueprints or plans for things that are helpful to humans. However, you’re still dealing with the Friendliness problem there, or possibly with something like NP-completeness. Two cases:
We humans have some method for verifying that anything spit out by the potentially unfriendly Oracle AI is actually safe to use. The laws of computation work out such that we can easily check the safety of its output, but it took such huge amounts of intelligence or computation power to create the output that we humans couldn’t have done it on our own and needed an AI to help. A good example would be having an Oracle AI spit out scientific papers for publication: many scientists can replicate a result they wouldn’t have come up with on their own, and verify the safety of doing a given experiment.
We don’t have any way of verifying the safety of following the Oracle’s advice, and are thus trusting it. Friendliness is then once again the primary concern.
For real-life-right-now, it does look like the first case is relatively common. Non-AGI machine learning algorithms have been used before to generate human-checkable scientific findings.
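The first case rests on a generate-hard/verify-easy asymmetry, which a minimal sketch can make concrete (integer factoring as a stand-in example; the specific numbers are purely illustrative):

```python
def verify_factorization(n, claimed_factors):
    """Checking a claimed answer is cheap: multiply the factors back
    together and confirm each is a non-trivial divisor of n."""
    product = 1
    for f in claimed_factors:
        product *= f
    return product == n and all(1 < f < n for f in claimed_factors)

# Producing the factors of a large semiprime is the expensive direction,
# which an untrusted oracle could do for us; checking its claim is trivial.
print(verify_factorization(62615533, [7907, 7919]))  # True
print(verify_factorization(62615533, [7907, 7907]))  # False
```

The point is only the asymmetry: the oracle does the expensive search, while the human side runs the cheap check, so trusting the oracle’s honesty is unnecessary.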
Programming in a bias towards conformity (Kohlberg level 2) may be a lot easier than EY’s fine-grained Friendliness.
None of that necessarily applies to AIs, but then it depends on the AI. We could, for instance, pluck AIs from virtualised societies of AIs that haven’t descended into mass slaughter.
Congratulations: you’ve now developed an entire society of agents who specifically blame humans for acting as the survival-culling force in their miniature world.
Did you watch Attack on Titan and think, “Why don’t the humans love their benevolent Titan overlords?”?
Well now I have both a new series to read/watch and a major spoiler for it.
Don’t worry! I’ve spoiled nothing for you that wasn’t apparent from the lyrics of the theme song.
They’re doing it to themselves. We wouldn’t have much motivation to close down a VR that contained survivors. ETA: We could make copies of all involved and put them in solipsistic robot heavens.
...And that way you turn the problem of making an AI that won’t kill you into one of making a society of AIs that won’t kill you.
If Despotism failed only for want of a capable benevolent despot, what chance has Democracy, which requires a whole population of capable voters?
It requires a population that’s capable cumulatively, it doesn’t require that each member of the population be capable.
It’s like comparing a command economy with a free economy and asking: if the dictator in the command economy doesn’t know how to run an economy, how can each consumer in a free economy know how to run one? They don’t, individually, but as a group they produce a better economy than the dictatorship does.
Democracy has nothing to do with capable populations. It definitely has nothing to do with the median voter being smarter than the average politician. It’s just about giving the population some degree of threat to hold over politicians.
“Smarter” and “capable” aren’t the same thing. Especially if “more capable” is interpreted to be about practicalities: what we mean by “more capable” of doing X is that the population, given a chance, is more likely to do X than politicians are. There are several cases where the population is more capable in this sense. For instance, the population is more capable of coming up with decisions that don’t preferentially benefit politicians.
Furthermore, the median voter being smarter and the voters being cumulatively smarter aren’t the same thing either. It may be that an average individual voter is stupider than an average individual politician, but when accumulating votes the errors cancel out in such a manner that the voters cumulatively come up with decisions that are as good as the decisions that a smarter person would make.
I’m increasingly of the opinion that the “real” point of democracy is something entirely aside from the rhetoric used to support it … but you of all people should know that averaging the estimates of how many beans are in the jar does better than any individual guess.
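The bean-jar claim is easy to check numerically. Below is a minimal simulation of the error-cancellation argument; the true count, the noise level, and the number of guessers are all arbitrary illustrative assumptions, and each guesser is modeled as unbiased but noisy:

```python
import random

random.seed(0)

TRUE_COUNT = 1000          # actual beans in the jar (assumed)
NOISE = 300                # per-guesser error spread (assumed)
N_GUESSERS = 500

# Each guess is the true count plus independent Gaussian noise.
guesses = [TRUE_COUNT + random.gauss(0, NOISE) for _ in range(N_GUESSERS)]

mean_guess = sum(guesses) / len(guesses)
aggregate_error = abs(mean_guess - TRUE_COUNT)
mean_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"error of the averaged guess:   {aggregate_error:.1f}")
print(f"typical individual error:      {mean_individual_error:.1f}")
```

Under these assumptions the averaged guess beats the typical individual by roughly a factor of sqrt(N), which is the statistical content of "the errors cancel out" in the comment above. The caveat, of course, is that real voters' errors are correlated, so the cancellation is weaker than this independent-noise toy suggests.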
Systems with humans as components can, under the right conditions, do better than those humans could do alone; several insultingly trivial examples spring to mind as soon as it’s phrased that way.
Is democracy such a system? Eh.
Democracy requires capable voters in the same way capitalism requires altruistic merchants.
In other words, not at all.
Could you clarify? Are you saying that for democracy to exist it doesn’t require capable voters, or that for democracy to work well that it doesn’t?
In the classic free-market argument, merchants don’t have to be altruistic to accomplish the general good, because the way to advance their private interest is to sell goods that other people want. But that doesn’t generalize to democracy, since there isn’t trading involved in democratic voting.
See here
However there is the question of what “working well” means, given that humans are not rational and satisfying expressed desires might or might not fall under the “working well” label.
Ah, I see. You’re just saying that democracy doesn’t stop happening just because voters have preferences I don’t approve of. :)
Actually, I’m making a stronger claim—voters can screw themselves up in pretty serious fashion and it will still be full-blown democracy in action.
The grandparent is wrong, but I don’t think this is quite right either. Democracy roughly tracks the capability (at the very least in the domain of delegation) and preference of the median voter, but in a capitalistic economy you don’t have to buy services from the median firm. You can choose to only purchase from the best firm or no firm at all if none offer favorable terms.
In equilibrium, the average consumer buys from the average firm; otherwise it doesn’t stay average for long.
However the core of the issue is that democracy is a mechanism, it’s not guaranteed to produce optimal or even good results. Having “bad” voters will not prevent the mechanism of democracy from functioning, it just might lead to “bad” results.
“Democracy is the theory that the common people know what they want, and deserve to get it good and hard.” — H. L. Mencken
The median consumer of a good purchases from (somewhere around) the median firm selling a good. That doesn’t necessarily aggregate, and it certainly doesn’t weigh all consumers or firms equally. The consumers who buy the most of a good tend to have different preferences and research opportunities than average consumers, for example.
You could get similar results in a democracy, but most democracies don’t really encourage it: most places emphasize voting regardless of knowledge of a topic, and some jurisdictions mandate it.
You say that like it’s a bad thing. I am not multiplying by N the problem of solving and hardwiring Friendliness. I am letting them sort it out for themselves. Like an evolutionary algorithm.
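The selection idea can be sketched in a few lines. This is a toy model, not a proposal: agents are reduced to a single "cooperativeness" score, and the survival rule (a whole society is discarded if any member falls below a threshold) is an assumption chosen purely for illustration:

```python
import random

random.seed(1)

SOCIETY_SIZE = 20
N_SOCIETIES = 1000
THRESHOLD = 0.05  # minimum cooperativeness tolerated (assumed)

def run_society(size=SOCIETY_SIZE):
    """Simulate one society as a list of agent cooperativeness scores."""
    return [random.random() for _ in range(size)]

def survives(society, threshold=THRESHOLD):
    # A society "descends into mass slaughter" (and is discarded)
    # if its least cooperative member is below the threshold.
    return min(society) > threshold

societies = [run_society() for _ in range(N_SOCIETIES)]
surviving = [s for s in societies if survives(s)]

# Candidate AIs are then plucked only from the surviving societies.
print(f"{len(surviving)} of {N_SOCIETIES} societies survived")
```

Even in this toy version the objection upthread is visible: the selection pressure is applied by whoever discards the failed societies, so the surviving agents' "fitness" is defined relative to the culler, which is exactly the resentment problem raised above.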
Well, how are you going to force them into a society in the first place? Remember, each individual AI is presumed to be intelligent enough to escape any attempt to sandbox it. This society you intend to create is a sandbox.
(It’s worth mentioning now that I don’t actually believe that UFAI is a serious threat. I do believe you are making very poor arguments against that claim that merit counter-arguments.)
I am assuming they are seeds, not superintelligences.