Er, no. Because we don’t (so far as I know) have any reason to expect that if we somehow produce a problematically powerful AI, anything like an “all-seeing Eye” will splat it.
(Why on earth would you think my reason for saying what I said was “because it didn’t go the way [I] liked”? It seems a pointlessly uncharitable, as well as improbable, explanation.)
Because there are plenty of all-seeing eye superpowers in this world. Not everyone is convinced that the very real, very powerful security regimes around the world would be suddenly left inept when the opponent is a computer instead of a human being.
My comment didn’t contribute any less than yours to the discussion, which is rather the point. The validity of an allegory depends on the accuracy of the setup and rules, not the outcome. You seemed happy to engage until it was pointed out that the outcome was not what you expected.
Those “very real, very powerful security regimes around the world” are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.
And if you underestimate how much of a threat a mere “computer” could be, read the “Friendship is Optimal” stories.
I’ve read the sequences on friendliness here and find them completely unconvincing: lacking in evidence and one-sided in their view of the problem. I’m not about to start generalizing from fictional evidence.
I’m not sure I agree with your assessment of the examples you give. There are billions of people who would like to live in first-world countries but don’t. I think immigration controls have been particularly effective if only a few million people are crossing borders illegally in a world of 7 billion. And most of the immigration issues the world faces today, such as the Syrian refugees, are about asylum-seekers who are in fact being permitted entry, just in larger numbers than the secondary systems were designed to support. Also, the failure modes are different. If you let the wrong person in, what happens? Statistically speaking, nothing of great consequence.
Crime waves? We are currently at one of the lowest periods of violence per capita. I think the powers that be have been doing quite a good job actually.
there are plenty of all-seeing eye superpowers in this world
Oh, I see. OK then.
My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren’t much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped. So the threats you have in mind aren’t in the “balrog” category at all, for me.
You seemed happy to engage until it was pointed out that the outcome was not what you expected.
My first comment in the balrog discussion was the one you took exception to. The point at which you say I stopped being “happy to engage” is the point at which I started engaging. The picture you’re trying to paint is literally the exact opposite of the truth.
My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren’t much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped.
I don’t think that’s the case. A superintelligence doesn’t have to be balrog-like to advance to the point where it’s too big to fail and thus not easily regulated by the government.
EY et al. focus more on the threat of a superintelligence that can improve itself fast and acquire a lot of power in a short amount of time, but that’s not the only concerning scenario.
When a bank like HSBC can launder drug and terrorist money without any of its officials going to prison for it, the amount of control that a government could exert on a big company run by a complex AI might also be quite limited.
When the superintelligence becomes good enough at making money and buying politicians, it doesn’t have to worry so much about government action, and has enough time to grow slowly.
How much does Putin cost? Or the Chinese Politbureau?
You have at least two options: either buy Putin, or hire someone to replace him, whichever is cheaper. It’s not like Putin single-handedly rules his country—he relies on his army, police, secret services, etc. All these institutions probably have many people who would enjoy replacing Putin at the top of the pyramid. Throw in some extra money (“if you are going to replace Putin, here you have a few extra billions to bribe whoever needs to be bribed to help you with the coup”).
Or the Chinese Politbureau?
I am not familiar with the internal structure of the Chinese Politbureau, but I would guess this one is easier. There are probably competing factions, so you will support the one more friendly to you.
But there is always the option to ignore both Putin and the Chinese Politbureau, and upload yourself to a computer center built in some other country.
Correct, and yet Putin rules with hardly a challenge to his supremacy.
Money is not very useful when you’re dead.
If you are looking at an AGI that manages investment at a company like Goldman Sachs in an effective way, it doesn’t even need to know directly how to buy politicians.
If it makes a lot of money for Goldman Sachs, there are other people at Goldman who can do the job of buying politicians.
When Ray Dalio of Bridgewater Associates wants to build an AI that can replace him after he retires, it’s not clear whether any government can effectively regulate it.
My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren’t much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped.
Ah, now we are at the crux of the issue. That is not generally agreed upon, at least not outside of the Yudkowsky-Bostrom echo chamber. You’ll find plenty of hard-takeoff skeptics even here on LessWrong, let alone in wider AI circles, where hard-takeoff scenarios are given much less credence.
I think you have misunderstood me. I was not intending to say that hard-takeoff scenarios are likely (for what it’s worth, I don’t think they are) but that they are what was being analogized to balrogs here.
(Of course a slow-takeoff controllable-by-governments superintelligence can still pose a threat—e.g., some are worried about technological unemployment, or about those who own the AI(s) ending up having almost all the world’s resources. But these are different, not very balrog-like, kinds of threat.)
Only on LW: disputes about ways in which an AI is like (or unlike) a balrog X-D
Well, we’ve had a basilisk already. Apparently we’re slowly crawling backwards through alphabetical order. Next up, perhaps, Bahamut or Azathoth.
Azathoth, check.
Is there a directory of the gods and monsters somewhere? If not, I think I’ll start one.