Should other large human organizations like governments and some religions also count as UFAIs?
Yes, I find it quite amusing that some people of a certain political bent refer to “corporations” as superintelligences, UFAIs, etcetera, and thus insist on diverting marginal efforts that could have been directed against a vastly underaddressed global catastrophic risk to yet more tugging on the same old rope that millions of other people are pulling on, based on their attempt to reinterpret the category-word; and yet oddly enough they don’t think to extend the same anthropomorphism of demonic agency to large organizations that they’re less interested in devalorizing, like governments and religions.
Maybe those people are prioritising the things that seem to affect their lives? I can certainly see exactly the same argument about government or religion as about corporations, but currently the biggest companies (the Microsofts and Sonys and their like) seem to have more power than even some of the biggest governments.
There is also the issue of legal personality, which applies to corporations in a way it does not straightforwardly apply to governments or religions.
The corporation actually seems to me a great example of a non-biological, non-software optimization process, and I’m surprised at Eliezer’s implicit assertion that the only significant difference between corporations, governments, and religions, with respect to their capacity to be unfriendly optimization processes, is that people of a certain political bent are biased to think about corporations differently than about other institutions like governments and religions.
I think such folks are likely to trust governments too much. They’re more apt to oppose specific religious agendas than to oppose religion as such, and I actually think that’s about right most of the time.
Probably.
Though I used the term UFAI more for emotional impact than out of belief in its accuracy. We shouldn’t assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. That’s a rhetorical flourish, not a documented fact.
Neither; it’s the conclusion of a logical argument (which is, yes, weaker than a documented fact).
Nick, I disagree. You are saying there is a logical argument that concludes such AIs will be unfriendly with 100% probability. That just isn’t true, or even close to true.
Furthermore, even if there were an argument using these concepts that concluded something with 100% probability, the concepts of UFAI and FAI are not well-defined enough to draw the conclusion above.
I think you’re using the word “assume” here to mean something more like, “We should not build AIs without FAI methodology.” That’s a very, very different statement! That’s a conclusion based on expectation-maximization over all possible outcomes. What I am saying is that we should not assume that, in all possible outcomes, the AI comes out unfriendly.
No, Nick is not saying that.
Yes, he is. He said there is a logical argument that concludes that we should assume that every AI not designed using rigorous and deliberate FAI methodology is a UFAI. “Assume” means “assign 100% probability”. What other meaning did you have in mind?
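To see why that reading is so strong, here is a minimal Bayes-update sketch in Python (the prior and likelihoods are invented purely for illustration): a prior of exactly 1.0 can never be moved by any evidence, which is what makes outright assumption such a costly commitment.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule for a binary hypothesis."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Honest prior: strong but not certain belief that the AI is unfriendly.
print(bayes_update(0.95, likelihood_if_true=0.01, likelihood_if_false=0.99))
# ~0.161: strong contrary evidence moves the estimate substantially.

# "Assume" as assign-probability-1: the same evidence changes nothing.
print(bayes_update(1.0, likelihood_if_true=0.01, likelihood_if_false=0.99))
# 1.0: a prior of exactly 1 is immune to any evidence whatsoever.
```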
Nothing indicates a rhetorical flourish like the phrase ‘rhetorical flourish’.
Why not? It’s an assumption which may be slightly overcautious, but I would far rather be slightly overcautious than increase the risk that an AI is going to smiley-tile the universe. Until we have a more precise idea of which AIs not designed using rigorous and deliberate FAI methodology are not UFAIs, I see no reason to abandon the current hypothesis.
Because it doesn’t quite match reality. E.g., charitable corporations can behave pathologically (falling prey to the Iron Law of Institutions), but they are generally qualitatively less unFriendly than the standard profit-making corporation.
If you believe it is overcautious, then you believe it is wrong. If you are worried about smiley-tiling, then you get the right answer by assigning the right utility to that outcome, not by intentionally biasing your decision process.
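A minimal sketch of that point, with made-up probabilities and utilities (none of these numbers are anyone’s actual estimates): an honest, less-than-certain credence in unfriendliness, combined with a suitably enormous disutility on the tiling outcome, already makes the expected-utility maximizer refuse to build; there is no need to bias the probability up to 1.

```python
# Illustrative, made-up numbers: honest credence that a non-FAI-methodology
# AI is unfriendly, and utilities for each outcome.
p_unfriendly = 0.9          # honest estimate, deliberately not 1.0
u_friendly_ai = 1e6         # utility if the AI turns out friendly
u_smiley_tiling = -1e12     # catastrophic disutility of tiling the universe
u_dont_build = 0.0          # status quo

def expected_utility(action):
    if action == "build":
        return p_unfriendly * u_smiley_tiling + (1 - p_unfriendly) * u_friendly_ai
    return u_dont_build

best = max(["build", "dont_build"], key=expected_utility)
print(best, expected_utility("build"))
# dont_build -899999900000.0: the honest probability plus the right
# utility already yields the cautious choice; no biased "assume" needed.
```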
I say ‘may be slightly overcautious’ contingent on it being wrong: I’m saying that if it is wrong, it’s the sort of wrong that results in less loss of utility than being wrong in the other direction would.
If you’re an agent with infinite computing power, you can investigate all hypotheses further to make sure that you’re right. Humans, however, are forced to devote time and effort to researching those things which are likely to yield utility, and I think that the current hypothesis sounds reasonable unless you have evidence that it is wrong.
The erring on the side of caution only enters when you have to make a decision. Your pre-action estimate should be clean of this.
You should not err on the side of caution if you are a Bayesian expectation-maximizer!
But I think what you’re getting at, which is the important thing, is that people say “Assume X” when they really mean “My computation of value times probability over all possible outcomes indicates X is likely, and I’m too lazy to remember the details, or I think you’re too stupid to do the computation right; so I’m just going to cache ‘assume X’ and repeat it from now on”. They ruin their analysis because they’re lazy, and don’t want to do more analysis than they would need in order to decide what action to take if they had to make the choice today. Then the lazy analysis, done with poor information, becomes dogma. As in the example above.
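To make the estimate/decision separation concrete, here is a hedged Python sketch (the probability and utilities are illustrative assumptions): the belief is fixed by the evidence and never touched, and caution enters only at the decision step, where the utilities live.

```python
# Sketch: keep the probability estimate separate from the decision rule.
# All numbers are illustrative assumptions.
p_bad_outcome = 0.3  # pre-action estimate, fixed by evidence alone

def decide(p_bad, u_good, u_bad, u_abstain=0.0):
    """Pick the action with the higher expected utility."""
    eu_act = (1 - p_bad) * u_good + p_bad * u_bad
    return "act" if eu_act > u_abstain else "abstain"

# Low stakes: the same belief licenses acting...
print(decide(p_bad_outcome, u_good=10, u_bad=-5))     # act  (EU = 5.5)
# High stakes: ...and the same belief licenses abstaining.
print(decide(p_bad_outcome, u_good=10, u_bad=-1000))  # abstain (EU = -293)
# The estimate p_bad_outcome never changed; only the utilities did.
```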
I downvoted this sentence.
Instead of downvoting a comment for referring to another comment that you disagree with, I think you should downvote the original comment.
Better yet, explain why you downvoted. Explaining what you downvoted is going halfway, so I half-appreciate it.
I can’t express strongly enough my dismay that here, on a forum allegedly devoted to rationality, people still strongly believe in making some assumptions without justification.
Weasel words used to convey unnecessary insult.
Proof that conformist mindless dogma is alive and well at LW...
Funny you should mention that. Just yesterday I added to my list of articles-to-write one titled “Religions as UFAI”. In fact, I think the comparison goes much deeper for religions than it does for corporations.
Some corporations may become machine intelligences. Religions—probably not so much.