I would also expect socialist economic policies to increase the chances of successful FAI, for two reasons. First, they would decrease incentives to produce technological advancements that could lead to UFAI. Second, they would make it easier to devote resources to activities that do not yield a short-term personal profit, such as FAI research.
Socialist economic policies, perhaps yes. On the other hand, full-blown socialism...
How likely is it that a socialist government would insist on its party line being hardcoded into the AI's values, and what would the likely consequences be? How likely is it that the scientists working on the AI would be selected for their rationality, as opposed to their loyalty to the regime?
How does anything in my comment suggest that I think brutal dictatorships increase the chance of successful FAI? I only mentioned socialist economic policies.
I don’t think you suggested that; I just wanted to forestall a possible connotation (one I think some people, including me, are likely to draw).
Note: I also didn’t downvote your comment, since I think it is reasonable, so someone else probably made that interpretation, perhaps influenced by my comment. Sorry for that.
That said, I don’t think a regime must be a brutal dictatorship to insist that its values be hardcoded into the AI's values. I can imagine nice people insisting that you hardcode in the Universal Declaration of Human Rights, religious tolerance, diversity, tolerance of minorities, preservation of cultural heritage, preservation of nature, etc. Actually, I imagine that most people would consider Eliezer a less reliable person to work on Friendly AI than someone who professes all the proper applause lights.
If a government pursued its own AGI project, that could be a danger, but not hugely more so than private AI work. In order to be much more threatening, it would have to monopolize AI research, so that organizations like MIRI couldn’t exist. Even then, FAI research would probably be easier to do in secret than making money off of AI research (the primary driver of UFAI risk) would be.