I don’t think you suggested that; I just wanted to head off a possible connotation (one that I think some people, myself included, are likely to read into it).
Note: I also didn’t downvote your comment (I think it is reasonable), so someone else probably made that interpretation, likely influenced by my comment. Sorry about that.
That said, I don’t think a regime must be a brutal dictatorship to insist that its values be hardcoded into the AI. I can imagine nice people insisting that you hardcode the Universal Declaration of Human Rights, religious tolerance, diversity, tolerance of minorities, preservation of cultural heritage, preservation of nature, etc. Actually, I imagine most people would consider Eliezer less reliable to work on Friendly AI than someone who professes all the proper applause lights.
If a government pursued its own AGI project, that could be a danger, but not hugely more so than private AI work. To become much more threatening, it would have to monopolize AI research so that organizations like MIRI couldn’t exist. Even then, FAI research would probably be easier to do in secret than making money off of AI research (the primary driver of UFAI risk) would be.