It’s a very good rule with journalists to assume at least some of them will read something more uncharitably than you can imagine. This...
creating a kind of “super-committee” of the AI equivalents of, say, Edison, Bill Clinton, Plato, Oprah, Einstein, Caesar, Bach, Ford, Steve Jobs, Goebbels, Buddha and other humans superlative in their respective skill-sets
...will very likely net you at least one vocal journalist saying that you’re ‘comparing’ Goebbels to Buddha and Bill Clinton and Oprah, that you think of Goebbels as a moral paragon (for what other reason is ‘Buddha’ up there), and that you’re a god damned nazi and should die. Since it’s totally inessential to your point here, I think you should remove Clinton, Oprah, and Goebbels. Don’t make use of living or evil people.
You should say that the AI can be skilled without being nice, but you should (and do) say that directly. It’s not necessary (and not a good idea) to hint at that elsewhere in such a way that you can be confused with a nazi. Journalists are unfriendly intelligences in the sense that they will interpret you in ways that aren’t predictable, and maximize values unrelated to your interests.
What if I interwove more nasty people into the list? The difference between morality and competence is something that I’d like to imply (and there is little room to state it explicitly).
I dunno. Goebbels is evil. I don’t think you are trying to say (correct me if I’m wrong here) that the problem with haphazard AI is evil. The problem is that it won’t be a moral being at all. All the people on that list are morally complicated individuals. My first reaction to the idea that they would be networked together is that they would probably get into some heated arguments. I really just don’t see Einstein and Goebbels getting along on any kind of project, and if I’m not imagining them with their moral qualities attached, then what’s the point of naming them in particular?
Maybe this is a workable alternative:
The AI could also make use of its unique, non-human architecture. If it existed as pure software, it could copy itself many times, training each copy at accelerated computer speed, and networking those copies together. Imagine a body of researchers many times bigger than the world’s entire scientific community, working without rest, communicating perfectly and instantaneously, and without regard for tenure. It could continue copying itself without limit, creating millions or billions of copies, if it needed large numbers of minds to brute-force a solution to any particular problem.
In any case, I wouldn’t put too much emphasis on the orthogonality thesis in this document. That’s sort of a fancy argument, and not one that’s likely to come up talking to the public. Movies kind of took care of that for you.
Goebbels was there deliberately—I wanted to show that the AI was competent, while reminding that it need not be nice...
Will probably remove Oprah.
Thanks! Will think about that...