I’m more an outsider than a regular participant here on LW, but I have been boning up on rhetoric for work. I’m thrown by this in a lot of ways.
I notice that I’m confused.
Good for private rationality, bad for public rhetoric? What does your diagram of the argument’s structure look like?
As for me, I want this as the most important conclusion in the summary.
But in fact most goals are dangerous when an AI becomes powerful
I don’t get that, because the evidence for this statement comes after it and later on it is restated in a diluted form.
goals that seem safe … can lead to extremely pathological behaviour if the AI becomes powerful
Do you want a different statement as the most important conclusion? If so, which one? If not, why do you believe the argument works best when structured this way? As opposed to, e.g., an alternative that puts the concrete evidence farther up and the abstract statement “Most goals are dangerous when an AI becomes powerful” somewhere towards the end.
Related point: I get frequent feelings of inconsistency when reading this summary.
I’m encouraged to imagine the AI as a super committee of
Edison, Bill Clinton, Plato, Oprah, Einstein, Caesar, Bach, Ford, Steve Jobs, Goebbels, Buddha, etc.
then I’m told not to anthropomorphize the AI.
Or
I’m told the AI’s motivations are what “we actually programmed into it”,
then I’m asked to worry about the AI’s motivation to lie.
Note I’m talking about a rhetorical, a.k.a. surface-level, feeling of inconsistency here.
You seem like a nice guy.
Let’s put on a halo. Isn’t the easiest way to appear trustworthy to first appear attractive?
I was surprised this summary didn’t produce emotions around this cluster of questions:
Who are you?
Do I like you?
Do I respect your opinion?
Did you intend to skip over all that? If so, is it because you expect your target audience already has their answers?
Shut up and take my money!
There are so many futuristic scenarios out there. For various reasons, these didn’t hit me in the gut.
The scenarios painted in the paragraph that starts with
Our society is setup to magnify the potential of such an entity, providing many routes to great power.
are very easy for me to imagine.
Unfortunately, that works against your summary for me. My imagination consistently conjures human beings.
Wall Street banker.
Political lobbyist for an industry that I dislike.
(Nobody comes to mind for the “replace almost every worker in the service sector” scenario.)
Chairman of the Federal Reserve.
Anonymous Eastern European hacker.
The feeling that “these are problems I am familiar with, and my society is dealing with them through normal mechanisms” makes it hard for me to feel your message about novel risks demanding novel solutions. Am I unique here?
Inversely, the scenarios in the next paragraph, the one that starts with
Of course, simply because an AI could be extremely powerful
are difficult for me to seriously imagine. You acknowledge this problem later on, with
Humans don’t expect this kind of behaviour
Am I unique in finding that dismissive and condescending? Is there an alternative phrasing that takes into account my humanity yet still gets me afraid of this UFAI thing? I expect you have all gotten together, brainstormed scenarios of terrifying futures, trotted them out among your target audience, kept the ones that caused fear, and iterated on that a few times. Just want to check that my feelings are in the minority here.
Break any of these rules
I really enjoy Luke’s post here: http://lesswrong.com/lw/86a/rhetoric_for_the_good/
It’s a list of rules. Do you like using lists of rules as springboards for checking your rhetoric? I do. I find my writing improves when I try both sides of a rule that I’m currently following / breaking.
Cheers! One of the issues is tone: I made it more respectable/dull than I normally would have, because the field is already “far out” and I didn’t want to play to that image.
I’ll consider your points (and Luke’s list) when doing the rewrite.
I’d actually make it more ‘dull’ again, with appropriate qualifiers as you go. You throw qualifiers in at the start and end, but in the midst of the argument you make statements like “the AI would” or “the AI will”. Just changing these to “the AI might” or “the AI would probably” may be a more truthful representation of our understanding, but it also has the advantage that it’s all we need to reach the conclusions, and is less likely to turn people off with a statement they disagree with.
Cheers! I may do that.