So far as SIAI is concerned, I have to say that the storylike qualities of the situation have provided no bonus at all to our PR rolls, just a penalty we have to be careful to avoid because of all the people who perceptually recognize it as a mere story. In other words, we lose because of all the people trying to compensate for a bias that doesn't actually seem to be there. People who really are persuaded by stories go off and become religious or something; they find people with much better-refined, more attractive stories than ours. Our own target audience, drawn from the remainder, tends to see any assertion classifiable as storylike as forbidden-in-reality, because so many stupid people believe in such things.
And of course much of your story simply and obviously isn't applicable. There will be no robot war; there don't seem to be any hostile human parties trying to bring about the apocalypse either; the question gets resolved in a research basement somewhere, and then there's no final battle one way or the other.
But then if you'd left out the part about the Robot War and the final battle, your opening paragraph wouldn't have sounded as clever. This is also something that happens to us a LOT: people arguing against us insist on mapping the situation very exactly onto a story outline, at the expense of accuracy, so that it looks stupider.
All the bias here seems to be in the overcompensation.
I can see how, for your audience, the story-like qualities would be a minus. On the other hand, I think the story bias has to do with how people cognitively process information and arguments. If you can't tell your mission & strategy as a story, it's a lot harder to get your ideas across, whatever your audience.
The battle was meant to be metaphorical—the battle to ensure that AI is Friendly rather than Unfriendly. And I didn’t say anything about hostile humans—the problem is indifferent humans not giving you resources.
Also, I'm not arguing against SIAI; I just find it amusing how well the futurist sector maps onto a story outline: various protagonists passionate about fighting some great evil that others don't see, trying to build alliances and grow resources before time runs out. You can squiggle, but that's who you are. Instrumental rationality means figuring out how to make the best positive use of that, and how to keep it from biasing you.
By this standard, just about anything worth taking as one’s life work will involve a metaphorical battle.
Actually, I will comment (for the purposes of authenticity, and from the belief that being more transparent about my motivations will increase mutual truth-finding) that while I'm not arguing “against” SIAI, this post does to some degree emerge from my exploring the question of SIAI's organizational instrumental rationality. I have the impression, from a variety of angles and sources, that it's pretty bad. Since I care about SIAI's success, it's one of the things I think about in the background: why, and how you could be more effective.
When discussing SIAI’s instrumental rationality, it’s important to remember what its actual goals are. Speaking of story-bias, it’s all too easy to pattern-match to “organization promoting some cause they think is important”, in which case one easily concludes that SIAI has been a miserable failure because FAI hasn’t become a trendy academic research discipline, and Vice Presidents aren’t making films about paperclip maximizers.
However, the picture changes somewhat if instead you think in terms of the following (more accurate) caricature of SIAI’s actual objectives:
(1) To persuade a dozen or so Putnam Fellows to collaborate with Eliezer on FAI instead of pursuing brilliant careers in academic mathematics;
(2) To dissuade people like Ben Goertzel from trying to build AGI without solving the FAI problem first.
If you look at it like this (still admittedly oversimplified), then yes, SIAI still has a way to go in achieving its goals, but it doesn't seem quite as hopelessly underequipped for the task as one might have thought.
(Disclaimer: I certainly don’t speak for SIAI; my association with the organization is that of a former visitor, i.e. about as loose as it’s possible to get while still having to answer “yes” to the question “Are you, or have you ever been, affiliated with the Singularity Institute for Artificial Intelligence?” if ever called to testify before Congress....)
this post does to some degree emerge from my exploring the question of SIAI's organizational instrumental rationality. I have the impression, from a variety of angles and sources, that it's pretty bad.
I've had similar thoughts. I would be interested in hearing what room for improvement you see in SIAI's organizational instrumental rationality. I have my own thoughts on this (which have evolved somewhat as I've learned more since making my posts about SIAI back in August). Feel free to PM me if you'd prefer to communicate privately.
You’ll want to see my reply to patrissimo above.
It’s actually a disaster story, not a battle story, which is something I’m surprised Patrissimo missed in the opening paragraph.*
*(Possibly because disaster movie protagonists tend to be older, so it wouldn’t fit that bit quite as well)
Your “enemies” are those too shortsighted to realise the true dangers, and your aim is to reveal those dangers to them and save the world.
But as others have stated, with sufficient attention any story can be mapped to a combination of tropes (inverted, subverted and played straight).
Maybe not—but if there are wars to come, they will probably involve many robots.
Quite possibly they will be fighting on both sides.