I really liked Bostrom’s unfinished fable of the sparrows. And endnote #1 from the Preface is cute.
I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who’s never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn’t communicate all the important insights, but it points in the right direction.
EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn’t come anywhere near getting the intended message. They were much more likely to interpret it as about the “futility of subjugating nature to humanity’s whims”. This is worrying for our ability to make the case to laypeople.
It’s an interesting story, but I think in practice the best way to learn to control owls would be to precommit to kill the young owl before it got too large, experiment with it, and through experimenting with and killing many young owls, learn how to tame and control owls reliably. Doing owl control research in the absence of a young owl to experiment on seems unlikely to yield much of use—imagine trying to study zoology without having any animals or botany without having any plants.
But will all the sparrows be so cautious?
Yes it’s hard, but we do quantum computing research without any quantum computers. Lampson launched work on covert channel communication decades before the vulnerability was exploited in the wild. Turing learned a lot about computers before any existed. NASA does a ton of analysis before they launch something like a Mars rover, without the ability to test it in its final environment.
True in the case of owls, though in the case of AI we have the luxury and challenge of making the thing from scratch. If all goes correctly, it’ll be born tamed.
...Okay, not all analogies are perfect. Got it. It’s still a useful analogy for getting the main point across.