Just read Bostrom’s Deep Utopia (though not too carefully). The book is structured with about half being transcripts of fictional lectures given by Bostrom at Oxford, about a quarter being stories about various woodland creatures striving to build a utopia, and another quarter being various other vignettes and framing stories.
Overall, I was a bit disappointed. The lecture transcripts touch on some interesting ideas, but Bostrom’s style is generally one which tries to classify and taxonomize rather than characterize (e.g. he has a long section trying to analyze the nature of boredom). I think this doesn’t work very well when describing possible utopias, both because they’ll be so different from today that it’s hard to extrapolate many of our concepts to that point, and because the hard part is making them viscerally compelling.
The stories and vignettes are somewhat esoteric; it’s hard to extract straightforward lessons from them. My favorite was a story called The Exaltation of ThermoRex, about an industrialist who left his fortune to the benefit of his portable room heater, leading to a group of trustees spending many millions of dollars trying to figure out (and implement) what it means to “benefit” a room heater.
Tangentially related (spoilers for Worth the Candle):
I think it’d be hard to do a better cohesive depiction of Utopia than the end of Worth the Candle by A Wales. I mean, I hope someone does do it, I just think it’ll be challenging to do!
If you haven’t read CEV, I strongly recommend doing so. It resolved some of my confusions about utopia that were unresolved even after reading the Fun Theory sequence.
Specifically, I had an aversion to the idea of being in a utopia because “what’s the point, you’ll have everything you want”. The concrete pictures that Eliezer sketches in the CEV document do engage with this confusion: they gesture at the idea that we can have a utopia where the AI does not simply make things easy for us, but instead just puts guardrails onto our reality, such that we don’t die, for example, but still have the option to struggle to do things by ourselves.
Yes, the Fun Theory sequence tries to communicate this point, but it didn’t make sense to me until I could conceive of an ASI singleton that could actually simply not help us.
I dropped the book within the first chapter. For one, I found the way Bostrom opened the chapter to be very defensive and self-conscious. I imagine that even Yudkowsky wouldn’t start a hypothetical 2025 book with fictional characters caricaturing him. Beyond that, I felt like I didn’t really know what the book was covering in terms of subject matter, and I wasn’t convinced it was interesting enough to continue down the meandering path Nick Bostrom seemed to have laid out before me.
Eliezer’s CEV document and the Fun Theory sequence were significantly more pleasant experiences, based on my memory.
Strong agree, also I spoiler-texted it, hope you don’t mind.
Any opinions on how it compares to the Fun Theory sequence? (Though that’s less about all of utopia, it still covers a significant part of it.)