After reading this, I am convinced that any AI requiring galactic-scale production would have a “minimum interestingness” value somewhere significantly above boring.
I hope this is not just a rationalization on my part, but I think what Yudkowsky meant was not that it would be impossible to write an interesting story about it, but that it would not be interesting from within that universe: not only for the people who will die, but also for the paperclip maximizer itself, which is unlikely to actually experience emotions. And here I agree.
Guess: This is your response to Eliezer’s statement that a universe ruled by a paperclipper would be boring.
Edit (made after Yitz’s response, but minor enough that this should be okay): I am referring to this: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=phRBqkZWJomdpQoYq
Ah yes, that’s plausibly true