There is, of course, a third option. The rationalist who sets their sights on something human-scaled instead of humanity-scaled is likely to do very well for themselves.
And so, in some sense, it’s worth examining the scope and effect of wish-fulfillment stories. If I play a lot of video games where I’m the only relevant character, reshaping all of reality around me according to my whims, what does that do to my empathy? My narcissism? My ability to reshape reality, and my satisfaction with the changes I attempt? If I read a lot of books about the lives of software pioneers and their companies, what does that do to my empathy? My narcissism? My ability to reshape reality, and my satisfaction with the changes I attempt? If I read a lot of books about successful relationships, how people work, and how to control myself, what does that do to my empathy? My narcissism? My ability to reshape reality, and my satisfaction with the changes I attempt?
It’s difficult to write a story about start-ups. (Either the idea is good and has been done, so you’re writing history instead of fiction; or good and not done, in which case you should be doing it, not writing about it; or bad, in which case disbelief will be hard to suspend.) But it’s easy to see someone using rationality to turn around their relationship or their life or a school or business.
The author’s problem is twofold: those problems are hard, and those problems are local. Stories tend to go for the cheapest thing: the cheapest/simplest plot is one person punching another, and the cheapest emotional hook is the fate of the world.
But those problems are solvable. And I, as suggested, would love a rationalist story where the hero devotes their time to solving useful, even if limited, problems instead of figuring out the best way to punch someone. You can see this in HP:MoR: compare the chapters where Harry is trying to figure out magic, or convert Draco, to the Azkaban chapters. (I am in the very early stages of starting a rationalist work along these lines; I abandoned fiction writing ~6 years ago and do not expect to be good at it, but we’ll see if I get happy enough with it to show the public.)
Which is why I’m pretty sure Elspeth will succeed where Bella failed.
My thought here is that, for Elspeth, succeeding means having an accurate idea of the scope at which she can change the world. That was Bella’s core failure: her delusions of grandeur. For Elspeth, who has a less useful power, no ultra-rich family with several massively useful witches, and only half-vampire status, defeating the Volturi where Bella failed seems to me impossible (unless she goes with the “well, I’m going to steal a bunch of money, buy a bunch of explosives, and burn Volterra to the ground” plan, which is definitely not the utilitarian way to conduct regime change).
I don’t mind that this lesson, which is of critical importance, is a really painful one. Pain is the best teacher. I mind that Bella didn’t know it beforehand, but it’s a reasonable flaw to give a character (especially if you’re writing for the LW community, apparently). But if Alicorn has the second book narrated by Elspeth and she makes the same mistakes as Bella (especially if she lucks into a win), then I will stop reading in disgust.
It’s difficult to write a story about start-ups. (Either the idea is good and has been done, so you’re writing history instead of fiction; or good and not done, in which case you should be doing it, not writing about it; or bad, in which case disbelief will be hard to suspend.)
Unless it’s been done in the real world but not in the world you’re writing in, in which case you may be Terry Pratchett.
Put in a separate post: I am strongly considering writing a top-level post about the failings of utilitarianism, because I see that as very strongly linked to Bella’s scope failure (the utilitarian reasoning being: the goal is the Volturi gone, therefore I should eradicate the Volturi). I’ll also write it about people, not fictional characters, if that’s a worry.
If you are interested in seeing my thoughts on the matter, vote this up; if uninterested, vote this down. (But not into the negatives, please; my karma is tiny!)
I would strongly prefer that my characters not be used as examples in non-fiction didactic works at least until I announce that I have finished with the story. (I currently expect Radiance to be the last work I do in the universe.)
This is a reply deep in a thread on a relatively old post, so not many people are likely to even see this request. :) If you’re nervous about publishing a top-level post, at least float something on the discussion side. I agree that utilitarianism is severely flawed. My reason is that humans simply don’t have enough computing power to ever implement utilitarianism decently; it would take an entity with orders of magnitude more intellectual strength to be a utilitarian without falling into a Bella spiral.
Deontology that is steered by utilitarian goals and occasionally modified by utilitarian analysis, OTOH, seems very workable and keeps the best of utilitarianism while factoring in human realities (but then, I’ve been plugging Desirism for a while now).
If you’re nervous about publishing a top-level post, at least float something on the discussion side.
That does look like exactly what I was looking for. I had seen people use the pattern I was modeling while reading through comments, which were probably from before the discussion side was implemented.
I abandoned fiction writing ~6 years ago and do not expect to be good at it, but we’ll see if I get happy enough with it to show the public
Sounds interesting; you should put it up somewhere regardless, because A) it can’t be that bad, and B) unless you’ve got a public reading it and demanding more, you’ll very likely never actually go forward with it. :)
I would strongly prefer that my characters not be used as examples in non-fiction didactic works at least until I announce that I have finished with the story.
Understood and anticipated, though I certainly could have been clearer. I would write what I know (anarchism), not what I don’t know (your characters).