Greg Egan’s Orthogonal series, for the weaker criterion. Most of the characters are scientists, and you get to see them struggling with problems and playing with new ideas that turn out not to work. That’s not the only rationalist aspect, though:

[about the future inhabitants of a generation ship] “They won’t feel as though they’re falling, they’ll feel the way they always feel. Only the old books will tell them there was something called ‘falling’ that felt the same.”

[while having to fast] “She… began [working through] the maze of obstructions that made it impossible to [open the cupboard] unless she was fully awake. Halfway through, she paused. Breaking the pattern [3 days eating, 1 day fasting] would set a precedent, inviting her to treat every fast day as a potential exception. Once the behaviour that she was trying to make routine and automatic had to be questioned over and over… [it would be] a dozen times harder.”

When launching their rocket, they make plans for every survivable catastrophe they can imagine, and practice them. They explain their ideas to each other: “Why aren’t we testing this on voles?” “We would need smaller needles, and we don’t have those.” One person talks about how hard it is to feel urgency about a danger whose evidence is not immediate, even though civilization is at stake and time is limited; perhaps, she says, she is expecting too much of her animal brain. The rationality is not constant, but it is there and noticeable. The characters really do think about things, and we get to see it.
(Postscript: This is my first comment. I wish I could have described more abstractly what the characters were doing, and why I think it counts as rationality. Even if my evidence fails to convince you that the books should be called “rationalist fiction”, you should still consider reading them; I think they are good books in their own right. I have not yet read the third in the trilogy, but would be surprised if it were much worse than the other two.)
To make the underlying math more explicit (if still handwavy), I see the thickness as the derivative of the parent with respect to the child; this is why we can multiply thicknesses together along a path (the chain rule; I write the formula out after these points). This perspective helps us see a few important things:
The thicknesses change over time and are based on marginal importance rather than absolute importance. Sometimes they change due to random external factors, but often due to your own actions—if Danslist now has a truly excellent network thanks to your efforts, improving it further may not be the most important thing anymore, even if having it is still the most important thing to the company’s success.
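One way to write the marginal/absolute distinction in the derivative language above (my notation, nothing rigorous): the absolute importance of a node $x$ is something like $V(x) - V(0)$, the value lost if it vanished entirely, while the thickness is $\partial V / \partial x$ evaluated where you are right now. With diminishing returns, the derivative can shrink toward zero even while the absolute term stays enormous; that is exactly the position Danslist’s network is in once you have already made it excellent.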
The thickness has units of something like [effect]/[work], where the unit of work is something like a person-hour. This means the thicknesses are based not just on importance, but on tractability; if transitioning to MySQL suddenly got 1000 times easier (resp. harder), the corresponding line would become 1000 times thicker (resp. thinner). In this example, even 1000 times thicker may not be enough to make the corresponding leaf relevant, but the general idea is important.
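To spell the units out (my reading of the above, so treat the details as hand-waving): measure a leaf by the person-hours put into it, and each internal node by whatever natural quantity it has; then the product along a path telescopes,

$$\frac{[\text{root effect}]}{[\text{child}_1]} \cdot \frac{[\text{child}_1]}{[\text{child}_2]} \cdots \frac{[\text{child}_{n-1}]}{[\text{person-hour}]} = \frac{[\text{root effect}]}{[\text{person-hour}]},$$

so what you end up comparing across leaves is root-level effect per person-hour of work, which is why tractability enters on exactly the same footing as importance.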
It’s not really a tree. Mental health is important for happiness directly. But it’s also important via many subgoals, since poor mental health can make working on other things more difficult. When considering a node’s importance, you have to sum over all paths from the root to that node, multiplying thicknesses along each one. A lot of the time the tree is a good approximation (the gym is a separate realm from the office), but there are some things that are so important to everything that they demand this amendment.
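Putting the pieces together, the formula I have in mind (still handwavy, since the real graph is fuzzier than this) is just the multivariable chain rule applied to the goal graph: the importance of a node $n$ is

$$\text{importance}(n) \;=\; \sum_{\text{paths } p\,:\,\text{root} \to n} \ \prod_{(u,v) \in p} \frac{\partial u}{\partial v},$$

i.e. multiply the thicknesses along each path, then sum over all the paths from the root to $n$.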
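And here is a minimal code sketch of the same computation, in case that is clearer. The node names echo the examples above (Danslist, the MySQL migration, mental health), but the graph and the thickness numbers are entirely made up for illustration:

```python
# Toy goal graph: edges[parent][child] = d(parent)/d(child), the "thickness"
# of the line from parent to child. All numbers are invented.
edges = {
    "happiness": {"career": 0.4, "mental_health": 0.5},
    "career": {"danslist_success": 0.6, "mental_health": 0.3},
    "danslist_success": {"network_quality": 0.2, "mysql_migration": 0.001},
}

def importance(root, target, graph):
    """Sum, over all root->target paths, of the product of edge thicknesses
    (the total derivative of root with respect to target)."""
    if root == target:
        return 1.0
    return sum(
        thickness * importance(child, target, graph)
        for child, thickness in graph.get(root, {}).items()
    )

# mental_health counts once directly and once via career: 0.5 + 0.4*0.3 = 0.62
print(importance("happiness", "mental_health", edges))

# The MySQL leaf is thin; even a 1000x tractability boost leaves it modest:
base = importance("happiness", "mysql_migration", edges)  # 0.4 * 0.6 * 0.001
print(base, base * 1000)
```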