Suppose something bad happens to person X, who you care about. The bad thing wasn’t anything you had control over, so you have no reason to feel bad about it. But now you have a chance to help X. Whether you help them or not is something you do have control over, so if you do help them, you should feel good about it.
But suppose that you fail to help them. Now it may or may not be appropriate to feel bad, depending on why you failed. For instance, maybe you are driving to their home, but on the way there your car breaks down. Presuming you hadn't ignored clear warning signs of an impending breakdown or otherwise neglected the car's maintenance, the breakdown wasn't really under your control. It prevents you from helping them, but it still isn't something that you should feel bad about. Feeling bad is a feedback mechanism that teaches you lessons about what you did wrong, and there are no useful lessons to be learned here.
You should only feel bad if you failed because of something that was under your control. Maybe you were going to take a bus to them, but got stuck online and missed the last one. Or maybe you drove your car carelessly and got into an accident. In those cases it's okay to feel bad, as your behavioral mechanisms need the feedback.
This reminds me of a video game that I used to play. In Creatures 2, the player takes care of several artificial animal-ish creatures called norns. Interestingly, norns actually learn: they have a simulated brain with simulated reward and punishment chemicals. Whatever 'neurons' are firing when 'reward chemicals' are present fire more often in the future, and whatever 'neurons' are firing when 'punishment chemicals' are present fire less often, causing the norns to show more of some behaviors and less of others.
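The core mechanism is essentially a reinforcement rule: whatever was active when a chemical arrived gets adjusted. Here is a minimal toy sketch of that idea in Python. To be clear, this is entirely my own made-up model, with invented names and numbers; the game's actual brain simulation is far more elaborate.

```python
import random

class ToyBrain:
    """A toy norn-style brain: reward strengthens whatever neurons
    were firing when it arrived, punishment weakens them."""

    def __init__(self, n_neurons: int):
        # Each neuron's weight is its probability of firing on a given step.
        self.weights = [0.5] * n_neurons

    def step(self) -> list[int]:
        """Fire each neuron with probability equal to its weight."""
        return [1 if random.random() < w else 0 for w in self.weights]

    def apply_chemical(self, firing: list[int], amount: float):
        """Positive amount = reward chemical, negative = punishment.
        Only the neurons that were firing get adjusted."""
        for i, fired in enumerate(firing):
            if fired:
                new_w = self.weights[i] + 0.1 * amount
                self.weights[i] = min(1.0, max(0.0, new_w))

brain = ToyBrain(n_neurons=4)
firing = brain.step()
brain.apply_chemical(firing, amount=+1.0)  # reward: these neurons fire more later
firing = brain.step()
brain.apply_chemical(firing, amount=-1.0)  # punishment: these fire less later
```

The important property is that the chemical only trains the neurons that were firing *at that moment*, which is what makes the timing of the chemicals matter so much, as we'll see next.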
Unfortunately, the game was released without adequate playtesting, and the default norns' learning systems turned out to be miscalibrated. Individual norns seemed to learn fine at first, but turned stupid as they aged, jumping off cliffs and refusing to eat. With some work, the player community figured out what was wrong: the default norns' punishment and reward chemicals had too long a half-life, and tended to stay in a norn's system long enough to affect several successive brain-states. Fortunately, once this was discovered, it was easy for some of the more advanced players to design norns without the issue (yes, the game allowed for genetic engineering!) and release them to the public. The new norns learned just fine.
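The bug is easy to illustrate with a decay formula: a chemical with half-life h has a fraction 0.5^(t/h) of it left after time t, so if h spans several brain-states, the chemical ends up training states that had nothing to do with the original action. A quick sketch, again a toy model with invented numbers rather than the game's actual chemistry:

```python
def remaining(initial: float, half_life: float, elapsed: float) -> float:
    """Fraction of a chemical left after `elapsed` time units."""
    return initial * 0.5 ** (elapsed / half_life)

# Say one brain-state lasts 1 time unit, and a punishment chemical is
# injected at state 0. How much is still around to (mis)train later states?
for half_life in (0.5, 5.0):
    print(f"half-life = {half_life}")
    for state in range(4):
        print(f"  state {state}: {remaining(1.0, half_life, state):.3f}")
```

With a half-life of 0.5 brain-states, almost nothing is left by the second state, so the punishment lands only on the behavior that earned it. With a half-life of 5, later unrelated states still absorb most of the punishment, which is exactly the kind of credit misassignment that would make an aging norn's behavior drift toward nonsense.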