On Yudkowsky and being wrong:

I’m going to be careful not to read too much into his words, or to assume he said something that I disagree with.
But I have noticed, and do notice, a tendency in pessimism and in pessimists generally to prefer beliefs that skew towards “wrongness”, “incorrectness”, and “mistake-making” in a way that borders on the superstitious. The superstition I’m referring to is the tendency to give errors a higher status than they deserve, e.g., predicting that things will go wrong in order to make them less likely to go wrong, or less likely to go as badly as they otherwise could.
Rather than predicting that things could go “badly”, “wrongly”, or “disastrously”, it seems much healthier to see things as iterations, where each attempt improves on the last. For example: when building rockets, knowing that early iterations are more likely than later ones to explode, you place sensors throughout the rocket that transmit data back to HQ, so that failures in specific components are detected in the moments immediately before an explosion. If the rockets explode far fewer times than predicted, and the process leads to a design that doesn’t explode at all, you wouldn’t call any point of that process “incorrect”, not even the points at which a rocket did explode. The process was correct.
If you’re building a rocket for the first time ever, it’s not surprising if you’re wrong about something. What would be surprising is if the thing you’re wrong about causes the rocket to go twice as high, on half the fuel you thought was required, and to be much easier to steer than you feared.
This may mean that, in general, when we’re wrong about something, it’s more often because we predicted it would go well and it didn’t, rather than the reverse. Since I disagree with that sentiment, I allow for the possibility that I’m wrong here. (Note, however, that if so, this would be the reverse case.)
I don’t see how it would help, in general, to predict that things will be difficult or hard to do in order to make them easier or less hard to do. That would only steer your mental processes towards solutions that look harder, away from ones that look easier, since we’d have predicted that the easier-looking ones don’t lead anywhere useful. If we apply that frame everywhere, we end up using solutions that feel difficult on a lot more problems than we otherwise would, which does not make things easier for us.
I can’t find the source right now, but I remember reading that Bjarne Stroustrup avoids thrown exceptions in his C++ code, and the author of the post that mentioned this said it was only because he wrote extremely-high-reliability code used in flight avionics for Boeings, or something like that. I remember thinking: well, obviously flight avionics code can’t throw any temper tantrums à la quitting-on-errors. But why doesn’t that apply everywhere? The author argued that most software use-cases call for exceptions to be thrown, because it’s better for software to be skittish and cautious, and to avoid hefty assumptions, lest it make the customer angry. But it seems odd that “cautiousness” of this kind is not called for in exactly the environment where your code shutting off in an edge case or other odd scenario would cause the plane’s engines to shut down.
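As a rough illustration of the stylistic split (my own sketch, not Stroustrup’s actual code or any particular avionics standard; the `parse_altitude` functions and values are hypothetical): the same parsing step can be written to throw on bad input, or to report failure through its return value and let the caller carry on.

```cpp
#include <charconv>      // std::from_chars (C++17) reports errors without throwing
#include <cstddef>
#include <iostream>
#include <optional>
#include <stdexcept>
#include <string>
#include <string_view>

// "Pessimistic" style: refuse to continue, hand the problem up the stack.
int parse_altitude_throwing(const std::string& s) {
    std::size_t pos = 0;
    int value = std::stoi(s, &pos);  // std::stoi itself throws on non-numeric input
    if (pos != s.size()) {
        throw std::invalid_argument("trailing characters in altitude: " + s);
    }
    return value;
}

// "No-exceptions" style: failure is reported in the return type; nothing is
// thrown, and the caller decides how to carry on.
std::optional<int> parse_altitude(std::string_view s) {
    int value = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{} || ptr != s.data() + s.size()) {
        return std::nullopt;
    }
    return value;
}

int main() {
    // A caller of the second style can fall back to the last known-good reading
    // instead of terminating on malformed input.
    const int last_known_good = 10000;
    int altitude = parse_altitude("35000ft").value_or(last_known_good);
    std::cout << "using altitude " << altitude << "\n";  // prints 10000
}
```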
Thrown exceptions represent pessimism, because they have the code choose to terminate rather than deal with whatever would happen if it continued with a state it had judged anomalous or out-of-distribution. The point is: if pessimism is meant to represent cautiousness, it clearly isn’t functioning as intended.
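To make that contrast concrete (again only a sketch of my own, with hypothetical names, not anyone’s recommended practice): the “pessimistic” version below bails out on an out-of-distribution reading, while the “optimistic” one treats the anomaly as information, keeps running with the nearest plausible value, and leaves a trace for later.

```cpp
#include <algorithm>   // std::clamp (C++17)
#include <iostream>
#include <stdexcept>

// Pessimistic: an anomalous reading terminates the computation, unless some
// caller far up the stack decides otherwise.
double thrust_fraction_pessimistic(double sensor_reading) {
    if (sensor_reading < 0.0 || sensor_reading > 1.0) {
        throw std::out_of_range("thrust reading out of range");
    }
    return sensor_reading;
}

// Optimistic: continue with a sanitized value and record that something
// unexpected happened, instead of quitting on it.
double thrust_fraction_optimistic(double sensor_reading) {
    double clamped = std::clamp(sensor_reading, 0.0, 1.0);
    if (clamped != sensor_reading) {
        std::cerr << "anomalous thrust reading " << sensor_reading
                  << ", clamped to " << clamped << "\n";
    }
    return clamped;
}

int main() {
    std::cout << thrust_fraction_optimistic(1.37) << "\n";  // prints 1
}
```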
Related: the only consistent way of assigning utilities to probabilistic predictions is the log score, U = log P(actual outcome), which lies in (−∞, 0]. I think this is a decent argument for seeing learning as a “negative” game. That said, as I wrote it, this is vague.
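For concreteness, here is the standard definition of that score (not anything beyond what the note already gestures at): with a predicted distribution p over outcomes and realized outcome i,

```latex
% Log score of a prediction p = (p_1, \dots, p_n) when outcome i occurs:
%   S(p, i) = \log p_i, with p_i \in (0, 1], so S(p, i) \in (-\infty, 0].
S(p, i) \;=\; \log p_i \;\le\; 0, \qquad S(p, i) = 0 \iff p_i = 1.
```

The maximum, zero, is reached only when the prediction put probability 1 on what actually happened; any residual uncertainty is paid for with a strictly negative score, which is the sense in which the game is played entirely in negative territory.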