Reasoning through a new example:
There’s no Google Maps and no internet to help with finding a hotel, and you haven’t chosen a destination city yet.
You could work out how to choose hotels, or facilitate the group figuring out what kind of hotel it wants. Both of those are robustly useful.
You could start picking out hotels in cities at random. My intuition is that doing this when you don’t know the city is still marginally useful: nonzero, since you might end up choosing that city, and obviously more useful the smaller the set of possible cities.
OTOH, one of the best ways to build hotel-identifying skills is to identify a hotel, even if you never use it. A few practice runs choosing hotels in random cities probably do help you make a reservation in a different city later.
My shoulder John says “dry-running hotel searches is a fine thing to do, as long as you’re doing it as part of a plan to get good at a generalizable skill.” I agree that’s ideal, but not everyone has that planning skill, and one of the ways to get it is to gradient-ascend on gradient ascending. I worry that rhetoric like this, and related messaging I see in EA and rationality encouraging people to do the most important thing, ends up paralyzing people when what they need is to do anything at all so they can start iterating on it.
Note that at the time of the donation, Altman was co-chair of the board but still two years away from becoming CEO.