I’m kinda arguing that the skills relevant to the one-shot context are less transferable, not more.
It might also be that they happen to be the skills you need, or that everyone already has the skills you’d learn from many-shotting the game, and so focusing on those skills is more valuable even if they’re less transferable.
But “do I think the game designer would have chosen to make this particular combo stronger or weaker than that combo?” does not seem to me like the kind of prompt that leads to a lot of skills that transfer outside games.
I’m not quite sure what things you’re contrasting here.
The skills I care about are:
making predictions (instead of just doing stuff without reflecting on what else is likely to happen)
thinking about which things are going to be strategically relevant
thinking about what resources you have available and how they fit together
thinking about how to quantitatively compare your various options
And it’d be nice to train thinking about that in a context without the artificiality of gaming, but I don’t have great alternatives. In my mind, the questions are “what would be a better way to train those skills?” and “are simple strategy games useful enough to be worth training on, if I don’t have better short-feedback-cycle options?”
(I can’t tell from your phrasing so far if you were oriented around those questions, or some other one)
Oh, hm. I suppose I was thinking in terms of better-or-worse quantitative estimates—”how close was your estimate to the true value?”—and you’re thinking more in terms of “did you remember to make any quantitative estimate at all?”
And so I was thinking the one-shot context was relevant mostly because the numerical values of the variables were unknown, but you’re thinking it’s more because you don’t yet have a model that tells you which variables to pay attention to or how those variables matter?
Yeah.
“did you remember to make any quantitative estimate at all?”
I’m actually meaning to ask the question “did your estimate help you strategically?” So, if you get two estimates wildly wrong, but they still had the right relative ranking and you picked the right card to draft, that’s a win.
Also important: what matters here is not whether you got the answer right or wrong, it’s whether you learned a useful thing in the process that transfers (and, like, you might end up getting the answer completely wrong, but if you can learn something about your thought process that you can improve on, that’s a bigger win).
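The “right relative ranking” point can be made concrete with a toy sketch (all card names and numbers here are invented for illustration): even when every absolute estimate is wildly off, the draft pick is still correct so long as the ordering of the estimates matches the ordering of the true values.

```python
# Toy illustration (hypothetical numbers): badly-off estimates can still
# produce the right draft pick, as long as their relative ranking is correct.

true_value = {"fireball": 12.0, "heal": 7.5, "crit_blade": 9.0}    # unknown to the player
estimate   = {"fireball": 30.0, "heal": 10.0, "crit_blade": 20.0}  # wildly wrong in absolute terms

best_by_estimate = max(estimate, key=estimate.get)
best_by_truth    = max(true_value, key=true_value.get)

# Absolute errors are huge...
errors = {card: abs(estimate[card] - true_value[card]) for card in true_value}

# ...but the pick is still correct, because the ranking was preserved.
assert best_by_estimate == best_by_truth == "fireball"
```

The pick only goes wrong when the estimates cross over the true ordering, which is exactly the failure the “did your estimate help you strategically?” question is probing.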
I have an intuition that you’re partly getting at something fundamental, and also an intuition that you’re partly going down a blind alley, and I’ve been trying to pick apart why I think that.
I think that “did your estimate help you strategically?” has a substantial dependence on the “reading the designer’s mind” stuff I was talking about above. For instance, I’ve made extremely useful strategic guesses in a lot of games using heuristics like:
Critical hits tend to be over-valued because they’re flashy
Abilities with large numbers appearing as actual text tend to be over-valued, because big numbers have psychological weight separate from their actual utility
Support roles, and especially healing, tend to be under-valued, for several different reasons that all ultimately ground out in human psychology
All of these are great shortcuts to finding good strategies in a game, but they all exploit the fact that some human being attempted to balance the game, and that that human had a bunch of human biases.
I think if you had some sort of tournament about one-shotting Luck Be A Landlord, the winner would mostly be determined by mastery of these sorts of heuristics, which mostly don’t transfer to other domains.
However, I can also see some applicability for various lower-level, highly-general skills like identifying instrumental and terminal values, gears-based modeling, quantitative reasoning, noticing things you don’t know (then forming hypotheses and performing tests), and so forth. Standard rationality stuff.
Different games emphasize different skills. I know you were looking for specific things like resource management and value-of-information, presumably in an attempt to emphasize skills you were more interested in.
I think “reading the designer’s mind” is a useful category for a group of skills that is valuable in many games but that you’re probably less interested in, and so minimizing it should probably be one of the criteria you use to select which games to include in exercises.
I already gave the example of book games as revolving almost entirely around reading the designer’s mind. One example at the opposite extreme would be a game where the rules and content are fully-known in advance...though that might be problematic for your exercise for other reasons.
It might be helpful to look for abstract themes or non-traditional themes, which will have less associational baggage.
I feel like it ought to be possible to deliberately design a game to reward the player mostly for things other than reading the designer’s mind, even in a one-shot context, but I’m unsure how to systematically do that (without going to the extreme of perfect information).
One thing to remember is that I’m (mostly) advocating playing each game only once, and doing a variety of games/puzzles/activities, many of which should just be “real-world” activities, as well as plenty of deliberate Day Job stuff. Some of them should focus on resource management, and some of that should be “games” that have quick feedback loops, but it sounds like you’re imagining it being more focused on the goodhartable versions of that than I think it is.
(also, I think multiplayer games where all the information is known are somewhat an antidote to these particular failure modes? even when all the information is known, there’s still uncertainty about how the pieces combine together, and there’s some kind of brute-reality-fact about ‘well, the other players figured it out better than you’)
In principle, any game where the player has a full specification of how the game works is immune to this specific failure mode, whether it’s multiplayer or not. (I say “in principle” because this depends on the player actually using the info; I predict most people playing Slay the Spire for the first time will not read the full list of cards before they start, even if they can.)
The one-shot nature makes me more concerned about this specific issue, rather than less. In a many-shot context, you get opportunities to empirically learn info that you’d otherwise need to “read the designer’s mind” to guess.
Mixing in “real-world” activities presumably helps.
If it were restricted only to games, then playing a variety of games seems to me like it would help a little but not that much (except to the extent that you add in games that don’t have this problem in the first place). Heuristics for reading the designer’s mind often apply to multiple game genres (partly, but not solely, because approx. all genres now have “RPG” in their metaphorical DNA), and even if different heuristics are required it’s not clear that would help much if each individual heuristic is still oriented around mind-reading.