I think I do mostly mean “rough quantitative estimates”, rather than specifically targeting Fermi-style orders of magnitude. (Though I think it’s sort of in-the-spirit-of-Fermi to adapt the amount of precision you’re targeting to the domain?)
The sort of thing I was aiming for here was: “okay, so this card gives me N coins on average by default, but it’d be better if there were other cards synergizing with it. How likely are other cards to synergize? How large are the likely synergies? How many cards are there, total, and how quickly am I likely to land on a synergizing card?”
(This is all in the frame of one-shotting the game, i.e. trying to maximize score on your first playthrough, inferring any mechanics from the limited information you’re presented with.)
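For concreteness, here’s the kind of back-of-the-envelope arithmetic I have in mind, sketched in Python. Every number in it is invented for illustration (the synergy probability, bonus size, and pick count are guesses, not actual Luck Be a Landlord values):

```python
# Back-of-the-envelope value of drafting a card, one-shot style.
# All numbers below are invented guesses, not real game data.

base_coins = 2.0       # average coins/spin the card earns on its own
p_synergy = 0.15       # guessed chance a random future pick synergizes with it
synergy_bonus = 3.0    # guessed extra coins/spin per synergy that lands
picks_remaining = 10   # draft picks left in the run

# Linearity of expectation: expected number of synergies we'll see,
# then the resulting expected payoff of the card.
expected_synergies = p_synergy * picks_remaining          # 1.5
expected_value = base_coins + expected_synergies * synergy_bonus

print(f"{expected_value:.1f} coins/spin")  # prints "6.5 coins/spin"
```

The point isn’t the particular numbers; it’s that writing the estimate down forces you to name each unknown (synergy rate, synergy size, pool size), so you can notice which one your decision is most sensitive to.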
One reason I personally found Luck Be a Landlord valuable is that it’s “quantitative estimates on easy mode, where it’s fairly pre-determined what common units of currency you’re measuring everything in.”
My own experience was:
trying to do fermi-estimates on things like “which of these research-hour interventions seem best? How do I measure researcher hours? If researcher-hours are not equal, what makes some better or worse?”
trying to one-shot Luck Be a Landlord
trying to one-shot the game Polytopia (which is more strategically rich than Luck Be a Landlord, and where figuring out what common currencies make sense is more of an open question)
… I haven’t yet gone back to try to do more object-level, real-world messy Fermi calculations, but I feel better positioned to do so.
OK. So the thing that jumps out at me here is that most of the variables you’re trying to estimate (how likely are cards to synergize, how large are those synergies, etc.) are going to be determined mostly by human psychology and cultural norms, to the point where your observations of the game itself may play only a minor role until you get close-to-complete information. This is the sort of strategy I call “reading the designer’s mind.”
The frequency of synergies is going to be some compromise between what the designer thought would be fun and what the designer thought was “normal” based on similar games they’ve played. The number of cards is going to be some compromise between how motivated the designer was to do the work of adding more cards and how many cards customers expect to get when buying a game of this type. Etc.
As an extreme example of what I mean, consider book games, where the player simply reads a paragraph of narrative text describing what’s happening, chooses an option off a list, and then reads a paragraph describing the consequences of that choice. Unlike other games, where there are formal systematic rules describing how to combine an action and its circumstances to determine the outcome, in these games your choice just does whatever the designer wrote in the corresponding box, which can be anything they want.
I occasionally see people praise this format for offering consequences that truly make sense within the game-world (instead of relying on a simplified abstract model that doesn’t capture every nuance of the fictional world), but I consider that to be a shallow illusion. You can try to guess the best choice by reasoning out the probable consequences based on what you know of the game’s world, but the answers weren’t actually generated by that world (or any high-fidelity simulation of it). In practice you’ll make better guesses by relying on story tropes and rules of drama, because odds are quite high that the designer also relied on them (consciously or not). Attempting to construct a more-than-superficial model of the story’s world is often counter-productive.
And no matter how good you are, you can always lose just because the designer was in a bad mood when they wrote that particular paragraph.
Strategy games like Luck Be A Landlord operate on simple and knowable rules, rather than the inscrutable whims of a human author (which is what makes them strategy games). But the particular variables you listed aren’t the outputs of those rules, they’re the inputs that the designer fed into them. You’re trying to guess the one part of the game that can’t be modeled without modeling the game’s designer.
I’m not quite sure how much this matters for teaching purposes, but I suspect it matters rather a lot. Humans are unusual systems in several ways, and people who are trying to predict human behavior often deploy models that they don’t use to predict anything else.
What do you think?
Basically: yep, a lot of the skills here are game-design-specific and don’t transfer. But I think a bunch of other skills do transfer, in particular in a context where you only play Luck Be a Landlord once (along with 2-3 other one-shot games and some non-game puzzles), and then follow it up the next day by applying the skills in more real-world domains.
Few people play videogames to one-shot them, and doing so requires a different set of mental muscles than normal. Usually if you play Luck Be a Landlord, you’ll play it once or twice just to get a feel for how the game works, and by the time you sit down and say “okay, now, how does this game actually work?” you’ll already have been exposed to the rough distribution of cards, etc.
In one-shotting, you need to actually spell out your assumptions and known unknowns, and make guesses about unknown unknowns. (Especially at this workshop, where the one-shotting comes with “take 5 minutes per turn, make as many Fatebook predictions as you can for the first 3 turns, and then for the next 3 turns try to make two quantitative comparisons.”)
The main point here is to build up a scaffolding of those mental muscles, such that the next day when you ask “okay, now, make a quantitative evaluation between [these two research agendas] or [these two product directions] or [this product direction and this research agenda]”, you’re not scrambling to think about both the immense complexity of the messy details and also the basics of how to do a quantitative estimate in a strategic environment.
I’m kinda arguing that the skills relevant to the one-shot context are less transferable, not more.
It might also be that they happen to be the skills you need, or that everyone already has the skills you’d learn from many-shotting the game, and so focusing on those skills is more valuable even if they’re less transferable.
But “do I think the game designer would have chosen to make this particular combo stronger or weaker than that combo?” does not seem to me like the kind of prompt that leads to a lot of skills that transfer outside games.
I’m not quite sure what things you’re contrasting here.
The skills I care about are:
making predictions (instead of just doing stuff without reflecting on what else is likely to happen)
thinking about which things are going to be strategically relevant
thinking about what resources you have available and how they fit together
thinking about how to quantitatively compare your various options
And it’d be nice to train those in a context without the artificiality of games, but I don’t have great alternatives. In my mind, the questions are “what would be a better way to train those skills?” and “are simple strategy games useful enough to be worth training on, if I don’t have better short-feedback-cycle options?”
(I can’t tell from your phrasing so far if you were oriented around those questions, or some other one)
Oh, hm. I suppose I was thinking in terms of better-or-worse quantitative estimates—”how close was your estimate to the true value?”—and you’re thinking more in terms of “did you remember to make any quantitative estimate at all?”
And so I was thinking the one-shot context was relevant mostly because the numerical values of the variables were unknown, but you’re thinking it’s more because you don’t yet have a model that tells you which variables to pay attention to or how those variables matter?
Yeah.
“did you remember to make any quantitative estimate at all?”
I’m actually meaning to ask the question “did your estimate help you strategically?” So, if you get two estimates wildly wrong, but they still had the right relative ranking and you picked the right card to draft, that’s a win.
Also important: what matters here is not whether you got the answer right or wrong, it’s whether you learned a useful thing in the process that transfers. (And, like, you might end up getting the answer completely wrong, but if you can learn something about your thought process that you can improve on, that’s a bigger win.)
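A toy illustration of the “wrong magnitudes, right ranking” point (both the true payoffs and the estimates below are made up): the estimates are off by roughly a factor of three, but they preserve the ordering, so the draft choice still comes out right.

```python
# True average payoffs vs. a badly miscalibrated player's estimates.
true_values = {"card_a": 9.0, "card_b": 6.0}
estimates = {"card_a": 3.0, "card_b": 2.0}   # both roughly 3x too low

# Pick whichever card the estimates rank highest.
best_by_estimate = max(estimates, key=estimates.get)
best_truly = max(true_values, key=true_values.get)

# Huge absolute error, but the ordinal comparison (and so the
# strategic decision) is still correct.
assert best_by_estimate == best_truly == "card_a"
```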
I have an intuition that you’re partly getting at something fundamental, and also an intuition that you’re partly going down a blind alley, and I’ve been trying to pick apart why I think that.
I think that “did your estimate help you strategically?” has a substantial dependence on the “reading the designer’s mind” stuff I was talking about above. For instance, I’ve made extremely useful strategic guesses in a lot of games using heuristics like:
Critical hits tend to be over-valued because they’re flashy
Abilities with large numbers appearing as actual text tend to be over-valued, because big numbers have psychological weight separate from their actual utility
Support roles, and especially healing, tend to be under-valued, for several different reasons that all ultimately ground out in human psychology
All of these are great shortcuts to finding good strategies in a game, but they all exploit the fact that some human being attempted to balance the game, and that that human had a bunch of human biases.
I think if you had some sort of tournament about one-shotting Luck Be A Landlord, the winner would mostly be determined by mastery of these sorts of heuristics, which mostly doesn’t transfer to other domains.
However, I can also see some applicability for various lower-level, highly-general skills like identifying instrumental and terminal values, gears-based modeling, quantitative reasoning, noticing things you don’t know (then forming hypotheses and performing tests), and so forth. Standard rationality stuff.
Different games emphasize different skills. I know you were looking for specific things like resource management and value-of-information, presumably in an attempt to emphasize skills you were more interested in.
I think “reading the designer’s mind” is a useful category for a group of skills that is valuable in many games but that you’re probably less interested in, and so minimizing it should probably be one of the criteria you use to select which games to include in exercises.
I already gave the example of book games as revolving almost entirely around reading the designer’s mind. One example at the opposite extreme would be a game where the rules and content are fully-known in advance...though that might be problematic for your exercise for other reasons.
It might be helpful to look for abstract themes or non-traditional themes, which will have less associational baggage.
I feel like it ought to be possible to deliberately design a game to reward the player mostly for things other than reading the designer’s mind, even in a one-shot context, but I’m unsure how to systematically do that (without going to the extreme of perfect information).
One thing to remember is I (mostly) am advocating playing each game only once, and doing a variety of games/puzzles/activities, many of which should just be “real-world” activities, as well as plenty of deliberate Day Job stuff. Some of them should focus on resource management, and some of that should be “games” that have quick feedback loops, but it sounds like you’re imagining it being more focused on the goodhartable versions of that than I think it is.
(Also, I think multiplayer games where all the information is known are somewhat of an antidote to these particular failure modes? Even when all the information is known, there’s still uncertainty about how the pieces combine together, and there’s some kind of brute-reality-fact about “well, the other players figured it out better than you.”)
In principle, any game where the player has a full specification of how the game works is immune to this specific failure mode, whether it’s multiplayer or not. (I say “in principle” because this depends on the player actually using the info; I predict most people playing Slay the Spire for the first time will not read the full list of cards before they start, even if they can.)
The one-shot nature makes me more concerned about this specific issue, rather than less. In a many-shot context, you get opportunities to empirically learn info that you’d otherwise need to “read the designer’s mind” to guess.
Mixing in “real-world” activities presumably helps.
If it were restricted only to games, then playing a variety of games seems to me like it would help a little but not that much (except to the extent that you add in games that don’t have this problem in the first place). Heuristics for reading the designer’s mind often apply to multiple game genres (partly, but not solely, because approx. all genres now have “RPG” in their metaphorical DNA), and even if different heuristics are required it’s not clear that would help much if each individual heuristic is still oriented around mind-reading.