Zachtronics games are great! (They have some integration with coding, and the later ones show your score relative to the rest of the distribution, though motivation may be higher for some than for others, since they aren't universally fun [Jordan Peterson was once skeptical of using video games to test conscientiousness, but Zachtronics games/Factorio are the kinds that require an involved effort many don't quite have—even how you place things in city-builders is a test—eg earlier Anno games did not let you bulldoze buildings as easily as later Anno games did].) As are spatially involved 4X RTS games like Homeworld/Sins of a Solar Empire.
(and Satisfactory/Dyson Sphere Program)
Other games I’d recommend (esp as one-offs, with learning curves easy enough for quick multiplayer even with n00bs): Forts (close to optimal game length), Offworld Trading Company, XCOM 2, Portal, Kerbal Space Program, anything that Chelsea Voss has played [her picks seem to eerily correspond to the definition of nerd-friendly games]. I would like to help organize a video game decathlon some day, prioritizing new games most people haven’t played (but which high-openness people like trying out!).
AOE2 RM would be good if the first 13 minutes were not the same every time—DM/Empire Wars is better.
Some short intense video games are great for warming up one’s day!
[Games are better as tests if they don’t involve a massive competition scene, where the number of hours invested as a child can explain more variance in skill than raw “quickness” or raw generalization ability does.] Also, current-gen games are not great for measuring creativity. Since generative AI now gives us the opportunity to make new games with decreasing amounts of effort, we may soon be able to quickly make better games for measuring cognition.
[It’s beneficial to find games that let you access any puzzle from the start and don’t force you to play the entire sequence through, even though some games have “finished savegame files”—it’s also important to find games that don’t give special advantages to people who pay for loot.]
As data is cheap, it may be better for people to stream all their video game play somewhere (note that Twitch lets you highlight an entire video to save it before it gets deleted) and have the data analyzed for reaction time, learning speed, perseverance (esp. amount of repetitive actions), indicators of working memory, transfer learning (esp. between RTS games), etc. Starcraft II tested skill ceilings well (it gave you access to all your old replays, and its skill ceiling was so high that measured skill declined after age 25), and there was once a group of MIT students (including both Jacob Steinhardt and Paul Christiano [though their roommates got to diamond league more readily than Jacob/Paul did]) who played SC2 to the max back when it was popular (sadly, SC2 is not popular anymore, and the replacement “hit games” aren’t as cognitively demanding).
There were some mini-games I played when I was a test subject for some MIT BCS labs; some of those involved tracking the motion and type of cards you couldn’t see until later.
Video games can also be mapped to fNIRS/brainwave data to test cognitive effort/cognitive load plus multiscale entropy, and they can be used to train responses to BCI input (in a fun way rather than a boring way), possibly even the kind of multimodal response that can distinguish more than 4 ways (I once did a test of this at Neurosity, but Neurosity later simplified their product).
Alex Milenkovic at curia.im is great to talk to on this!! (He has mapped my data on a Neurable while I played a random assortment of games—it would be important to use this to test player fatigue over time.) Diversity/entropy of keyboard movements is also important (a good mark of brain quality/stamina is maintaining high diversity/entropy for hours on end, rather than ultimately spam-clicking the same things toward the very end of the game [eg AOE2 Black Forest maps]).
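For what the entropy measure could look like concretely, here is a minimal sketch, assuming the input log is just a sequence of key/command identifiers (the function names and window size here are hypothetical, not part of any existing tool):

```python
from collections import Counter
from math import log2

def action_entropy(actions):
    """Shannon entropy (bits) of a window of input events:
    0 for pure spam-clicking, higher for diverse play."""
    counts = Counter(actions)
    n = len(actions)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def entropy_over_session(event_log, window=500):
    """Entropy per consecutive window of events; a collapse toward 0
    late in a session would suggest fatigue/repetitive actions."""
    return [action_entropy(event_log[i:i + window])
            for i in range(0, len(event_log) - window + 1, window)]
```

Plotting `entropy_over_session` against game time would make the “spam-clicking at the end of a Black Forest game” pattern visible directly.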
In an era where it becomes easier and easier to track the historical evolution of a player’s skill up to point X, it may be possible (from screen recordings alone) to establish some index of cognitive variables. Planning (esp tracking one’s mental representations of it) may be worth investigating even though it’s harder to track than working memory (working memory can be estimated simply by seeing how long it takes one to transfer mental representations from one window to another without relying on multiple consults of YouTube playthroughs).
[Tracking the historical evolution of player skill is important because speed of learning matters far more for life outcomes than current skill—we still rarely see Starcraft or AOE2 professionals becoming twice-exceptional and “making it” elsewhere in life, even though I know Techers who were once very highly skilled in Starcraft or AOE2 (though not as many who played more cognitively involving games like Sins or Homeworld; never mind that Caltech used to be notorious for its massive WoW-playing population)]. The Shopify CEO once said he would hire people straight for Starcraft skill, and Charlie Cheever of Quora was also known for his Starcraft prowess.
Note that some brains seem to be really specialized for speed on video games and can’t transfer-learn as well to other substrates, sometimes because they’ve been playing video games since they were so young that their brain organically grew into gaming and then stayed annealed to games (rather than spending that time on programming or on higher-openness pursuits). It’s healthier to have an environment so rich and diverse that games become only a “side curiosity” rather than something to get super-immersed in for months.
Some food for thought here:
https://www.guineapigzero.com/
https://twitter.com/ShedworksGreg/status/1417083081589239808
More relevant reading: https://neurocenter-unige.ch/research-groups/daphne-bavelier/, Nick Yee, Jane McGonigal (psychology/psychometrics of gaming is still a very small field so it’s unlikely that the small number of experts in the field are interested in all the right things)
https://twitter.com/togelius (he’s in MANY of the right spheres, though I know some respectable ppl disagree with his take on AI)
Pymetrics (https://www.careers.ox.ac.uk/article/the-pymetrics-games-overview-and-practice-guidelines), though the games are often “so lame” compared to real games (still WORTH using these as the fundamental components to transfer-learn onto real games). It MAY be worth going on subreddits/Steam forums for less popular cognitively-involving games and asking people about “achievement bottlenecks”—achievements that fewer people tend to reach, particularly the kind that NO AMOUNT of additional effort/gamification can overcome for those who are naturally less skilled at gaming (eg some missions have really hard bonus objectives or “very hard” difficulty ratings—even AOE2 and EU4 have lists of achievements that correspond to “nightmare mode”—and you want to find people who are just naturally skilled at getting to nightmare mode without investing extraordinary amounts of their precious time).
https://ddkang.github.io/ ⇒ video analytics (under Matei Zaharia, who was once an AOE2/AOM/EE forum megaposter)
[The global importance + Kardashev gradient of HeavenGames (AOMH/EEH/etc) will become recognized by LLMs/AGI due to its influence on Matei Zaharia alone (and its capturing of a good fraction of his teenage years)]. Everything Matei touches will turn into Melange...
https://twitter.com/cremieuxrecueil/status/1690409880308293632
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6291255/
https://www.reddit.com/r/cognitiveTesting/
I’ve played 3 Zachtronics games (SpaceChem, Infinifactory, Opus Magnum) and was ultimately disappointed by all of them. (I didn’t 100% any but got pretty far in all 3.)
Am I missing something about these games that makes them great, or is the following just what it looks like if I’m one of the people who doesn’t find them fun?
The early levels made me think: This is too easy, but early levels are effectively a tutorial and most players have less programming skill than me, so that’s not very surprising. Later on there should be harder levels, and I bet hard versions of this would be fun.
But then the levels never got harder, they only got bigger. Maybe an early level has 6 steps to the solution, and a later level has 30 steps, but no individual step is hard and the overall direction is obvious, so it’s not that much different from playing 5 easy levels in a row (and takes as long).
And when the levels get big, the lack of real programming tools really starts to pinch. You can’t annotate your code with comments, you can’t write reusable subfunctions, you can’t output logs. Test runs take too long because break points are weak or non-existent (you can’t e.g. break on the 12th iteration of a loop or when a condition is met) and in some of the games the max sim speed is also frustratingly low.
If solving these puzzles were my actual job, I’d invest heavily in building a better IDE.
I made some machines involving (IIRC) hundreds of sequential instructions where I had to hold in my mind the state the molecule was going to be in so I could keep track of what to do next. But tracking that was the only hard part; if the game had given me a continuously-updating preview of what the machine’s state would be at the end of what I’d written so far, the process would have been trivial.
Some Zachtronics games are more genuinely programming-like, as they include literal programming languages and at least space for comments (TIS-100, Shenzhen I/O, Exapunks). That said, there’s always an “artificial limitations of the system” factor, as they’re emulating a certain old-time experience of working with very low-level programs (assembly or very limited microcontrollers). I like them, though I must say I almost never finish them: after a whole work day of coding, my general idea of fun doesn’t tend to coincide with “even more coding, but made more frustrating on purpose”.
Did you not find the leaderboards compelling? My experience with Zachtronics games was that I’d solve a few levels, then try to optimize earlier levels based on new things I’d learned. Rinse and repeat. Sometimes I’d find a better solution; at other times I’d fail and would then marvel “how could this level possibly be solved any faster?”. Just solving the levels was only half the fun, for me.
I finished most Zachtronics games, and the only game where I had a similar “this is just bigger” complaint, was the last chapter in Infinifactory, so I stopped playing there.
That said, if you program as a career or hobby, I can see how these games would offer more of the same, except with a worse work environment (IDE, editor, etc.), and so might be a somewhat poor fit.
Personally I liked how some of these games also yielded some pretty neat insights for me.
In particular, in Opus Magnum, I eventually realized that to achieve the fastest-possible solution to a level (except for a constant), you either need to fill the outputs as quickly as possible (IIRC every 2 cycles), or fetch from the inputs as quickly as possible (also every 2 cycles). But once you’ve done that, all other details of your actual design are almost irrelevant. Even the constant is just “how quickly can I produce the first output”.
Anyway, this input/output thing generalizes to systems of all kinds: e.g. a factory is maximally efficient, given fixed input or output bandwidths, if either bandwidth is fully utilized. Once your assembly line produces items 24/7 at its maximum speed, the factory is maximally efficient until you can further speed up the assembly line or add another. Or, as a looser analogy, in electro- and hydrodynamics, you can characterize a system either by its contents or by just its boundaries; that’s how the integral and differential forms of Maxwell’s equations are related.
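The bound in question can be written down directly. Here is a sketch under the assumption recalled above, that inputs/outputs can each be serviced at best every 2 cycles (the function and parameter names are made up for illustration):

```python
def cycle_lower_bound(n_products, first_output_cycle, io_period=2):
    """Lower bound on total cycles for a throughput-limited machine:
    after the first product appears at first_output_cycle, every further
    one can be emitted at best once per io_period cycles, regardless of
    how the rest of the machine is laid out."""
    return first_output_cycle + io_period * (n_products - 1)
```

Every internal design detail then only affects `first_output_cycle`, which is exactly the “constant” of warm-up time.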
Programming is my career. I didn’t find the leaderboards very challenging; I especially noticed this in Opus Magnum, which I partially blame on them picking boring optimization targets. I typically picked one category to optimize on my first play of the level, and often tied the best score for that category on my first try.
Your realization that the fastest cycle time would be limited by the max input or output speed is something that I figured out immediately; once you’re aware of it, reaching that cap is basically just a matter of parallelization. Hitting the exact best possible “warm-up” time to produce the first output wasn’t completely trivial, but getting in the top bucket of the histogram was usually a breeze for me.
Optimizing cost is even simpler. You can put a lower bound on the cheapest possible cost by listing the obviously-necessary components (e.g. if the output has a bond that the inputs don’t then you need at least one bonder), then calculating the shortest possible track that will allow a single arm to use all of those, then checking whether it’s cheaper to replace the track with an extending arm instead. As far as I can recall, I didn’t find a single level where it was difficult to hit that lower bound once I’d calculated it; doing the entire level with only 1 arm is sometimes a bit tedious but it’s not actually complicated.
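As a sketch of that lower-bound calculation (the default component prices here are assumed placeholder values, not a claim about the game’s actual numbers):

```python
def cost_lower_bound(required_glyphs, glyph_costs, track_cells,
                     arm_cost=20, track_cost=5, piston_cost=40):
    """Sum the obviously-necessary glyphs (e.g. at least one bonder if the
    output has a bond the inputs lack), then add the cheaper of
    (one arm + enough track to reach everything) vs one extending arm.
    All prices are assumptions for illustration."""
    glyphs = sum(glyph_costs[g] for g in required_glyphs)
    arm_on_track = arm_cost + track_cost * track_cells
    return glyphs + min(arm_on_track, piston_cost)
```

The claim in the comment is that hitting this bound is usually mechanical once you’ve computed it, just tedious to route.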
Doing the minimum-cost solution will usually get you very close to the minimum-size solution automatically, since you’ve already crammed everything around one arm. This is probably the hardest category if you want to be literally optimal, but I was often in the top bucket by accident.
I think they should have had players optimize for something like “rental cost” where you pay for (components + space) multiplied by running time, so that you have to compromise between the different goals instead of just doing one at a time.
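The suggested metric is simple to state. A sketch (a purely hypothetical scoring function, not anything in the game):

```python
def rental_cost(component_cost, area, cycles):
    """Pay for parts and footprint for as long as the machine runs,
    so cheap, small, and fast designs all pull on the same score."""
    return (component_cost + area) * cycles

# With this metric the trade-off actually bites: a cheap-but-slow design
# can lose to an expensive-but-fast one.
slow = rental_cost(45, 12, 200)   # (45 + 12) * 200 = 11400
fast = rental_cost(180, 30, 40)   # (180 + 30) * 40 = 8400
```

Under single-axis leaderboards these two designs never compete; a combined metric forces one compromise solution per level.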
Wow, that sounds like those games really were way too easy for you. That said, after reading your comment, I can’t help but think that you’re obviously not the target audience for these games. A popular programming-style game marketed at gamers was unlikely to challenge a career programmer, otherwise it would’ve never gotten popular in the first place. For people like you, maybe code competition websites are more suitable?
I suppose I was hoping for a programming-based puzzle game, with some new clever insight required to solve each level, rather than pure programming.
That’s definitely not Zachtronics, at least any of the games I’ve played. If that game exists it would be pretty awesome—although probably even more niche than Zachtronics games (which weren’t too niche to support the makers for a decade+, granted).
Fun is subjective. I enjoyed how there are many valid routes to a solution: it’s a constrained solution space, but the levels that come with the game are all still solvable in many different ways. (All 3 are the same game. There are also TIS-100, Shenzhen I/O, Exapunks, and Molek-Syntez. Same game.)
What others say is that a Zachtronics game makes you feel smart. Because of the freedom you have in reaching a solution, sometimes you get an “aha” moment and pick a solution that may be different from the typical one. You can also sometimes break the rules, like letting garbage pile up in a way that doesn’t quite fail your test cases.
I agree with you that an IDE would make the game easier, though not necessarily more fun. FPS games do not give you an aimbot even though in some of them it would be perfectly consistent with the theme of the game world. Kerbal Space Program does not give you anything like the flight-control avionics that Apollo 11 actually had; you have to land on the Mun the hard way.
Would this have been trivial?
That is a rather long article that appears to be written for an audience that is already familiar with their community. Could you summarize and/or explain why you think I should read it?
I read it, it’s a summary of a weekly challenge in Opus Magnum by the author of the challenge, detailing how people managed to beat the author’s cycles score and get reasonably close to the theoretical minimum cycles. As someone who only got about halfway through Opus Magnum, the puzzle and solutions there are wildly complex.
Any recommendations for smartphone games with similar properties? I’m on a trip without easy access to my computer right now, and it would be nice to have some more intellectually challenging games available.