Empiricism in Gameplay
This article relates to a game being developed by Shiny Ogre Games, based on “The Twelve Virtues of Rationality” by Eliezer Yudkowsky.
The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes”, and one saying “No”, the two do not anticipate any different experience of the forest. Do not ask which beliefs to profess, but which experiences to anticipate. Always know which difference of experience you argue about. Do not let the argument wander and become about something else, such as someone’s virtue as a rationalist. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.” Do not be blinded by words. When words are subtracted, anticipation remains. --The Twelve Virtues of Rationality, Eliezer Yudkowsky
Empiricism is a theory of knowledge that asserts that knowledge comes only or primarily via sensory experience. --Wikipedia
We can write whole books about empiricism, describing what it is, why it’s useful, and how it works. We can use countless words to describe the nuanced techniques involved in thinking empirically about a problem. Words are certainly valuable for describing things, but can gameplay describe a thing more effectively?
Our brains are pattern-seeking machines. We like figuring things out; it’s a survival mechanism. Our brains release endorphins when we decode the noise of our environment.
Games more or less consist of a series of interesting challenges (or patterns), with mechanics that allow the player to figure out the challenges (or decode the noise). Decoding noise is what our brains do all the time; when we find patterns in the noise, we cache them for later reference. We do this because it is fun.
As Raph Koster famously said in his book A Theory of Fun, “Fun is just another word for learning.” Because of this, gameplay can be expressive. By designing challenges that evoke your various modes of thinking, and then setting those challenges in a narrative where the player assumes a role and is allowed to explore the system within the constraints of that role, a game can let the player experience the application of a concept.
In the Empiricism level, we are trying to create a puzzle that requires empirical thinking to solve. That is, the player can solve the puzzle only if they draw on their experiences and observations, both inside the game and outside it, to make accurate predictions about how the puzzle elements will behave. In this puzzle, we do not try to trick or mislead the player, we do not require quick reactions, there is no violence, and the player cannot die. We give the player the freedom to experiment with the puzzle, and all we ask is that the player think empirically about the world the puzzle presents.
If all goes as planned, the player will solve the puzzle not through logical deduction, process of elimination, or wild guessing, but through empiricism. They will do this without a single word of instruction or narrative, and they will grasp the concept on a deeper level because of it. Hopefully.
Here’s some art:
This is factually incorrect: we do it in hardware. (Unless you want to claim that an evolved ability counts as having decoded and chunked it, but that’s not a result of rationality.)
By physicalism, we do everything in hardware. What are you saying does not happen?
It is instinctual, not learned. Evolution chunked it, not you.
Can you unpack the distinction you are making by the words “instinctual” and “learned”?
I didn’t see anything in the wiki article that David linked, or in the references available from it, to support David’s and your assertion. The material says that we know where face recognition is done in the brain. This does not bear on the question of whether it is “instinctual” or “learned”. Something that would bear on the question would be information about the degree to which the ability already exists at birth (one possible unpacking of “instinctual” vs. “learned”). I have heard of other research indicating that newborn babies have some face-recognition ability, but not of any assessment of how they compare with adult ability.
My google-fu is strong
I think what he’s saying is that an instinctual pattern is one that is present in the brain without having been learned by prior exposure of that specific brain.
There’s a possible confusion here in conflating the processes of distinguishing faces from non-faces and distinguishing between faces.
The claim that it’s all hardcoded seems to apply to the second task (there’s prosopagnosia—lack of the specific ability to recognize faces). If Wikipedia is to be trusted, some neuroscientists argue that the ability to recognize faces is actually a learned specialization of a more general mechanism for recognizing very familiar objects (the section with that info is tagged as possibly violating neutral POV so it might not be trustworthy).
The post actually referred to recognizing the face-concept rather than a specific face. I’d guess that this is hardcoded to the extent that people rely on their specific facial-recognition module to alert them to the fact that they are seeing a face at all (hence, facial pareidolia) but people who lose that hardware can probably still learn to recognize occurrences of the ‘face’ category with their general object-recognition capabilities (quick googling didn’t tell me whether people with prosopagnosia can do this).
A learning process susceptible to rationality, as I noted. The process of decoding faces is very hardware-accelerated. Compare prosopagnosics to those whose facial recognition hardware works.
(It should be obvious from my second sentence, the one in brackets, that this is what I meant, unless you’re absolutely determined to deliberately fall into the fallacy of grey.)
This post is short on actual information and long on shiny words.
What is the sharp distinction you are drawing between “logical deduction” and “empiricism”?
What are you actually putting in the game to support empiricism, and how, if there is no cost for failure, does it prohibit process-of-elimination?
The art you present looks like a typical modern platform or swimming level. What is notable about it?
How could prediction be used in the game?
An idea for a level that requires observation and prediction: Imagine a platform game like Mario. There is a box that the hero must hit with their head, and a bonus item flies out of the box. The hero must catch the item, because if it falls on the floor, it breaks… then another box appears (there is an infinite supply of boxes) and the hero can try again.
The difficult part is this: The item flies in one direction (left or right) so quickly that it is impossible to catch unless the player predicts the direction and is already running that way while hitting the box. Also, the player must catch 10 items in a row to complete the level.
The direction of the item somehow depends on the shape, color, or symbol of the box. So the right way to win this level is to hit the first few boxes, observe the directions, and build a model; then apply this model to the following boxes. The rule could be selected randomly, for example: “if the box is red OR the symbol is a fish, the item flies left; otherwise it flies right”.
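Here’s a minimal sketch of how such a hidden rule could be generated and applied, assuming box attributes like color, symbol, and shape, and a two-clause OR rule. All names and the rule format are illustrative guesses, not anything confirmed about the actual game:

```python
import random

# Illustrative attribute values; the real game could use anything player-visible.
COLORS = ["red", "green", "blue"]
SYMBOLS = ["fish", "star", "moon"]
SHAPES = ["square", "round"]

def make_box():
    """Spawn a box with random, player-visible attributes."""
    return {
        "color": random.choice(COLORS),
        "symbol": random.choice(SYMBOLS),
        "shape": random.choice(SHAPES),
    }

def make_rule():
    """Pick a hidden rule of the form: (attr1 == v1) OR (attr2 == v2) -> item flies left."""
    candidates = [
        ("color", random.choice(COLORS)),
        ("symbol", random.choice(SYMBOLS)),
        ("shape", random.choice(SHAPES)),
    ]
    (a1, v1), (a2, v2) = random.sample(candidates, 2)

    def rule(box):
        return "left" if box[a1] == v1 or box[a2] == v2 else "right"

    rule.description = f"{a1}={v1} OR {a2}={v2} -> left, else right"
    return rule

if __name__ == "__main__":
    rule = make_rule()
    print("Hidden rule (what the player has to infer):", rule.description)
    # The player watches the first few boxes, forms a model, and then has to
    # commit to a direction before the item appears.
    for _ in range(5):
        box = make_box()
        print(box, "-> item flies", rule(box))
```

The mapping is trivial to apply once you have the model; the only way to get the model is to observe a few boxes and test your predictions against what actually happens.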
Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat—talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong’s goals, etc.
This is great!
This post made me think about this video: http://www.youtube.com/watch?v=8FpigqfcvlM
In the video, the annoyed person who made it explains how games can teach their mechanics intuitively, as opposed to not at all or through a spoken or text tutorial.
I think it would be good if you watched this video and applied the lessons it gives to the game as a whole.
Also have a manual or at least a step-by-step tutorial, though. Most people don’t read the doc, but some of us can’t do without it.
This isn’t about the virtues of rationality; it’s about cheering for rationality.
Your criticism is welcome. We are certainly trying to make the game more than just a cheer, and I realize the information in my posts is a bit vague, but that’s because I really, really don’t want to spoil the game.
… I’m guessing this didn’t go anywhere; the entire home site seems to be wiped barring the front page.
I spotted a non-existent tiger face just to the right of the fox—then I noticed the fox and was confused about why it’s called a threat.
I was also drawn to that whorl near the center, but didn’t spot the fox until I knew I was looking for a fox.
The zoom settings in my browser plus whatever CSS is there conspired to make the image 720x370px on my screen (with whatever scaling Chrome does). It’s 700x361px unscaled.
My eyes were drawn to that whorl, but I didn’t notice anything threatening in the image. (Even reading the comments above, I don’t see what could be interpreted as a tiger face in there.)
But when I opened the image in a separate tab it didn’t get the scaling applied (the browser’s; presumably it was rescaled before, but with a better algorithm). I saw the fox instantly (i.e., I didn’t notice any delay between switching tabs and looking at the fox, wondering if foxes are threatening for the article’s purpose). Weird.
Even now, knowing where it is, it seems much harder to see in the browser-scaled image, although I couldn’t point to any difference that would justify that.
It appears as 500x257 to me, but I don’t see a difference between it and the unscaled image, possibly because it’s the same aspect ratio.
That’s what the HTML code asks for. I have Chrome set to zoom most pages a couple of levels, which is why mine was upscaled. The aspect ratio difference is not huge: R(720)/R(700) ≈ 1.004 and R(500)/R(700) ≈ 1.003 (quick check in the snippet below). I think the difference was that mine was upscaled and thus had a bit less local contrast (i.e., was a bit more blurry). I still find even the 500x257 one harder to see, but that could be expected, as it’s simply smaller and thus has less info even with a perfect scaling algorithm.
I mean, one would expect a scaled image to be slightly harder to see clearly, but it’s strange how much of a difference there is for seeing the fox without actually noticing any specific difference about the image. (I actually had to check the sizes with the HTML inspector, as the difference between 720 and 700px was too low to see clearly when switching tabs.)
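For reference, a throwaway check of those ratios, using the sizes quoted above and taking R(w) as width/height:

```python
# Aspect ratios (width / height) of the three versions mentioned above.
sizes = {
    "720x370 (zoomed)": (720, 370),
    "700x361 (unscaled)": (700, 361),
    "500x257 (displayed)": (500, 257),
}
ratios = {name: w / h for name, (w, h) in sizes.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.4f}")

# Both scaled versions differ from the unscaled ratio by well under 1%.
print("R(720)/R(700) =", round(ratios["720x370 (zoomed)"] / ratios["700x361 (unscaled)"], 4))
print("R(500)/R(700) =", round(ratios["500x257 (displayed)"] / ratios["700x361 (unscaled)"], 4))
```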
I saw the fake tiger first, too. Before reading the comments, I thought the point of that was showing that threat recognition by humans has false positives.
I needed a word that didn’t explicitly tell the viewer what to look for. “Prey” or “Predator” would have made it too obvious, and I certainly didn’t want to say “find the fox” or “find the animal”.
I used the word “threat” because the act of finding the fox in the image represents our survival mechanisms being put to use. Even if the animal is not a real threat, if you heard rustling in the foliage, your first instinct would be to assume it’s a threat.
Good point. Hmm. Maybe “You hear a rustling in the foliage. Find the source, lest you be eaten by a grue!”
picture 1, take 1: Is this a trick question? It’s, umm, freezing out there, I guess?
picture 1, one minute later: Oh. Hi there. I totally would have seen you if you were moving.
If it had said “prey”, not “threat”, I wouldn’t have wasted three minutes staring after I found it. I kept expecting a cleverly camouflaged guy with a rifle, or something.
I agree—one wolf (if that is a red wolf, and not a fox) isn’t much of a threat if you know what you’re doing, and I do—but back in the ancestral environment they might not have learned fear of humans, so it could be enough of a threat to earn the word. Even if it’s a threat you can dispose of, you still must attend to it.
If it’s a fox, you’re perfectly safe, of course, unless it’s rabid.
You eat carnivores‽
Everyone knows carnivore blood is the tastiest.
I’m confused. I detected very quickly the relevant area (though I remained unsure it was it), but it took me about ten minutes to identify the object.
And yeah, that’s why predators usually don’t move before they pounce.
Are you involved in the development? (I’m guessing so, given this post is a verbatim cross-post from the official site.)