By one definition, GOFAI is about starting with a set of symbols that already have a specific meaning. For example, one symbol might represent “cat”, with various properties attached to it. In a GOFAI system, all of these symbols are simply given, because somebody has created them, normally by hand. GOFAI is then about how algorithms can reason over this symbolic representation, which ideally corresponds to reality because the right concepts were chosen.
The problem is that this seems like the easy part. The hard part is obtaining these symbolic representations automatically in the first place, because hand-coding them stops working once the system has to deal with the real world. Even in a much simpler world like Minecraft, building an agent that reliably mines a dirt block, wherever it spawns in the overworld, already takes a lot of effort, because so many things need to be hard-coded.
So maybe GOFAI exists because symbolic manipulation, that is, doing the reasoning once you already have a model of the world, is a lot easier than getting that model in the first place. That may be why early AI researchers often went this route: the resulting system seems impressive because it can tell you apparently new things about the real world, such as “it rained” after observing that it is wet outside, or “this cat is orange” after being told it is a tiger, even though nobody stated these conclusions explicitly.
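This kind of inference can be sketched as a tiny forward-chaining rule engine. The facts and rules below are purely illustrative, not taken from any particular GOFAI system:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premise -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hand-coded knowledge, as in the examples above:
rules = [
    ("outside_is_wet", "it_rained"),
    ("is_tiger", "is_orange"),
    ("is_tiger", "is_cat"),
]
derived = forward_chain({"is_tiger"}, rules)
# derived now also contains "is_orange" and "is_cat"
```

The engine looks like it is producing new knowledge, but every conclusion it can ever reach was put in by hand via the rule list.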
So it seems very impressive. But it is not, really, because all the work was done by hand-coding the world model.
The genuinely impressive results are more along the lines of: I can play chess, I can play perfect tic-tac-toe, I can play any discrete game perfectly with the minimax algorithm. That is real progress, and in chess it can play better than any human, which seems very impressive, and in some sense it is. But it still completely ignores the world-modeling problem, which seems to be harder than figuring out how to think about the game tree.
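For concreteness, here is a minimal minimax for tic-tac-toe, a sketch of the classic algorithm rather than any specific historical program. The board representation (a 9-tuple of "X", "O", or " ") is my own choice for the example:

```python
# Index triples that constitute a winning line on the 3x3 board.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if that player has a completed line, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from X's perspective:
    +1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # full board, no winner: draw
    best = None
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# Perfect play from the empty board is a forced draw:
score, move = minimax((" ",) * 9, "X")
# score == 0
```

Note how self-contained this is: the entire "world" is the 9-tuple, so no world-modeling problem arises at all. That is exactly why the search part was tractable early on.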
By @Thomas Kehrenberg and me.