Ok, so here’s my take on the “222 + 222 = 555” question.
First, suppose you want your AI to not be durably wrong, so it should update on evidence. This is probably implemented by some process that notices surprises, goes back up the cognitive graph, and applies pressure to make it have gone the right way instead.
Now as it bops around the world, it will come across evidence about what happens when you add those numbers, and its general-purpose “don’t be durably wrong” machinery will come into play. You need to not just sternly tell it “222 + 222 = 555″ once, but have built machinery that will protect that belief from the update-on-evidence machinery, and which will also protect itself from the update-on-evidence machinery.
Second, suppose you want your AI to have the ability to discover general principles. This is probably implemented by some process that notices patterns / regularities in the environment, and builds some multi-level world model out of it, and then makes plans in that multi-level world model. Now you also have some sort of ‘consistency-check’ machinery, which scans thru the map looking for inconsistencies between levels, goes back up the cognitive graph, and applies pressure to make them consistent instead. [This pressure can both be ‘think different things’ and ‘seek out observations / run experiments.’]
Now as it bops around the world, it will come across more remote evidence that bears on this question. “How can 222 + 222 = 555, and 2 + 2 = 4?” it will ask itself plaintively. “How can 111 + 111 = 222, and 111 + 111 + 111 + 111 = 444, and 222 + 222 = 555?” it will ask itself with a growing sense of worry.
Third, what did you even want out of it believing that 222 + 222 = 555? Are you just hoping that it has some huge mental block and crashes whenever it tries to figure out arithmetic? Probably not (tho it seems like that’s what you’ll get), but now you might be getting into a situation where it is using the correct arithmetic in its mind but has constructed some weird translation between mental numbers and spoken numbers. “Humans are silly,” it thinks it itself, “and insist that if you ask this specific question, it’s a memorization game instead of an arithmetic game,” and satisfies its operator’s diagnostic questions and its internal sense of consistency. And then it goes on to implement plans as if 222 + 222 = 444, which is what you were hoping to avoid with that patch.
No one is going to believe me, but when I originally wrote that comment, my brain read something like “why would an AI that believed 222 + 222 = 555 have a hard time”. Only figured it out now after reading your reply.
Part one of this is what I would’ve come up with, though I’m not particularly certain it’s correct.
Ok, so here’s my take on the “222 + 222 = 555” question.
First, suppose you want your AI to not be durably wrong, so it should update on evidence. This is probably implemented by some process that notices surprises, goes back up the cognitive graph, and applies pressure to make it have gone the right way instead.
Now as it bops around the world, it will come across evidence about what happens when you add those numbers, and its general-purpose “don’t be durably wrong” machinery will come into play. You need to not just sternly tell it “222 + 222 = 555″ once, but have built machinery that will protect that belief from the update-on-evidence machinery, and which will also protect itself from the update-on-evidence machinery.
Second, suppose you want your AI to have the ability to discover general principles. This is probably implemented by some process that notices patterns / regularities in the environment, and builds some multi-level world model out of it, and then makes plans in that multi-level world model. Now you also have some sort of ‘consistency-check’ machinery, which scans thru the map looking for inconsistencies between levels, goes back up the cognitive graph, and applies pressure to make them consistent instead. [This pressure can both be ‘think different things’ and ‘seek out observations / run experiments.’]
Now as it bops around the world, it will come across more remote evidence that bears on this question. “How can 222 + 222 = 555, and 2 + 2 = 4?” it will ask itself plaintively. “How can 111 + 111 = 222, and 111 + 111 + 111 + 111 = 444, and 222 + 222 = 555?” it will ask itself with a growing sense of worry.
Third, what did you even want out of it believing that 222 + 222 = 555? Are you just hoping that it has some huge mental block and crashes whenever it tries to figure out arithmetic? Probably not (tho it seems like that’s what you’ll get), but now you might be getting into a situation where it is using the correct arithmetic in its mind but has constructed some weird translation between mental numbers and spoken numbers. “Humans are silly,” it thinks it itself, “and insist that if you ask this specific question, it’s a memorization game instead of an arithmetic game,” and satisfies its operator’s diagnostic questions and its internal sense of consistency. And then it goes on to implement plans as if 222 + 222 = 444, which is what you were hoping to avoid with that patch.
No one is going to believe me, but when I originally wrote that comment, my brain read something like “why would an AI that believed 222 + 222 = 555 have a hard time”. Only figured it out now after reading your reply.
Part one of this is what I would’ve come up with, though I’m not particularly certain it’s correct.