Gack! I wish I had encountered this quote (from Gladwell) long ago, so I could have included it in “The Singularity and Machine Ethics”:
In 1981… Doug Lenat entered the Traveller Trillion Credit Squadron tournament… It was a war game. The contestants had been given several volumes of rules, well beforehand, and had been asked to design their own fleet of warships with a mythical budget of a trillion dollars. The fleets then squared off against one another in the course of a weekend...
Lenat had developed an artificial-intelligence program that he called Eurisko, and he decided to feed his program the rules of the tournament. Lenat did not give Eurisko any advice or steer the program in any particular strategic direction. He was not a war-gamer. He simply let Eurisko figure things out for itself. For about a month, for ten hours every night on a hundred computers at Xerox PARC, in Palo Alto, Eurisko ground away at the problem, until it came out with an answer. Most teams fielded some version of a traditional naval fleet—an array of ships of various sizes, each well defended against enemy attack. Eurisko thought differently. “The program came up with a strategy of spending the trillion on an astronomical number of small ships like P.T. boats, with powerful weapons but absolutely no defense and no mobility,” Lenat said. “They just sat there. Basically, if they were hit once they would sink. And what happened is that the enemy would take its shots, and every one of those shots would sink our ships. But it didn’t matter, because we had so many.” Lenat won the tournament in a runaway.
The next year, Lenat entered once more, only this time the rules had changed. Fleets could no longer just sit there. Now one of the criteria of success in battle was fleet “agility.” Eurisko went back to work. “What Eurisko did was say that if any of our ships got damaged it would sink itself—and that would raise fleet agility back up again,” Lenat said. Eurisko won again.
...The other gamers were people steeped in military strategy and history… Eurisko, on the other hand, knew nothing but the rule book. It had no common sense… [But] not knowing the conventions of the game turned out to be an advantage.
[Lenat explained:] “What the other entrants were doing was filling in the holes in the rules with real-world, realistic answers. But Eurisko didn’t have that kind of preconception...” So it found solutions that were, as Lenat freely admits, “socially horrifying”: send a thousand defenseless and immobile ships into battle; sink your own ships the moment they get damaged.
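Gladwell doesn’t say how the rules computed “agility,” but the self-scuttling trick only requires that the metric be an average over ships still afloat. A toy sketch of that assumption (the formula and numbers here are mine, not from the tournament rules):

```python
def fleet_agility(speeds):
    """Hypothetical agility metric: mean speed of the ships still afloat."""
    return sum(speeds) / len(speeds) if speeds else 0.0

fleet = [0, 0, 30]            # two damaged, immobile ships and one fast one
print(fleet_agility(fleet))   # 10.0

fleet.remove(0)               # Eurisko's move: scuttle a damaged ship
print(fleet_agility(fleet))   # 15.0 -- sinking your own ship raises the score
```

Any average-based score has this hole: deleting a below-average member raises the average, so an optimizer that is allowed to delete members will do exactly that.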
Or, an example of human Munchkinism:

While playing RollerCoaster Tycoon once, I was given the mission of getting a higher approval rating than the park next door. Rather than make my park better, I built a roller coaster that launched guests at 100 mph into my rival’s park. Since those guests technically died in my rival’s park, the rival’s approval rating plummeted, and people rushed to my park and straight into my deathcoaster, which only drove that rating lower and lower. I did this for an hour until the game said I’d won.

Nice find! This will come in handy.
Sounds like the sort of strategy that evolution would invent. Or rather, already has, repeatedly — “build a lot of cheap little war machines and don’t mind the casualties” is standard operating procedure for a lot of insects.
But yeah, it’s an awesome lesson in “the AI optimizes for what you tell it to optimize for, not for what humans actually want.”
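Here’s a minimal sketch of that lesson, with invented costs and combat rules rather than the real Traveller ones: give a blind search nothing but the stated objective, and it converges on the same degenerate all-PT-boat fleet, because hull count is the only thing the objective rewards.

```python
import random

BUDGET = 1_000                        # hypothetical credits
COSTS = {"battleship": 200, "pt_boat": 1}

def score(fleet):
    """The stated objective and nothing else: hulls left after one
    exchange of fire, where each enemy shot sinks exactly one ship."""
    enemy_shots = 50
    return max(0, len(fleet) - enemy_shots)

def random_fleet():
    """Spend the whole budget on a random mix of affordable ships."""
    fleet, budget = [], BUDGET
    while budget >= min(COSTS.values()):
        ship = random.choice([s for s, c in COSTS.items() if c <= budget])
        fleet.append(ship)
        budget -= COSTS[ship]
    return fleet

# Blind search over the objective: no war-gaming common sense anywhere.
best = max((random_fleet() for _ in range(2_000)), key=score)
print(f"{len(best)} ships, {best.count('pt_boat')} of them PT boats")
# Typical result: a fleet that is overwhelmingly PT boats. The search found
# what the objective rewards, not what a real navy would want.
```

Making the search smarter wouldn’t help here; the degenerate fleet really is optimal for the objective as written. The only fix is writing an objective that encodes what you actually want, which is the hard part.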