But you don’t start out trying to solve the problem in a hilariously inappropriate way. For example, if your boss said, “Hey, sort these 10 billion numbers,” you wouldn’t do simulated annealing with a cost function that penalizes unsorted entries, make random swaps in the data, and tell your boss to come back in 10 years, when the run will only probably be finished, with an only probably correct answer. That’s a categorical waste of resources, not a strategic upping of them to get a first, but still reasonable, attempt that you can then whittle into something better.
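To make the absurdity concrete, here’s a sketch of what that “sort” would look like: one possible cost function (count of out-of-order adjacent pairs), random swaps, and the usual Metropolis acceptance rule. The function names and the particular cooling schedule are my own illustrative choices, not anything from a real system.

```python
import math
import random

def unsortedness(xs):
    """Cost function: number of adjacent pairs that are out of order."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def annealing_sort(xs, temp=10.0, cooling=0.9999, steps=200_000):
    """'Sort' by random swaps, accepting cost-increasing moves with
    probability exp(-delta / temp). No guarantee it ever finishes sorting."""
    xs = list(xs)
    cost = unsortedness(xs)
    for _ in range(steps):
        if cost == 0:
            break  # got lucky: the list happens to be sorted
        i, j = random.randrange(len(xs)), random.randrange(len(xs))
        xs[i], xs[j] = xs[j], xs[i]  # propose a random swap
        new_cost = unsortedness(xs)
        delta = new_cost - cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cost = new_cost  # accept the move
        else:
            xs[i], xs[j] = xs[j], xs[i]  # reject: undo the swap
        temp *= cooling  # cool down
    return xs
```

On a handful of elements this usually stumbles into sorted order eventually; on 10 billion, each cost evaluation alone is a full pass over the data, which is the point of the joke.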
As a machine learning researcher, my opinion is that Watson is more like simulated annealing. It’s as if someone said, “Hey, how can we make this thing play Jeopardy! without thinking at all about how it will do the data processing? How large do we have to make it if its processing is as stupid and easy to implement as possible?”
See my other comment for more on this.