“Philosophical statements” about AI algorithms are useful: not for the algorithms themselves, but for AI researchers.
An AI researcher shouldn’t be mistaken about the Mysterious Power Of The Entropy. The “We don’t understand human intelligence, hence if I wish to create an artificial one, I must not understand it either” attitude is wrong.
So if I were to summarize this post, the summary would be something like “Noise can’t be inherently good; if entropy magically solves your problem, it means that there is some more lawful, non-magical way to solve it.”
I think it is part of a more general principle: “Never leave behind parts that work when you haven’t the slightest idea why they work.”
An AI researcher shouldn’t be mistaken about the Mysterious Power Of The Entropy
I am having trouble picturing a real AI researcher who believes in the “Mysterious Power Of The Entropy”. Maybe I simply lack sufficient imagination. Still, that sounds like something a philosopher might believe in, not an AI researcher who actually implements (or designs) real AI algorithms.
if entropy magically solves your problem, it means that there is some more lawful, non-magical way to solve it.
I guess it depends on what you mean by “magical”. There are plenty of stochastic algorithms out there that rely on noise explicitly. Sure, there technically does exist a “lawful non-magical way” to solve the problems these algorithms solve stochastically, but it is usually completely infeasible because of how long it would take to run. Thus, one has no choice but to use the noisy algorithms.
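Plain Monte Carlo estimation is the classic case. Here is a minimal sketch of the point in Python (the function name, dimension, and sample count are mine, purely for illustration): computing the volume of a unit hypersphere by deterministic grid quadrature needs k**dim grid points, which explodes with dimension, while random sampling gets a usable answer with an error that shrinks as 1/sqrt(n_samples) regardless of dimension.

```python
import math
import random

def mc_hypersphere_volume(dim, n_samples=100_000, seed=0):
    """Estimate the volume of the unit ball in `dim` dimensions by
    sampling points uniformly from the enclosing [-1, 1]^dim cube and
    counting how many land inside the ball."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        point = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        if sum(x * x for x in point) <= 1.0:
            hits += 1
    cube_volume = 2.0 ** dim  # volume of the [-1, 1]^dim cube
    return cube_volume * hits / n_samples

dim = 10
exact = math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)  # known closed form
print(f"Monte Carlo: {mc_hypersphere_volume(dim):.3f}, exact: {exact:.3f}")
```

The noise here isn’t magic: we understand exactly why it works (the hit ratio is an unbiased estimator of the volume ratio). But “understanding why the noise works” and “replacing it with a feasible deterministic method” are two different things.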
Never leave behind parts that work when you haven’t the slightest idea why they work
Again, this depends on how strictly you want to interpret the rule. For example, multi-layer neural networks work very well, but they are often criticized for not being transparent. You can train a network to recognize handwriting (just as an example), but you can’t readily explain what all the weights mean once you’ve trained it.
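To make the opacity concrete, here is a minimal sketch (plain NumPy; the layer sizes, learning rate, and iteration count are arbitrary choices of mine). It trains a tiny two-layer network on XOR, which it typically solves fine, yet inspecting the learned weight matrices tells you essentially nothing about how:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass for squared-error loss, plain gradient descent
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # close to [0, 1, 1, 0]: it works...
print(W1)            # ...but these 16 numbers, by inspection, explain nothing
```

The trained network satisfies “parts that work”, and we do understand why training works in general; what we lack is an explanation of what any individual weight means. A strict reading of the rule would forbid shipping it anyway.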