An AI system is software in something like the same way a human being is chemistry: our bodies do operate by means of a vast network of cooperating chemical processes, but the effects of those processes are mostly best understood at higher levels of abstraction, and those don’t behave in particularly chemistry-like ways.
Sometimes our bodies go wrong in chemistry-based ways and we can apply chemistry-based fixes that kinda work. We may tell ourselves stories about how they operate (“your brain doesn’t have enough free serotonin, so we’re giving you a drug that reduces its uptake”) but they’re commonly somewhere between “way oversimplified” and “completely wrong”.
Sometimes an AI system goes wrong in code-level ways and we can apply code-based fixes that kinda work. We may tell ourselves stories about how they operate (“the training goes unstable because of exploding or vanishing gradients, so we’re introducing batch normalization to make that not happen”) but they’re commonly somewhere between “way oversimplified” and “completely wrong”.
But most of the time, if you want to understand why a person does certain things, thinking in terms of chemistry won’t help you much; and most of the time, if you want to understand why an AI does certain things, thinking in terms of code won’t help you much.
Yes, this is part of the intuition I was trying to get across—thanks!