Don’t focus on internal knowledge vs. black-box prediction; instead, think in terms of model complexity: how big does our constructed model have to be in order to predict correctly?
A human may be its own best model, meaning that perfect prediction requires a model at least as complex as the thing itself. Or the internals may contain a bunch of redundancy and inefficiency, in which case it’s possible to create a perfect model of behavior and interaction that’s smaller than the human itself.
If we build the predictive model from sufficient observation and black-box techniques, we might be able to build a smaller model that is perfectly representative, or we might not. If we build it solely from internal observation and replication, we’re only ever going to get down to the same complexity as the original.
I include hybrid approaches (using internal and external observations to build models that don’t operate identically to the original mechanisms) in the first category: that’s still black-box thinking, using all available information to model input/output behavior without blindly following the internal structure.
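A toy sketch of the redundancy point (my own illustration, not from the original argument): a system whose internals carry three redundant copies of the same state, but whose input/output behavior can be perfectly reproduced by a black-box model holding a single bit. An internal-replication approach would copy all three counters and never get smaller than the original.

```python
class RedundantSystem:
    """A 'system' with needlessly complex internals: three copies of one counter."""

    def __init__(self):
        self.a = 0
        self.b = 0
        self.c = 0

    def step(self, x: int) -> int:
        # All three counters track the exact same running total.
        self.a += x
        self.b += x
        self.c += x
        # Output depends only on the parity of the total (3 * total is odd * total,
        # so its parity equals the total's parity).
        return (self.a + self.b + self.c) % 2


class BlackBoxModel:
    """A strictly smaller model, built purely from observed input/output behavior."""

    def __init__(self):
        self.parity = 0  # one bit of state replaces three full counters

    def step(self, x: int) -> int:
        self.parity = (self.parity + x) % 2
        return self.parity


system = RedundantSystem()
model = BlackBoxModel()
inputs = [1, 2, 3, 5, 8, 13]
system_outputs = [system.step(x) for x in inputs]
model_outputs = [model.step(x) for x in inputs]
```

Here the black-box model is perfectly representative despite being smaller than the system it predicts; whether anything analogous holds for a human is exactly the open question above.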
This seems correct to me. Thank you.