Only if you’re never going to interact with the agent. Once you do, you’re making interventions and a causal model is required.
A causal model of what inputs produce what outputs, but no causal model of how the internals of the system work.
I’m not clear on the distinction you’re drawing. Can you give a concrete example? Of course, you could have a causal model of the internals which was wrong but gave the same answers as the right one, for the observations you are able to make. But it is not clear how a causal model of what you will see when you interact with the agent could fail to be a causal model, accurate or otherwise, of the agent’s internals.
I don’t know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.
But cars are designed to be usable by laypeople, so this is maybe an unfair example.
You don’t know anything about how cars work?
I have a model of what inputs produce what outputs (“pressing on the gas pedal makes the engine go; not changing the oil every few months makes things break”). I do not have a causal model of the internals of the system.
At best I can make knowledgeable-sounding noises about engines, but I could not build or repair one, nor even identify all but the most obvious parts.
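To make the distinction concrete, here is a minimal illustrative sketch (names and structure are hypothetical, not anyone's actual proposal): a model that maps driver actions to outcomes, and is causal at the interface level because the actions are interventions, while containing no representation of the car's internals.

```python
class BlackBoxCarModel:
    """Predicts outcomes of driver actions without modelling internals."""

    def __init__(self):
        # Learned input -> output associations; no pistons, valves,
        # or oil circulation are represented anywhere.
        self.rules = {
            "press_gas_pedal": "engine_goes",
            "skip_oil_change": "things_break",
        }

    def predict(self, action):
        # Interventions outside the learned interface are simply unknown.
        return self.rules.get(action, "unknown")


model = BlackBoxCarModel()
print(model.predict("press_gas_pedal"))     # engine_goes
print(model.predict("replace_timing_belt"))  # unknown
```

The model supports the interventions the lay driver actually makes, but any query that requires internal structure (the one-off problems mentioned above) falls outside it.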
The thread that starts this discussion speaks about the importance of modelling internals for predictions.
In drug research, a company usually searches for a molecule that binds some protein that does something in a specific pathway. Even if your clinical trials demonstrate that the drug works and helps with the illness you want to treat, you haven’t demonstrated that it works via the pathway you targeted. It might work because of off-target interactions.
This is an example of the sort I described: the model is wrong, but by chance makes the right prediction. An incorrect model of internal mechanisms is still a model of internal mechanisms. The possibility of getting lucky is a poor basis for the claim that modelling internal mechanisms is unnecessary.
Given failure rates of >90%, getting any drug through clinical trials always involves “getting lucky”.
The issue depends on how many successful drugs succeed due to understanding of the pathways and how many succeed because of luck and good empirical measurement of the drugs’ effects.
I personally think that medicine would be improved if we rerouted capital currently spent on trying to understand pathways toward researching better ways of doing empirical measurement.