Only slightly, and even that is a matter of emphasis. In my view the crux of the matter is that the relationship between modelling and the traditional lab is very similar to the relationship between a cloud lab and the traditional lab; both add value by improving scale and repetition.
Weighing against my point, it does appear to me that the areas where modelling is emphasized the most are areas where experiments are very difficult or impossible, like nuclear fusion or climate science.
I do not see anywhere on Emerald Cloud Labs’ website claims that they offer experiments which cannot be achieved in a traditional lab. This leads me to suspect that the feedback loop between modelling and the traditional lab is better than that between a cloud lab and a traditional lab, because in spite of the similar value-add pitch, it remains the case that the cloud lab is primarily a substitute for the traditional lab, and modelling is primarily a complement.
Another detail I thought of: we remain very much stuck in the mode of treating hypothesis -> experiment -> data as a package deal. If it became popular to disentangle them, for example through likelihood functions or through one of the compression paradigms, then bulk data generation would become independently valuable, and it would make a lot of sense to run many permutations of the same basic experiment without even a specific hypothesis in mind.
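A toy sketch of what that disentangling might look like, in Python. Everything here is invented for illustration: the "experiment" is a made-up noisy yield measurement, and hypotheses are represented as likelihood functions scored against data that was collected with no hypothesis in mind.

```python
import math
import random

random.seed(0)

# Step 1: bulk data generation -- run many permutations of one basic
# experiment with no specific hypothesis in mind.  The toy experiment
# measures a noisy yield at each (temperature, concentration) condition.
conditions = [(temp, conc) for temp in (20, 30, 40) for conc in (0.1, 0.5, 1.0)]
dataset = []
for temp, conc in conditions:
    for _ in range(50):  # repetition, the cloud lab's strength
        yield_ = 0.05 * temp * conc + random.gauss(0, 0.5)
        dataset.append((temp, conc, yield_))

# Step 2: hypotheses arrive later, each expressed as a likelihood
# function over the already-collected data.  Here a hypothesis is a
# proposed slope k in yield = k * temp * conc, with Gaussian noise.
def log_likelihood(k, data, sigma=0.5):
    total = 0.0
    for temp, conc, y in data:
        mu = k * temp * conc
        total += -0.5 * ((y - mu) / sigma) ** 2 \
                 - math.log(sigma * math.sqrt(2 * math.pi))
    return total

# Any number of hypotheses can now be scored against the same dataset,
# long after the experiments were run.
best = max((0.01, 0.05, 0.1), key=lambda k: log_likelihood(k, dataset))
```

The point of the sketch is the separation: step 1 is pure data generation, and step 2 can be done by anyone, later, with hypotheses the experimenter never had in mind.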
I do not see anywhere on Emerald Cloud Labs’ website claims that they offer experiments which cannot be achieved in a traditional lab.
Yes, it’s more about being able to do experiments more efficiently than about making new kinds of experiments possible.
The problem with modeling is that modeling results are not the real world. If you care about which molecule binds to which protein, you can model a lot of different reactions to find good candidates to validate in real experiments. The cloud lab actually gives you the real experiment.
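A minimal sketch of that complement relationship: cheap, noisy modelled scores narrow a large candidate pool down to a shortlist, and only the shortlist goes to the (cloud) lab for real experiments. The candidate set, the scores, and the noise model are all invented for the illustration.

```python
import random

random.seed(1)

# Hypothetical candidate set: each molecule gets a cheap modelled
# binding score (a stand-in for e.g. a docking score) plus a true
# affinity that only a real experiment would reveal.
candidates = []
for i in range(1000):
    true_affinity = random.gauss(0, 1)
    modelled_score = true_affinity + random.gauss(0, 0.8)  # model is noisy
    candidates.append((f"mol_{i}", modelled_score, true_affinity))

# Modelling narrows 1000 candidates down to the 20 most promising...
shortlist = sorted(candidates, key=lambda c: c[1], reverse=True)[:20]

# ...and the lab runs the real experiment on just those 20.
validated = [(name, true) for name, _, true in shortlist]
```

Even with a fairly noisy model, the shortlist is strongly enriched in genuinely good binders relative to the full pool, which is exactly why modelling and the real experiment complement rather than substitute for each other.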