I think their problems will be rather different from ours. We simulate particle collisions literally at the level of electrons (well, with some parametrisations for the interactions of decay products with detector material); I think it will be a while before we have the computing power to treat cells as anything but black boxes, and of course cells are huge on the scale of particle physics (as are atoms).

That said, I suspect that the major issues will be parallelising their simulation algorithms (for speed) and storing the output (so you don’t have to run it again). Consider that at BaBar we used to think that ten times as much simulated data as real data was a good ratio, and two times was an informal minimum. But at BaBar we had an average of eleven tracks per event. At LHCb the average multiplicity is on the order of thousands, and it has become impossible to generate even as much simulated data as real data, at least in every channel: you run out of both simulation resources and storage space. If you’re simulating a whole brain, you’ve got vastly more objects, even taking atoms as the level of simulation. So you want speed, so your grad students aren’t sitting about for a week waiting for the current simulation to finish before they can tweak one parameter based on the result; and you get speed from parallelising and caching.

“A week” is not hyperbole, by the way; for my thesis I parallelised fits because, with twenty CPUs crunching the same data, I could get a result overnight, and at that rate I did graduate eventually. Running on one CPU, each fit would take two weeks or so, and I’d still be ‘working’ on it (that is, mainly reading webcomics), except of course that the funding would have run out some time ago.
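For what it’s worth, the parallelisation itself was nothing exotic: split the data into chunks, have each CPU compute the negative log-likelihood of its own chunk, and sum the pieces before handing the total to the minimiser. Below is a minimal sketch of that pattern in Python; the toy Gaussian model, the scipy/multiprocessing tooling, and the chunk count are illustrative assumptions, not the actual code from my thesis.

# Minimal data-parallel fit sketch (illustrative, not the original thesis code):
# split the dataset into one chunk per CPU, let each worker compute the
# negative log-likelihood of its chunk, sum the pieces in the parent, and
# hand the total to the minimiser.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def partial_nll(args):
    """Negative log-likelihood of a toy Gaussian model for one chunk of data."""
    chunk, (mu, log_sigma) = args
    sigma = np.exp(log_sigma)          # fit log(sigma) so the search stays in range
    return np.sum(0.5 * ((chunk - mu) / sigma) ** 2) + len(chunk) * log_sigma

def total_nll(params, chunks, pool):
    """Sum the per-chunk NLLs, computed in parallel across the pool."""
    return sum(pool.map(partial_nll, [(c, params) for c in chunks]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(loc=1.2, scale=0.8, size=1_000_000)   # stand-in dataset
    n_workers = 20                        # one chunk per CPU, as in the text
    chunks = np.array_split(data, n_workers)

    with Pool(n_workers) as pool:
        result = minimize(total_nll, x0=[0.0, 0.0],
                          args=(chunks, pool), method="Nelder-Mead")
    mu, log_sigma = result.x
    print(mu, np.exp(log_sigma))          # fitted mean and width

The obvious inefficiency in this sketch is that the chunks are shipped to the workers on every likelihood evaluation; in practice you would send each worker its chunk once and only broadcast the parameters, but the sketch shows the shape of the thing.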