An HTC would come with serious overhead costs too; the cooling is just the flip side of the electricity. An HTC isn't in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject at each opening).
Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems for model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.
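To make the sequential-bottleneck intuition concrete, here is a minimal sketch (my own illustration, not from any particular source) of the circuit value problem, the canonical P-complete problem: each gate can only be evaluated after its inputs are known, so a deep circuit forces a long chain of dependent steps no matter how many processors you throw at it.

```python
# Circuit value problem (CVP): given a Boolean circuit and its inputs,
# compute the output. CVP is P-complete, so (assuming NC != P) it has
# no fast parallel algorithm: gates must be evaluated in dependency
# order, and a depth-d circuit forces d sequential rounds.

def eval_circuit(gates, inputs):
    """gates: list of (op, a, b) in topological order; a and b are
    indices into the growing list of wire values."""
    values = list(inputs)
    for op, a, b in gates:
        if op == "AND":
            values.append(values[a] and values[b])
        elif op == "OR":
            values.append(values[a] or values[b])
        elif op == "NOT":
            values.append(not values[a])
    return values[-1]

# A chain of depth 4: each gate depends on the previous gate's output,
# so no two gates can be evaluated in parallel.
gates = [("AND", 0, 1),   # wire 2 = x0 AND x1
         ("OR",  2, 0),   # wire 3 = wire2 OR x0
         ("NOT", 3, 3),   # wire 4 = NOT wire3
         ("AND", 4, 1)]   # wire 5 = wire4 AND x1
print(eval_circuit(gates, [True, True]))  # -> False
```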
I’m not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel’s report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote. How much of an advantage is running for a year rather than the otherwise available days/weeks? Is it large enough to pay for a year of premium HTC computing power?
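To put rough numbers on the question (my own back-of-the-envelope arithmetic, nothing more): for an algorithm running in O(n^k) time, a constant-factor speedup of 365x only buys a factor of 365^(1/k) in feasible instance size, which is modest for the exponents typical of LP solvers.

```python
# How much bigger an instance can you solve with a 365x speedup,
# if runtime scales as n^k? Answer: 365**(1/k).
for k in (1, 2, 3.5):
    print(f"k={k}: feasible instance size grows by {365 ** (1 / k):.1f}x")
# k=1 gives 365x, k=2 gives ~19x, k=3.5 gives only ~5.4x
```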
an HTC isn't in Iceland and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject at each opening).
Of course, given that the HTC is a fictional device, you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer that had 365x the serial speed of present-day computers, and considering whether there would be economically interesting batch (~1-day-long) computations to run on it.
I’m not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel’s report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote.
These problems have polynomial time complexity; they don't asymptote. Linear programming, for instance, has quadratic worst-case time complexity in the size of the problem instance (and O(n^3.5) time complexity in the number of variables). For problems related to model checking (the circuit value problem, Horn-satisfiability, type inference), approximate solutions don't seem particularly useful.
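For instance, Horn-satisfiability, one of the P-complete problems just mentioned, is solvable exactly by unit propagation, so there is nothing for an approximation to buy you. Here is a rough sketch of my own (a quadratic-time loop for brevity; the standard algorithm with proper clause indexing runs in linear time):

```python
# Horn-SAT: every clause has at most one positive literal. It is
# decidable exactly in polynomial time by unit propagation -- yet it is
# P-complete, so (assuming NC != P) the propagation chain cannot be
# parallelized away.

def horn_sat(clauses):
    """clauses: list of (head, body) meaning body -> head, where head is
    a variable name or None (a goal clause of all-negative literals) and
    body is a set of variable names. Returns True iff satisfiable."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= true_vars:          # all premises currently true
                if head is None:
                    return False           # goal clause violated
                if head not in true_vars:
                    true_vars.add(head)
                    changed = True
    return True

# (x) AND (x -> y) AND (y -> z): satisfiable, forces x, y, z true.
print(horn_sat([("x", set()), ("y", {"x"}), ("z", {"y"})]))    # -> True
# Adding the goal clause (NOT x OR NOT z) makes it unsatisfiable.
print(horn_sat([("x", set()), ("y", {"x"}), ("z", {"y"}),
                (None, {"x", "z"})]))                           # -> False
```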
Of course, given that the HTC is a fictional device, you can always imagine arbitrary issues that make it uneconomical. I was considering the HTC just as a computer that had 365x the serial speed of present-day computers, and considering whether there would be economically interesting batch (~1-day-long) computations to run on it.
Hm, I wasn't, except in the shift to the upload scenario, where the speedup is not from executing regular algorithms (presumably anything capable of executing emulated brains at 365x realtime will have much better serial performance than current CPUs). As an ordinary computer, there are still heat considerations: how is it getting rid of 365x a regular computer's heat even if it's doing 365x the work? And as a pocket universe as specified, heat is an issue; in fact, now that I think about it, Stephen Baxter invented a space-faring alien race in his Ring hard-SF universe which lives inside tiny pocket universes as the ultimate in heat insulation.
These problems have polynomial time complexity; they don't asymptote. Linear programming, for instance, has quadratic worst-case time complexity in the size of the problem instance (and O(n^3.5) time complexity in the number of variables).
I was referring to the quality of the solution produced by the approximating algorithms.
For problems related to model checking (circuit value problem, Horn-satisfiability, type inference) approximate solutions don’t seem particularly useful.
Quickly googling, there seems to be plenty of work on approximate solutions and approaches in model checking; for example http://cs5824.userapi.com/u11728334/docs/77a8b8880f48/Bernhard_Steffen_Verification_Model_Checking_an.pdf includes a paper:

In this paper, we propose an approximation method to verify quantitative properties on discrete Markov chains. We give a randomized algorithm to approximate the probability that a property expressed by some positive LTL formula is satisfied with high confidence by a probabilistic system. Our randomized algorithm requires only a succinct representation of the system and is based on an execution sampling method. We also present an implementation and a few classical examples to demonstrate the effectiveness of our approach.

I'll admit I don't know much about the model checking field, though.