Sorry, I was using “high-latency clusters” to refer to heterogeneous at-home consumer hardware networked over WANs, as the term is sometimes used in this field. The problem isn’t always latency (although for some workloads it is), but rather efficiency. Consumer hardware is simply not energy-efficient for most categories of scientific work. The typical computer plugged into such a system is not going to have a top-of-the-line GTX 1080 or Titan X with lots of RAM. At best it will be a gaming system optimized for a different use case, one that probably trades off energy efficiency at peak load in favor of lowering idle power draw. It almost certainly doesn’t have the right hardware for the particular workload.

SETI@Home, for example, is an ideal use case for high-latency clusters, and by some metrics is one of the most powerful ‘supercomputers’ in existence. However, it has also been estimated that the entire network could be replaced by a single rack of FPGAs processing in real time at the source. SETI@Home and related projects work because the computation is “free.” But as soon as you start charging for the use of your computer equipment, it stops making any kind of economic sense.
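To put rough numbers on the efficiency gap, here is an illustrative back-of-envelope sketch. All of the figures are assumptions for the sake of argument, not measurements of any real device:

    # Back-of-envelope performance-per-watt comparison (illustrative numbers only).
    # Every figure below is an assumption chosen to show the shape of the argument,
    # not a benchmark of actual hardware.

    consumer_gpu = {
        "name": "typical gaming GPU in a volunteer's PC",
        "gflops": 6000,   # assumed sustained throughput on the workload
        "watts": 300,     # assumed whole-system power draw under load
    }

    purpose_built = {
        "name": "FPGA board processing at the source",
        "gflops": 4000,   # assumed throughput on the same fixed workload
        "watts": 40,      # assumed power draw for a purpose-built board
    }

    def gflops_per_watt(node):
        # Energy efficiency: useful work delivered per watt consumed.
        return node["gflops"] / node["watts"]

    for node in (consumer_gpu, purpose_built):
        print(f'{node["name"]}: {gflops_per_watt(node):.1f} GFLOPS/W')

    # With numbers in this ballpark, the purpose-built hardware is several times
    # more energy-efficient. That is the crux: volunteer compute only pencils out
    # when the electricity is donated, not once someone has to pay for it.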
SETI@Home, for example, is an ideal use case for high-latency clusters, and by some metrics is one of the most powerful ‘supercomputers’ in existence. However, it has also been estimated that the entire network could be replaced by a single rack of FPGAs processing in real time at the source.
I would be interested in a cite on that estimate.
Personal conversation with SETI.