This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.
Unfortunately the paper is rather short, and I haven’t been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.
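To get a feel for how a linear bound like that can arise at all, here is a minimal sketch of my own (not the paper's construction; the function name and the sequential simulation of the parallel steps are mine): if you grant one tiny processing element per output entry of an n×n product, every entry is just an n-step dot product running in lockstep with the others, so the whole multiplication finishes in O(n) parallel steps at the cost of O(n^2) hardware.

# Hedged illustration, not taken from the paper: one "cell" per output entry
# of C = A * B. At each time step k, every cell (i, j) adds one term A[i][k] * B[k][j].
# In hardware all n^2 cells act simultaneously; the inner loops here only
# simulate that parallelism, so the parallel step count is the outer loop: n.

def parallel_matmul_steps(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for k in range(n):              # n parallel time steps
        for i in range(n):          # these two loops stand in for the
            for j in range(n):      # n^2 cells working at once
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_matmul_steps(A, B))  # [[19, 22], [43, 50]]

The speedup comes entirely from charging the hardware budget rather than the algorithm, which is worth keeping in mind when reading the claimed bounds.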
A quick glance seems to indicate that they are achieving these linear-time algorithms through massive parallelization. This is “cheating” because to do a linear-time sort of n elements, you need O(n) processing units. While they seem to be arguing that this is acceptable because processing is becoming more and more parallel, the argument breaks down for large n. One can easily sort a billion elements in O(n log n) time with a traditional algorithm on a single processor; for their algorithm to sort such a list in O(n) time, they need on the order of a billion processing units (times some constant factor).
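For what it’s worth, the textbook way to hit that bound is odd-even transposition sort: with a compare-exchange unit attached to every adjacent pair, each round is a single parallel step, and n rounds sort n elements. A minimal Python sketch of my own (almost certainly not the paper’s exact construction) that simulates the rounds sequentially:

# Odd-even transposition sort: n rounds, alternating between even-indexed
# and odd-indexed adjacent pairs. With one compare-exchange unit per pair,
# every round is one parallel step, so the parallel running time is O(n) --
# but only because O(n) hardware is doing the work.

def odd_even_transposition_sort(values):
    a = list(values)
    n = len(a)
    for r in range(n):                    # n rounds overall
        start = r % 2                     # alternate even/odd pairs each round
        # In hardware all of these compare-exchanges fire simultaneously;
        # this loop only simulates them one after another.
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]

In a cellular model like RALA those compare-exchange units would presumably be physical cells, so the hardware cost grows with the input, which is exactly the objection above.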
I’m also vaguely perplexed by their basic argument. They want programming tools and computational models that are closer to the metal, to take advantage of the features of new machines. This ignores the fact that the current abstractions exist not just for historical reasons, but because they are easy to reason about.
This is all from a fairly cursory read of their paper, however, so take it with a grain of salt.
It takes O(n) memory units just to store a list of size n, so why should a computer have asymptotically more memory units than processing units? You don’t get to assume an infinitely parallel machine, but assuming an O(n)-parallel one is only reasonable.
My first impression of the paper is: We can already do this, it’s called an FPGA, and the reason we don’t use them everywhere is that they’re hard to program for.
Interesting. I would like to see a sanity check from someone knowledgeable in either electrical engineering or computer science; there are two things which concern me:
Is it physically plausible?
Is it usable?
Edit: I see the question is already answered.