Are math proofs useful at all for writing better algorithms? I saw on Reddit recently that they proved Batchelor’s Law in 3D, the core idea of which seems to be using stochastic assumptions to prove it cannot be violated. The Quanta article does not seem to contain a link to the paper, which is weird.
Batchelor’s Law is the experimentally-observed fact that turbulence occurs at a specific ratio across scales, which is to say when you zoom in on a small chunk of the turbulence it looks remarkably like all of the turbulence, and so on. Something something fractals something.
Looking up the relationship between proofs and algorithms mostly goes to proofs about specific algorithms, and sometimes using algorithms as a form of proof; but what I am after is whether a pure-math proof like the above one can be mined for useful information about how to build an algorithm in the first place. I have read elsewhere that algorithmic efficiency is about problem information, and this makes intuitive sense to me; but what kind of information am I really getting out of mathematical proofs, assuming I can understand them?
I don’t suppose there’s a list somewhere that handily matches tricks for proving things in mathematics to tricks for constructing algorithms in computer science?
A proof may show that an algorithm works. If the proof is correct*, this may demonstrate that the algorithm is robust. (Though you really want a proof about an implementation of the algorithm, which is a program.)
*A proof that a service will never go down which relies on assumptions with the implication “there are no extreme solar storms” may not be a sufficient safeguard against the possibility that the service will go down if there is an extreme solar storm. Less extremely, perhaps low latency might be proved to hold, as long as the internet doesn’t go down.
How are algorithms made, and how can proofs improve/be incorporated into that process?
Given a problem, you can try and solve it (1). You can guess (2). You can try (one or more) different things and just see if they work (3).
1 and 2 can come apart, and that’s where checking becomes essential. A proof that the method you’re using goes anywhere (fast) can be useful there.
Let’s take a task:
Sorting. It can be solved by:
1. Taking a smaller instance, solving that (and paying attention to the process), then extracting the process and seeing how well it generalizes.
2. Handling the problem itself directly.
3. Doing something, then seeing if it worked.
2 and 3 can come apart:
At its worst, 3 can look like Bogosort. Though that process can be improved: look at the first two elements. Are they sorted? No: shuffle them. Look at the next two elements...
4! = 24, twenty-four permutations of 4 elements. The sorting so far has eliminated most of them, leaving only the six in which each adjacent pair is in order (the quick brute-force check below confirms the count):
1, 2, 3, 4
1, 3, 2, 4
1, 4, 2, 3
2, 3, 1, 4
2, 4, 1, 3
3, 4, 1, 2
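To double-check that count, here is a quick brute-force sketch (in Python, which is an assumption on my part; the post itself has no code) that enumerates all 24 orderings and keeps only those where both adjacent pairs are already in order:

```
# Enumerate all 4! = 24 orderings and keep the ones that survive
# the "sort each adjacent pair" pass described above.
from itertools import permutations

survivors = [p for p in permutations([1, 2, 3, 4])
             if p[0] < p[1] and p[2] < p[3]]

print(len(survivors))  # 6
for p in survivors:
    print(p)
```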
Now all that’s needed is a method of shuffling that doesn’t make things less orderly… And eventually Mergesort may be invented.
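A minimal sketch of where that line of thought ends up (this is just textbook merge sort, not anything from the original post): once the small runs are in order, “shuffling that doesn’t make things less orderly” is merging, and merging the merged runs sorts the whole thing.

```
def merge(left, right):
    """Combine two already-sorted lists without ever un-sorting anything."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(xs):
    """Split into trivially-sorted pieces, then merge them back together."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

print(merge_sort([3, 1, 4, 2]))  # [1, 2, 3, 4]
```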
In the extreme, 3 may be ‘automated’:
programs write programs and test them to see if they do what’s needed (or a guesser gets thrown at a tester, to ‘crack the password’), evolutionary algorithms being the classic example.
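A toy sketch of that guess-and-test loop in the evolutionary style (the target string, alphabet, and scoring below are invented for illustration): a guesser mutates its best guess so far, a tester scores each guess, and only non-worsening guesses are kept.

```
import random
import string

TARGET = "sorted!"  # made-up 'password' that only the tester knows
ALPHABET = string.ascii_lowercase + "!"

def score(guess):
    """Tester: count how many characters match the target."""
    return sum(a == b for a, b in zip(guess, TARGET))

def mutate(guess):
    """Guesser: copy the current best guess, changing one random character."""
    i = random.randrange(len(guess))
    return guess[:i] + random.choice(ALPHABET) + guess[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while score(best) < len(TARGET):
    challenger = mutate(best)
    if score(challenger) >= score(best):  # keep anything at least as good
        best = challenger

print(best)  # eventually: sorted!
```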
The post you linked to (algorithmic efficiency is about problem information): the knowledge that method X works best when conditions Y are met, which is what a polyalgorithmic approach exploits? That knowledge might come from proofs.
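A hypothetical sketch of the polyalgorithmic idea (the condition check, threshold, and choice of methods below are illustrative, not from the linked post): dispatch to method X when a cheaply checkable condition Y holds, where “X is fast under Y” is exactly the kind of fact a proof can supply — for instance, insertion sort provably runs in O(n + inversions), so it wins on nearly-sorted input.

```
def nearly_sorted(xs, max_descents=8):
    """Condition Y (illustrative): few adjacent out-of-order pairs."""
    return sum(a > b for a, b in zip(xs, xs[1:])) <= max_descents

def insertion_sort(xs):
    """Method X: O(n + inversions), so fast when the input is nearly sorted."""
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def poly_sort(xs):
    """Pick the method whose (proved) preconditions the input satisfies."""
    if len(xs) < 32 or nearly_sorted(xs):
        return insertion_sort(xs)
    return sorted(xs)  # general-purpose fallback

print(poly_sort([1, 2, 4, 3, 5]))  # [1, 2, 3, 4, 5]
```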