Your statement “This means you can’t use it … end of story” is simply objectively wrong.
Consider the following situation, which is analogous but easier to work with. You have a (monochrome) image, represented in the usual way as an array of pixels with values between (say) 0 and 255. Unfortunately, your sensor is broken in a way somewhat resembling these telescopes’ phase measurements: every row and every column of the sensor has a randomly chosen pixel-value offset between 0 and 255. So instead of the correct pixel(r,c) value in row r and column c, your sensor reports pixel(r,c) + rowerror(r) + colerror(c), reduced mod 256 so it stays in the 0–255 range.
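In code, that corruption looks something like the following (an illustrative NumPy sketch; the function name and details are just for this example):

```python
import numpy as np

def corrupt(image, seed=None):
    """Simulate the broken sensor: one random offset per row, one per
    column, with all arithmetic mod 256."""
    rng = np.random.default_rng(seed)
    rows, cols = image.shape
    rowerror = rng.integers(0, 256, size=rows)
    colerror = rng.integers(0, 256, size=cols)
    # Broadcasting adds rowerror[r] + colerror[c] to every pixel (r, c).
    return (image.astype(int) + rowerror[:, None] + colerror[None, :]) % 256
```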
Either the row errors or the column errors alone would suffice to make every individual pixel value, as far as its own measurement goes, completely unknown. Just like the phases of the individual interferometric measurements. BUT the errors are consistent across each row and each column, rather than being independent for each pixel, just as the interferometric errors are consistent for each telescope.
So, what happens if you take this hopelessly corrupted input image, where each individual pixel (considered apart from all the others) is completely random, and indeed each pair of pixels (considered apart from all the others) is completely random, and apply a simple-minded reconstruction algorithm? I tried it. Here’s the algorithm I used: for each row in turn, find the offset that minimizes “weirdness” (defined in a moment) and apply it; do likewise for each column in turn; repeat until the “weirdness” stops decreasing. “Weirdness” means the sum of squared differences between adjacent pixels, so I’m exploiting the fact that real-world images have a bit of spatial coherence. (A code sketch of this procedure appears after the results below.) Here’s the result:
On the left we have a 100×100 portion of a photo. (I used a fairly small bit because my algorithm is super-inefficient and I implemented it super-inefficiently.) In the middle we have the same image with all those row and column errors. To reiterate: every row is offset (mod 256) by a random 8-bit value, and every column is offset (mod 256) by a random 8-bit value, so every pair of pixel values, considered on its own, is uniformly randomly distributed and contains zero information. But because of the relationships between these errors, even the incredibly stupid algorithm I described produces the image on the right; after 26 passes you can already pretty much make out what’s in the image. At this point, individual row/column updates of the kind described above have stopped improving the “weirdness”.
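For concreteness, here’s roughly what that procedure looks like in Python. Treat it as an illustrative sketch rather than my exact code; in particular, recomputing the full “weirdness” for all 256 candidate offsets of every row and column is wildly wasteful (only the terms touching the shifted line actually change), which fits the “super-inefficient” disclaimer above.

```python
def weirdness(img):
    """Sum of squared differences of horizontally and vertically adjacent pixels."""
    dh = img[:, 1:] - img[:, :-1]
    dv = img[1:, :] - img[:-1, :]
    return int((dh * dh).sum() + (dv * dv).sum())

def apply_offset(img, idx, off, axis):
    """Shift one row (axis=0) or column (axis=1) by off, mod 256, in place."""
    if axis == 0:
        img[idx, :] = (img[idx, :] + off) % 256
    else:
        img[:, idx] = (img[:, idx] + off) % 256

def best_offset(img, idx, axis):
    """Brute-force the offset in 0..255 that minimizes weirdness for one line."""
    best_off, best_w = 0, weirdness(img)   # off = 0 means "leave it alone"
    for off in range(1, 256):
        trial = img.copy()
        apply_offset(trial, idx, off, axis)
        w = weirdness(trial)
        if w < best_w:
            best_off, best_w = off, w
    return best_off

def reconstruct(img):
    """Coordinate descent: re-offset each row, then each column, and repeat
    until a full pass no longer decreases the weirdness.
    img: 2-D int array of values in 0..255 (e.g. the output of corrupt above)."""
    img = img.copy()
    w = weirdness(img)
    while True:
        for axis in (0, 1):
            for idx in range(img.shape[axis]):
                apply_offset(img, idx, best_offset(img, idx, axis), axis)
        new_w = weirdness(img)
        if new_w >= w:          # no pass-level improvement: stop
            return img
        w = new_w
```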
Not satisfied? I tweaked the algorithm so it’s still stupid, but slightly less so: on every iteration past the 10th it also considers contiguous blocks of rows or columns that either start at the first or end at the last, and tries changing the offset of the whole block in unison (a code sketch of this tweak follows below). The idea is to help it escape local minima where an entire block is wrongly offset, so that changing a single row or column at the block’s edge merely moves the error around without reducing it overall. This helps a bit:
though some of the edges in the image are still giving it grief. I can see a number of ways to improve the results further and am confident that I could make a better algorithm that almost always gets the image exactly right (up to a possible offset in all the pixel values) for real pictures, but I think the above suffices to prove my point: although each individual measurement is completely random because of the row and column errors, there is still plenty of information there to reconstruct the image from.
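For completeness, the block tweak might look something like this (again an illustrative sketch, reusing `weirdness` from the code above; in the full loop it would run as an extra step on every iteration past the 10th):

```python
def apply_block_offset(img, lo, hi, off, axis):
    """Shift the whole block of rows (axis=0) or columns (axis=1)
    with indices lo..hi-1 by off, mod 256, in place."""
    if axis == 0:
        img[lo:hi, :] = (img[lo:hi, :] + off) % 256
    else:
        img[:, lo:hi] = (img[:, lo:hi] + off) % 256

def block_pass(img):
    """For every block that starts at the first row/column or ends at the
    last, brute-force the unison offset that most reduces weirdness."""
    for axis in (0, 1):
        n = img.shape[axis]
        for k in range(1, n):                    # block size (k = n would just shift everything)
            for lo, hi in ((0, k), (n - k, n)):  # prefix block and suffix block
                best_off, best_w = 0, weirdness(img)
                for off in range(1, 256):
                    trial = img.copy()
                    apply_block_offset(trial, lo, hi, off, axis)
                    w = weirdness(trial)
                    if w < best_w:
                        best_off, best_w = off, w
                if best_off:
                    apply_block_offset(img, lo, hi, best_off, axis)
    return img
```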