I haven’t based my career on these methods. I don’t know where you get that idea from.
[EDITED to add:] Oh, maybe I do know. Perhaps you’re reasoning as follows: “This stuff is obviously wrong. Gareth is defending it. The only possible explanation is that he has some big emotional investment in its rightness. The most likely reason for that is that he does it for a living. Perhaps he’s even on the LIGO or EHT project and is taking my criticisms personally.” So, for the avoidance of doubt: my only emotional investment in this is that bad science (particularly proselytizing bad science) makes me sad; I have no involvement in the LIGO or EHT project, or any other similar Big Science project, nor have I ever had; my career is in no way based on image reconstruction algorithms like this one. (On the other hand: I have from time to time worked on slightly-related things; e.g., for a while I worked with phase reconstruction algorithms for digital holograms, which is less like this than it might sound but not 100% unlike it, and I do some image processing algorithm work in my current job. But if all the stuff EHT is doing, including their image reconstruction techniques, turned out to be 100% bullshit, the impact on my career would be zero.)
----
The method successfully takes a real image (I literally grabbed the first thing I found in my temp directory and cropped a 100x100 piece from it), corrupted with errors of the kind I described (which really truly do mean that the joint probability distribution of the values of any pair of pixels is uniform), and yields something clearly recognizable as a mildly messed-up version of the original image.
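For concreteness, here's the shape of the corruption as a minimal sketch (assuming the per-row/per-column offset model I described upthread; this is illustrative, not my actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(image):
    """Add an independent uniform random offset to every row and to every
    column of an 8-bit grayscale image, wrapping around mod 256.

    With independent uniform offsets, every corrupted pixel value is
    uniformly distributed, and so is the joint distribution of the values
    of any pair of pixels: no pixel, and no pair of pixels, carries any
    information about the original image on its own.
    """
    h, w = image.shape
    row_offsets = rng.integers(0, 256, size=(h, 1))
    col_offsets = rng.integers(0, 256, size=(1, w))
    return ((image.astype(np.int64) + row_offsets + col_offsets) % 256).astype(np.uint8)
```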
So … What do you mean by “invalid”? I mean, you could mean two almost-opposite things. (1) You think I’m cheating, so any success in the reconstruction just means that the data aren’t as badly corrupted as I say. (2) You think I’m failing, so when I say I’ve done the reconstruction with some success you disagree and think my output is as garbagey as the input looks and contains no real information about the (original, uncorrupted) image. (Maybe there are other possibilities too, but those seem the obvious ones.)
I’m happy to address either of those claims, but they demand different sorts of addressing (for #1 I guess I’d provide the actual code and input data, and some calculations showing that my claim about how random the pixel values are is true, making it as easy as possible for you to replicate my results if you care; for #2 I’d put a little more effort into the algorithm so that its reconstructions become more obviously good enough).
Just for fun, I improved the algorithm a bit. Here’s what it does now:
Still not perfect: there's some residual banding that is never going to be 100% removable (though it could be made much better with more understanding of the statistics of typical images), and something at the right-hand side that the algorithm seems as if it ought to be handling better; I'm not quite sure why it doesn't. But I don't feel like taking a lot more time to debug it.
You’re seeing its performance on the same input image that I used for testing it, so it’s always possible that there’s some overfitting. I doubt there’s much, though; everything it does is still rather generic.
(In case anyone cares, the changes I made are as follows; a rough sketch of step 1 in code follows the list.
1. At a couple of points it does a different sort of incremental row-by-row/column-by-column optimization, which tries to make each row/column in succession differ as little as possible from (only) its predecessor, and doesn't treat wrapping around at 255/0 as indicating an extra-big change.
2. There's an annoying way for the algorithm to get stuck in a local minimum: there's a row discontinuity and a column discontinuity, and fixing either one would push some pixels past the 255/0 boundary because of the other. To address this, there's a step that considers row/column pairs and tries e.g. a small increase in pixel values to the right of the given column together with a small decrease in pixel values below the given row. Empirically, this seems to help get out of those local minima.
3. The regularizer (i.e., the "weirdness"-penalizing part of the algorithm) now tries not only to make adjacent pixels be close in value, but also (much less vigorously) to make some other nearby pixels be close in value.)
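Here's roughly what step 1 looks like in code (a simplified sketch, not the exact code I ran; names are illustrative):

```python
import numpy as np

def circ_diff(a, b):
    """Difference a - b mapped into [-128, 128), so that wrapping past the
    255/0 boundary doesn't count as an extra-big change."""
    a = np.asarray(a, dtype=np.int64)
    b = np.asarray(b, dtype=np.int64)
    return (a - b + 128) % 256 - 128

def align_rows_to_predecessor(img):
    """One pass of the incremental optimization in step 1: shift each row
    (mod 256) so that it differs as little as possible from the row just
    above it, with differences measured circularly."""
    out = np.asarray(img, dtype=np.int64).copy()
    for i in range(1, out.shape[0]):
        # Brute force over all 256 candidate offsets for this row;
        # cheap enough for a 100x100 image.
        costs = [np.abs(circ_diff(out[i] + s, out[i - 1])).sum()
                 for s in range(256)]
        out[i] = (out[i] + int(np.argmin(costs))) % 256
    return out.astype(np.uint8)

def align_cols_to_predecessor(img):
    # The column pass is the same thing on the transpose.
    return align_rows_to_predecessor(np.asarray(img).T).T
```

Note that this only pins each row down relative to its neighbour, so a global brightness offset remains; that particular ambiguity is genuinely unrecoverable from the corrupted data.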
I am fascinated by the amount of effort you put into writing a comment.
I understand the call of duty and sometimes spend 30 minutes writing and editing a short comment. But designing an algorithm to prove a concept, and writing an application… wow!
(Could you perhaps expand the comment into a short article? Like, remove all the quarreling and just keep the core: "this is a simple algorithm that achieves something seemingly impossible"; perhaps also with some pseudocode. Because this is interesting per se, in my opinion, even for someone who hasn't read this thread.)
Thanks for the kind words! I guess it might be worth making into an article. (And I agree that if so it would be best to make it more standalone and less debate-y, though it might be worth retaining a little context.) I’m on holiday right now so no guarantees :-).
As I read it, by "invalid" they mean neither (1) nor (2) as you suggested, but (3): your reconstruction process assumes something which is itself the cause of most of the non-randomness in your output, and it would produce a plausible-looking image when run on any random set of pixels.
For the example you gave, (3) is clearly not true: there's no way that any random set of pixels would produce something so correlated with the original image when the reconstruction algorithm doesn't itself embed that image. But as far as I know LIGO doesn't know the original image, so the fact that they get something structured-looking out of noise isn't meaningful? Or at least that's how I interpret nixtaken's argument; this is really not my area of expertise.
Kirsten is claiming that my algorithm is invalid, even though (so it seems to me) it demonstrably does a decent job of reconstructing the original image I gave it despite the corruption of the rows and columns. Of course she can still claim that the EHT team's reconstruction algorithm doesn't have that property, and that it only gives plausible output images because it's been constructed in such a way that it can't do otherwise. But at the moment I'm not arguing that the EHT team's reconstruction algorithm is any good; I'm arguing only that one specific thing Kirsten claimed about it is flatly untrue: namely, that if the phases are corrupted in anything like the way Bouman describes then there is literally no way to get any actual phase information from the data. The point of the image reconstruction I'm demonstrating here is that you can have corruption with a similar sort of pattern, just as severe, and still be able to do a lot of reconstruction: although the individual measurements' phases are hopelessly corrupt (in my example: the individual pixels' values are hopelessly corrupt), there is still good information to be had about their relationships.
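To spell out what "good information about their relationships" means in my toy example (again assuming the per-row/per-column offset corruption sketched earlier): any combination of pixel values in which the offsets cancel survives the corruption exactly. Double differences are the simplest such combination:

```python
import numpy as np

rng = np.random.default_rng(1)
orig = rng.integers(0, 256, size=(100, 100))
v = (orig
     + rng.integers(0, 256, size=(100, 1))       # per-row offsets
     + rng.integers(0, 256, size=(1, 100))) % 256  # per-column offsets

def double_diff(img, r1, r2, c1, c2):
    # Row offsets cancel inside each bracket (each uses a single row), and
    # column offsets cancel between the brackets (both use the same columns).
    return ((img[r1, c1] - img[r1, c2]) - (img[r2, c1] - img[r2, c2])) % 256

# The double difference comes through the corruption untouched, even
# though every individual pixel value is uniformly random.
assert double_diff(v, 3, 70, 5, 42) == double_diff(orig, 3, 70, 5, 42)
```

A reconstruction algorithm is, in effect, stitching a whole image back together out of exactly this kind of offset-immune relationship.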
[EDITED to add:] … Or maybe I misunderstood? Perhaps by "This method is invalid" she means that anything with the same sort of shape as what I'm doing here is bad, even if it demonstrably gives good results. If so, then I guess my problem is that she hasn't enabled me to understand why she considers it "invalid". Her objections all seem to me like science-by-slogan: describe something in a way that makes it sound silly, and you've shown it's no good. Unfortunately, all sorts of things that can be made to sound silly turn out not to be silly at all, so when faced with a claim that something that demonstrably works is no good, I'm going to need more than slogans to convince me.