The pattern painted onto white paper can’t be seen because the image is also white. If the white image is printed onto paper that has parts which aren’t white, of course it’s going to be more visible. Adding noise would be the equivalent of taking the image already printed onto white paper and just adding random static on top of it. That would make it even harder to see.
What you’re saying just makes no sense to me. Adding noise is just as likely to increase the existing signal as it is to decrease it. Or to make a signal appear that isn’t there at all. I can’t see how it’s doing anything to help detect the signal.
What you’re missing is that, if the signal is below the detection threshold, there is no loss if the noise pushes it farther below the detection threshold, whereas there is a gain when the noise pushes the signal above the detection threshold. Thus the noise increases sensitivity, at the cost of accuracy. (And since a lot of sensory information is redundant, the loss of accuracy is easy to work around.)
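A toy sketch might make the threshold point concrete. The threshold, signal values, and noise level below are made-up numbers, not anything from the thread; the point is only that a hard detector sees nothing from a sub-threshold signal, whereas with added noise the detection rate rises and falls with the signal, so averaging many noisy readings recovers its shape:

```python
import random

THRESHOLD = 1.0  # illustrative detection threshold

# A weak, sub-threshold "signal": it never crosses the threshold on its own.
signal = [0.2, 0.5, 0.8, 0.5, 0.2]

def detect(x, noise_level, trials=10000):
    """Fraction of trials in which x plus uniform noise crosses the threshold."""
    hits = 0
    for _ in range(trials):
        if x + random.uniform(-noise_level, noise_level) > THRESHOLD:
            hits += 1
    return hits / trials

# Without noise the detector is blind to the whole signal:
print([1 if x > THRESHOLD else 0 for x in signal])            # [0, 0, 0, 0, 0]

# With noise, the crossing rate tracks the signal (roughly 0.1, 0.25, 0.4, ...),
# so sensitivity goes up at the cost of accuracy on any single reading.
print([round(detect(x, noise_level=1.0), 2) for x in signal])
```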
In which case, you could view the image even better if you just changed the whole backdrop to gray, instead of just random parts of it. This would correspond to the “using the same knowledge to produce a superior algorithm” part of the article.
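To put that suggestion in the same toy terms as the noise sketch above (again with made-up constants): if you know the signal sits some roughly fixed distance below the threshold, a single deterministic offset moves it into the detectable range in one pass, with no averaging over random trials needed.

```python
THRESHOLD = 1.0  # same illustrative threshold as the noise sketch
OFFSET = 0.6     # deterministic "gray backdrop" shift applied everywhere

signal = [0.2, 0.5, 0.8, 0.5, 0.2]

# One exact reading per value; the offset exposes the above-average parts
# of the signal deterministically instead of statistically.
print([round(x + OFFSET, 2) for x in signal])                 # [0.8, 1.1, 1.4, 1.1, 0.8]
print([1 if x + OFFSET > THRESHOLD else 0 for x in signal])   # [0, 1, 1, 1, 0]
```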
As I understood it, the article specifically did not state that you can’t ever improve a deterministic algorithm by adding randomness—only that this is a sign that your algorithm is crap, not that the problem fundamentally requires randomness. There should always exist a different deterministic algorithm which is more accurate than your randomized algorithm (at least in theory—in practice, that algorithm might have an unacceptable runtime, or it might require even more knowledge than you have).