This class of theory goes by the name of “Tired light”. It seems as if every theory of this kind precise enough to make definite predictions has been pretty clearly falsified, but I’m not an expert on this stuff.
A relation of the form f = f0 - constant*distance will send the frequency to zero (and then out the other side) once the distances get large enough. You probably don’t want that.
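A quick numerical sketch of that linear relation; the values of f0 and k here are illustrative placeholders, not fitted to any data:

```python
# The linear tired-light relation f(d) = f0 - k*d from the comment above.
# f0 and k are illustrative placeholder values, not fitted to observations.
f0 = 5.0e14            # emitted frequency, Hz (roughly visible light)
k = f0 / 1.3e26        # chosen so the zero crossing lands near 1.3e26 m (~14 Gly)

def frequency(d):
    """Frequency after the light has travelled a distance d, in metres."""
    return f0 - k * d

d_zero = f0 / k        # distance at which the frequency reaches zero
print(f"frequency hits zero at d = {d_zero:.1e} m")
print(f"and keeps going: f(1.5 * d_zero) = {frequency(1.5 * d_zero):.1e} Hz")  # negative
```

The second print shows the "out the other side" behaviour: nothing in the relation itself stops the frequency at zero.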
In the limit, yes. I have no prior that says this is a problem.
Also, as one approaches such a limit, I wouldn’t be surprised to see other terms come into play.
Note that no actual infinite limits are required. Just large but finite distances.
I have no prior that says this is a problem
I think you should have. One of three problems, depending on what you expect to happen. (1) If you expect something more complicated to happen for large distances, your theory is more complicated than it initially looks. Doesn’t your prior favour simpler theories? (2) If you expect the frequency to pass through zero and continue, your theory will have to (2a) explain what negative frequencies actually mean, why frequency -f isn’t just the same as frequency +f with different phase, why we never see anything with negative frequency, etc., or else (2b) if instead it says that negative frequencies are the same as positive, then explain what happens to the frequency after it crosses zero (gets more negative? then -f isn’t the same as +f after all. gets less negative? then what we actually end up with is a really weird discontinuity at the zero crossing). Again, all this stuff is extra complexity. (3) If you expect the issue not to arise because nature somehow ensures that light never travels far enough for the frequency to reach zero, then your theory needs to explain how that happens. Extra complexity again.
Also, as one approaches such a limit, I wouldn’t be surprised to see other terms come into play.
This sounds like case 1 above.
I expect infinite complexity, but pick the simplest model to account for the currently known data. Keep on expanding the range of applicability, and I expect to see new effects that aren’t accounted for in models validated over a more restricted range of data.
Reality is more complicated than it looks, and I don’t expect that to end.
No, I don’t expect negative frequencies: as frequency goes down, energy goes down, and I expect quantum effects to take hold as energy approaches zero. You can call that “extra complexity”, but we already know there are quantum effects in general.
OK. Does that stop you regarding a theory as more credible when it’s simpler (for equal fidelity to observed evidence)?
It’s more credible over the range of data where its fidelity has been demonstrated. I expect extrapolations outside that range to have less fidelity.
Everything is quantum effects. Do you have more specific expectations?
No.
I don’t have some grand unified theory.
I just observe that a lot of cosmology seems to be riding on the theory that the red shift is caused by an expanding universe.
Note that I ended my first post with questions, not with claims.
What if light just loses energy as it travels, so that the frequency shifts lower?
That seems like a perfectly natural solution. How do we know it isn’t true?
What would be the implications to the current theories if it were true?
I just observe that a lot of cosmology seems to be riding on the theory that the red shift is caused by an expanding universe.
This seems wrong to me. There are at least two independent lines of evidence for the Big Bang theory besides redshifts: isotope abundances (particularly for light elements) and the cosmic background radiation.
What if light just loses energy as it travels, so that the frequency shifts lower?
We would have to abandon our belief in energy conservation. And we would then wonder why energy seems to be conserved exactly in every interaction we can see. Also we would wonder why we see spontaneous redshifts, not spontaneous blueshifts. Every known micro-scale physical process in the universe is reversible [1], and by the CPT theorem, we expect this to be true always. A lot would have to be wrong with our notions of physics to have light “just lose energy.”
That seems like a perfectly natural solution. How do we know it isn’t true?
This solution requires light from distant galaxies to behave in ways totally different from every other physical process we know about—including physical processes in distant galaxies. It seems unnatural to say “the redshift is explained by a totally new physical process, and this process violates a lot of natural laws that hold everywhere else.”
[1] I should say, reversible assuming you also flip the charges and parities. That’s irrelevant here, though, since photons are uncharged and don’t have any special polarization.
Something already does happen for large distances.
That’s an observable fact. It’s redshift.
What causes it?
The standard answer is expansion, which needs inflation and dark energy and an arbitrary multiverse to do that: all things that make the theory more complicated with distance.
Alternatively, what if light doesn’t travel forever?
How would such a reality look if things existed farther away than light could travel?
Is it not exactly what is observed?
What is more complex,
0 frequency photons ceasing to be photons,
or
infinite wavelength photons everywhere never interacting with anything in space expanding faster than c?
If it’s a matter of complexity, the 0 frequency photons ceasing to exist is less complex than there being infinite wavelength photons everywhere.
The thing you need to evaluate the complexity of is an actual theory, with equations and everything, not a vague suggestion that maybe photons lose energy as they travel.
I don’t think the conventional theory says you have infinite-wavelength photons, and I think your thought experiment with your two hands is wrong. Light from an object at the Hubble limit not only never reaches us, but also never reaches (say) a point 1m “inward” from us. It never gets any closer to us. Note that this is not the same as saying that light from 1m less than the Hubble limit never reaches us, which of course is false.
We get arbitrarily long-wavelength photons, if we wait long enough, but we have to wait longer for the longer-wavelength ones and we would have to wait infinitely long to get ones of infinite wavelength.
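A minimal sketch of this, under the simplifying assumption of a constant Hubble parameter (so the Hubble radius c/H never changes; in real cosmology H varies over time). The proper distance r of an inbound photon then obeys dr/dt = H*r - c:

```python
# Inbound photon in uniformly expanding space, assuming constant H.
# Units chosen so c = 1 and H = 1, making the Hubble radius c/H = 1.
c, H = 1.0, 1.0

def arrival_time(r0, dt=1e-4, t_max=20.0):
    """Euler-integrate dr/dt = H*r - c from initial distance r0.
    Returns the arrival time, or None if the photon never reaches us."""
    r, t = r0, 0.0
    while t < t_max:
        r += (H * r - c) * dt
        t += dt
        if r <= 0.0:
            return t
    return None

# Arrival times grow without bound as the source approaches c/H, and light
# emitted exactly at c/H never gets any closer:
for r0 in (0.5, 0.9, 0.99, 1.0):
    print(f"r0 = {r0}: arrival time = {arrival_time(r0)}")
```

The ever-longer travel times are what give the arbitrarily long, but never infinite, received wavelengths described above.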
The actual theory, equation and everything is that distant galaxies do not recede at v = H D (Hubble’s Old Law); instead a photon travels at v = c - H d (let’s call it Hubble’s New Law).
That’s the cause of redshift.
At c/H, in both the old and new law, the frequency of the photon reaches 0.
In the old law, it remains to encounter ever increasing space, getting farther from the things in front of it. What happens at this point is poorly addressed by the standard model, for obvious reasons (it’s ridiculous).
In Hubble’s New Law, the photon has lost all energy, and thus there is no photon.
Old = photons piling up in ever increasing space. Do they redshift into the negative frequency?
New = photons redshift to zero, then are gone.
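For concreteness, a sketch of the two relations as just stated. The frequency rule below, f(d) = f0*(1 - H*d/c), is my own reading of the claim that the frequency reaches 0 at c/H; the comment doesn’t spell out a functional form:

```python
# "Old Law" vs "New Law" from the comment above, in units where c = H = 1
# (so the Hubble distance c/H = 1). The frequency formula is an assumption.
c, H, f0 = 1.0, 1.0, 1.0

def recession_speed(D):
    """Old Law: a galaxy at distance D recedes at v = H*D."""
    return H * D

def photon_speed(d):
    """New Law: a photon that has travelled a distance d moves at v = c - H*d."""
    return c - H * d

def photon_frequency(d):
    """Assumed linear fade: f0 at d = 0, zero at d = c/H."""
    return f0 * (1.0 - H * d / c)

for x in (0.0, 0.5, 1.0):
    print(x, recession_speed(x), photon_speed(x), photon_frequency(x))
# At x = c/H = 1 the old law has recession reach c, while the new law has the
# photon's speed and frequency reach 0 (where, per the comment, it ceases to exist).
```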
The actual theory, equation and everything is [...]
That’s not the actual theory. It’s a tiny fraction of a theory. It’s not even clear that it makes sense. (What exactly is d here? Total distance travelled by the photon since it first came into being, I guess. But what exactly does that mean? For instance, can there be interference between two photons with different values of d, and if so what happens?)
In your theory, photons travel slower than c, the exact speed depending on their “d” value. That’s going to mess up pretty much everything in quantum electrodynamics, so what do you put in its place?
In your theory, photons pop out of existence when their velocity and frequency reach zero. Again, that violates local conservation of energy and CPT invariance and so forth; again, how are you modifying the fundamentals of conventional physics to deal with this?
At c/H, in both the old and new law, the frequency of the photon reaches 0.
But “at c/H” the photons never reach us because their source is receding from us at the speed of light. We never see zero-frequency photons. We do see substantially-lowered-frequency photons (once the light has had long enough to reach us despite the recession).
In the old law, it remains to encounter ever increasing space, getting farther from the things in front of it.
This doesn’t appear to me to be anything like correct. It is getting closer to the things in front of it—apart from ones that are receding from it faster than the speed of light, which are (and remain) far away from it. What’s the problem here supposed to be?
Old = photons piling up in ever increasing space
I repeat: I do not think the conventional theory does say anything like “photons piling up in ever increasing space”. Of course it’s possible that my analysis is wrong; feel free to show me why.
If Hubble’s Law is true, then v = H * D, and when D = c / H, v = c; that is called the Hubble limit. If you go far enough, galaxies will recede faster than the speed of light. That boundary defines the Hubble volume.
In the Big Bang model, once a photon goes that far, the space between it and us is expanding faster than the photon can travel, so it’s basically stuck at the Hubble limit. There would be a bunch just adding up in that case.
In this model (which is not the 1930s Tired Light model), the photon hits a frequency of zero and ceases to exist.
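For scale, the arithmetic behind that limit, using a round illustrative value of H0 = 70 km/s/Mpc:

```python
# Back-of-envelope arithmetic for the Hubble limit D = c / H described above.
# H0 = 70 km/s/Mpc is a round illustrative value, not a precise measurement.
c_km_s = 2.998e5                       # speed of light, km/s
H0 = 70.0                              # Hubble constant, km/s per Mpc
D_limit_Mpc = c_km_s / H0              # distance at which v = H*D reaches c
D_limit_Gly = D_limit_Mpc * 3.262e-3   # 1 Mpc is about 3.262 million light years
print(f"Hubble limit: {D_limit_Mpc:.0f} Mpc (about {D_limit_Gly:.1f} billion ly)")
# Roughly 4300 Mpc, about 14 billion light years; beyond this, recession under
# v = H*D exceeds c, and the region inside is the Hubble volume.
```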
Your “bunch just adding up” link is to a Q&A site’s question where you ask “would they just pile up?” and get the answer “no, not really”. (I paraphrase a little.)
The answer was that they’d have an infinite wavelength (as if that makes sense), so they wouldn’t literally be in a line in front of my face; but they’re there, all right: zero-frequency photons flying at c through space expanding faster than they can travel, forever (according to the expanding theory).
Point is, the frequency redshifts. It will hit zero. That’s consistent with what we observe out there, known as the Hubble limit.