I’m aware of all of this already, but as I said, there seems to be a fairly large gap between this kind of informal explanation of what happens and the actual wire energies that we seem to be able to achieve. Maybe I’m interpreting these energies in the wrong way and we could violate Jacob’s postulated bounds by taking an Ethernet cable and transmitting 40 Gbps of information over a long distance, but I doubt that would actually work.
I’m in a strange situation because while I agree with you that the tile model of a wire is unphysical and very strange, at the same time my intuition says that if you tried to violate Jacob’s bounds by many orders of magnitude, something would go wrong and you wouldn’t be able to do it. If someone presented a toy model which explained why in practice we can only get wire energies down to the amount predicted by the model, while in theory we could lower them by much more, I think that would be quite persuasive.
Maybe I’m interpreting these energies in the wrong way and we could violate Jacob’s postulated bounds by taking an Ethernet cable and transmitting 40 Gbps of information over a long distance, but I doubt that would actually work.
Ethernet cables are twisted pair and will probably never be able to go that fast. You can get above 10 GHz with rigid coax cables, although you still have significant attenuation.
Let’s compute heat loss in a 100 m LDF5-50A, which evidently has 10.9 dB/100 m attenuation at 5 GHz. This is very low in my experience, but it’s what they claim.
Say we put 1 W of signal power at 5 GHz in one side. Because of the 10.9 dB attenuation, we receive 94 mW out the other side, with 906 mW lost to heat.
The Shannon-Hartley theorem says that we can compute the capacity of the wire as $C = B \log_2(1 + S/N)$, where $B$ is the bandwidth, $S$ is the received signal power, and $N$ is the noise power.
Let’s assume Johnson noise. These cables are rated up to 100 °C, so I’ll use that temperature, although it doesn’t make a big difference.
If I plug in 5 GHz for $B$, 94 mW for $S$, and $k_B \cdot (370\,\mathrm{K}) \cdot (5\,\mathrm{GHz}) \approx 2.5 \times 10^{-11}\,\mathrm{W}$ for $N$, then I get a channel capacity of 160 Gbps.
The heat lost is then $(906\,\mathrm{mW})/(160\,\mathrm{Gbps})/(100\,\mathrm{m}) \approx 0.05\,\mathrm{fJ/bit/mm}$. Quite low compared to Jacob’s ~10 fJ/mm “theoretical lower bound.”
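As a quick numeric check, here is the calculation above in a few lines of Python (taking the stated 94 mW received / 906 mW dissipated as given):

```python
import math

k_B = 1.381e-23      # Boltzmann constant, J/K
B = 5e9              # bandwidth, Hz
S = 0.094            # received signal power, W (1 W in, 10.9 dB attenuation)
heat = 0.906         # power dissipated along the cable, W
N = k_B * 370 * B    # Johnson noise power, ~2.5e-11 W

C = B * math.log2(1 + S / N)    # Shannon-Hartley capacity, bits/s
E = heat / C / 1e5              # J/bit/mm, since 100 m = 1e5 mm
print(C, E)                     # ~1.6e11 bit/s, ~5.7e-17 J/bit/mm (~0.05 fJ/bit/mm)
```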
One free parameter is the signal power. The heat loss over the cable is linear in the signal power, while the channel capacity is sublinear, so lowering the signal power reduces the energy cost per bit. It is 10 fJ/bit/mm at about 300 W of input power, quite a lot!
Another is noise power. I assumed Johnson noise, which may be a reasonable assumption for an isolated coax cable, but not for an interconnect on a CPU. Adding an order of magnitude or two to the noise power does not substantially change the final energy cost per bit (0.05 goes to 0.07); however, I doubt even that covers the amount of noise in a CPU interconnect.
Similarly, raising the cable attenuation to 50 dB/100 m does not even double the heat loss per bit. Shannon’s theorem still allows a significant capacity. It’s just a question of whether or not the receiver can read such small signals.
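The same formula makes these parameter claims easy to check; a minimal sweep, reusing the constants from the sketch above:

```python
import math

k_B, B, T = 1.381e-23, 5e9, 370.0
N0 = k_B * T * B    # Johnson noise power, ~2.5e-11 W

def fj_per_bit_mm(p_in, received_fraction=0.094, noise=N0):
    """Heat per bit per mm for the 100 m cable at input power p_in (W)."""
    s = p_in * received_fraction
    c = B * math.log2(1 + s / noise)
    return (p_in - s) / c / 1e5 / 1e-15    # fJ/bit/mm

print(fj_per_bit_mm(1))                    # base case: ~0.06
print(fj_per_bit_mm(300))                  # ~14: the ~10 fJ/bit/mm level needs a few hundred watts
print(fj_per_bit_mm(1, noise=100 * N0))    # 100x the noise: ~0.07
print(fj_per_bit_mm(1, received_fraction=1e-5))    # 50 dB/100 m: ~0.11, under 2x the base case
```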
The reason that typical interconnects in CPUs and the like tend to be in the realm of 10-100 fJ/bit/mm is because of a wide range of engineering constraints, not because there is a theoretical minimum. Feel free to check my numbers of course. I did this pretty quickly.
The heat lost is then [..] 0.05 fJ/bit/mm. Quite low compared to Jacob’s ~10 fJ/mm “theoretical lower bound.”
In the original article I discuss interconnect wire energy, not a “theoretical lower bound” for any wire energy communication method—and immediately point out reversible communication methods (optical, superconducting) that do not dissipate the wire energy.
Coax cable devices seem to use around 1 to 5 fJ/bit/mm at a few W of power, or a few OOM more than your model predicts here—so I’m curious what you think explains that discrepancy, without necessarily disagreeing with the model.
I describe a simple model of wire bit energy for EM wave transmission in coax cable here, which seems physically correct but also predicts a bit energy distance scale somewhat below what is observed.
Active copper cable at 0.5 W for 40G over 15 meters is ~1e-21 J/bit/nm, assuming it actually hits 40G at the max length of 15 m.
I can’t access the linked article, but an active cable is not simple to model because its listed power includes the active components. We are interested in the loss within the wire between the active components.
This source has specs for a passive copper wire capable of up to 40G @ 5 m using <1 W, which works out to ~5e-21 J/bit/nm, or a bit less.
They write <1 W for every length of wire, so all you can say is <5 fJ/mm. You don’t know how much less. They are likely writing <1 W for comparison to active cables that consume more than a watt. Also, these cables seem to have a powered transceiver built in on each end that multiplexes the signal out to four twisted-pair 10G lines.
Compare to 10G from here, which may use up to 5 W to hit up to 10G at 100 m, for ~5e-21 J/bit/nm.
Again, these have a powered transceiver on each end.
So for all of these, all we know is that the sum of the losses of the powered components and the wire itself is of order 1 fJ/mm. Edit: I would guess that the powered components have very low power draw (probably tens of mW) and the majority of the loss is attenuation in the wire.
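For reference, the arithmetic behind the per-bit figures in this sub-thread is just power / bitrate / length, treating the whole link (powered transceivers included) as one black box:

```python
def j_per_bit_nm(power_w, bitrate_bps, length_m):
    """Energy per bit per nm of cable for a whole link treated as a black box."""
    return power_w / bitrate_bps / (length_m * 1e9)

print(j_per_bit_nm(0.5, 40e9, 15))     # active 40G, 15 m, 0.5 W: ~8e-22 (~1e-21)
print(j_per_bit_nm(1.0, 40e9, 5))      # passive 40G, 5 m, <1 W: ~5e-21 (an upper bound)
print(j_per_bit_nm(5.0, 10e9, 100))    # 10G, 100 m, up to 5 W: ~5e-21
```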
The numbers I gave essentially are the theoretical minimum energy loss per bit per mm of that particular cable at that particular signal power. It’s not surprising that cables built from multiple twisted pairs do worse. They’ll have higher attenuation, lower bandwidth, the standard transceivers on either side require larger signals because they have cheaper DAC/ADCs, etc. Also, their error correction is not perfect, and they don’t make full use of their channel capacity. In return, the cables are cheap, flexible, standard, etc.
There’s nothing special about kT/1 nm.
I think this calculation is fairly convincing pending an answer from Jacob. You should probably have just put this calculation at the top of the thread, and then the back-and-forth would probably not have been necessary. The key parameter needed here is the estimate of a realistic attenuation rate for a coaxial cable, which was missing from DaemonicSigil’s original calculation, which was purely information-theoretic.
As an additional note here: if we take the same setup you’re using and treat the input power $x$ as a free parameter, then the energy per bit per distance is given by
$$f(x) = \frac{0.906\,x}{5 \cdot 10^{14} \cdot \log_2\!\left(1 + \frac{0.094\,x}{2.5 \cdot 10^{-11}}\right)}$$
in units of J/bit/mm. This has no global minimum for $x > 0$ because it’s strictly increasing, but we can take a limit to get the theoretical lower bound
$$\lim_{x \to 0} f(x) = 3.34 \cdot 10^{-25},$$
which is much lower than what you calculated, though to achieve this you would be sending information very slowly—indeed, infinitely slowly in the limit $x \to 0$.
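A sketch checking both the monotonicity and the limit numerically (the small-$x$ limit follows from $\log_2(1 + ax) \approx ax/\ln 2$):

```python
import math

a = 0.094 / 2.5e-11    # received SNR per watt of input power

def f(x):
    """Energy per bit per mm (J) at input power x (W), per the formula above."""
    return 0.906 * x / (5e14 * math.log2(1 + a * x))

for x in (1.0, 1e-3, 1e-6, 1e-9):
    print(x, f(x))    # strictly decreasing as x -> 0

print(0.906 * math.log(2) / (5e14 * a))    # analytic limit: ~3.34e-25 J/bit/mm
```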
I am skeptical that steady-state direct current flow attenuation is the entire story (and indeed it seems to underestimate the actual coax cable wire energy of ~1e-21 to 5e-21 J/bit/nm by a few OOM).
For coax cable the transmission is through a transverse (AC) wave that must accelerate a quantity of electrons linearly proportional to the length of the cable. These electrons rather rapidly dissipate this additional drift velocity energy through collisions (resistance), and the entirety of the wave energy is ultimately dissipated.
This seems different than sending continuous DC power through the wire where the electrons have a steady state drift velocity and the only energy required is that to maintain the drift velocity against resistance. For wave propagation the electrons are instead accelerated up from a drift velocity of zero for each bit sent. It’s the difference between the energy required to accelerate a car up to cruising speed and the power required to maintain that speed against friction.
If we take the bit energy to be $E_b$, then there is a natural EM wavelength given by $E_b = hc/\lambda$, so $\lambda = hc/E_b$, which works out to ~1 μm for ~1 eV. Notice that using a lower frequency / longer wavelength seems to allow one to arbitrarily decrease the bit energy distance scale, but it turns out this just increases the dissipative loss.
So an initial estimate of the characteristic bit energy distance scale here is ~1 eV/bit/μm, or ~1e-22 J/bit/nm. But this is obviously an underestimate, as it doesn’t yet include the effect of resistance (and the skin effect) during wave propagation.
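A one-line check of that wavelength scale:

```python
hc = 1239.8    # h*c in eV*nm
E_b = 1.0      # bit energy, eV
print(hc / E_b)    # ~1240 nm: a ~1 eV bit energy corresponds to a ~1.24 um wavelength
```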
The bit energy of one wavelength is implemented through an electron peak drift velocity on the order of $E_b = \frac{1}{2} N_e m_e v_d^2$, where $N_e$ is the number of carrier electrons in one wavelength’s wire section. The relaxation time $\tau$, or mean time between thermal collisions, given a room-temp thermal velocity of ~1e5 m/s and a mean free path of ~40 nm in copper, is $\tau$ ~ 4e-13 s. Meanwhile the inverse frequency, or timespan of one wavelength, is around 4e-15 s for an optical-frequency 1 eV wave, and ~1e-9 s for a more typical (much higher amplitude) gigahertz-frequency wave. So it would seem that resistance is quite significant on these timescales.
Very roughly, the gigahertz 1e-9 s period wave requires about 5 OOM more energy per wavelength due to dissipation, which cancels out the 5 OOM larger distance scale. Each wavelength section loses about half of the invested energy every $\tau$ ~ 4e-13 seconds, so maintaining the bit energy $E_b$ requires an input power of roughly $E_b/\tau$ for $f^{-1}$ seconds, which cancels out the effect of the longer wavelength distance, resulting in a constant bit energy distance scale independent of wavelength/frequency. (Naturally there are many other complex wavelength/frequency-dependent effects, but they can’t improve the bit energy distance scale.)
For a low frequency (long wavelength) with $f^{-1} \gg \tau$:

$$E_b/d \approx \frac{E_b \cdot f^{-1}/\tau}{\lambda} = \frac{E_b}{\tau f \lambda}$$

Substituting $\lambda = c/f$:

$$E_b/d \approx \frac{E_b}{\tau c} \sim 1\,\mathrm{eV}/10\,\mu\mathrm{m} \sim 10^{-23}\,\mathrm{J/bit/nm}$$
If you take the bit energy down to the minimal Landauer limit of ~0.01 eV, this ends up about equivalent to your lower limit, but I don’t think that would realistically propagate.
A real wave propagation probably can’t perfectly transfer the bit energy over longer distances and has other losses (dielectric loss, skin effect, etc.), so vaguely guesstimating around 100x loss would result in ~1e-21 J/bit/nm. The skin effect alone perhaps increases resistance by roughly 10x at gigahertz frequencies. Coax devices also seem constrained to use specific lower gigahertz frequencies and then boost the bitrate through analog encoding; for example, 10-bit analog encoding increases the bitrate by 10x at the same frequency but requires about 1024x more power, so it is 2 OOM less efficient per bit.
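The analog-encoding penalty is just Shannon arithmetic: at a fixed bandwidth, packing $b$ bits per symbol needs an SNR of about $2^b - 1$, so 10 bits per symbol costs ~1024x the power of 1 bit per symbol for only 10x the bitrate:

```python
# power needed (in units of noise power) for b bits per symbol: SNR = 2**b - 1
for b in (1, 10):
    snr = 2**b - 1
    print(b, snr, snr / b)    # per-bit power cost rises ~100x (2 OOM) from 1 to 10 bits
```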
Notice that the basic energy distance scale of $E_b/(\tau c)$ is derived from the mean free path, via the relaxation time $\tau = \ell / v_n$, where $\ell$ is the mean free path and $v_n$ is the thermal noise velocity (~1e5 m/s for room-temp electrons).
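Putting numbers to this scale (a sketch using the stated inputs; note that $\tau c$ evaluates to ~0.1 mm, so the raw formula lands around 1e-24 J/bit/nm, within an OOM of the ~1e-23 figure quoted above):

```python
ell = 40e-9      # copper mean free path, m
v_n = 1e5        # assumed room-temp thermal velocity, m/s
tau = ell / v_n  # relaxation time: ~4e-13 s
c = 3e8          # propagation speed, m/s (the in-cable speed is somewhat lower)
E_b = 1.602e-19  # 1 eV bit energy, J

print(tau, tau * c)             # ~4e-13 s, ~1.2e-4 m (~0.1 mm)
print(E_b / (tau * c) / 1e9)    # ~1.3e-24 J/bit/nm
```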
Coax cable doesn’t seem to have any fundamental advantage over waveguide optical, so I didn’t consider it at all in brain efficiency. Like optics/photonics, it requires wires several OOM wider than minimal nanoscale RC interconnect, along with largish sending/receiving devices.
This is very different than sending continuous power through the wire where the electrons have a steady state drift velocity and the only energy required is that to maintain the drift velocity against resistance. For wave propagation the electrons are instead accelerated up from a drift velocity of zero for each bit sent. It’s the difference between the energy required to accelerate a car up to cruising speed and the power required to maintain that speed against friction.
Electrons are very light so the kinetic energy required to get them moving should not be significant in any non-contrived situation I think? The energy of the magnetic field produced by the current would tend to be much more of an important effect.
As for the rest of your comment, I’m not confident enough that I understand the details of your argument to be able to comment on it in detail. But from a high-level view, any effect you’re talking about should be baked into the attenuation chart I linked in this comment. This is the advantage of empirically measured data. For example, the skin effect (where high-frequency AC current is conducted mostly in the surface of a conductor, so the effective resistance increases the higher the frequency of the signal) is already baked in. This effect is (one of the reasons) why there’s a positive slope in the attenuation chart. If your proposed effect is real, it might be contributing to that positive slope, but I don’t see how it could change the “1 kT per foot” calculation.
Electrons are very light so the kinetic energy required to get them moving should not be significant in any non-contrived situation I think? The energy of the magnetic field produced by the current would tend to be much more of an important effect.
My current understanding is that the electric current energy transmits through electron drift velocity (and I believe that is the standard textbook understanding, although I admit I have some questions concerning the details). The magnetic field is just a component of the EM waves which propagate changes in electron KE between electrons (the EM waves implement the connections between the masses in the equivalent mass-spring system).
I’m not sure how you got “1 kT per foot”, but that seems roughly similar to the model upthread from spxtr that I am replying to, which got 0.05 fJ/bit/mm, or 5e-23 J/bit/nm. I attempted to derive an estimate from the lower-level physics thinking it might be different, but it ended up in the same range—and also off by the same 2 OOM vs real data. But I mention that the skin effect could plausibly increase power by 10x in my lower-level model, as I didn’t model it nor use measured attenuation values at all. The other OOM probably comes from analog SNR inefficiency.
The part of this that is somewhat odd at first is the exponential attenuation. That does show up in my low-level model, where any electron kinetic energy in the wire is dissipated by about 50% due to thermal collisions every $\tau$ ~ 4e-13 seconds (that is the important consequence of the mean free path / relaxation time). But that doesn’t naturally lead to a linear bit energy distance scale unless that dissipated energy is somehow replaced/driven by the preceding section of the waveform.
So if you sent $E$ as a single large infinitesimal pulse down a wire of length $D$, the energy you get on the other side is $E \cdot 2^{-D/\alpha}$ for some attenuation length $\alpha$ that works out to about 0.1 mm or so, as it’s $\tau c$, not meters. I believe if your chart showed attenuation in the 100 THz regime, on the timescale of $\tau$, it would be losing 50% per 0.1 mm instead of per meter.
We know that resistance is linear, not exponential—which I think arises from long steady flow, where every $\tau$ seconds half the electron kinetic energy is dissipated, but this total amount is linear in the wire section length. The relaxation time $\tau$ then just determines what steady mean electron drift velocity (current flow) results from the dissipated energy.
So when the wave period $f^{-1}$ is much greater than $\tau$, you still lose about half of the wave energy $E$ every $\tau$ seconds, but that loss can be spread out over a much larger wavelength section. (And indeed, at gigahertz frequencies this model roughly predicts the correct 50% attenuation distance scale of ~10 m or so.)
There are two types of energy associated with a current that we should distinguish. First, there’s the power flowing through the circuit; second, there’s the energy associated with having current flowing in a wire at all. So if we’re looking at a piece of extension cord that’s powering a lightbulb, the power flowing through the circuit is what’s making the lightbulb shine. This is governed by the equation $P = IV$. But there’s also some energy associated with having current flowing in the wire at all. For example, you can work out what the magnetic field should be around a wire with a given amount of current flowing through it and calculate the energy stored in that magnetic field. (This energy is associated with the inductance of the wire.) Similarly, the kinetic energy associated with the electron drift velocity is also there just because the wire has current flowing through it. (This is typically a very small amount of energy.)
To see that these types have to be distinct, think about what happens when we double the voltage going into the extension cord and also double the resistance of the lightbulb it’s powering. The current stays the same, but with twice the voltage we now have twice the power flowing to the lightbulb. Because the current hasn’t changed, neither has the magnetic field around the wire, nor the drift velocity. So the energy associated with having a current flowing in this wire is unchanged, even though the power provided to the lightbulb has doubled. The important thing about the drift velocity in the context of $P = IV$ is that it moves charge. We can calculate the potential energy associated with a charge in a wire as $E = qV$, and then taking the time derivative gives the power equation. It’s true that drift velocity is also a velocity, and thus the charge carriers have kinetic energy too, but this is not the energy that powers the lightbulb.
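The arithmetic of that thought experiment, with illustrative values:

```python
V, R = 1.0, 1.0
for scale in (1, 2):      # double both the source voltage and the load resistance
    v, r = scale * V, scale * R
    i = v / r             # current: unchanged (1.0 A both times)
    p = i * v             # delivered power: doubles (1.0 W -> 2.0 W)
    print(i, p)
```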
In terms of exponential attenuation, even DC through resistors gives exponential attenuation if you have a “transmission line” configuration of resistors: a ladder network with series resistors along the line and shunt resistors to ground at each node.
So exponential attenuation doesn’t seem too unusual or surprising to me.
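A minimal sketch of that ladder picture (with made-up per-section values): when the line is terminated in its characteristic resistance, each identical section divides the voltage by the same factor, which is exactly exponential decay in the number of sections.

```python
import math

R_s, R_p = 1.0, 100.0    # series and shunt resistance per section (made-up values)

# characteristic resistance of the semi-infinite ladder: R0 = R_s + (R_p || R0)
R0 = (R_s + math.sqrt(R_s**2 + 4 * R_s * R_p)) / 2

# terminated in R0, every node sees the same load, so the divider ratio is constant
r_rest = R_p * R0 / (R_p + R0)     # shunt in parallel with the rest of the line
ratio = r_rest / (R_s + r_rest)    # per-section voltage ratio (~0.905 here)

v = 1.0
for n in range(1, 6):
    v *= ratio
    print(n, v)    # 0.905, 0.819, 0.741, ... = ratio**n, i.e. exponential decay
```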
Indeed, the theoretical lower bound is very, very low.
Do you think this is actually achievable with a good enough sensor if we used this exact cable for information transmission, but simply used very low input energies?
The minimum is set by the sensor resolution and noise. A nice oscilloscope, for instance, will have, say, 12 bits of voltage resolution and something like 10 V full scale, so a ~2 mV minimum voltage. If you measure across a 50 Ω load, then the minimum received power you can see is $P = (2\,\mathrm{mV})^2/(50\,\Omega) \approx 0.1\,\mu\mathrm{W}$. This is an underestimate, but that’s the idea.
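The arithmetic, for concreteness (the 12-bit / 10 V figures above are illustrative):

```python
v_min = 10 / 2**12       # minimum resolvable voltage: ~2.4 mV
p_min = v_min**2 / 50    # dissipated in a 50-ohm load
print(v_min, p_min)      # ~2.4e-3 V, ~1.2e-7 W, i.e. ~0.1 uW
```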