The speed at which electrical signals propagate is much faster than the speed at which electrons move in an electrical conductor. (Possibly helpful metaphor: suppose I take a broomstick and poke you with it. You feel the poke very soon after I start shoving the stick, even though the stick is moving slowly. You don’t need to wait until the very same bit of wood I shoved reaches your body.)
The speed at which electrical signals propagate is slower than the speed of light, but it’s a substantial fraction of the speed of light and it doesn’t depend on the speed at which the electrons move. (It may correlate with it—e.g., both may be a consequence of how the electrons interact with the atoms in the conductor. Understanding this right is one of the quantum-mechanical subtleties I mention below.)
When current flows through a conductor with some resistance, some of the energy in the flow of the electrons gets turned into random-ish motion in the material, i.e., heat. This will indeed make the electrons move more slowly but (see above) this doesn’t make much difference to the speed at which electrical effects propagate through the conductor.
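To put rough numbers on the drift-vs-signal contrast, here's a back-of-the-envelope sketch. The wire size, current, and free-electron density are illustrative assumptions (typical household-wire ballpark figures), not exact values:

```python
# Rough estimate of electron drift speed in a copper wire, to contrast
# with the ~0.5-0.9c speed at which electrical signals propagate.
# Assumed illustrative numbers: 1.5 mm^2 wire carrying 10 A.
e = 1.602e-19          # electron charge, C
n = 8.5e28             # free electrons per m^3 in copper (approx.)
area = 1.5e-6          # wire cross-section, m^2
current = 10.0         # A

# I = n * A * v * e  =>  v = I / (n * A * e)
v_drift = current / (n * area * e)
print(f"drift speed: {v_drift * 1000:.3f} mm/s")   # well under 1 mm/s

signal_speed = 2e8     # ~2/3 of c, a typical cable figure
print(f"signal is ~{signal_speed / v_drift:.1e} times faster")
```

The electrons crawl along at a fraction of a millimetre per second while the signal covers hundreds of metres in a microsecond, which is the broomstick point in numbers.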
(What actually happens in electrical conductors is more complicated than individual electrons moving around, and understanding it well involves quantum-mechanical subtleties, most of which I don't know enough about to say anything useful.)
It is not usual to convert AC to DC using relays.
It is true that if you take AC power, rectify it using the simplest possible circuit, and use that to supply a DC device then it will alternate between being powered and not being powered—and also that during the “powered” periods the voltage it gets will vary. Some devices can work fine that way, some not so fine.
In practice, AC-to-DC conversion doesn’t use the simplest possible circuit. It’s possible to smooth things out a lot so that the device being powered gets something close to a constant DC supply.
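As a sketch of how that smoothing works, here's a toy simulation of a full-wave rectifier feeding a capacitor and a resistive load. The component values are made up for illustration and the diodes are idealized:

```python
# Toy simulation: full-wave rectified 50 Hz supply (1 V peak) smoothed
# by a capacitor feeding a resistive load. Values are illustrative.
import math

f = 50.0               # mains frequency, Hz
C = 1000e-6            # smoothing capacitor, F
R = 100.0              # load resistance, ohm
dt = 1e-5              # simulation time step, s

v_cap = 0.0
samples = []
for i in range(10_000):                        # 0.1 s of simulation
    t = i * dt
    v_in = abs(math.sin(2 * math.pi * f * t))  # full-wave rectified input
    if v_in > v_cap:
        v_cap = v_in                           # ideal diode: capacitor charges
    else:
        v_cap -= (v_cap / (R * C)) * dt        # capacitor discharges into load
    if t > 0.05:                               # skip the start-up transient
        samples.append(v_cap)

ripple = max(samples) - min(samples)
print(f"ripple: {ripple:.3f} V on a 1 V peak supply")
```

Without the capacitor the load voltage would dip to zero 100 times a second; with it, the load sees a nearly steady voltage with only a modest ripple, and a bigger capacitor (or a regulator stage after it) shrinks the ripple further.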
But there are similar effects even when no rectification is being done. You mentioned flickering lights, and until recently they were an example of this. If you power an incandescent bulb using AC at 50Hz then the amount of current flowing in it varies and accordingly so does the light output. (At 100Hz, not 50Hz; figuring out why is left as an exercise for the reader.) However, because it takes time for the filament to heat up and cool down the actual fluctuation in light output is small. Fluorescent bulbs respond much faster and do flicker, and some people find their light very unpleasant for exactly that reason. LED lights, increasingly often used where incandescents and fluorescents used to be, are DC devices. I think there’s a wide variety in the circuitry used to power them, but most will flicker at some rate. Good ones will be driven in such a way that they flicker so fast you will never notice it. (Somewhere in the kHz range.)
Sometimes DC (at high voltages) is used for power transmission. I think AC is used, where it is used, because conversion between (typically very high) transmission voltage and the much lower voltages convenient for actual use is easy by means of transformers; transformers only work for AC. (Because they depend on electromagnetic induction, which works on the principle that changes in current produce magnetic fields and changes in magnetic field produce currents.) I don’t know whether AC or DC would be a better choice if we were starting from scratch now, but both systems were proposed and tried very early in the history of electrical power generation and I’m pretty sure all the obvious arguments on both sides were aired right from the start.
When a device “consumes” electrical energy it isn’t absorbing electrons. (In that case it would have to accumulate a large electrical charge. That’s usually a Bad Thing.) It’s absorbing (or using in some other way) energy carried in the electric field. It might help to imagine a system that transmits energy hydraulically instead, with every household equipped with high-pressure pipes, with a constant flow of water maintained by the water-power company, and operating its equipment using turbines. These wouldn’t consume water unless there were a leak; instead they would take in fast-moving water and return slower-moving water to the system. An “AC” hydraulic system would have water moving to and fro in the pipes; again, the water wouldn’t be consumed, but energy would be transferred from the water-pipes to the devices being operated. Powering things with electricity is similar.
DC wasn’t really a viable option at the start because of the transformer issue you mentioned. The local power lines carry ~100x higher voltage than what you get in your house, and the long distance power lines up to another 100x on top of that. Without that voltage step up, you’d need 100-10,000x as much wire.
Modern semiconductors change the game considerably, though. In a lot of areas, the big iron transformers are being phased out and replaced with switching power supplies, which suggests that DC could be economically efficient now, were it not for the requirement for a 50 or 60Hz sine wave and the installed base of existing equipment.
A DC-based system would have the advantages of not requiring rectification for many end uses, somewhat lower corona losses in transmission, and support for variable-speed generators. It would come at the cost of controllerless induction motors and of clocks that use the AC signal to keep time. I'm not sure about the relative cost of doing the voltage step-up/step-down, because both methods are still in use. I'm not sure which would be the better choice now, but it is an interesting question.
Thank you. Using the water pipe analogy, I can see some obvious flaws with the AC system. What if something needs power right at the moment the water is in the middle state between to & fro, i.e. at a standstill? How about installing a converter device at the entry point of each household? Surely it'd be better to provide a continuous flow to devices, not to mention there'd be no need to manufacture the trillions of small relays or rectifiers that are needed inside devices.
If what devices do is take in fast water and release slow water, then it can be understood that in reality they use the kinetic energy of electrons. Or maybe some devices make use of the magnetic field too? Can somebody detail exactly how a fan takes a flow of electrons and ends up moving its rotor? And how does a computer use electricity? A fridge?
The water (or, rather, the electricity) sloshes to and fro 50 times a second, so there’s never enough delay between flicking the switch and getting usable power that a human being would notice. Typically other things are slower; e.g., if you’re turning on an incandescent lightbulb then it may take longer than that for the filament to get hot enough that it starts glowing. For many devices (e.g., your phone) there is a converter device, and when you attach your phone to its USB wall-plug it’s getting DC electricity from it.
It would be possible to have some sort of converter for every household, but every such converter has some losses, and many devices are perfectly happy just running off AC, and ones that aren’t don’t necessarily all want the same operating voltage. Again, if we were doing everything from scratch now it might be worth considering something like that (or it might not; the details matter and I’m not an electrical engineer myself), but we have a basically-working system and replacing it wholesale with something new would need to be a big improvement to be worth the tremendous cost and inconvenience.
It would be more accurate to say that devices use the energy in the electromagnetic field rather than the kinetic energy of electrons, as such. (There isn’t a clear distinction between using the electric field and using the magnetic field; the two are very intimately linked and, e.g., if two observers are moving rapidly relative to one another, then what one sees as the electric field the other may see as the magnetic field.)
The motor in an electric fan works something like this. (Unfortunately it involves effects that don’t have a close analogue in terms of flowing water.) There are coils of wire. You pass an alternating current through these coils; changing currents generate a magnetic field. (This isn’t meant to be obvious. It was one of the big discoveries of 19th-century physics.) There’s a lump of iron placed so that this magnetic field pulls on it. A bit of engineering ingenuity lets you arrange these elements so that the effect is to make a shaft keep turning in a consistent direction. You mount your fan blades on that shaft. (Don’t take my description too literally. An actual design might e.g. have the wires on the shaft and the big lumps of iron on the outside, not moving.) In terms of individual electrons: a moving electron produces a magnetic field that “curls around” its path; a whole lot of electrons moving along a conductor produce a magnetic field that curls around those conductors; if you wind that conductor into a coil, you get a magnetic field running along the length of the coil.
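As a small numerical aside on that last point: for an idealized long solenoid, the field along the coil works out to B = mu0 x n x I, where n is turns per metre. The numbers below are illustrative:

```python
# Field along an idealized long solenoid: B = mu0 * n * I.
# Illustrative numbers, not any particular motor's coil.
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
n = 1000.0                 # turns per metre
I = 2.0                    # current, A

B = mu0 * n * I
print(f"field inside coil: {B * 1000:.2f} mT")   # ~2.5 mT
```

More turns or more current means a stronger field, which is why motor windings are many-turn coils rather than single loops.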
The details of how energy flows from place to place in all this are subtle and I will probably get them wrong if I try to go into details. As an example: suppose you supply electricity to some system by means of a pair of parallel wires with opposite currents flowing in them; then the energy flow in the system happens outside the wires, not inside them. (It happens near to the wires, and the energy flows parallel to the wires.)
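One way to check the "energy flows outside the wires" claim without getting lost in the details: integrate the Poynting vector S = E x H over the surface of a resistive wire, and the inward flow comes out to exactly I²R, the power the wire dissipates. A numerical sketch with made-up values:

```python
# Sanity check on "the energy flows in the field around the wire":
# the Poynting flux S = E x H at the surface of a resistive wire,
# integrated over that surface, delivers exactly I^2 * R into it.
# Illustrative numbers for a 1 m length of wire.
import math

I = 10.0                   # current, A
R = 0.5                    # wire resistance, ohm
L = 1.0                    # wire length, m
a = 1e-3                   # wire radius, m

E_surface = I * R / L              # field along the surface, V/m
H_surface = I / (2 * math.pi * a)  # magnetic field circling the wire, A/m
S = E_surface * H_surface          # inward Poynting flux, W/m^2

power_in = S * (2 * math.pi * a * L)   # flux times surface area
print(f"Poynting inflow: {power_in:.2f} W vs I^2*R = {I ** 2 * R:.2f} W")
```

The radius cancels out, which is reassuring: however thick the wire, the field just outside it delivers exactly the dissipated power.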
(Just to reiterate: this isn’t a matter of electrons flowing into the device and being consumed, just as a hydraulically powered system that works by having water turn turbine blades doesn’t work by consuming the water.)
I think most of the power consumption in (the processing parts of) a computer is resistive losses—i.e., the thing where energy from the electric field gets transferred to kinetic energy in the electrons and/or atoms and heats things up. In an idealized maximally-efficient computing device, it turns out that the one thing that unavoidably costs energy is disposing of information, and some people have speculated about “reversible computing” that never erases bits or otherwise throws information away; but real computing devices are several orders of magnitude away from being limited by these considerations.
I believe a fridge uses electrical energy mostly in motors, which work in much the same way as the motor in a fan. These motors then drive other interesting systems that e.g. compress fluids and pump them around and so forth—I don’t know any of the details offhand—but electricity is not directly involved in those mechanisms.
As I hope I’ve already made clear, I’m not really an expert on this, and quite possibly no other LW regulars are either. You might do better to find e.g. a textbook on electromagnetism. (But be warned: if you read a textbook on electromagnetism that goes deep enough to answer your questions, you will end up having to do quite a lot of maths.)
I think using water as an analogy for electricity is still somehow not adequate to the task. For example, making it slosh back & forth would require a tremendous amount of energy, which seems not to be the case with electricity.
But still, I also think that if a device consumes electricity, no matter in what way (say, via the electromagnetic field), it must be reflected in the lifeline in the wire (the electrons) in some way. Since the power source propagates energy using the jiggling of electrons, then by using that energy up, the device must impede that movement. This slowing of the jiggling will then propagate back and show up as a slowing of the turbine...
… which is to say, actually we convert kinetic energy into whatever type of energy we use; that's the essence of "electricity"?
BTW, thank you for your explanations of fans & stuff! The bits about computers & fridges are glossed over, but I guess I have a vague understanding.
Yes, water and electricity are different in important ways even though the analogy is informative sometimes.
The energy in the electromagnetic field really truly is different from the kinetic energy of the electrons. (This is one of the important differences from water in a pipe, in fact.)
You can see this fairly easily in a “static” case: if I use electricity to charge up a big capacitor, I’ve stored lots of energy in the capacitor but it’s potential not kinetic energy. (There’s a lot of potential energy there because there’s extra positive charge in one place and extra negative charge in another, and energy will be released if they are allowed to move together so that the net charge everywhere becomes approximately zero.)
You might want to describe this situation by saying that the electrons involved have a certain amount of potential energy, just as you might say that when you lift a heavy object from the surface of the earth that object has acquired (gravitational) potential energy. That point of view works fine for this sort of static situation, but once your charges start moving around it turns out to be more insightful to think of the energy as located in the electromagnetic fields rather than in the particles that interact with those fields.
So, for instance, suppose you arrange for an alternating current to flow in a conductor. Then it will radiate, transmitting energy outward in the form of electromagnetic waves. (Radio waves, at the sort of frequencies you can readily generate in a wire. At much higher frequencies you get e.g. light waves, but you typically need different hardware for that.) This energy is carried by the electromagnetic field. It will propagate just fine in a vacuum: no need for any charged particles in between.
When you have an actual electrical circuit, things are more complicated, but it turns out that the electrical energy is not flowing through the wires, it’s flowing through the field around the wires. And, again, this energy is not the kinetic energy of the electrons.
I still have not achieved a breakthrough. See, when we broadcast a wave, say radio, it will propagate into space and be lost forever. Now, as per your words, an AC flow in a wire will radiate energy outward, which means a lot of energy is being lost all the time. Since the wattage in a wire is constant, we would lose a big and constant amount of energy no matter what we do. That seems not to be the case in real life.
Furthermore, if we accept that electrical energy actually flows in the field around the line, then why do we even need outlets and sockets? Just put a device near the wire, like those cordless chargers. Besides, stealing electricity would be easy, since almost anyone could put a specialized stealing device near a public line.
It’s not a big amount. (For, e.g., a typical mains cable.) And cabling, especially if the currents flowing in it are at high frequencies (which means more radiation), is often designed to reduce that radiation. That’s one reason why we have coaxial cables and twisted pairs. For a 50Hz or 60Hz power cable, though, the radiative losses are tiny.
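A rough way to see how tiny: treat a metre of cable as a short dipole antenna with radiation resistance R_rad = 20 pi² (l / lambda)², a standard small-antenna formula. (This overestimates a real paired cable, whose opposite currents mostly cancel each other's radiation.) At 50Hz the wavelength is 6000 km, so:

```python
# Rough upper bound on what a metre of mains cable radiates at 50 Hz,
# using the short-dipole radiation resistance R_rad = 20*pi^2*(l/lam)^2.
# This overestimates a paired cable (opposite currents mostly cancel).
import math

c = 3e8                     # speed of light, m/s
f = 50.0                    # mains frequency, Hz
lam = c / f                 # wavelength at 50 Hz: 6,000 km
l = 1.0                     # cable length considered, m
I = 10.0                    # current, A

R_rad = 20 * math.pi ** 2 * (l / lam) ** 2
P_rad = I ** 2 * R_rad
print(f"radiated power: {P_rad:.2e} W per metre at {I} A")
```

The result is well under a nanowatt per metre even at 10 A, which is why nobody worries about mains cables "broadcasting away" their power.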
You can power devices wirelessly—using “those cordless chargers”. They are designed to maximize the extent to which these effects happen, and of course the devices need to be designed to work that way. Ordinary mains cables don’t radiate a lot and it isn’t practical to power anything nontrivial by putting it near a mains cable.
But the most effective way of getting energy from the field around a pair of wires is … to connect the wires into an electric circuit. Indeed, it’s only when they’re connected in such a circuit that the current will flow through the wires and the energy will flow around them.
I recall seeing something about a very low-powered (and cheaply made) LED lightbulb which could never be turned off. With the light switch on, it was bright; with the light switch off, it was much dimmer but not actually off. It turned out this was because, in certain common house wiring configurations, capacitive coupling (electric-field effects) between nearby wires allows enough power through to light the bulb.
Yeah, I have had that experience too. But come to think of it, his explanation in the video sounds counter-intuitive for AC & DC. With the bulb connected to the mains via a wire (even though it's the neutral line and that line is severed), like in the better part of the video, as long as the mains is AC the bulb will always at least glow dimly...
https://www.youtube.com/watch?v=1uEmX5XClPY
TBH I’m a bit more confused :)