7.2: Electromagnetic Induction #
7.2.1: Faraday’s Law #
In 1831 Michael Faraday reported on a series of experiments, including three that (with some violence to history) can be characterized as follows:
Experiment 1: He pulled a loop of wire to the right through a magnetic field (Fig 7.21a). A current flowed in the loop.
Experiment 2: He moved the magnet to the left, holding the loop still (Fig 7.21b). Again, a current flowed in the loop.
Experiment 3: With both the loop and the magnet at rest (Fig 7.21c), he changed the strength of the field (he used an electromagnet, and varied the current in the coil). Once again, current flowed in the loop.
The first experiment, of course, is a straightforward case of motional emf; according to the flux rule:
\[\mathcal{E} = - \dv{\Phi}{t}\]I don’t think it will surprise you to learn that exactly the same emf arises in Experiment 2 - all that really matters is the relative motion of the magnet and the loop. Indeed, in the light of special relativity it has to be so. But Faraday knew nothing of relativity, and in classical electrodynamics this simple reciprocity is a remarkable coincidence. For if the loop moves, it’s a magnetic force that sets up the emf, but if the loop is stationary, the force cannot be magnetic - stationary charges experience no magnetic forces. In that case, what is responsible? What sort of field exerts a force on charges at rest? Well, electric fields do, of course, but in this case there doesn’t seem to be any electric field in sight.
Faraday had an ingenious inspiration:
\[\textbf{A changing magnetic field induces an electric field}\]It is this induced electric field that accounts for the emf in Experiment 2. Indeed, if (as Faraday found empirically) the emf is again equal to the rate of change of the flux,
\[\mathcal{E} = \oint \vec{E} \cdot \dd \vec{l} = - \dv{\Phi}{t} \tagl{7.14}\]then \( \vec{E} \) is related to the change in \( \vec{B} \) by the equation
\[\oint \vec{E} \cdot \dd \vec{l} = - \int \pdv{\vec{B}}{t} \cdot \dd \vec{a} \tagl{7.15}\]This is Faraday’s law, in integral form. We can convert it to differential form by applying Stokes’ theorem:
\[\curl \vec{E} = - \pdv{\vec{B}}{t} \tagl{7.16}\]Note that Faraday’s law reduces to the old rule \( \oint \vec{E} \cdot \dd \vec{l} = 0 \) (or, in differential form, \( \curl \vec{E} = 0 \)) in the static case (constant \( \vec{B} \)), as, of course, it should.
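As a quick numerical sanity check on the flux rule, here is a small sketch (the field profile and the numerical values are assumptions for illustration, not taken from the text): a uniform field \( B(t) = B_0 \sin \omega t \) threads a fixed loop of area \( A \), so \( \Phi(t) = A B_0 \sin \omega t \) and the emf \( -d\Phi/dt \) can be compared against a finite-difference estimate.

```python
import math

# Assumed, illustrative values: B(t) = B0*sin(omega*t), uniform over a loop
# of area A, so Phi(t) = A*B0*sin(omega*t) and the flux rule gives
# emf(t) = -dPhi/dt = -A*B0*omega*cos(omega*t).
B0, omega, A = 0.5, 120 * math.pi, 0.01   # tesla, rad/s, m^2

def flux(t):
    return A * B0 * math.sin(omega * t)

def emf_exact(t):
    # analytic -dPhi/dt
    return -A * B0 * omega * math.cos(omega * t)

def emf_numeric(t, h=1e-6):
    # central-difference estimate of -dPhi/dt
    return -(flux(t + h) - flux(t - h)) / (2 * h)

t = 0.003
assert abs(emf_exact(t) - emf_numeric(t)) < 1e-6
```

The agreement of the two estimates is just calculus, of course; the physics is in the claim that this \( -d\Phi/dt \) equals \( \oint \vec{E} \cdot \dd\vec{l} \).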
In Experiment 3, the magnetic field changes for entirely different reasons, but according to Faraday’s law an electric field will again be induced, giving rise to an emf \( - d \Phi / dt \). Indeed, one can subsume all three cases (and for that matter any combination of them) into a kind of universal flux rule:
Whenever (and for whatever reason) the magnetic flux through a loop changes, an emf \[\mathcal{E} = - \dv{\Phi}{t} \tagl{7.17}\] will appear in the loop.
Many people call this “Faraday’s law.” Maybe I’m overly fastidious, but I find this confusing. There are really two totally different mechanisms underlying Eq. 7.17, and to identify them both as “Faraday’s law” is a little like saying that because identical twins look alike we ought to call them by the same name. In Faraday’s first experiment, it’s the Lorentz force law at work; the emf is magnetic. But in the other two it’s an electric field (induced by the changing magnetic field) that does the job. Viewed in this light, it is quite astonishing that all three processes yield the same formula for the emf. In fact, it was precisely this “coincidence” that led Einstein to the special theory of relativity - he sought a deeper understanding of what is, in classical electrodynamics, a peculiar accident. But that’s a story for chapter 12. In the meantime, I shall reserve the term “Faraday’s law” for electric fields induced by changing magnetic fields, and I do not regard Experiment 1 as an instance of Faraday’s law.
Example 7.5 #
Keeping track of the signs in Faraday’s law can be a real headache. For instance, in Ex. 7.5 we would like to know which way around the ring the induced current flows. In principle, the right-hand rule does the job (we called \( \Phi \) positive to the left, in Fig. 7.22, so the positive direction for current in the ring is counterclockwise, as viewed from the left; since the first spike in Fig. 7.23b is negative, the first current pulse flows clockwise, and the second counterclockwise). But there’s a handy rule, called Lenz’s law, whose sole purpose is to help you get the directions right:
**Nature abhors a change in flux**
The induced current will flow in such a direction that the flux it produces tends to cancel the change. (As the front end of the magnet in Ex. 7.5 enters the ring, the flux increases, so the current in the ring must generate a field to the right - it therefore flows clockwise.) Notice that it is the change in flux, not the flux itself, that nature abhors (when the tail end of the magnet exits the ring, the flux drops, so the induced current flows counterclockwise, in an effort to restore it). Faraday induction is a kind of “inertial” phenomenon: A conducting loop “likes” to maintain a constant flux through it; if you try to change the flux, the loop responds by sending a current around in such a direction as to frustrate your efforts. (It doesn’t succeed completely; the flux produced by the induced current is typically only a tiny fraction of the original. All Lenz’s law tells you is the direction of the flow.)
Example 7.6 #
7.2.2: The Induced Electric Field #
Faraday’s law generalizes the electrostatic rule \( \curl \vec{E} = 0 \) to the time-dependent regime. The divergence of \( \vec{E} \) is still given by Gauss’s law (\( \div \vec{E} = \frac{1}{\epsilon_0} \rho \) ). If \( \vec{E} \) is a pure Faraday field (due exclusively to a changing \( \vec{B} \) , with \( \rho = 0 \) ), then
\[\div \vec{E} = 0 \qquad \curl \vec{E} = - \pdv{\vec{B}}{t}\]This is mathematically identical to the equations of magnetostatics:
\[\div \vec{B} = 0 \qquad \curl \vec{B} = \mu_0 \vec{J}\]Conclusion: Faraday-induced electric fields are determined by \( -(\partial \vec{B} / \partial t) \) in exactly the same way as magnetostatic fields are determined by \( \mu_0 \vec{J} \). The analog to Biot-Savart is
\[\vec{E} = - \frac{1}{4 \pi} \int \frac{(\partial \vec{B} / \partial t) \cross \vu{\gr}}{\gr ^2} \dd \tau = - \frac{1}{4 \pi} \pdv{}{t} \int \frac{\vec{B} \cross \vu{\gr}}{\gr ^2} \dd \tau \tagl{7.18}\]and if symmetry permits, we can use all the tricks associated with Ampere’s law in integral form (\( \oint \vec{B} \cdot \dd \vec{l} = \mu_0 I_{enc} \)), only now it’s Faraday’s law in integral form:
\[\oint \vec{E} \cdot \dd \vec{l} = - \dv{\Phi}{t} \tagl{7.19}\]The rate of change of (magnetic) flux through the Amperian loop plays the role formerly assigned to \( \mu_0 I_{enc} \).
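To see how the Amperian-loop trick works with Eq. 7.19, here is a numerical sketch of a standard symmetric configuration (an assumed setup, not quoted from the text): inside a long solenoid with \( n \) turns per unit length carrying a ramping current, \( B = \mu_0 n I(t) \), and a circular Amperian loop of radius \( s \) gives \( E \, (2\pi s) = -\pi s^2 \mu_0 n \, dI/dt \).

```python
import math

# Assumed, illustrative setup: long solenoid, n turns/m, current ramping
# at dI/dt, Amperian loop of radius s inside the solenoid.
mu0 = 4e-7 * math.pi
n = 1000       # turns per meter (assumed)
dIdt = 50.0    # A/s (assumed ramp rate)
s = 0.02       # m, loop radius, inside the solenoid

# Circumferential induced field from Eq. 7.19: E*(2*pi*s) = -dPhi/dt
E = -(mu0 * n * s / 2) * dIdt

# Check the two sides of the flux rule directly (in magnitude):
lhs = abs(E) * 2 * math.pi * s           # |oint E . dl|
rhs = math.pi * s**2 * mu0 * n * dIdt    # |dPhi/dt|
assert abs(lhs - rhs) < 1e-12
```

The minus sign in \( E \) encodes Lenz’s law: the induced field circulates so as to oppose the growing flux.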
Example 7.7 #
Example 7.8 #
I must warn you, now, of a small fraud that tarnishes many applications of Faraday’s law: Electromagnetic induction, of course, occurs only when the magnetic fields are changing, and yet we would like to use the apparatus of magnetostatics (Ampere’s law, the Biot-Savart law, and the rest) to calculate those magnetic fields. Technically, any result derived in this way is only approximately correct. But in practice the error is usually negligible, unless the field fluctuates extremely rapidly, or you are interested in points very far from the source. Even the case of a wire snipped by a pair of scissors (Prob. 7.18) is static enough for Ampere’s law to apply. This regime, in which magnetostatic rules can be used to calculate the magnetic field on the right-hand side of Faraday’s law, is called quasistatic. Generally speaking, it is only when we come to electromagnetic waves and radiation that we must worry seriously about the breakdown of magnetostatics itself.
Example 7.9 #
7.2.3: Inductance #
Suppose you have two loops of wire, at rest (Fig 7.30). If you run a steady current \( I_1 \) around loop 1, it produces a magnetic field \( \vec{B}_1 \). Some of the field lines pass through loop 2; let \( \Phi_2 \) be the flux of \( \vec{B}_1 \) through 2. You might have a tough time actually calculating \( \vec{B}_1 \), but a glance at the Biot-Savart law,
\[\vec{B}_1 = \frac{\mu_0}{4 \pi} I_1 \oint \frac{\dd \vec{l}_1 \cross \vu{\gr}}{\gr^2} \]reveals one significant fact about this field: It is proportional to the current \( I_1 \) . Therefore, so too is the flux through loop 2:
\[\Phi_2 = \int \vec{B}_1 \cdot \dd \vec{a}_2\]Thus
\[\Phi_2 = M_{21} I_1 \tagl{7.22}\]where \( M_{21} \) is the constant of proportionality; it is known as the mutual inductance of the two loops.
There is a cute formula for the mutual inductance, which you can derive by expressing the flux in terms of the vector potential, and invoking Stokes’ theorem:
\[\Phi_2 = \int \vec{B}_1 \cdot \dd \vec{a}_2 = \int (\curl \vec{A}_1) \cdot \dd \vec{a}_2 = \oint \vec{A}_1 \cdot \dd \vec{l}_2\]Now, according to Eq. 5.66,
\[\vec{A}_1 = \frac{\mu_0 I_1}{4 \pi} \oint \frac{\dd \vec{l}_1}{\gr} \]and hence
\[\Phi_2 = \frac{\mu_0 I_1}{4 \pi} \oint \left( \oint \frac{\dd \vec{l}_1}{\gr} \right) \cdot \dd \vec{l}_2\]Evidently
\[M_{21} = \frac{\mu_0}{4 \pi} \oint \oint \frac{\dd \vec{l}_1 \cdot \dd \vec{l}_2}{\gr} \tagl{7.23}\]This is the Neumann formula; it involves a double line integral - one integration around loop 1, the other around loop 2 (Fig 7.31). It’s not very useful for practical calculations, but it does reveal two important things about the mutual inductance:
- \( M_{21} \) is a purely geometrical quantity, having to do with the sizes, shapes, and relative positions of the two loops.
- The integral in Eq. 7.23 is unchanged if we switch the roles of loops 1 and 2; it follows that \[M_{12} = M_{21} \tagl{7.24}\]
This is an astonishing conclusion: Whatever the shapes and positions of the loops, the flux through 2 when we run a current \( I \) around 1 is identical to the flux through 1 when we send the same current \( I \) around 2. We may as well drop the subscripts and call them both \( M \).
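Although the Neumann formula is rarely useful for hand calculation, it is straightforward to evaluate numerically. Here is a sketch for an assumed geometry (two coaxial circular loops, radii and separation chosen for illustration), which also checks the reciprocity \( M_{12} = M_{21} \) by swapping the two loops.

```python
import math

# Numerical evaluation of the Neumann formula (Eq. 7.23) for two coaxial
# circular loops: radii a and b, axial separation d (illustrative geometry).
mu0 = 4e-7 * math.pi

def mutual_inductance(a, b, d, N=400):
    """Discretize both loops into N segments and do the double line integral."""
    M = 0.0
    dphi = 2 * math.pi / N
    for i in range(N):
        p1 = i * dphi
        # position and segment vector dl1 on loop 1 (in the z = 0 plane)
        x1, y1 = a * math.cos(p1), a * math.sin(p1)
        dx1, dy1 = -a * math.sin(p1) * dphi, a * math.cos(p1) * dphi
        for j in range(N):
            p2 = j * dphi
            # position and segment vector dl2 on loop 2 (in the z = d plane)
            x2, y2 = b * math.cos(p2), b * math.sin(p2)
            dx2, dy2 = -b * math.sin(p2) * dphi, b * math.cos(p2) * dphi
            r = math.sqrt((x1 - x2)**2 + (y1 - y2)**2 + d**2)
            M += (dx1 * dx2 + dy1 * dy2) / r
    return mu0 / (4 * math.pi) * M

# Reciprocity check: swapping the roles of the loops leaves M unchanged.
M_ab = mutual_inductance(0.10, 0.05, 0.20)
M_ba = mutual_inductance(0.05, 0.10, 0.20)
assert abs(M_ab - M_ba) / M_ab < 1e-9
```

Note the loops here are well separated; if they nearly touch, \( \gr \to 0 \) and the discretized integrand becomes badly behaved, so a finer (or adaptive) discretization would be needed.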
Example 7.10 #
Suppose, now, that you vary the current in loop 1. The flux through loop 2 will vary accordingly, and Faraday’s law says this changing flux will induce an emf in loop 2:
\[\mathcal{E}_2 = - \dv{\Phi_2}{t} = - M \dv{I_1}{t} \tagl{7.25}\](In quoting Eq. 7.22 - which was based on the Biot-Savart law - I am tacitly assuming that the currents change slowly enough for the system to be considered quasistatic.) What a remarkable thing: Every time you change the current in loop 1, an induced current flows in loop 2 - even though there are no wires connecting them!
Come to think of it, a changing current not only induces an emf in any nearby loops, it also induces an emf in the source loop itself (Fig 7.33). Once again, the field (and therefore the flux) is proportional to the current
\[\Phi = L I \tagl{7.26}\]The constant of proportionality \( L \) is called the self-inductance (or simply the inductance) of the loop. As with \( M \), it depends on the geometry (size and shape) of the loop. If the current changes, the emf induced in the loop is
\[\mathcal{E} = - L \dv{I}{t} \tagl{7.27}\]Inductance is measured in henries (H); a henry is a volt-second per ampere.
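A one-line numerical illustration of Eq. 7.27 (the values here are assumed for illustration, not from the text): an inductor of 10 mH with a current ramping at 2 A per millisecond develops a back emf of 20 V, directed so as to oppose the change.

```python
# Assumed, illustrative values for Eq. 7.27: emf = -L * dI/dt.
L = 10e-3           # inductance in henries (assumed)
dIdt = 2.0 / 1e-3   # current ramp rate in A/s (assumed: 2 A per ms)

emf = -L * dIdt     # back emf in volts
assert abs(emf + 20.0) < 1e-12   # 20 V, opposing the change (Lenz's law)
```

This also shows why the henry is a volt-second per ampere: \( L = -\mathcal{E} / (dI/dt) \).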
Example 7.11 #
Inductance (like capacitance) is an intrinsically positive quantity. Lenz’s law, which is enforced by the minus sign in Eq. 7.27, dictates that the emf is in such a direction as to oppose any change in current. For this reason, it is called a back emf. Whenever you try to alter the current in a wire, you must fight against this back emf. Inductance plays somewhat the same role in electric currents that mass plays in mechanical systems: The greater \( L \), the harder it is to change the current, just as the larger the mass, the harder it is to change an object’s velocity.
Example 7.12 #
7.2.4: Energy in Magnetic Fields #
It takes a certain amount of energy to start a current flowing in a circuit. I’m not talking about the energy delivered to the resistors and converted into heat - that is irretrievably lost, as far as the circuit is concerned, and can be large or small, depending on how long you let the current run. What I am concerned with, rather, is the work you must do against the back emf to get the current going. This is a fixed amount, and it is recoverable: you get it back when the current is turned off. In the meantime, it represents energy latent in the circuit; as we’ll see in a moment, it can be regarded as energy stored in the magnetic field.
The work done on a unit charge, against the back emf, in one trip around the circuit is \( - \mathcal{E} \) (the minus sign records the fact that this is the work done by you against the emf, not the work done by the emf). The amount of charge per unit time passing down the wire is \( I \). So the total work done per unit time is
\[\dv{W}{t} = - \mathcal{E}I = L I \dv{I}{t}\]If we start with zero current and build it up to a final value I, the work done (integrating the last equation over time) is
\[W = \frac{1}{2} L I^2 \tagl{7.30}\]So, this is the energy stored in an inductor, or in any loop that has an inductance \( L \). It does not depend on how long we take to crank up the current, only on the geometry of the loop (in the form of \( L \) ) and the final current \( I \).
This is only really sensible for a system of conducting loops, but we can be a bit more general. We can express \( W \) by recalling that the flux \( \Phi \) through a loop (which is \( LI \) ) is
\[\Phi = \int \vec{B} \cdot \dd \vec{a} = \int (\curl \vec{A}) \cdot \dd \vec{a} = \oint \vec{A} \cdot \dd \vec{l}\]where the line integral is around the perimeter of the loop. So, we have
\[LI = \oint \vec{A} \cdot \dd \vec{l}\]and therefore
\[W = \frac{1}{2} I \oint \vec{A} \cdot \dd \vec{l} = \frac{1}{2} \oint (\vec{A} \cdot \vec{I}) \dd l \tagl{7.31}\]We can pretty obviously generalize this to volume currents
\[W = \frac{1}{2} \int _V (\vec{A} \cdot \vec{J}) \dd \tau \tagl{7.32}\]But we can do even better, expressing \( W \) entirely in terms of the magnetic field: \( \curl \vec{B} = \mu_0 \vec{J} \) lets us eliminate the current density from the picture,
\[W = \frac{1}{2 \mu_0} \int \vec{A} \cdot (\curl \vec{B}) \dd \tau \tagl{7.33}\]Integration by parts lets us transfer the derivative from \( \vec{B} \) to \( \vec{A} \), using the product rule
\[\div (\vec{A} \cross \vec{B}) = \vec{B} \cdot (\curl \vec{A}) - \vec{A} \cdot (\curl \vec{B})\]so
\[\vec{A} \cdot (\curl \vec{B}) = \vec{B} \cdot \vec{B} - \div (\vec{A} \cross \vec{B})\]Consequently
\[W = \frac{1}{2\mu_0} \left( \int B^2 \dd \tau - \int \div (\vec{A} \cross \vec{B}) \dd \tau \right) \\ = \frac{1}{2\mu_0} \left( \int _V B^2 \dd \tau - \oint_S (\vec{A} \cross \vec{B} ) \cdot \dd \vec{a} \right) \tagl{7.34}\]Now, the integration in Eq. 7.32 is to be taken over the entire volume occupied by the current. But any region larger than this will do just as well, for \( \vec{J} \) is zero out there anyway. In Eq. 7.34, the larger the region we pick the greater is the contribution from the volume integral, and therefore the smaller is that of the surface integral (this makes sense: as the surface gets farther from the current, both A and B decrease). In particular, if we agree to integrate over all space, then the surface integral goes to zero, and we are left with
\[W = \frac{1}{2 \mu_0} \int _{\text{all space}} B^2 \dd \tau \tagl{7.35}\]In view of this result, we say the energy is “stored in the magnetic field,” in the amount \( (B^2 / 2 \mu_0) \) per unit volume. This is a nice way to think of it, though someone looking at Eq. 7.32 might prefer to say that the energy is stored in the current distribution, in the amount \( \frac{1}{2} (\vec{A} \cdot \vec{J}) \) per unit volume. The distinction is one of bookkeeping; the important quantity is the total energy \( W \) , and we need not worry about where (if anywhere) the energy is “located.”
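The two bookkeeping schemes can be checked against each other for an ideal long solenoid (an assumed geometry, using the standard results \( L = \mu_0 n^2 \pi R^2 \ell \) and \( B = \mu_0 n I \) inside, \( B = 0 \) outside): the circuit energy \( \frac{1}{2} L I^2 \) of Eq. 7.30 equals the field energy \( (B^2 / 2\mu_0) \times \text{volume} \) of Eq. 7.35.

```python
import math

# Consistency check of Eq. 7.30 against Eq. 7.35 for an ideal long solenoid
# (assumed, illustrative geometry): n turns/m, radius R, length l, current I.
mu0 = 4e-7 * math.pi
n, R, l, I = 2000.0, 0.01, 0.5, 3.0   # turns/m, m, m, A

L = mu0 * n**2 * math.pi * R**2 * l        # standard solenoid inductance
W_circuit = 0.5 * L * I**2                 # Eq. 7.30: energy from the circuit

B = mu0 * n * I                            # uniform field inside, zero outside
W_field = (B**2 / (2 * mu0)) * math.pi * R**2 * l   # Eq. 7.35: energy in the field

assert math.isclose(W_circuit, W_field, rel_tol=1e-12)
```

The agreement is exact here because the ideal solenoid confines \( \vec{B} \) to a finite volume, so "all space" reduces to the interior.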
You might find it strange that it takes energy to set up a magnetic field - after all, magnetic fields themselves do no work. The point is that producing a magnetic field, where previously there was none, requires changing the field, and a changing B-field, according to Faraday, induces an electric field. The latter, of course, can do work. In the beginning, there is no \( \vec{E} \) , and at the end there is no \( \vec{E} \) ; but in between, while \( \vec{B} \) is building up, there is an \( \vec{E} \) , and it is against this that the work is done. (You see why I could not calculate the energy stored in a magnetostatic field back in Chapter 5.) In the light of this, it is extraordinary how similar the magnetic energy formulas are to their electrostatic counterparts: