<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.11de784a.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.11de784a.com/" rel="alternate" type="text/html" /><updated>2026-03-12T15:28:09+00:00</updated><id>https://www.11de784a.com/feed.xml</id><title type="html">Ayush Singh</title><subtitle>Ayush Singh&apos;s Website</subtitle><entry><title type="html">Les Houches Photoshoot</title><link href="https://www.11de784a.com/2024/10/12/les-houches-photoshoot.html" rel="alternate" type="text/html" title="Les Houches Photoshoot" /><published>2024-10-12T23:00:01+00:00</published><updated>2024-10-12T23:00:01+00:00</updated><id>https://www.11de784a.com/2024/10/12/les-houches-photoshoot</id><content type="html" xml:base="https://www.11de784a.com/2024/10/12/les-houches-photoshoot.html"><![CDATA[<p>Some pictures from the time I visited École de Physique des Houches this year
for a <a href="https://houches24.github.io/">summer school</a>.</p>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/les_houches/mont_blanc_day_v.jpg" />
        <img src="/assets/images/les_houches/aiguille_du_midi_v.jpg" />
    </div>
    <figcaption>
        Dôme du Goûter, Mont Blanc and Aiguille du Midi as seen from the
        school.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/chalet_v.jpg" />
    <figcaption>
        The chalet where I was housed.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/aiguille_du_midi_h.jpg" />
    <figcaption>
        Aiguille du Midi as seen from the school.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/mont_blanc_cloudy_h.jpg" />
    <figcaption>
        Mont Blanc massif.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/mont_blanc_sunset2_h.jpg" />
    <figcaption>
        Mont Blanc massif at sunset.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/mont_blanc_night_h.jpg" />
    <figcaption>
        Mont Blanc massif at night.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/les_houches/mushroom1_v.jpg" />
        <img src="/assets/images/les_houches/mushroom2_v.jpg" />
        <img src="/assets/images/les_houches/mushroom3_v.jpg" />
    </div>
    <figcaption>
        Some mushrooms in the woods.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/les_houches/cat_v.jpg" />
        <img src="/assets/images/les_houches/cat_aesthetic_v.jpg" />
    </div>
    <figcaption>
        A sleepy cat that lives on the school campus.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/cat_sleepy_h.jpg" />
    <figcaption>
        Sleepy.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/forest_meeting_spot_h.jpg" />
    <figcaption>
        A secret meeting spot in the woods behind the school.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/les_houches/hike_sun_v.jpg" />
        <img src="/assets/images/les_houches/forest_snail_v.jpg" />
    </div>
    <figcaption>
        The sun peeking through the woods and a giant snail.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/snail_right_h.jpg" />
    <figcaption>
        A regular-sized snail.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/hike_foggy_valley2_h.jpg" />
    <figcaption>
        Foggy Chamonix valley from Le Prarion.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/hike_foggy_valley3_h.jpg" />
    <figcaption>
        Foggy Chamonix valley from Le Prarion #2.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/hike_valley_h.jpg" />
    <figcaption>
        View from the top of Le Prarion.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/hike_mountain_h.jpg" />
    <figcaption>
        Bellevue from the top of Le Prarion.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/les_houches/hike_valley3_v.jpg" />
        <img src="/assets/images/les_houches/hike_valley2_v.jpg" />
    </div>
    <figcaption>
        Chamonix valley from Mont Lachat.
    </figcaption>
</figure>

<figure class="image wide">
    <img src="/assets/images/les_houches/mont_blanc_after_rain_h.jpg" />
    <figcaption>
        Mont Blanc massif after a week of rain and fog.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/les_houches/paragliders_v.jpg" />
        <img src="/assets/images/les_houches/les_houches_station_v.jpg" />
    </div>
    <figcaption>
        Some paragliders above Chamonix and the Les Houches train station.
    </figcaption>
</figure>]]></content><author><name></name></author><summary type="html"><![CDATA[Some pictures from the time I visited École de Physique des Houches this year for a summer school.]]></summary></entry><entry><title type="html">Thinking About Spacetime Symmetries in Field Theory</title><link href="https://www.11de784a.com/2024/09/01/thinking-about-spacetime-symmetries-in-field-theory.html" rel="alternate" type="text/html" title="Thinking About Spacetime Symmetries in Field Theory" /><published>2024-09-01T23:00:01+00:00</published><updated>2024-09-01T23:00:01+00:00</updated><id>https://www.11de784a.com/2024/09/01/thinking-about-spacetime-symmetries-in-field-theory</id><content type="html" xml:base="https://www.11de784a.com/2024/09/01/thinking-about-spacetime-symmetries-in-field-theory.html"><![CDATA[<p>In physics, fields are thought of as functions on spacetime that carry indices
which determine how they transform under symmetries.
But a cleaner, more satisfying way to think about fields
is differential-geometric — regarding spacetime as a
<a href="https://en.wikipedia.org/wiki/Differentiable_manifold">differentiable manifold</a>,
fields as <a href="https://en.wikipedia.org/wiki/Section_(fiber_bundle)">sections</a> of 
<a href="https://en.wikipedia.org/wiki/Vector_bundle">vector bundles</a>, and spacetime
symmetries as a <a href="https://en.wikipedia.org/wiki/Group_action">group action</a> on spacetime.
From this point of view, the action of a symmetry group on the space of fields
is determined by underlying geometric structures,
and once the correct geometric structure is identified, this action
can be deduced by appeals to “naturality”<sup id="fnref:handwaving" role="doc-noteref"><a href="#fn:handwaving" class="footnote" rel="footnote">1</a></sup>.</p>

<p>Let \(M\) be the spacetime manifold, and for the purposes of this post, let the
<a href="https://en.wikipedia.org/wiki/Diffeomorphism#Diffeomorphism_group">group of diffeomorphisms</a>, \(\mathrm{Diff}(M)\), be the group of spacetime
symmetries<sup id="fnref:setup" role="doc-noteref"><a href="#fn:setup" class="footnote" rel="footnote">2</a></sup>. 
As a warm-up, let us think about
scalar fields that are just functions on \(M\), i.e., sections of
the trivial line bundle. The only natural way
a diffeomorphism \(\Lambda : M \to M\) can act on this scalar field, \(\phi : M
\to \mathbb R\), is by 
<a href="https://en.wikipedia.org/wiki/Pullback">pulling it back</a> by \(\Lambda^{-1}\),</p>

\[(\Lambda \cdot \phi) (p) = \phi(\Lambda^{-1} p) \text{ for every } p \in M,\]

<p>which is precisely how physics textbooks<sup id="fnref:srednicki" role="doc-noteref"><a href="#fn:srednicki" class="footnote" rel="footnote">3</a></sup> write the transformation
rule for scalar fields. Let us see what happens in a nontrivial example.</p>
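<p>As a quick sanity check, take (purely for illustration) \(M = \mathbb{R}^{1, 3}\)
and the translation \(\Lambda_a : x \mapsto x + a\). The rule above then reads</p>

\[(\Lambda_a \cdot \phi)(x) = \phi(x - a) \text{ for every } x \in \mathbb{R}^{1, 3},\]

<p>i.e., the field profile is rigidly shifted by \(a\), as expected.</p>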

<h2 id="vector-fields">Vector Fields</h2>

<p>Vector fields are sections of the 
<a href="https://en.wikipedia.org/wiki/Tangent_bundle">tangent bundle</a> 
\(TM \to M\). Diffeomorphisms act on vector
fields in a natural way by <a href="https://en.wikipedia.org/wiki/Pushforward_(differential)">pushforwards</a>. In particular, for a vector field
\(v\), and a diffeomorphism \(\Lambda\), the action is given by</p>

\[(\Lambda \cdot v)(p) = (\Lambda_\ast v)(\Lambda^{-1} p) \text{ for every } p \in M,\]

<p>where \(\Lambda_\ast\) is the pushforward of vector fields induced by
\(\Lambda\).  Once again, this can be identified as the familiar transformation
law for vector fields from physics textbooks.</p>
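<p>In a single coordinate chart (glossing over chart changes), writing
\(\tilde x = \Lambda^{-1} x\), the pushforward acts by the Jacobian of
\(\Lambda\), and the rule above takes the familiar index form</p>

\[(\Lambda \cdot v)^\mu(x) = \frac{\partial \Lambda^\mu}{\partial x^\nu}(\tilde x)\, v^\nu(\tilde x).\]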

<figure class="image mid invert">
  <img src="/assets/images/spacetime_symmetries/tangent_bundle_map.png" />
</figure>

<p>However, note that the pushforward — seen as a <a href="https://en.wikipedia.org/wiki/Bundle_map">bundle map</a> — does two
distinct things: (1) moves fibers around, and (2) acts <em>on</em> fibers by a
linear transformation. How precisely these two decouple is the content of the
universal property of <a href="https://en.wikipedia.org/wiki/Pullback_bundle">pullback bundles</a>.</p>

<figure class="image mid invert">
  <img src="/assets/images/spacetime_symmetries/tangent_bundle_map_factored.png" />
</figure>

<p>Per this 
<a href="https://en.wikipedia.org/wiki/Pullback_(category_theory)#Universal_property">universal property</a>, 
the bundle map \(\Lambda_\ast = \kappa \circ \lambda\) factors through the
pullback bundle \(\Lambda^\ast TM \cong TM\) by a unique map \(\lambda\). If
\(v\) is a vector field, then the action of \(\kappa\) is to move the fiber
around, i.e., \((\kappa\cdot v)(p) = v(\Lambda^{-1} p)\). And the action of
\(\lambda\) is a linear transformation of fibers \((\lambda \cdot v)(p) =
\lambda(p) v(p)\), where \(\lambda(p) \in \mathrm{GL}(T_p M)\).</p>

<p>Before moving to more generic situations, it is nice to pause and note that
the transformation rule for scalar fields I wrote in the introduction is a
special case of what happens to vector fields. Fibers are moved around but the
action on fibers is trivial.</p>

<p>If you are eagle-eyed (or if you already know where this story is going), you
may ask if the bundle automorphism \(\lambda\) has an interpretation as a
section of some bundle over \(M\) whose fibers are the group
\(\mathrm{GL}(\mathbb{R}^m)\). This is correct and the interpretation is fleshed
out in the following paragraphs.</p>

<p>To understand more general kinds of fields, we should look for a geometric
structure that unifies and generalizes the examples above. This geometric
structure is that of a 
<a href="https://en.wikipedia.org/wiki/Principal_bundle">principal \(G\)-bundle</a> 
and <a href="https://en.wikipedia.org/wiki/Associated_bundle">associated vector bundles</a>.</p>

<h2 id="principal-bundles">Principal Bundles</h2>

<p>A principal <a href="https://en.wikipedia.org/wiki/Orthogonal_group#Special_orthogonal_group">\(\mathrm{SO}(m)\)</a>-bundle that comes for free with every orientable
manifold is the bundle of 
<a href="https://en.wikipedia.org/wiki/Frame_bundle#Orthonormal_frame_bundle">oriented orthonormal frames</a> 
of the tangent bundle<sup id="fnref:structuregroup" role="doc-noteref"><a href="#fn:structuregroup" class="footnote" rel="footnote">4</a></sup>, which I will denote \(\mathrm{SO}(M) \to M\). 
An almost tautological remark is that the tangent bundle is isomorphic to the
associated bundle \(\mathrm{SO}(M) \times_\mathbf{m} \mathbb{R}^m\), where
\(\mathbf{m}\) is the defining representation of \(\mathrm{SO}(m)\). 
Real scalar fields can be seen as sections of the associated line bundle
\(\mathrm{SO}(M) \times_\mathbf{1} \mathbb{R}\), where \(\mathbf{1}\) is a
one-dimensional real representation of \(\mathrm{SO}(m)\).</p>

<p>An abstract but more geometric way of thinking about vector bundles is to
regard the associated <a href="https://en.wikipedia.org/wiki/Frame_bundle">frame bundle</a> 
— a principal \(G\)-bundle for an appropriate structure group \(G\) —  as
the basic object, which gives rise to <em>a whole family</em> of vector bundles with
the same topology<sup id="fnref:sametopology" role="doc-noteref"><a href="#fn:sametopology" class="footnote" rel="footnote">5</a></sup>, indexed by <a href="https://en.wikipedia.org/wiki/Group_representation">linear representations</a> of \(G\). 
From this point of view, we can construct a large class of fields on \(M\), as sections
of \(\mathrm{SO}(M) \times_\rho V\) for every representation \(\rho : \mathrm{SO}(m) \to \mathrm{GL}(V)\).</p>

<p>To see how diffeomorphisms act on these fields we can run parallel to the
discussion above on vector fields, once we note that vector bundle maps induce
maps of associated frame bundles. So, a diffeomorphism induces the
principal bundle map</p>

<figure class="image mid invert">
  <img src="/assets/images/spacetime_symmetries/principal_bundle_map.png" />
</figure>

<p>which factors uniquely through the pullback bundle, \(\Lambda^\ast \mathrm{SO}(M) \cong
\mathrm{SO}(M)\) as</p>

<figure class="image mid invert">
  <img src="/assets/images/spacetime_symmetries/principal_bundle_map_factored.png" />
</figure>

<p>As before \(\kappa\) moves fibers around, and \(\lambda\) acts on the fibers as
a bundle automorphism<sup id="fnref:automorphism" role="doc-noteref"><a href="#fn:automorphism" class="footnote" rel="footnote">6</a></sup>. Principal bundle automorphisms are sections of the
adjoint bundle, so \(\lambda\) is a section of the associated bundle
\(\mathrm{SO}(M) \times_\textrm{Ad} \mathrm{SO}(m)\).</p>

<p>On sections, \(\varphi\), of the associated vector bundle \(\mathrm{SO}(M) \times_\rho
V\), a diffeomorphism acts as a composition of a coordinate change \(\kappa\),
and a bundle automorphism \(\lambda\) as follows</p>

\[(\Lambda \cdot \varphi)(p) = (\lambda \cdot \varphi)(\Lambda^{-1} p) \text{ for every } p \in M,\]

<p>where \((\lambda \cdot \varphi)(p) = \rho(\lambda(p))\varphi(p)\).</p>

<hr />

<p>The upshot of all this abstract<sup id="fnref:abstract" role="doc-noteref"><a href="#fn:abstract" class="footnote" rel="footnote">7</a></sup> language is that the action of spacetime
symmetries on any field that you can possibly think of is either — in case of
tensors — a special case of the formula above, or — for spinors or
<a href="https://en.wikipedia.org/wiki/Supermultiplet">superfields</a> — a
straightforward generalization of it. To wrap up, I will sketch how
\(\mathrm{Diff}(M)\) acts on tensor fields and spinors.</p>

<p>An \((a, b)\)-tensor field on \(M\) is a section of the vector bundle
\(TM^{\otimes a} \otimes T^\ast M^{\otimes b}\).
Or in the language of principal bundles, it is a section of the vector bundle
associated to \(\mathrm{SO}(M)\) with \(\rho = \mathbf{m}^{\otimes a} \otimes
\mathbf{\bar m}^{\otimes b}\), where \(\mathbf m\) is the defining
representation of \(\mathrm{SO}(m)\) and \(\mathbf{\bar m}\) is its dual. In local
coordinates, \(\rho(\lambda)\) will become the famous transformation rule for
tensors.</p>
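<p>For instance, for a \((1, 1)\)-tensor field \(T\) in a single chart, with
\(\tilde x = \Lambda^{-1} x\), the rule is</p>

\[(\Lambda \cdot T)^{\mu}{}_{\nu}(x) = \frac{\partial \Lambda^\mu}{\partial x^\alpha}(\tilde x)\, \frac{\partial (\Lambda^{-1})^\beta}{\partial x^\nu}(x)\, T^{\alpha}{}_{\beta}(\tilde x),\]

<p>with one Jacobian factor for the contravariant index and one inverse Jacobian
factor for the covariant index.</p>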

<p>Finally, if the spacetime manifold supports a 
<a href="https://en.wikipedia.org/wiki/Spin_structure">spin structure</a> then spinor fields
can be understood as sections of an associated 
<a href="https://en.wikipedia.org/wiki/Spinor_bundle">spinor bundle</a>. In particular,
given a choice of spin structure on \(M\), there is a principal 
<a href="https://en.wikipedia.org/wiki/Spin_group">\(\mathrm{Spin}(m)\)</a>-bundle over
it, \(\mathrm{Spin}(M) \to M\). Dirac spinor fields on \(M\) are sections of the
associated vector bundle \(\mathrm{Spin}(M) \times_\sigma \Delta_m\), where \(\sigma :
\mathrm{Spin}(m) \to \mathrm{GL}(\Delta_m)\) is a particular complex representation of the spin
group called the 
<a href="https://en.wikipedia.org/wiki/Spin_representation">spinor representation</a>.</p>

<p>To see how a diffeomorphism acts on a spinor field, note that — modulo some
assumptions on the topology of \(M\) — the \(\mathrm{SO}(M)\) bundle map induced by a
diffeomorphism lifts to a \(\mathrm{Spin}(M)\) bundle map. As before, this bundle map
can be decomposed, via the pullback bundle, into a piece that moves fibers
around and a piece that acts on fibers, so that the
transformation rule for spinors can be written as</p>

\[(\Lambda \cdot \psi)(p) = (\lambda \cdot \psi)(\Lambda^{-1} p) \text{ for every
} p \in M,\]

<p>where the bundle automorphism, \(\lambda : \mathrm{Spin}(M) \to \mathrm{Spin}(M)\), acts on each
fiber by the spinor representation \((\lambda \cdot \psi)(p) =
\sigma(\lambda(p)) \psi(p)\).</p>

<hr />
<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:handwaving" role="doc-endnote">
      <p>And some waving of hands. <a href="#fnref:handwaving" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:setup" role="doc-endnote">
      <p>Even when the setup is different, the story of spacetime symmetries
will either — in case of flat space with Poincaré symmetry — be a special case
of the one presented here or — for a supermanifold with supersymmetry —
a straightforward generalization of it. <a href="#fnref:setup" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:srednicki" role="doc-endnote">
      <p>Like Mark Srednicki’s <em>Quantum Field Theory</em>, for example. <a href="#fnref:srednicki" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:structuregroup" role="doc-endnote">
      <p>The tangent frame bundle is <em>a priori</em> a principal
\(\mathrm{GL}(\mathbb{R}^m)\)-bundle, but the structure group can be
reduced to \(\mathrm{SO}(m)\) by (1) picking a Riemannian metric, which is
always possible for a smooth manifold, and (2) picking an orientation,
which is possible because the manifold is assumed to be orientable. <a href="#fnref:structuregroup" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:sametopology" role="doc-endnote">
      <p>By <em>the same topology</em>, I mean that the local trivialization
and transition functions are the same. <a href="#fnref:sametopology" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:automorphism" role="doc-endnote">
      <p>Existence and uniqueness of the bundle automorphism \(\lambda\) is due to
the universal property of pullback bundles. <a href="#fnref:automorphism" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:abstract" role="doc-endnote">
      <p>Honestly, the question of precisely how diffeomorphisms act on the
tangent frame bundle turned out to be subtler than I had anticipated.
For more details of the story I have sketched here, my
favorite references are Loring Tu’s <em>Differential
Geometry</em>, and Thomas Friedrich’s <em>Dirac Operators and Riemannian Geometry</em>. <a href="#fnref:abstract" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name></name></author><summary type="html"><![CDATA[In which I try to rephrase the physics understanding of spacetime symmetries in the language of principal bundles.]]></summary></entry><entry><title type="html">Fiorentino Pictureshow</title><link href="https://www.11de784a.com/2024/06/14/fiorentino-pictureshow.html" rel="alternate" type="text/html" title="Fiorentino Pictureshow" /><published>2024-06-14T23:00:01+00:00</published><updated>2024-06-14T23:00:01+00:00</updated><id>https://www.11de784a.com/2024/06/14/fiorentino-pictureshow</id><content type="html" xml:base="https://www.11de784a.com/2024/06/14/fiorentino-pictureshow.html"><![CDATA[<p>I visited the Galileo Galilei Institute in Florence for a couple of weeks in
April and May.  Here are some pictures from the trip.</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/skyline_afternoon_h.jpg" alt="Skyline of the city from Basilica di San Miniato." />
    
    <figcaption>Skyline of the city from Basilica di San Miniato.</figcaption>
    
  </figure>
</p>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/fiorentino_pictureshow/duomo_morning_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/duomo_door_v.jpg" />
    </div>
    <figcaption>
        Cattedrale di Santa Maria del Fiore and one of its many breathtaking
        doors.
    </figcaption>
</figure>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/arno_h.jpg" alt="Arno at sunset." />
    
    <figcaption>Arno at sunset.</figcaption>
    
  </figure>
</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/skyline_night_h.jpg" alt="Skyline of the city from Piazzale Michelangelo." />
    
    <figcaption>Skyline of the city from Piazzale Michelangelo.</figcaption>
    
  </figure>
</p>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/fiorentino_pictureshow/ggi_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/ggi_cat_far_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/ggi_cat_close_v.jpg" />
    </div>
    <figcaption>
        Main building of the GGI and a grumpy cat that lives on the campus.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/fiorentino_pictureshow/duomo_evening_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/tower_night_v.jpg" />
    </div>
    <figcaption>
        A peek of Duomo di Firenze and the clocktower of Palazzo Vecchio.
    </figcaption>
</figure>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/cappelle_medici_h.jpg" alt="Cappelle Medici" />
    
    <figcaption>Cappelle Medici</figcaption>
    
  </figure>
</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/uffizi_ceiling_h.jpg" alt="Ceiling of the Uffizi Gallery." />
    
    <figcaption>Ceiling of the Uffizi Gallery.</figcaption>
    
  </figure>
</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/duomo_morning_h.jpg" alt="Duomo from inside the Uffizi." />
    
    <figcaption>Duomo from inside the Uffizi.</figcaption>
    
  </figure>
</p>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/fiorentino_pictureshow/dante_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/pinocchio_v.jpg" />
    </div>
    <figcaption>
        Dante, Pinocchio and Cricket.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/fiorentino_pictureshow/peek_duomo_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/peek_tower_v.jpg" />
    </div>
    <figcaption>
        Peeks of the Duomo and the bell tower of Cattedrale di Santa Maria del
        Fiore.
    </figcaption>
</figure>

<figure class="image wide">
    <div class="many">
        <img src="/assets/images/fiorentino_pictureshow/barbie1_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/barbie2_v.jpg" />
        <img src="/assets/images/fiorentino_pictureshow/barbie3_v.jpg" />
    </div>
    <figcaption>
        A deranged series of posters in the
        historical city center.
    </figcaption>
</figure>

<p>
  <figure class="image wide">
    <img src="/assets/images/fiorentino_pictureshow/skyline_evening_h.jpg" alt="Another skyline, just after sunset." />
    
    <figcaption>Another skyline, just after sunset.</figcaption>
    
  </figure>
</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I visited the Galileo Galilei Institute in Florence for a couple of weeks in April and May. Here are some pictures from the trip.]]></summary></entry><entry><title type="html">A Cursed Space in the Wild</title><link href="https://www.11de784a.com/2024/06/03/a-cursed-topological-space.html" rel="alternate" type="text/html" title="A Cursed Space in the Wild" /><published>2024-06-03T23:00:01+00:00</published><updated>2024-06-03T23:00:01+00:00</updated><id>https://www.11de784a.com/2024/06/03/a-cursed-topological-space</id><content type="html" xml:base="https://www.11de784a.com/2024/06/03/a-cursed-topological-space.html"><![CDATA[<p>Recently, I had the misfortune of coming across the Sierpiński 2-point space in
the wild. And while I’m sure that I must have seen it within minutes of
learning the definition of a topological space, seeing it arise naturally in a
linear algebra problem (of all places!) was horrifying.</p>

<hr />

<p>Consider the problem of classifying linear maps between two finite dimensional
complex vector spaces, \(A : V \to W\), up to change of basis. If you are very
smart, you will eventually conclude that this is done completely by the set
of numbers \((\dim V,\) \(\dim W,\) \(\dim \ker A,\) \(\dim \mathrm{im}\, A)\),
subject to the <a href="https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem#Linear_transformations">rank–nullity theorem</a>.</p>

<p>But if you are geometrically minded, you might think that linear maps are just
matrices, so the space of all linear maps must be \(R_{m, n} = \mathrm{Mat}_{n \times
m}(\mathbb{C}) \cong \mathbb{C}^{mn}\), where \(\dim V = m\) and \(\dim W =
n\). And identifying linear maps that are equivalent under change of bases is
done by quotienting this space by the group \(G_{m, n}\) \(= \mathrm{GL}(V)
\times
\mathrm{GL}(W) \cong\) \(\mathrm{GL}_m(\mathbb{C}) \times
\mathrm{GL}_n(\mathbb{C})\). And then, you will conclude that the
classification problem is solved completely by the space \(M_{m, n} = R_{m, n}
/ G_{m, n}\).</p>

<p>Fantastic! Let us try to compute this space in some simple cases.</p>

<p>First, note that for \(A \in R_{m, n}\) and \((S, T) \in G_{m, n}\), the action
is given by</p>

\[(S, T)\cdot A = T A S^{-1},\]

<p>which is exactly a change of basis.</p>

<p>Now, let us do the simplest case: \(m = n = 1\). We have \(R_{1, 1} \cong
\mathbb{C}\) and \(G_{1, 1} \cong \mathbb{C}^* \times \mathbb{C}^*\), and the
action \(G_{1, 1} \curvearrowright R_{1, 1}\) is given by \((s, t)\cdot a = (t
/ s) a\) for every \(a \in \mathbb{C}\) and \(s, t \in \mathbb{C}^*\). It is an
easy exercise to show that \(M_{1, 1} = \{[0], [1]\}\).</p>
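<p>To spell the exercise out: the orbit of \(0\) is \(\{0\}\), since
\((t / s) \cdot 0 = 0\); and for any \(a \neq 0\),</p>

\[(a, 1) \cdot a = \frac{1}{a}\, a = 1,\]

<p>so every nonzero number lies in the orbit of \(1\). Hence there are exactly
two orbits, \([0]\) and \([1]\).</p>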

<p>The good news is that \(M_{1, 1}\) has two points—corresponding to the two
orbits of \(G_{1, 1}\) in \(R_{1, 1}\)—which matches with the linear algebra
answer.  The bad news is that the 
<a href="https://en.wikipedia.org/wiki/Quotient_space_(topology)">quotient topology</a> 
makes \(M_{1, 1}\) the
<a href="https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space">Sierpiński 2-point space</a>.</p>

<p>Indeed, the point \(\{[0]\} \subset M_{1, 1}\) is closed because its orbit is the
singleton \(\{0\}\), whose complement is open in \(\mathbb C\); it is not open because
the orbit is not open in \(\mathbb C\). The point \(\{[1]\} \subset M_{1, 1}\) is open
because its orbit \(\mathbb C \setminus \{0\} \subset \mathbb C\) is open; it is not closed
because the complement of its orbit is a singleton and therefore not open in
\(\mathbb C\). As a result, the closure of \(\{[1]\}\) is
the whole space, \(M_{1, 1}\).</p>

<p>The only things I knew about this space before seeing this example were that</p>

<ol>
  <li>it is the only topology that can be given to a two-element set other than
the trivial and discrete topologies,</li>
  <li>it has very bad separation properties: \(M_{1, 1}\) is not
<a href="https://en.wikipedia.org/wiki/Hausdorff_space">Hausdorff</a>, it is not even
<a href="https://en.wikipedia.org/wiki/T1_space">\(T_1\)</a>, and</li>
  <li>its only function is to be a
<a href="https://en.wikipedia.org/wiki/Counterexamples_in_Topology">counterexample</a>
that instructors use to torture undergrads in their first topology course.</li>
</ol>

<p>And that’s why I was horrified when I saw it come up in what is essentially
an elementary linear algebra problem.</p>

<hr />
<p>I learned about this in Marcus Reineke’s great series of lectures on Quivers
and Cohomological Hall Algebras at the 
<a href="https://www.ggi.infn.it/showevent.pl?id=498">Galileo Galilei Institute in May 2024</a>.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Recently, I had the misfortune of coming across the Sierpiński 2-point space in the wild. And while I’m sure that I must have seen it within minutes of learning the definition of a topological space, seeing it arise naturally in a linear algebra problem (of all places!) was horrifying.]]></summary></entry><entry><title type="html">An Outline for Witten’s Analytic Continuation of Chern–Simons Theory</title><link href="https://www.11de784a.com/2024/03/13/an-outline-for-wittens-analytic-continuation-of-chern-simons-theory.html" rel="alternate" type="text/html" title="An Outline for Witten’s Analytic Continuation of Chern–Simons Theory" /><published>2024-03-13T23:00:01+00:00</published><updated>2024-03-13T23:00:01+00:00</updated><id>https://www.11de784a.com/2024/03/13/an-outline-for-wittens-analytic-continuation-of-chern-simons-theory</id><content type="html" xml:base="https://www.11de784a.com/2024/03/13/an-outline-for-wittens-analytic-continuation-of-chern-simons-theory.html"><![CDATA[<p>I hate reading long PDFs that do not have an outline attached. So, I spent one
afternoon making an outline for Witten’s <em>Analytic Continuation of Chern–Simons
Theory</em>, when I should have been reading it.</p>

<p>Here is the <a href="/assets/files/1001.2933/1001.2933.txt">outline</a>, <a href="/assets/files/1001.2933/1001.2933_outlined.pdf">PDF with outline attached</a>, and a <a href="https://arxiv.org/abs/1001.2933">link to the
original (arXiv:1001.2933)</a>.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I hate reading long PDFs that do not have an outline attached. So, I spent one afternoon making an outline for arXiv:1001.2933, when I should have been reading it.]]></summary></entry><entry><title type="html">How to Make a NiSERCast</title><link href="https://www.11de784a.com/2022/05/28/how-to-make-a-nisercast.html" rel="alternate" type="text/html" title="How to Make a NiSERCast" /><published>2022-05-28T18:30:01+00:00</published><updated>2022-05-28T18:30:01+00:00</updated><id>https://www.11de784a.com/2022/05/28/how-to-make-a-nisercast</id><content type="html" xml:base="https://www.11de784a.com/2022/05/28/how-to-make-a-nisercast.html"><![CDATA[<p>It has been a year since the <a href="https://nisercast.gitlab.io/2021/04/19/a-story-for-everything.html">last episode of NiSERCast</a>, 
and it has been more than two years since the idea of a science communication
podcast led by NISER students was initially conceived. Partly for archival
reasons and partly for self-indulgent reasons, here are some reflections from
my involvement in the project.</p>

<p>For those who don’t know, <a href="https://nisercast.gitlab.io/">NiSERCast</a> 
is an outreach project started by a few NISER students, including myself,
consisting of a podcast in which students have conversations with professors
about their research and their life in academia. We were only able to release
one episode, and when the second wave of the coronavirus hit India, we lost
momentum and have unfortunately been on a hiatus since May 2021.</p>

<h2 id="prologue">Prologue</h2>

<p>It was the February of <abbr title="Ominous music">2020</abbr> when <a href="https://surelynottrue.github.io/">Spandan</a>,
who was also my roommate at the time, brought up the idea of a student led
podcast based around conversations with NISER professors. As we imagined it at
the time, it was going to be like MIT OpenCourseWare’s <a href="https://chalk-radio.simplecast.com/">Chalk
Radio</a>, but more informal, less
professional, and with lower production values. Over a weekend, Spandan
and I whipped up a very quick and dirty website and configured an
Atom feed for the podcast, and then we sent out a wider call for volunteers via
the Science Activities Club (SAC).</p>

<p>I must say, I was quite overwhelmed by the initial response. Many
people, both my seniors and juniors, were extremely enthusiastic about the
podcast. There were meetings and discussions in which the basics were 
fixed—the division of responsibilities: hosting, production, editing, social
media; the basic format; the schedule; and perhaps most importantly, the name.
We also talked to professors and made a list of potential guests.
Finally, we decided to start recordings in March, after our midsemester exams
which were scheduled for the last week of February.</p>

<p>And then, of course, the coronavirus hit.</p>

<h2 id="nuts-and-bolts">Nuts and Bolts</h2>

<p>We picked up the project again, almost exactly a year later, in March 2021,
after social distancing norms were relaxed a little by the institute. Student
volunteers fell into two major categories: (1) hosts, who were responsible for
inviting professors as guests and, well, hosting the podcast, and (2) people
responsible for recording, editing and releasing episodes. Before approaching
the Dean of Student Affairs (DoSA) with a proposal, we did the following:</p>

<ul>
  <li>Hosts approached professors to gauge their interest, and we made a rough
schedule for recording the first 5–6 episodes. We (very ambitiously) planned
to record every weekend and release an episode every two weeks.</li>
  <li>We decided to hold recordings in the Discussion Hall in the School
of Physical Sciences (SPS), which was closed at the time due to social distancing
restrictions. We talked to the SPS Chairperson and got their permission for
using the Discussion Hall.</li>
  <li>We talked to the Student Gymkhana and the Drama and Music Club to borrow
their recording equipment (more on this later).</li>
  <li>Finally, we recorded a short intro in the SPS Discussion Hall with the
borrowed recording equipment to test our record-edit-release pipeline, and
released it as the zeroth episode on the Atom feed.</li>
</ul>

<p>After making an intro jingle, a small redesign of the NiSERCast website, 
and getting required permissions from the DoSA, we were ready to start
recording.</p>

<p>A rough format for episodes was decided after a discussion among all
volunteers. To keep the podcast accessible to laymen—especially high school
students who might be considering a career in the sciences—we decided that
each episode should have two hosts, at least one of whom should be from outside
the professor’s department. Initially, we decided to let the professor and the
hosts talk for about two hours, which could then be cut down to a one-hour
episode. In retrospect, I feel like this was a mistake. With our courseload and
other academic engagements, and having only one group of four people to
look after recordings, editing, and releases, cutting a two-hour
conversation down to an hour every two weeks, in addition to supervising the
recordings every week, was just not feasible. To be fair, however, these
glitches would probably have evened themselves out had we been able to do a few
more episodes.</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/nisercast/journey.png" alt="From the conception of NiSERCast to the first episode." />
    
    <figcaption>From the conception of NiSERCast to the first episode.</figcaption>
    
  </figure>
</p>

<h3 id="recording-setup">Recording Setup</h3>

<p>If you have listened to NiSERCast, you may have noticed that it does not sound
like a professionally produced podcast.  However, I am going to describe the
recording setup we used anyway, for reference.</p>

<p>The technical side of things was handled by <a href="http://www.instagram.com/this.is_anirudh/">Anirudh</a>, 
<a href="https://github.com/JeS24/">Jyotirmaya</a>, Spandan and
I. Each of the three participants talked into an Ahuja AWM-490V1 wireless
microphone, whose outputs were fed into <a href="https://www.audacityteam.org">Audacity</a> 
for recording via a Yamaha MG10XU mixer and a Focusrite 2i2 Scarlett audio
interface. Hosts, professors and the producers also wore headphones to monitor
audio levels. The mics, the mixer and a few cables came from the Drama and
Music Club; the Focusrite audio interface belonged to me; hosts brought their
own headphones; and we had to buy things like ¼-inch-to-XLR cables, aux
cables, and a five-way audio splitter ourselves to make everything work.</p>

<h3 id="website-and-atom-feed">Website and Atom Feed</h3>

<p>I rewrote the NiSERCast website, more or less from scratch, in March 2021. It is
built on top of Jekyll—which provides an easy templating system and manages
the Atom feed—and is deployed with GitLab Pages while being hosted in a
GitLab repository.</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/nisercast/evolution.png" alt="Some alternative covers." />
    
    <figcaption>Some alternative covers.</figcaption>
    
  </figure>
</p>

<p>For the Atom feed, I started with the basic template that a new Jekyll project
comes with and added extra tags according to 
<a href="https://podcasters.apple.com/support/823-podcast-requirements">Apple’s</a>, 
<a href="https://support.google.com/podcast-publishers/answer/9889544?hl=en">Google’s</a> and 
<a href="https://support.spotifyforpodcasters.com/hc/en-us/articles/360044440991-Podcast-specification-doc">Spotify’s feed requirements</a>.
To make sure that everything is working as expected before submitting the feed
to Apple, Google and Spotify, I used feed validators at 
<a href="https://podba.se/validate/">Podbase</a> and
<a href="https://validator.w3.org/feed/">W3C</a>.</p>

<h3 id="outreach">Outreach</h3>

<p>After releasing the first episode, we hit a small bureaucratic bump in the
road. Since the podcast is an outreach activity bearing NISER’s name, released
to the general public, we were also supposed to get separate permission from
the NISER Outreach Committee. This could have been
pointed out when we sent our initial proposal to the DoSA, but we caught the Outreach
Committee’s attention only after the first real episode was released. The fact
that our first guest, Prof. Varadharajan Muruganandam, is very opinionated
did not help either.</p>

<p>Anyway, after a period of uncertainty, in which we were not even sure that we
would be allowed to continue the podcast, and a few weeks’ delay, we got the
clearance to continue as long as we made it clear that the opinions expressed
were personal opinions of the guests and did not represent NISER’s views.</p>

<h2 id="conclusion">Conclusion</h2>

<p>When the second wave of the coronavirus hit, almost all of the volunteers chose
to return to their homes. Because of a variety of personal reasons, and because
of the poor management of the pandemic by NISER’s administration, we could
not keep producing the podcast. After the campus reopened last December,
Jyotirmaya, Spandan and I were halfway into our final year and were extremely busy
with our master’s theses and PhD applications, and it became nearly impossible
for us to organize recording sessions from scratch again.</p>

<p>If you have made it this far into this post: thank you for putting up with my
ramblings. I hope that this has been useful in some way for people who
want to continue the podcast, or who decide to start a similar project at NISER.
All in all, I am extremely proud of what we were able to accomplish while working
with various constraints and limited resources. 
And I must express my deepest gratitude and thanks to everyone involved in the
project.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[It has been a year since the last episode of NiSERCast, and it has been more than two years since the idea of a science communication podcast led by NISER students was initially conceived. Partly for archival reasons and partly for self-indulgent reasons, here are some reflections from my involvement in the project.]]></summary></entry><entry><title type="html">How to (Not) Write a Computational Physics Library</title><link href="https://www.11de784a.com/2022/02/23/how-to-not-write-a-computational-physics-library.html" rel="alternate" type="text/html" title="How to (Not) Write a Computational Physics Library" /><published>2022-02-23T18:22:31+00:00</published><updated>2022-02-23T18:22:31+00:00</updated><id>https://www.11de784a.com/2022/02/23/how-to-not-write-a-computational-physics-library</id><content type="html" xml:base="https://www.11de784a.com/2022/02/23/how-to-not-write-a-computational-physics-library.html"><![CDATA[<p>Before starting let’s get the disclaimer out of the way: this is not a
tutorial.
I am not an expert by any stretch of the imagination; to wit, my only
experience with “real” C code has been working through Daniel Holden’s
<a href="https://buildyourownlisp.com/">Build Your Own Lisp</a> in the summer of 2020.
This is only an attempt to document (heh) my progress as I write code for my
<a href="https://www.niser.ac.in/sps/course/p452-computational-physics">computational physics elective</a>.</p>

<p>Oh, and did I mention? I am going to try writing everything. From scratch. In
C. And I’m writing this post to convince myself that it is not a bad idea.</p>

<p>Writing code for a computational physics class is weird. You are supposed to
<code class="language-plaintext highlighter-rouge">import numpy as np</code> right at the beginning to “make your life easier”,
but then you are supposed to forget that extremely optimized versions of
algorithms that you will naively implement over the next few weeks were
imported into your program by the very first line of code you wrote. If you don’t
want to <code class="language-plaintext highlighter-rouge">import numpy</code>, you could, of course, do something like <code class="language-plaintext highlighter-rouge">class
Array(list)</code>, but then, in addition to the housekeeping, you have to bear with
extremely slow linear algebra operations in everything you write.  Course
instructors don’t want you to use “any” libraries, but in practice this rule
gets extremely fuzzy: What <em>is</em> a library? Does the standard library count?
Is it okay to use objects from a library but not its algorithms? Where do you
draw the line?</p>

<p>The other option, if you don’t want to use “any” libraries and still
have your code be relatively quick, is to use a compiled
language like C++ or Rust (or Julia, if you don’t mind being a dirty cheater<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>).</p>

<p>But writing good object oriented code with C++ is hard. Getting used to Rust’s
memory safety system is hard. Writing C code, on the other hand, is… also hard.</p>

<p>But it is also simple. C++ is very complicated.</p>

<p>Just look at Kernighan and Ritchie’s <em>The C Programming Language</em>
side-by-side with its C++ counterpart (Stroustrup’s <em>The C++ Programming
Language</em>):</p>

<p>
  <figure class="image wide">
    <img src="/assets/images/cpl001/youvstheguyshetellsyounottoworryabout.jpg" alt="A side-by-side comparison of C and C++" />
    
    <figcaption>A side-by-side comparison of C and C++</figcaption>
    
  </figure>
</p>

<p><em>K&amp;R</em> fits a surprisingly accessible and complete
introduction to C, several working examples, common idioms, <em>and</em> a
reference for the core language and standard library, in a measly 250 pages.
But this “simplicity” also speaks to a lack of features, most notably garbage
collection, operator overloading, and OOP patterns like inheritance, that most
people take for granted when writing code in a higher level language.</p>

<p>There is also the whole thing about the inescapable legacy of C in all of
modern programming, which I will not recount here. I am also going to skip the
metaphysics and philosophy that a more experienced C programmer would
have spent several paragraphs on,
but here is a quote from <a href="https://buildyourownlisp.com/chapter1_introduction#why_learn_c">Why learn C?</a>:</p>

<blockquote>
  <p>To want to master C is to care about what is powerful, clever, and free. To
become a programmer with all the vast powers of technology at his or her
fingertips and the responsibility to do something to benefit the world.</p>
</blockquote>

<p>Having written a few numerical linear algebra algorithms, I have to admit, I
have started to understand why people say things like this.
C forces you to care deeply about your code, especially when it comes to
memory. I have found myself trying to minimize heap allocations as much as
possible, something I would not have given a second thought to if I were using
Julia (even though the <a href="https://docs.julialang.org/en/v1/manual/performance-tips/#Measure-performance-with-[@time](@ref)-and-pay-attention-to-memory-allocation">documentation suggests that I
should</a>).</p>

<p>In the end, there is no perfect language<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>. And for an application as broad
as numerical analysis, there can be no perfect tool for the job<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>.</p>

<p>There is no single reason why I have decided to stick<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup> with C.
As far as I can tell, there is no “killer feature”. But when I put together the
learning experience C provides, the challenge of working with a very simple
language, its privileged status among programmers, and (perhaps most
importantly) the bragging rights that come with writing C, spending
some extra time and effort fighting code and beating it into shape with <code class="language-plaintext highlighter-rouge">gdb</code>
and <code class="language-plaintext highlighter-rouge">valgrind</code> seems worthwhile.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>This is probably what I should have done. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>Except Lisp. Lisp is <em>the</em> perfect programming language. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:3" role="doc-endnote">
      <p>But Julia comes close. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:4" role="doc-endnote">
      <p>For now, anyway. I am not promising that I am going to stick with this choice. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name></name></author><category term="code" /><category term="physics" /><summary type="html"><![CDATA[An introduction to my misguided attempt at writing a computational physics library from *scratch*. In this post I try to convince myself that using ANSI C for this project is not a mistake.]]></summary></entry><entry><title type="html">Quantum Isothermal Processes are Not Isoenergetic</title><link href="https://www.11de784a.com/2020/05/16/quantum-isothermal-processes.html" rel="alternate" type="text/html" title="Quantum Isothermal Processes are Not Isoenergetic" /><published>2020-05-16T18:30:01+00:00</published><updated>2020-05-16T18:30:01+00:00</updated><id>https://www.11de784a.com/2020/05/16/quantum-isothermal-processes</id><content type="html" xml:base="https://www.11de784a.com/2020/05/16/quantum-isothermal-processes.html"><![CDATA[<p>In a classical <a href="https://en.wikipedia.org/wiki/Isothermal_process">isothermal
process</a> the internal energy
of the system remains invariant because for a classical ideal gas, the
internal energy is proportional to the temperature of the gas<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>.  However, if
we define a quantum analogue of the isothermal process, this is not necessarily
true. With
<a href="https://en.wikipedia.org/wiki/Quantum_superposition">all</a>
<a href="https://en.wikipedia.org/wiki/EPR_paradox">the</a>
<a href="https://en.wikipedia.org/wiki/Quantum_teleportation">weird</a>
<a href="https://en.wikipedia.org/wiki/Casimir_effect">things</a> that happen in quantum
systems, this is not really surprising, but it is definitely interesting.</p>

<p>In this post, I start by building some elementary quantum thermodynamics:
we shall look at the quantum version of the <a href="https://en.wikipedia.org/wiki/First_law_of_thermodynamics">first law of
thermodynamics</a>,
some general facts about quantum thermodynamic processes, and then define an
effective temperature for quantum systems and a quantum analogue of the
isothermal process. Once we have the definition of an isothermal process, we
shall see with an easy computation that when the working substance is a
<a href="https://en.wikipedia.org/wiki/Two-state_quantum_system">quantum two-level system</a>, 
the internal energy is not invariant during the process.</p>

<h2 id="the-quantum-first-law-of-thermodynamics">The Quantum First Law of Thermodynamics</h2>

<p>If the energy eigenstates of our quantum system are labelled by \(|n\rangle\)
with corresponding eigenenergies \(E_n\), the Hamiltonian (in the energy basis)
can be written as</p>

\[\hat{H} = \sum_n E_n |n \rangle \langle n|. \tag{1}\]

<p>It is reasonable to identify the internal energy \(U\) of the quantum system
with the expectation value of the Hamiltonian, hence</p>

\[U = \langle \hat{H} \rangle = \sum_n E_n P_n, \tag{2}
  \label{internal-energy}\]

<p>where \(P_n = \langle n | \hat{\rho} |n \rangle\) is the occupation probability
for the \(n\)th eigenstate, and \(\hat{\rho}\) is the density matrix.</p>

<p>We recall the usual expression of the first law: \(dU = dQ + dW\), and
take the differential of \(\eqref{internal-energy}\) to get a quantum
analogue<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup></p>

\[dU = \sum_n \left(E_n dP_n + P_n dE_n\right). \tag{3}
  \label{quantum-first-law}\]

<p>In order to identify the analogues of heat exchanged and work done, we
recall <a href="https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics#Gibbs_entropy_formula)">the Gibbs
formula</a> for entropy: \(S = - k_B \sum_n P_n \ln
P_n\), and because the heat exchanged is \(dQ = TdS\), we identify</p>

\[dQ = \sum_n E_n dP_n. \tag{4}
  \label{quantum-heat}\]

<p>Consequently, the work done is identified as</p>

\[dW = \sum_n P_n dE_n. \tag{5}
  \label{quantum-work}\]

<p>In heat engines based on quantum systems, a change in energy levels is
associated with work done by the engine, and a change in occupation
probabilities is associated with the heat exchanged between the engine and the
heat bath.</p>

<h2 id="effective-temperature-for-quantum-systems-and-the-quantum-isothermal-process">Effective Temperature for Quantum Systems and the Quantum Isothermal Process</h2>

<p>Statistical mechanics tells us that when a quantum system is in equilibrium
with a heat bath at temperature \(T = 1/(k_B \beta)\), the density matrix is
given by</p>

\[\hat{\rho}(\beta) = \frac{e^{-\beta \hat{H}}}{Z(\beta)}, \tag{6}\]

<p>where \(Z(\beta) = \text{Tr}(e^{-\beta \hat{H}})\) is the partition function.
Occupation probabilities \(P_n\) can be obtained from the diagonal elements of
the density matrix,</p>

\[P_n = \langle n |\hat{\rho}(\beta)| n\rangle = \frac{e^{-\beta E_n}}{\sum_m
  e^{-\beta E_m}}, \tag{7}
  \label{canonical-distribution}\]

<p>and we note that when the system is at a fixed temperature \(T\), the
occupation probabilities must satisfy the above distribution.</p>

<p>The <em>effective temperature</em> of a quantum system is defined by ‘inverting’
\(\eqref{canonical-distribution}\). In order to understand what this means, we
look at a system with only two states \(|g\rangle\) and \(|e\rangle\) with
energies \(E_g\) and \(E_e\) respectively. If the system is in equilibrium with
a heat bath, the occupation probabilities satisfy</p>

\[\frac{P_e}{P_g} = e^{-\beta (E_e - E_g)}, \tag{8}\]

<p>and the relation can easily be inverted to get the temperature in terms of the
occupation probabilities</p>

\[T = \frac{E_e - E_g}{k_B} \left(\ln \frac{P_g}{P_e} \right)^{-1}. \tag{9}\]

<p>Based on the above expression, we can obtain an effective temperature even
when the system is not in equilibrium with a heat bath: \(k_B T_{eff} = (E_e - E_g) / \ln (P_g/P_e)\).</p>

<p>We see that for a two-level system, the effective temperature is uniquely
defined for all occupation probabilities \(P_g\) and \(P_e\). However, this
might not be so when the system has more than two levels. In general, a unique
effective temperature is defined only when the occupation probabilities satisfy
the canonical distribution in \(\eqref{canonical-distribution}\).</p>

<p>Once we know what a temperature means for an arbitrary quantum system,
we can define a quantum isothermal process in the obvious way. In a quantum
isothermal process, energy levels and occupation probabilities must change
simultaneously to always satisfy the canonical distribution for a fixed
temperature.</p>

<p>Before looking at what happens to a two-level system in an isothermal process,
we shall note a fact that will make certain computations easier:
<em>the work done \(dW\), heat exchanged \(dQ\), and therefore the change in
internal energy \(dU\), are invariant under a uniform shift of all energy
levels.</em></p>

<p>If we assume that all energy levels shift uniformly: \(E_n' = E_n + \delta\),
the first thing we note is that \(dE_n' = dE_n\) because \(\delta\) is a
constant. Next, we consider the occupation probabilities,</p>

\[P_n' = e^{-\beta (E_n + \delta)} \left(\sum_m e^{-\beta (E_m + \delta)}\right)^{-1}
       = e^{-\beta E_n} \left(\sum_m e^{-\beta E_m}\right)^{-1}
       = P_n, \tag{10}\]

<p>and observe that \(dP_n' = dP_n\). Finally, from the quantum analogues of heat
\(\eqref{quantum-heat}\), work \(\eqref{quantum-work}\); and the first law
\(\eqref{quantum-first-law}\), the result follows.</p>

<p>In particular, we note that for the two-level system, assuming \(E_g = 0\)
has no effect on \(dU.\)</p>

<h2 id="the-two-level-system-in-an-isothermal-process">The Two-Level System in an Isothermal Process</h2>

<p>In an isothermal process \(i \to f\), we can assume that \(E_g^f = E_g^i = 0\)
and \(E_e^f = \zeta E_e^i\), where \(\zeta &gt; 0\) is some constant that can be
used to parametrize the evolution of the system under the process.
For our final trick, we consider the internal energy of the two-level system</p>

\[U(\zeta) = \sum_n P_n E_n = \frac{e^{-\beta \zeta E_e}}{1 + e^{-\beta \zeta E_e}} \zeta E_e, \tag{11}\]

<p>and its derivative with respect to \(\zeta\)</p>

\[\begin{align}
  \frac{dU(\zeta)}{d\zeta} &amp; = \frac{e^{-\beta \zeta E_e}}{1 + e^{-\beta \zeta E_e}}
  \zeta E_e \left(\frac{1}{\zeta} - \frac{\beta E_e}{1 + e^{-\beta \zeta E_e}}\right) \\
  &amp; = U(\zeta) \left(\frac{1}{\zeta} - \frac{\beta E_e}{1 + e^{-\beta \zeta E_e}}\right), \tag{12}
\end{align}\]

<p>which, in general, is non-zero. Thus, we have shown that the internal energy is
not invariant<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup> in a quantum isothermal process, and I have fulfilled my promise.</p>

<hr />

<p>This blog post was inspired by an appendix in <em>Quantum Thermodynamic Cycles and
Quantum Heat Engines</em> (<a href="https://journals.aps.org/pre/abstract/10.1103/PhysRevE.76.031105">PhysRevE 76.031105</a>,
<a href="https://arxiv.org/abs/quant-ph/0611275">arXiv:quant-ph/0611275</a>) by Quan et
al. In the article, the authors have done a more general computation which can
also be applied to a quantum harmonic oscillator and a particle in an infinite
square well, but they assume (without any justification) that all the energy
levels change in the same ratio in an isothermal process. This is not a problem
for the two-level system because there are only two energy levels.</p>

<hr />

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
<p>I know that isothermal processes with a (classical) non-ideal gas as working substance are not, in general, isoenergetic. The title is only a thinly veiled excuse to write about quantum thermodynamic processes. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2" role="doc-endnote">
      <p>For more details on the quantum version of the first law, refer to <em>Quantum Heat Engine With Multi-Level Quantum Systems</em> (<a href="https://journals.aps.org/pre/abstract/10.1103/PhysRevE.72.056110">PhysRevE 72.056110</a>, <a href="https://arxiv.org/abs/quant-ph/0504118">arXiv:quant-ph/0504118</a>) by Quan et al. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:3" role="doc-endnote">
      <p>I am not completely unapologetic for the double negatives in this post. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name></name></author><category term="nature" /><category term="physics" /><summary type="html"><![CDATA[I define what it means for a quantum system to undergo an isothermal process, and we see that unlike what happens for a classical ideal gas, the internal energy of the system can change in such a process.]]></summary></entry><entry><title type="html">Origin of Angular Momentum Quantization in Bohr’s Model of Hydrogen Atom</title><link href="https://www.11de784a.com/2020/03/23/angular-momentum-bohr-model.html" rel="alternate" type="text/html" title="Origin of Angular Momentum Quantization in Bohr’s Model of Hydrogen Atom" /><published>2020-03-23T12:40:00+00:00</published><updated>2020-03-23T12:40:00+00:00</updated><id>https://www.11de784a.com/2020/03/23/angular-momentum-bohr-model</id><content type="html" xml:base="https://www.11de784a.com/2020/03/23/angular-momentum-bohr-model.html"><![CDATA[<p>To me, the quantization of angular momentum in the 
<a href="https://en.wikipedia.org/wiki/Bohr_model">Bohr model</a>
of hydrogen has always felt like a very <em>ad hoc</em> assumption. To think that
<a href="https://en.wikipedia.org/wiki/Niels_Bohr">Niels Bohr</a> just happened to come up
with the correct quantization condition \(L_z = n \hbar\) (which happens to be
identical to what is obtained from a quantum mechanical treatment) is absurd.</p>

<p>Elementary texts explain the quantization rule by appealing to the
constructive interference of electron waves (<em>the only orbits permitted
are the ones whose circumference is an integral multiple of the <a href="https://en.wikipedia.org/wiki/Matter_wave">de Broglie
wavelength</a> of the electron</em>). 
However, we note that <a href="https://en.wikipedia.org/wiki/Louis_de_Broglie">de
Broglie</a> formulated his
hypothesis more than a decade <em>after</em> the Bohr model was proposed.</p>

<p>In this post, I want to start with the information available to Bohr at the
time (evidence for quantized energy levels from atomic spectra and the
empirical <a href="https://en.wikipedia.org/wiki/Rydberg_formula">Rydberg formula</a>) and
get to the magical quantization rule with the 
<a href="https://en.wikipedia.org/wiki/Correspondence_principle"><em>correspondence principle</em></a> 
(as Bohr did).</p>

<p>The following section will start with Bohr’s original postulates and establish
some notation; I will discuss the origin of the quantization in the next
section.</p>

<h2 id="bohrs-postulates">Bohr’s Postulates</h2>

<p>We start with Bohr’s assumptions in his own words:</p>

<blockquote>
  <ol>
    <li>
      <p>That an atomic system can, and can only, exist permanently in a certain
  series of states corresponding to a discontinuous series of values for its
  energy, and that consequently any change of the energy of the system,
  including emission and absorption of electromagnetic radiation, must take
  place by a complete transition between two such states. These states will
  be denoted as the <em>stationary states</em> of the system.</p>
    </li>
    <li>
      <p>That the radiation absorbed or emitted during a transition between two
  stationary states is ‘‘unifrequentic’’ and possesses a frequency \(f\),
  given by the relation \(E' - E'' = hf\), where \(h\) is Planck’s constant
  and where \(E'\) and \(E''\) are the values of the energy in the two states
  under consideration.</p>
    </li>
  </ol>
</blockquote>

<p>We note that both of the above assumptions are at odds with classical
electrodynamics because (1) accelerating charges radiate away their energy, and
electrons in an orbit around the nucleus must be accelerating; and
(2) the frequency of radiation given off by a periodically accelerating charge
should be the same as the frequency of acceleration.</p>

<p>A third assumption is the <em>correspondence principle</em>, which asserts that, as
microscopic systems become macroscopic, the results must go over to their
classical counterparts.</p>

<p>
  <figure class="image mid">
    <img src="/assets/images/bohr/orbits.png" alt="A schematic of the Bohr model of the atom with the negatively
  charged electrons executing circular orbits around positively charged
  nuclei. Image license: CC BY-SA 3.0. Attribution: JabberWok." />
    
    <figcaption>A schematic of the Bohr model of the atom with the negatively
  charged electrons executing circular orbits around positively charged
  nuclei. Image license: CC BY-SA 3.0. Attribution: JabberWok.</figcaption>
    
  </figure>
</p>

<p>In order to estimate the energy of a stationary state, we equate the Coulomb
force on the electron and the centripetal force</p>

\[\frac{e^2}{4 \pi \epsilon_0} \frac{1}{r^2} = \frac{mv^2}{r} \tag{1}\]

<p>which leads to the kinetic energy given by</p>

\[T = \frac{1}{2} m v^2
    = \frac{1}{2} \frac{e^2}{4 \pi \epsilon_0} \frac{1}{r}. \tag{2}\]

<p>The total energy is then given by</p>

\[\begin{align}
    E &amp; = T + V \\
      &amp; = \frac{1}{2} \frac{e^2}{4 \pi \epsilon_0} \frac{1}{r}
           - \frac{e^2}{4 \pi \epsilon_0} \frac{1}{r} \\
      &amp; = - \frac{1}{2} \frac{e^2}{4 \pi \epsilon_0} \frac{1}{r} \tag{3}
  \end{align}\]

<p>We note that the only variable in the above equation is \(r\). Hence if the
atomic energy levels are quantized, the atomic radius must also be quantized.
By the <em>second postulate</em>, the frequency of radiation \(f\) emitted in a
transition from energy \(E'\) to \(E''\) (with \(E' &gt; E''\)) is given by</p>

\[\begin{align}
    hf &amp; = E' - E'' \\
       &amp; = \frac{1}{2} \frac{e^2}{4 \pi \epsilon_0}
            \left( \frac{1}{r'} - \frac{1}{r''} \right)
            \label{e:frequency1} \tag{4}
  \end{align}\]

<p>where \(r'\) and \(r''\) are the atomic radii of the electron in states of
energy \(E'\) and \(E''\) respectively. Equation \eqref{e:frequency1}, along
with the Rydberg formula</p>

\[\frac{1}{\lambda_{nm}} = R_H \left(\frac{1}{m^2} - \frac{1}{n^2}\right)
  \label{e:rydberg} \tag{5}\]

<p>where \(n\) and \(m\) are integers, and \(R_H\) is a constant referred to as
the Rydberg constant which has the units of inverse length, can be used to
deduce the quantization of the atomic radii. Since the frequency and
wavelength are related by \(f \lambda = c\), where \(c\) is the
speed of light, Equation \eqref{e:rydberg} can be
written in terms of the frequency as</p>

\[h f_{nm} = h c R_H \left(\frac{1}{m^2} - \frac{1}{n^2}\right)
  \label{e:rydberg2} \tag{6}\]

<p>Expressions in Equations \eqref{e:frequency1} and \eqref{e:rydberg2} lead us to guess that
the orbital radii are quantized as</p>

\[r_n = a_0 n^2 \tag{7}\]

<p>where \(n\) is an integer and \(a_0\) is a constant with units of length,
which for \(n = 1\) gives the radius of the electron orbit in the lowest energy
stationary state and is called the Bohr radius.</p>
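<p>This guess can be checked against modern constants: substituting
\(r_n = a_0 n^2\) into Equation (4) and comparing with the Rydberg formula
predicts \(R_H = e^2 / (8 \pi \epsilon_0 h c a_0)\). A minimal Python sketch,
assuming CODATA values (which Bohr did not have):</p>

```python
import math

# Assumed CODATA 2018 values (SI units); not from the post.
e = 1.602176634e-19
eps0 = 8.8541878128e-12
h = 6.62607015e-34
c = 2.99792458e8
a0 = 5.29177210903e-11  # Bohr radius

# Comparing Eq. (4) with r = a0 n^2 against the Rydberg formula gives
# R_H = e^2 / (8 pi eps0 h c a0)
R_pred = e**2 / (8 * math.pi * eps0 * h * c * a0)
print(f"Predicted R_H = {R_pred:.5e} 1/m")  # close to the measured ~1.0974e7 1/m
```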

<p>Now, we need to calculate the Bohr radius. We stop here to note that this is
the point where elementary texts ‘assume’ the angular momentum quantization
rule, \(L_z = m_e v_n (a_0 n^2) = n \hbar\), as a <em>third postulate</em> to
derive an expression for \(a_0\). In this post, however, we will use the
correspondence principle to derive an expression for \(a_0\), and consequently
the quantization rule.</p>

<h2 id="origin-of-angular-momentum-quantization">Origin of Angular Momentum Quantization</h2>

<p>First, we note that the kinetic energy of the electron is quantized</p>

\[T  = \frac{1}{2} m_e v_n^2 
     = \frac{1}{2} \frac{e^2}{4 \pi \epsilon_0} \frac{1}{a_0 n^2}.
       \tag{8} \label{e:kinetic}\]

<p>Next, we recall that, according to classical electrodynamics, the frequency of
radiation emitted by an oscillating charge is equal to the frequency of
oscillation of the charge. According to the correspondence principle, the
frequency of radiation for a transition between adjacent states, given by <em>the
second postulate</em>, must approach the orbital frequency of the electron in its
stationary state for highly energetic states, that is, for large \(r_n\).</p>

<p>The frequency of oscillation of the electron in an orbit of radius \(r_n\) is</p>

\[\begin{equation}
    f_{orbit}  = \frac{v_n}{2 \pi (a_0 n^2)} \label{e:forb} \tag{9}
  \end{equation}\]

<p>Next, we substitute for the velocity from Equation \eqref{e:kinetic}</p>

\[f_{orbit}^2 = \frac{1}{4 \pi^2 n^4 a_0^2}
      \left[\frac{1}{m_e} \left(\frac{e^2}{4 \pi \epsilon_0}\right) 
        \frac{1}{n^2 a_0}\right] \label{e:forb2} \tag{10}\]

<p>The frequency of radiation emitted for a transition between adjacent states
(between stationary states characterized by \(n\) and \(n + 1\)) can
be calculated from the <em>second postulate</em>; its square \(f_{radiation}^2\) is given by</p>

\[\begin{align}
    f_{radiation}^2 &amp; = \left(\frac{1}{2} 
         \left(\frac{e^2}{4 \pi \epsilon_0} \right)
         \frac{1}{h a_0} \left[\frac{1}{n^2} - \frac{1}{(n + 1)^2}\right]
       \right)^2 \\
       &amp; = \left(\frac{1}{2}
             \left(\frac{e^2}{4 \pi \epsilon_0} \right)
             \frac{1}{h a_0} \left[\frac{2n + 1}{n^2 (n + 1)^2} \right]
            \right)^2. \tag{11}
  \end{align}\]
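<p>The large-\(n\) behaviour of the bracket \((2n + 1)/(n^2 (n + 1)^2)\), which
approaches \(2/n^3\), can be checked numerically; a minimal Python sketch:</p>

```python
# Ratio of the exact bracket in Eq. (11) to its large-n approximation 2/n^3;
# it should approach 1 as n grows (the error falls off roughly as 3/(2n)).
def ratio(n):
    exact = (2 * n + 1) / (n**2 * (n + 1) ** 2)
    approx = 2 / n**3
    return exact / approx

for n in (10, 1000, 100000):
    print(n, ratio(n))
```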

<p>As \(n\) becomes large, the expression becomes</p>

\[\lim_{n \rightarrow \infty} f_{radiation}^2
      = \left[ \left(\frac{e^2}{4 \pi \epsilon_0} \right) \frac{1}{h a_0}
            \frac{1}{n^3}\right]^2
            \tag{12}\]

<p>which can be equated to \(f_{orbit}^2\) from Equation \eqref{e:forb2}, and the
resulting equation solved for \(a_0\) to obtain</p>

\[a_0 = (4 \pi \epsilon_0) \frac{\hbar^2}{m_e e^2} \tag{13}\]
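<p>A quick numerical check of this expression (assuming CODATA values for the
constants) reproduces the accepted Bohr radius of about
\(5.29 \times 10^{-11}\) m:</p>

```python
import math

# Assumed CODATA 2018 values (SI units); not from the post.
hbar = 1.054571817e-34
m_e = 9.1093837015e-31
e = 1.602176634e-19
eps0 = 8.8541878128e-12

# Equation (13): a0 = (4 pi eps0) hbar^2 / (m_e e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(f"a0 = {a0:.5e} m")  # ~5.29177e-11 m
```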

<p>Finally, substituting this value of \(a_0\) back into Equation
\eqref{e:kinetic}, the velocity of the
electron in the \(n\)th Bohr orbit is</p>

\[v_n = \frac{\hbar}{m_e a_0} \frac{1}{n} \tag{14}\]

<p>and so the angular momentum is</p>

\[\begin{align}
    L_z &amp; = m_e v_n r_n \\
        &amp; = m_e \left(\frac{\hbar}{m_e a_0} \frac{1}{n}\right) (n^2 a_0) \\
        &amp; = n \hbar. \tag{15}
  \end{align}\]
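<p>As a closing numerical check (with assumed constants, not derived here): the
\(n = 1\) velocity from Equation (14) equals \(\alpha c\), where \(\alpha\) is
the fine-structure constant, and \(L_z / \hbar\) comes out to exactly \(n\):</p>

```python
# Assumed CODATA 2018 values (SI units); not from the post.
hbar = 1.054571817e-34
m_e = 9.1093837015e-31
a0 = 5.29177210903e-11
c = 2.99792458e8
alpha = 7.2973525693e-3  # fine-structure constant

# Equation (14): velocity in the n-th orbit; for n = 1 it equals alpha * c
v_1 = hbar / (m_e * a0)
print(f"v_1 = {v_1:.4e} m/s, alpha*c = {alpha * c:.4e} m/s")

# Equation (15): L_z = m_e v_n r_n, which should equal n * hbar
for n in (1, 2, 3):
    v_n = hbar / (m_e * a0 * n)  # Equation (14)
    r_n = a0 * n**2              # Equation (7)
    print(n, m_e * v_n * r_n / hbar)
```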

<hr />

<p>This blog post was inspired by Burkhardt and Leventhal’s treatment of the
problem in <a href="https://www.springer.com/gp/book/9780387257488"><em>Topics in Atomic
Physics</em></a>. The primary source,
of course, is Bohr’s seminal article titled <a href="https://www.gutenberg.org/ebooks/47167"><em>On the Quantum Theory of Line
Spectra</em></a>.</p>]]></content><author><name></name></author><category term="nature" /><category term="physics" /><summary type="html"><![CDATA[To me, the quantization of angular momentum in terms of ħ has always felt like a very ad hoc assumption. In this post I start with the information available to Bohr at the time and derive the famous quantization rule.]]></summary></entry><entry><title type="html">Electromagnetic Fields and Arbitrary Lorentz Transformations</title><link href="https://www.11de784a.com/2020/02/20/lorentz-transform-fields.html" rel="alternate" type="text/html" title="Electromagnetic Fields and Arbitrary Lorentz Transformations" /><published>2020-02-20T11:30:00+00:00</published><updated>2020-02-20T11:30:00+00:00</updated><id>https://www.11de784a.com/2020/02/20/lorentz-transform-fields</id><content type="html" xml:base="https://www.11de784a.com/2020/02/20/lorentz-transform-fields.html"><![CDATA[<p>This post goes over the algebra involved in deriving the expressions for how
electric and magnetic fields change under an arbitrary (proper) Lorentz
transformation.</p>

\[\begin{align}
  \vec{E'} = \gamma \left(\vec{E} - \vec{\beta} \times \vec{B}\right)
    - \frac{\gamma^2}{\gamma + 1} 
        \vec{\beta} \left(\vec{\beta} \cdot \vec{E}\right) \\
  \vec{B'} = \gamma \left(\vec{B} + \vec{\beta} \times \vec{E}\right)
    - \frac{\gamma^2}{\gamma + 1} 
        \vec{\beta} \left(\vec{\beta} \cdot \vec{B}\right)
\end{align}\]

<blockquote>
  <p>In this post, I am using the ‘particle physics’ metric signature 
\((+, -, -, -)\) and taking the speed of light, \(c = 1\).</p>
</blockquote>

<p>Our starting point is going to be the fact that the 
<a href="https://en.wikipedia.org/wiki/Electromagnetic_tensor">electromagnetic field tensor</a>, 
\(F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu\) 
(where \(\partial_\mu\) is the four-gradient and \(A_\mu\) is the
four-potential) transforms like a Lorentz tensor, i.e. 
\(F'_{\mu \nu} = \Lambda_\mu^\alpha \Lambda_\nu^\beta F_{\alpha \beta}\) for
an arbitrary Lorentz transformation \(\Lambda \in SO(1, 3)\).</p>

<p>The components of a (proper) Lorentz transformation are given by</p>

\[\begin{align}
  \Lambda_0^0 &amp; = \gamma, \\
  \Lambda_0^i = \Lambda_i^0 &amp; = - \gamma \beta_i, \\
  \Lambda_i^j &amp; = \delta_i^j + \frac{\beta_i \beta_j}{\beta^2} 
    \left(\gamma - 1\right).
\end{align}\]
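<p>These components can be verified numerically. The NumPy sketch below (an
aside, not part of the derivation; it assumes \(\vec{\beta} \neq 0\)) builds
the boost matrix and checks that it preserves the Minkowski metric,
\(\Lambda^T \eta \Lambda = \eta\):</p>

```python
import numpy as np

def boost(beta):
    """Boost matrix with the components listed above (requires beta != 0)."""
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta
    g = 1.0 / np.sqrt(1.0 - b2)
    L = np.empty((4, 4))
    L[0, 0] = g
    L[0, 1:] = L[1:, 0] = -g * beta
    L[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(beta, beta) / b2
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature (+, -, -, -)
L = boost([0.3, -0.2, 0.5])

# A Lorentz transformation must preserve the metric.
assert np.allclose(L.T @ eta @ L, eta)
print("metric preserved")
```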

<p>We stop here to note the relation between the electromagnetic field tensor
\(F_{\mu \nu}\) and the classical fields \(\vec{E}\) and \(\vec{B}\). We choose
the Cartesian coordinates for simplicity, but note that this choice doesn’t
lead to any loss of generality because the final expressions derived are
coordinate independent.</p>

\[E_i = F_{0 i} = - F_{i 0},\]

<p>and</p>

\[B_i = - \frac{1}{2} \epsilon^{i j k} F_{jk} 
    \implies F_{jk} = -\epsilon_{i j k} B_i,\]

<p>where \(\epsilon_{ijk}\) is the Levi-Civita tensor. The property
\(\epsilon_{ijk} \epsilon^{ljk} = 2 \delta_i^l\) is used to invert the relation in
the second line. We also note that \(\epsilon_{ijk} = \epsilon^{ijk}\), because
for the space components the (Cartesian) metric is simply the identity.</p>
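<p>Both the contraction identity and the inversion can be spot-checked
numerically; a short NumPy sketch (an aside, not part of the derivation):</p>

```python
import numpy as np

# Levi-Civita symbol as an array (an implementation detail, not from the post)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Contraction identity used for the inversion: eps_{ijk} eps^{ljk} = 2 delta_i^l
assert np.allclose(np.einsum('ijk,ljk->il', eps, eps), 2 * np.eye(3))

# Round trip: B -> F_{jk} = -eps_{ijk} B_i -> B_i = -(1/2) eps^{ijk} F_{jk}
B = np.array([0.3, 1.2, -0.7])
F_space = -np.einsum('ijk,i->jk', eps, B)
B_back = -0.5 * np.einsum('ijk,jk->i', eps, F_space)
assert np.allclose(B_back, B)
print("inversion verified")
```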

<h2 id="transformation-of-the-electric-field">Transformation of the Electric Field</h2>

<p>For the electric field, we use the transformation law for the \(F_{0 i}\) components,</p>

\[\begin{align}
  E'_i &amp; = F'_{0 i} \\
         &amp; = \Lambda_0^\mu \Lambda_i^\nu F_{\mu \nu} \\
         &amp; = \Lambda_0^0 \Lambda_i^l F_{0 l} 
              + \Lambda_0^k \Lambda_i^0 F_{k 0}
              + \Lambda_0^k \Lambda_i^l F_{k l}
\end{align}\]

<p>where the summation is broken up into space and time parts and the antisymmetry
of the electromagnetic tensor is used to make the diagonal terms go away. Next
we simplify the first two terms in the expression above:</p>

\[\begin{align}
  \Lambda_0^0 \Lambda^l_i F_{0 l} + \Lambda_0^k \Lambda_i^0 F_{k 0}
    &amp; = \gamma \left(\delta_i^l 
          + \frac{\beta_i \beta_l}{\beta^2} (\gamma - 1)\right) E_l
            + (-\gamma \beta_k)(-\gamma \beta_i) (-E_k) \\
    &amp; = \gamma E_i
          + \gamma (\gamma - 1) \frac{\beta_i \beta_l}{\beta^2} E_l
          - \gamma^2 \beta_i \beta_k E_k \\
    &amp; = \gamma E_i
          + \gamma \beta_i \beta_k E_k
              \left(\frac{\gamma - 1}{\beta^2} - \gamma\right) \\
    &amp; = \gamma E_i
          - \frac{\gamma^2}{\gamma + 1} \beta_i
              (\vec{\beta} \cdot \vec{E}).
\end{align}\]

<p>Now, we consider the third term:</p>

\[\begin{align}
  \Lambda_0^k \Lambda_i^l F_{k l}
    &amp; = (-\gamma \beta_k) \left(\delta_i^l 
          + \frac{\beta_i \beta_l}{\beta^2} (\gamma - 1)\right)
            (- \epsilon^{mkl} B_m) \\
    &amp; = \gamma \epsilon^{imk}\beta_k B_m - \gamma(\gamma - 1)
          \frac{\beta_i}{\beta^2} 
             \left(\epsilon^{mkl} B_m \beta_k \beta_l\right) \\
    &amp; = -\gamma (\vec{\beta} \times \vec{B})_i
\end{align}\]

<p>In \(\left(\epsilon^{mkl} B_m \beta_k \beta_l\right)\), the sum runs
over all three indices, which makes the term vanish due to the complete
antisymmetry of the Levi-Civita tensor.</p>

<p>Finally, we put everything together to yield,</p>

\[E'_i = \gamma \left(E_i - (\vec{\beta} \times \vec{B})_i \right)
          - \frac{\gamma^2}{\gamma + 1} 
              \beta_i \left(\vec{\beta} \cdot \vec{E}\right),\]

<p>or in vector notation,</p>

\[\vec{E'} = \gamma \left(\vec{E} - \vec{\beta} \times \vec{B}\right)
    - \frac{\gamma^2}{\gamma + 1} 
        \vec{\beta} \left(\vec{\beta} \cdot \vec{E}\right).\]
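<p>This result can be verified numerically by transforming the field tensor
directly. The NumPy sketch below (an aside; the field and velocity values are
arbitrary) builds \(F_{\mu\nu}\) and \(\Lambda\) from the component
definitions above and compares \(F'_{0i}\) with the vector formula:</p>

```python
import numpy as np

# Arbitrary (assumed) field components and boost velocity
E = np.array([1.0, -2.0, 0.5])
B = np.array([0.3, 1.2, -0.7])
beta = np.array([0.2, -0.4, 0.1])

b2 = beta @ beta
g = 1.0 / np.sqrt(1.0 - b2)

# Levi-Civita symbol, then F_{0i} = E_i and F_{jk} = -eps_{ijk} B_i
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
F = np.zeros((4, 4))
F[0, 1:] = E
F[1:, 0] = -E
F[1:, 1:] = -np.einsum('ijk,i->jk', eps, B)

# Boost matrix with the components given earlier
L = np.zeros((4, 4))
L[0, 0] = g
L[0, 1:] = L[1:, 0] = -g * beta
L[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(beta, beta) / b2

Fp = L @ F @ L.T           # F'_{mu nu} = Lambda_mu^a Lambda_nu^b F_{ab}
E_transformed = Fp[0, 1:]  # E'_i = F'_{0i}

E_formula = g * (E - np.cross(beta, B)) - (g**2 / (g + 1.0)) * beta * (beta @ E)
assert np.allclose(E_transformed, E_formula)
print("E-field formula verified")
```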

<h2 id="transformation-of-the-magnetic-field">Transformation of the Magnetic Field</h2>

<p>In order to derive the transformation law for magnetic fields, a similar
procedure is employed. We begin by noting that,</p>

\[\begin{align}
  B'_i &amp; = -\frac{1}{2} \epsilon^{ijk} F'_{jk} \\
       &amp; = -\frac{1}{2} \epsilon^{ijk} \Lambda_j^\mu \Lambda_k^\nu F_{\mu \nu} \\
       &amp; = -\frac{1}{2} \epsilon^{ijk} \left( 
              \Lambda_j^0 \Lambda_k^l F_{0 l} 
                + \Lambda_j^l \Lambda_k^0 F_{l 0}
                + \Lambda_j^l \Lambda_k^m F_{l m} \right)
\end{align}\]

<p>Next, the first two terms inside the brackets are simplified,</p>

\[\begin{align}
  \Lambda_j^0 \Lambda_k^l F_{0 l} + \Lambda_j^l \Lambda_k^0 F_{l 0}
    &amp; = (\Lambda_j^0 \Lambda_k^l - \Lambda_j^l \Lambda_k^0) F_{0 l} \\
    &amp; = \left[(-\gamma \beta_j) 
            \left(\delta_k^l + (\gamma - 1) 
              \frac{\beta_l \beta_k}{\beta^2} \right)
          - \left(\delta_j^l + (\gamma - 1)
              \frac{\beta_l \beta_j}{\beta^2} \right)
            (-\gamma \beta_k)\right] F_{0 l} \\
    &amp; = \gamma \left( -\delta_k^l \beta_j + \delta_j^l \beta_k \right) F_{0l},
\end{align}\]

<p>so that,</p>

\[\begin{align}
  -\frac{1}{2} \epsilon^{ijk} 
      (\Lambda_j^0 \Lambda_k^l F_{0 l} + \Lambda_j^l \Lambda_k^0 F_{l 0})
    &amp; = \frac{1}{2} \gamma \epsilon^{ijk} 
      \left(\delta_k^l \beta_j - \delta_j^l \beta_k \right) F_{0 l} \\
    &amp; = \frac{1}{2} \gamma
      \left( \epsilon^{ijl}\beta_j - \epsilon^{ilk} \beta_k \right) F_{0 l} \\
    &amp; = \frac{1}{2} \gamma
      \left( \epsilon^{ijl}\beta_j + \epsilon^{ijl} \beta_j \right) F_{0 l} \\
    &amp; = \gamma \epsilon^{ijl} \beta_j E_l \\
    &amp; = \gamma \left(\vec{\beta} \times \vec{E}\right)_i.
\end{align}\]

<p>For the final step, we expand</p>

\[\begin{align}
  \Lambda_j^l \Lambda_k^m F_{lm}
    &amp; = \left(\delta_j^l + (\gamma - 1) \frac{\beta_j \beta_l}{\beta^2} \right)
        \left(\delta_k^m + (\gamma - 1) \frac{\beta_k \beta_m}{\beta^2} \right)
        (- \epsilon_{nlm} B_n) \\
    &amp; = - \epsilon_{njk} B_n - \frac{\gamma - 1}{\beta^2}
        (\epsilon_{njm} \beta_k \beta_m
          + \epsilon_{nlk} \beta_j \beta_l) B_n.
\end{align}\]

<p>The term quadratic in \((\gamma - 1)\) has been dropped in the second line,
since it contains \(\epsilon_{nlm} \beta_l \beta_m\) and so vanishes by the
antisymmetry of the Levi-Civita tensor. Next,
the \(-\epsilon^{ijk}/2\) factor is put in to get</p>

\[\begin{align}
  -\frac{1}{2} \epsilon^{ijk} \Lambda_j^l \Lambda_k^m F_{lm}
    &amp; = \frac{1}{2} \left[
          \epsilon^{ijk} \epsilon_{njk} B_n
            + \frac{\gamma - 1}{\beta^2}
                (\epsilon^{ijk} \epsilon_{njm} \beta_k \beta_m + 
                  \epsilon^{ijk} \epsilon_{nlk} \beta_j \beta_l) B_n
          \right] \\
    &amp; = B_i + \frac{1}{2} \frac{\gamma - 1}{\beta^2} \left(
          \beta_m \beta_m B_i + \beta_l \beta_l B_i 
            - \beta_i \beta_n B_n - \beta_i \beta_n B_n
          \right) \\
    &amp; = B_i + (\gamma - 1) B_i 
        - \frac{\gamma - 1}{\beta^2} \beta_i \beta_n B_n \\
    &amp; = \gamma B_i - \frac{\gamma^2}{\gamma + 1} \beta_i 
          \left(\vec{\beta} \cdot \vec{B}\right).
\end{align}\]

<p>In the above simplification, we have used the following properties of the
Levi-Civita tensor: \(\epsilon_{ijk} \epsilon^{imn} = \delta_j^m \delta_k^n -
\delta_j^n \delta_k^m\) and \(\epsilon_{jmn} \epsilon^{imn} = 2 \delta^i_j\).</p>

<p>Putting everything together leads to</p>

\[B'_i = \gamma \left(B_i + (\vec{\beta} \times \vec{E})_i \right)
          - \frac{\gamma^2}{\gamma + 1} 
              \beta_i \left(\vec{\beta} \cdot \vec{B}\right),\]

<p>and</p>

\[\vec{B'} = \gamma \left(\vec{B} + \vec{\beta} \times \vec{E}\right)
    - \frac{\gamma^2}{\gamma + 1} 
        \vec{\beta} \left(\vec{\beta} \cdot \vec{B}\right),\]

<p>in vector notation.</p>
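<p>As with the electric field, this can be checked numerically; the NumPy
sketch below (an aside, with arbitrary assumed field values) transforms the
tensor and extracts \(B'_i = -\frac{1}{2}\epsilon^{ijk} F'_{jk}\):</p>

```python
import numpy as np

# Arbitrary (assumed) field components and boost velocity
E = np.array([1.0, -2.0, 0.5])
B = np.array([0.3, 1.2, -0.7])
beta = np.array([0.2, -0.4, 0.1])

b2 = beta @ beta
g = 1.0 / np.sqrt(1.0 - b2)

# Levi-Civita symbol, then F_{0i} = E_i and F_{jk} = -eps_{ijk} B_i
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
F = np.zeros((4, 4))
F[0, 1:] = E
F[1:, 0] = -E
F[1:, 1:] = -np.einsum('ijk,i->jk', eps, B)

# Boost matrix with the components given earlier
L = np.zeros((4, 4))
L[0, 0] = g
L[0, 1:] = L[1:, 0] = -g * beta
L[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(beta, beta) / b2

Fp = L @ F @ L.T  # transformed field tensor
B_transformed = -0.5 * np.einsum('ijk,jk->i', eps, Fp[1:, 1:])

B_formula = g * (B + np.cross(beta, E)) - (g**2 / (g + 1.0)) * beta * (beta @ B)
assert np.allclose(B_transformed, B_formula)
print("B-field formula verified")
```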

<hr />

<p>I found this particular way of deriving the transformation laws pretty neat,
and it was a nice exercise in tensor notation and index juggling.
If you spot a mistake in the algebra, please <abbr title="Write to me at ayush [dot] singh [at] niser [dot] ac [dot] in">let me know</abbr>.</p>]]></content><author><name></name></author><category term="nature" /><category term="physics" /><summary type="html"><![CDATA[This post goes over the algebra involved in deriving the expressions of electric and magnetic fields under the most general Lorentz transformation. I could not find this anywhere else on the internet.]]></summary></entry></feed>