Your Week in Seminars: Conformal Edition

Another week has gone by here at Cornell. The last leaves are turning red, a hint of snow passed us over the weekend, and the undergrads have hit the streets and parties in minimal clothing, then did the same again the next day wearing a set of cat ears. And in the physics department, we had the usual three talks.

On Monday, the colloquium speaker was Holger Müller from UC Berkeley, talking about Gravitational Redshift, Equivalence Principle, and Matter Waves. The center of the talk was Müller's experimental device, an atom interferometer. Many of you will remember the Michelson-Morley interferometer, the device used to disprove the existence of the ether. A light interferometer essentially takes a beam of light, splits it in two and then merges it back again, using the result of the interference between the two parts to learn something about the relative difference between the optical paths taken by the two. The atom interferometer, then, performs a similar function with atom wavefunctions. An atom is shot up into the air and a laser is directed towards it, calibrated to interact with the atom half the time. The atom's wavefunction is split into two trajectories, at the end of which another laser is calibrated to bring the two paths back together. The detector can then measure the path difference between the two trajectories, and as we have excellent ways of measuring time and the mass and energy of the atom, this amounts to a very accurate measurement of g, the free-fall acceleration. Müller went on to show how his team has been using the interferometer to perform very accurate tests of General Relativity, from its isotropy to the universality of free fall for objects of different masses. There were some neat tricks described, and they mentioned the ability to measure the minute difference in gravity experienced by moving the system one meter upwards. It's always a little difficult to get excited about tests that confirm an accepted theory, especially one like General Relativity, but I think this is important work. To paraphrase the words of fellow Virtuoso Jared, GR is always going to be right up until we find where it breaks.

On Wednesday, David Kaplan talked about Conformality Lost. This talk was about QCD, but not about QCD. One of the features of QCD, or really of field theories in general, is the running of the coupling constants. Where in classical theories the strength of the interaction between two particles is constant and depends only on the distance between them, field theory shows us how the strength of the interaction changes with the energy of the participating particles. This is crucial, for instance, for theories of grand unification, which posit that the known forces are all the same at very high energies. In QCD, in particular, the running of the constant also has to do with confinement and asymptotic freedom. Confinement is the notion that quarks can never break free of each other, and so we never observe them alone in nature, only within particles such as baryons (protons and neutrons, for example) and mesons. Asymptotic freedom is the notion that at high energies, if we collide another particle with a quark, it behaves as if it were free of other influence. If we associate long distances with low energy and short distances with high energy, we can see how the coupling must flow from very small at one end to very large at the other. One of the interesting things about the running of the coupling is that it defines a scale for the theory.
If the coupling is different for particles of energy E~1~ and E~2~, then we can choose some value of the coupling and describe any energy relative to this scale. Theories without a running coupling are called conformal and have no natural scale. QCD, it seems, behaves this way if you take it all the way to asymptotic freedom. Kaplan talked about the investigation of this conformal stage of the theory, its existence and its disappearance. As an analog he showed a quantum-mechanical system of a particle in a Coulombic potential. The minimum energy of this system is given by the solution of a quadratic equation, which can have two solutions, one or none, depending on the relation between the mass of the particle and the strength of the potential. A scale exists in this case only if there are two solutions: a single energy is meaningless, of course, because we can always add a constant, but if there are two of them then the difference defines a scale. This toy model, it turns out, can be analogous to QCD, with the equivalent parameter being the relative number of flavors (kinds of particles) and colors (the different charges in the theory: red, blue and green in our regular QCD). There were a number of interesting results from this model, the most exciting one, perhaps, being the possible existence of a "mirror" QCD theory beyond the conformal point of QCD, a sort of theory with a different number of colors and a different gauge group. Kaplan ended by mentioning at least one possible candidate for this mirror theory that they had recently found.

Finally, on Friday, we had Ami Katz from Boston University talk about CFT/AdS. AdS/CFT has been a big buzzword for the last decade or so. The CFT here stands for conformal field theory, of the kind mentioned in the previous summary, and AdS stands for Anti-de Sitter space, a geometry of spacetime possible in general relativity. The slash in between stands for a duality that allows results from one theory to be interpreted in the other and vice versa. This has some exciting implications, since it allows us to use each theory in the regime where we can solve it. Particle theorists are, in general, trying to use the CFT to solve for high-energy theories that behave like AdS. Katz had apparently rewritten the duality as CFT/AdS to signal that he was asking the opposite question: starting with a CFT and asking whether it is a good fit for the duality. A large part of the talk was dedicated to making an analogy from CFTs to conventional field theories. We know pretty well when a field theory is a good description of reality and when it tends to break down. This has to do, usually, with some cutoff energy, a scale at which new physics comes into play. As long as we stay at energies far below that cutoff, the effects of the unknown physics will be a small correction to the calculations we make with our known physics. In CFTs, we had just said, there is no energy scale, and so the question must be different. The relevant question, apparently, is the dimensionality of operators - not what their energy scale is, but how they scale with a change of energy. For instance, a derivative behaves like inverse distance, and distance behaves like inverse energy, so a single derivative scales linearly in energy, while a double derivative scales quadratically.
I didn't understand much past the halfway point of this lecture, but the bottom line appeared to be that a well-behaved CFT has a gap in the dimensionality of its operators, allowing us to focus on one operator and plenty of its derivatives before coming to the scaling of the next operator. This kind of gap allows our perturbative corrections to remain perturbative when we go to the AdS side.

That's it for last week, with its conformal ups and downs. As usual, we're past the first seminar of the new week, which was a non-wimpy talk about WIMPs. Still ahead this week are superconductors (and more AdS/CFT, presumably) and some non-thermal histories of the universe. (That is, of course, if I don't freeze first - temperatures have dropped below zero already. It's so much colder when you work in Celsius.)

Your Week in Seminars: Fermionic Edition

Good evening, and welcome to the second edition of YWiS. Last week I took in the full range of seminars, from colloquium to Friday lunch. I don't know if I can say I took in the full content of these talks as well, but let's see what I learned.

On Monday we had Andrew Millis from Columbia University talk about Materials with Strong Electronic Correlations: The (Theoretical) End of the Beginning? (I think the subtitle wasn't actually there in the talk itself.) This was a condensed matter theory talk, and like all condensed matter talks it started off with the phase diagram for cuprates and a mention of the illustrious pseudogap, the Dark Energy of condensed matter. The pseudogap is a phase of cuprates - the materials that make high-temperature superconductors - that occurs at about the same concentration of defects as superconductivity but at a higher temperature. It is a little-understood phase that sits between two well-understood phases (antiferromagnetism and Fermi liquid) and perhaps holds some answers to the nature of high-temperature superconductivity. Millis started with the pseudogap picture and a short overview of the current state of condensed matter theory. He claimed that perhaps some of the phases of matter in question have local, short-range ordering, but no overarching long-range order in the system, and that the investigation of these phases should take this into account. At the end of this introduction he asked why we cannot easily solve the problems of condensed matter. The basic equations that govern the interactions in the field are known - the electromagnetic potential and Schrödinger's equation - and we should be able to just plug them into a computer and calculate away. The trouble, as Millis presented it, comes from the fermionic nature of the problem. What we're trying to calculate, in metals, is the behavior of the electrons running through the bulk of the metal. Electrons are fermions, which means that no two can have the same quantum numbers, that is, no two can be in the same place with the same momentum and spin. It turns out that the configurations with the lowest energies tend to be symmetric, with many particles in the same position. Finding low-energy configurations that put every particle in a different place is much harder. I didn't get a lot more from this talk. Millis went on to suggest a method that avoids tackling the problem directly, but rather solves an analogous one whose solution we can translate back. I believe that there was some talk of a local, rather than global, solution, and of the Hubbard model, which is a popular approximation used in modeling electrons in a solid. I phased in and out of this talk, but I'd peg my Understanding at 25 minutes, and my Interest at about 35 minutes.

The Wednesday particle talk was by Jesse Thaler from MIT. He talked about Aspects of Goldstini. Goldstini is the Italian plural form of goldstino, which is the fermionic version - we put "ino" at the end of fermionic particles, influenced by the neutrino - of the Goldstone boson. A Goldstone boson is a massless particle that we find in theories of spontaneous symmetry breaking. Spontaneous symmetry breaking is a popular concept in particle physics, which springs from the concept of an unstable energy maximum. Imagine a pencil standing on its tip, a system which is symmetric in every direction. The pencil is unstable, though, and left by itself it would fall down in any one of the equivalent directions around it.
Once it has fallen, it's broken the symmetry and created one preferred direction. Thus the symmetry of the system is broken when one direction is chosen spontaneously. This sort of thing is at the bottom of our understanding of the electroweak force, and pops up quite a bit in particle physics. When it does, we expect a Goldstone boson, a massless particle that roughly corresponds to spinning the fallen-down pencil around its tip. The goldstino is the fermionic version of that particle, springing from the breaking of supersymmetry - the symmetry that relates fermions and bosons. The goldstino, then, is well known and accepted in common theories of supersymmetry. It appears when supersymmetry breaks, and then interacts with the gravitino - another fermion, which mediates the force of gravity - giving up its mass to it.

Thaler's work posits more than one goldstino, hence, goldstini. How can we have more than one goldstino? By breaking supersymmetry more than once. We do this by imagining several "sectors" in our theory, different sets of fields (particles) that break supersymmetry but don't interact with each other significantly. When you work through this model it turns out that you can have several goldstini. Also, as the original goldstino lost its mass by giving it to the gravitino, and the gravitino is now satisfied, the new goldstini get to keep their mass, which turns out to be exactly twice that of the (satisfied) gravitino. Thaler then discussed three possible scenarios for this mass, and what we would expect to see at the LHC in each case. The important thing, it turns out, is how this mass compares with that of the lightest ordinary superpartner (LOSP), the first supersymmetry-related particle we expect to see at the LHC. If the mass of the goldstini is very small, they will not come into play, as the LOSP will decay into particles we already know. If the mass of the goldstini is too large, then the LOSP cannot decay into them. But if the mass is in some goldstinilocks region in between, things become interesting and we can expect to see evidence of the gravitino and the goldstini, and distinctly see one having double the mass of the other. I followed a good portion of this talk, with Understanding of 30 minutes all in all, and perhaps 45 minutes of Interest.

Finally, the Friday particle theory lunch had a talk by our own David Curtin, one of Csaba's grad students. He talked about Solving the gaugino mass problem in Direct Gauge Mediation. I came into this one expecting to follow more of it, on account of the speaker being a student, but ended up following very little, as it was technical and above my level. It revolved, again, around supersymmetry breaking. David does model building, which means he starts out with some acceptable results, i.e. the universe as we know it, and tries to tinker up a combination of particles and interactions that would reproduce it, one portion at a time. What he was trying to build this time was a metastable level in the broken supersymmetric potential. If we think back to our pencil, we had an unstable maximum, the pencil standing on its tip, and a minimum point, the pencil lying on the table, from which it cannot fall. But we can also imagine a midpoint - perhaps resting one side of the pencil on a book. It can't fall any further right away, but there is another, preferred position, lying flat on the table. That's what we call a metastable energy level.
As it turns out, the metastable level has some desirable outcomes within the context of supersymmetry, and the talk revolved around the ways we have of getting the right energy structure into our system while avoiding things we don't want in our models - arbitrary particle masses, a large number of new particles, or anything blatantly unphysical. My Understanding here was quite close to 0, as the technicalities were beyond me. (In fact, the pre-seminar discussion was about soccer, so one might say my understanding was negative.) I probably kept trying to follow for about half the talk, or 30 minutes.

That's it for last week. This week we can expect gravity (heavy!), Conformality Lost (literary!) and CFT/AdS (buzzwordy!). And hopefully less headscratching and more nodding in agreement.

Paradigm Shifts 3: With a Vengeance

The last shift I wanted to present is best explained at http://tauday.com/ . There you will find a manifesto (yes, a manifesto) about why we should change the circle constant from $$ \pi = \text{180 degrees} $$ to $$ \tau = 2 \pi = \text{360 degrees} $$ It's quite a convincing argument, and it's a shift that can easily be made. Check the website for more.

[Image: Tau vs. Pi]
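For a one-line taste of the argument: with τ, the fraction of a full turn and the angle in radians read the same. A quarter of a circle is simply a quarter of τ, $$ \frac{1}{4} \text{ turn} = \frac{\tau}{4} = \frac{\pi}{2} = \text{90 degrees} $$ with no factor-of-two bookkeeping required.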

Your Week in Seminars: Intro Edition

We've done a lot of talking over the past few months here on the Virtuosi, but one important subject has not come up so far. An issue that is central to the day-to-day life of the average grad student. The subject of free food.

The average graduate student in an American university shops for food 0.7 times per semester, paying a total of $13.22. He eats an average of three vegetables and one fruit, all at home during Thanksgiving. He turns his oven on once per year, while trying to ascertain if the power is out or the light bulb in the kitchen needs to be replaced. The rest of his nutrition is made up entirely of free donuts, bagels and pizza. The place to get all this free food, naturally, is various department talks and seminars. And while we're there, we may as well try to learn some physics.

With that noble goal in mind, I'd like to welcome you to the first edition of Your Week in Seminars, where I shall endeavor to relay the content of the weekly seminars I attend at Cornell. On an average week this will be one general interest colloquium and two particle theory talks. One of my colleagues may want to take up the LASSP (Condensed Matter) talk or any of the other seminars going around in the department. I'll try to relate what I got out of each talk, with more words than equations and with no figures. I'll aim for a general audience level, but I think I'm likely to end up at a physics undergrad or popular-science-savvy level, as technical terms are bound to be thrown about. If there's one you don't know, feel free to ask over in the comments or take this as an opportunity to delve into Wikipedia. I'll also provide two handy metrics for the quality of the talk: my Interest level, defined as the amount of time before I start playing with my phone, and my Understanding level, defined as the amount of time during which I was still following the speaker.

Last week there was no colloquium due to Fall break, so this post will cover just the Wednesday and Friday particle seminars. On Wednesday we had David Kagan from Columbia University tell us about Conifunneling - Stringy Tunneling Between Flux Vacua. As you may know, string theory demands that our universe have a large number of dimensions, generally 10 or 11, to avoid various unphysical nastiness. To bridge the gap between the theoretical and observed number of dimensions (four) one has to "compactify" the extra dimensions, that is, to posit that they have some shape and size and write down an effective four-dimensional theory that takes their presence into account. This compactification creates an energy surface, or some effective potential in space. What we call "vacuum", the ground state of the universe, rests in one of the minimum points of that potential, as ground levels are wont to do. But it need not be the absolute minimum, just a local one, and where there are local minima in a quantum theory we know that there is also tunneling. Kagan, then, talks of tunneling between these local energy minima created by compactification of the extra dimensions of string theory. This tunneling, from what I gathered, can be described as an evolution in time of the manifold, the geometric layout of spacetime. The main conceit of the talk was that this evolution takes the manifold into the form of a "conifold", which is a manifold with a conic singularity. This conifold then nucleates a 5d-brane; branes are objects in string theory that have some dimensionality less than that of the entire spacetime.
After creating this object, the conifold transforms back into a non-singular manifold, but one where the vacuum is in another energy minimum. We can visualize this process by thinking of spacetime as an elastic sheet of sorts, pinched at a point and pulled. It is deformed, creating an elongated cone-like area, until finally it tears, emitting a five-dimensional brane, and reverting back to its original form. There was some discussion at the end which mostly went over my head, but at some point Henry Tye, Liam and Maxim were trying to figure out whether the tunneling is necessarily done via a conifold or whether Kagan was just describing what happens if it is. The conclusion, I believe, was that it is the latter case, though Kagan said they have some good arguments for why the conifold tunneling had to happen. Interest: 40 minutes. Understanding: 20 minutes.

On Friday we had Zvi Lipkin from the Weizmann Institute tell us about Heavy quark hadrons and exotics, a challenge for QCD. This talk revolved around the constituent quark model for QCD. Our usual picture of hadrons is one of two or three valence quarks sitting in a sea of gluons and virtual quark-antiquark pairs, due to the strong interactions of the Strong Interaction. Lipkin's work focuses on trying to abstract this sea away and focus on the valence quarks, as if we were discussing a hydrogen-atom-like system of two particles and a potential between them. This kind of treatment allows us to maximize the use of flavor symmetries. Flavor is QCD-speak for "type of particle", that is, up, down, strange, charm and bottom quarks. Using the constituent quark model we may be able to say things like "the difference between the B^0^~s~ and the B^0^ (mesons made up of an anti-b and an s or d quark, respectively) is the same as the difference between the Ξ^0^ and the Σ^0^" (baryons made up of uss and uds quarks, respectively). (Don't take that last example too seriously - I made it up by looking at lists of baryons and mesons. But that was the gist of the talk.) Lipkin showed work done by him and Marek Karliner (who taught me differential equations in Tel Aviv), including lots of numbers matching nicely between their theory and experiment, as well as a less convincing attempt to characterize the two-body potential in this two-body problem. At the end of the talk he also mentioned the X(3872) seen by the Belle experiment. This is a particle that does not seem to fit into our regular models as either a baryon or a meson, and Lipkin suggested that this might be a "tetraquark," a combination of two quarks and two antiquarks. This kind of exotic hadron has been talked about for a long time, and there was some excitement a few years ago with the discovery and eventual un-discovery of the Θ^+^ pentaquark (made up of four quarks and an antiquark). Interest: 60 minutes. (I was sitting in the front and could not politely take out the phone.) Understanding: 60 minutes.

Four Fantastic Books (3 of which are free)

Well, we just had our fall break, which means I get a bit of a break, coincidentally enough. Somehow I've managed to read three books in the last two days, and each of them was excellent enough that I need to tell people about them.

Street-Fighting Mathematics - Sanjoy Mahajan

The art of educated guessing and opportunistic problem solving
Link to MIT Press Site

You know that feeling you get when it's the second half of January and you put on new clothes that have just come out of the dryer? This book is like a cross between that and a kick in the face.

The warm fuzzy-clothes-out-of-the-dryer feeling will come from the realization that you can wield insurmountable power. The kick in the face will come when you realize you're not doing it yet. The premise of the book is something along the lines of: We've all been taught how to solve math problems exactly. Science isn't exact. Turns out when you realize this, you can do a heck of a lot. Let Sanjoy show you how.

As an undergrad, I had the supreme fortune of taking some life-changing courses. One of the ones that struck me the deepest was Ph 101: Order of Magnitude Physics. It did a remarkable job building my confidence. It's one thing to go through your classes and complete the homework assignments. It's another thing entirely to feel as though you can take a stab at just about any question anyone can ask.

This book is the handbook that will introduce you to the techniques and ways of thinking you'll need in order to tackle the most general of questions. The first chapter is Dimensional Analysis, something that every high school student should be exposed to. I love Dimensional Analysis. The rest of the book goes on to estimating integrals, sums and differential equations, thinking about limiting cases and scaling, and thinking pictorially.
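To give a flavor of that first chapter (my example, not one lifted from the book): ask for the period of a pendulum of length l swinging under gravity g. The only way to combine a length (meters) and an acceleration (meters per second squared) into a time is $$ T \sim \sqrt{\frac{l}{g}} $$ which is the exact answer up to the dimensionless factor of 2π that no amount of unit-juggling can supply.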

The best part: it's available in a creative commons version, i.e. for free. Just follow the creative commons pdf link in the left sidebar.

One of the biggest flaws I see in modern physics teaching is that physics courses have a tendency of being reduced to plugging numbers into highlighted and yellow-boxed equations. That's not physics! Physics is a way of thinking about the world. It's the delight you obtain when you understand something for the first time. It's the power you can wield by being able to properly predict phenomena that only minutes ago you found baffling. In a word: it's awesome. In order to be able to see past all of the equations, you need to have an appreciation for how powerful intelligent approximations can be.

The amazing fact is that with a proper introductory physics course, you are capable of understanding a great deal of the world around you.

If physics classes were taught the way Sanjoy would like them to be taught, if they relied fundamentally on the kinds of techniques he discusses, I think students would like physics a lot more. I think the world would be a better place.

Why Things Break - Mark E. Eberhart

Understanding the world by the way it comes apart

Link to Google Books Page

This book was mentioned to me by someone in my group. I decided to check it out, and read the first 70% of it in one sitting. I think that says something about it.

This is a really fun read. It's a popular science book, but on something you've probably never read about before - materials science.

Mixing very interesting history, science, and biography, Eberhart takes you on a journey attempting to answer the question: Why do things break? Which, he is quick to point out, is probably not the question you think it is. His life goal is not to answer what happens when things break, or which materials break sooner than others (though he manages to mention both along the way), but to answer why things even break in the first place, a rather subtle and non-trivial question when you think about it.

I actually learned quite a lot from this book. It's full of really interesting accounts and digressions. I can't recommend it enough. Very fun read.

Soap Bubbles - C. V. Boys

Their colours and the forces which mould them

Link to Google Books page

I found this book by accident, but boy am I glad I found it. It's a printing of a series of lectures the author gave to some children near the close of the 19th century about bubbles.

This goes into the drawer of happy little discoveries I've made of old science literature for which the copyright has expired. Meaning it's free on Google Books as a pdf download.

I don't know what it is, but I find basically anything written before about 1950 at least an order of magnitude easier to understand than anything since. Sure, some of it has to do with the fact that older science literature is necessarily dated, while new physics can tend to be a lot more complicated, and you could point out that there is a clear selection bias in the old texts that I manage to find, but I really believe there is something more to it than that. Old science authors wrote to be understood. You get the distinct impression that most of these guys really loved their craft and really wanted to explain their findings to others. Sometimes I get the impression that modern articles are written less to be understood and more as the modern version of mailing your patent idea to yourself in a closed envelope - as a way to get a stamp on your lab notebook to prove you did something first.

That said, this little gem was not what I thought it was going to be. Going in, I thought it would be a bunch of cool things you could do with bubbles. Oh, but it's so much more. Boys manages in these three little lectures to give one of the clearest introductions to some basic fluid dynamics and electricity I've seen. Boys manages to teach, all the while using bubbles.

I recommend it. If not for the science and cool bubble tricks, then as another indication that physics education doesn't need to be boring in order to get real ideas planted.

Calculus Made Easy - Silvanus Phillips Thompson

Being a very-simplest introduction to those beautiful methods of reckoning which are generally called by the terrifying names of the DIFFERENTIAL CALCULUS and INTEGRAL CALCULUS

Google Books Link - another freebie

I have to admit, I didn't just read this one. I read it a while ago, but while writing up the other ones I couldn't let such a fantastical book as this pass by without mention.

Another book I found by accident for free on Google Books. If I remember correctly, this one was pure serendipity. But it has to be the best introductory calculus book ever written. Seriously. I don't joke about these things. I fell in love with it as soon as I finished reading the subtitle (and the author's name).

This. book. rocks. If nothing else, do yourself a favor and read the first couple chapters of this bad boy. It's free. It won't hurt.

It's so good, I read it online. Then I checked it out from the library. Then I bought the shiny new edition because I needed to have it on my shelf. Turns out I'm not the only one in love with the book. Martin Gardner so loved it as to release the shiny edition with recreational problems and his commentary.

This is not Calculus crib notes. This is not SparkNotes or Calculus for Dummies. This is not just a condensed version of the calculus book you used in high school. This isn't just a list of formulas. This book explains what calculus is. You do not understand what I meant by that sentence. You will not understand until you read Calculus Made Easy.

This is another book that just makes me sad at the current state of education. Calculus is one of those things that's feared by the general public. It's feared because it's misunderstood. Calculus isn't hard. And I don't mean to just sound like a jerk when I say that. At the very least, it isn't meant to be hard. As the opening proverb of Calculus Made Easy says:

What one fool can do, another can.
All it really takes to understand calculus is the ability to imagine a very little bit of something. That and a caring and skilled tutor to lead you on your way. What name can you think of that sounds more caring and skillful than Silvanus Phillips Thompson?
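To show the flavor of Thompson's approach (my paraphrase of the spirit of his early chapters): take x^2 and grow x by a little bit dx. Then $$ (x+dx)^2 = x^2 + 2x\,dx + (dx)^2 $$ and since dx is a very little bit of x, (dx)^2 is a very little bit of a very little bit, which we may throw away, leaving the change in x^2 as 2x dx. Divide by dx and you have the derivative, 2x. That, at heart, is the differential calculus.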

I can think of no legitimate reason this book isn't used in each and every high school calculus class in America. Seriously.

Caught In The Rain II

I was rather proud of my last post about being caught in the rain. In that post, I concluded that you were better off running in the rain, but that the net effect wasn't incredibly great. However, when I told people about it, the question I inevitably got asked was: What if the rain isn't vertical? That's what I'd like to look at today, and it turns out to be a much more challenging question. I'm still going to assume that the rain is falling at a constant rate. Furthermore, I'm going to assume that the angle of the rain doesn't change. With those two assumptions stated, let me remind you of the definitions we used last time. $$\rho - \text{the density of water in the air in liters per cubic meter}$$ $$A_t - \text{top area of a person}$$ $$\Delta t - \text{time elapsed}$$ $$d - \text{distance we have to travel in the rain}$$ $$v_r - \text{raindrop velocity}$$ $$A_f - \text{front area of a person}$$ $$W_{tot} - \text{total amount of water in liters we get hit with}$$ As a reminder, our result from last time was: $$W_{tot}= \rho d \left(A_t \frac{v_r}{v} + A_f\right)$$

Now, let's look at the new analysis. As before, let us consider the stationary state first. Our velocity now has two components, horizontal and vertical. Analogous to the purely vertical situation, we can write down the stationary state, but now we have rain hitting both our top and our front (or back). I'm going to define the angle, theta, as the angle the rain makes with the vertical (check out figure 1 below). This gives $$W = \rho A_t v_r \cos(\theta) \Delta t+\rho A_f v_r \sin(\theta) \Delta t$$ Let's check our limits. As theta goes to zero (vertical rain), we only get rain on top of us, and as theta goes to 90 (horizontal rain), we only get rain on the front of us. Makes sense! Alright, so let's add in the effect of motion now. This is going to be more challenging than in the vertical rain situation. We're going to examine two separate cases.


Fig. 1 - The rain, and our angle.


Case 1: Running Against The Rain

This is the easier of the two cases. After thinking about it for a while, I believe that it is the same as when the rain is vertical. Let me explain why. If you are moving with some velocity v, in a time t you will cover a distance x=vt. Now, suppose we paused the rain, so it is no longer moving, then moved you a distance x, turned the rain back on, and had you wait for a time t. And repeated this over and over until you got to where you were going. This would result in an average velocity equal to v, even though it is not a smooth motion. However, my claim is that in the limit that t and x go to zero, this is a productive way of considering our situation. We note that v=x/t, and in the limit that both x and t go to zero, that is the definition of instantaneous velocity. The recap is that my 'pausing the rain' scheme of thinking about things is fine, as long as we consider moving ourselves only very small distances over very short times. Using this construction, we have an additional amount of rain absorbed by moving the distance delta x of: $$ \Delta W = \rho A_f \Delta x $$ $$ \Delta W = \rho A_f v \Delta t $$ This gives a net expression of $$\Delta W = \rho A_t v_r \cos(\theta) \Delta t+\rho A_f v_r \sin(\theta) \Delta t+\rho A_f v \Delta t $$ $$\Delta W = \rho A_f v \Delta t \left( \left( \frac{A_t}{A_f}\right) \left(\frac{v_r}{v}\right)\cos(\theta)+\left(\frac{v_r}{v}\right)\sin(\theta) + 1 \right)$$ As before, turning the deltas into differentials and integrating yields $$W = \rho A_f v t \left( \left( \frac{A_t}{A_f} \right) \left(\frac{v_r}{v}\right)\cos(\theta)+\left(\frac{v_r}{v}\right)\sin(\theta) + 1\right)$$ $$W=\rho A_f d \left( \left( \frac{A_t}{A_f}\right) \left(\frac{v_r}{v}\right)\cos(\theta)+ \left(\frac{v_r}{v}\right)\sin(\theta) + 1 \right)$$ Note that when theta is zero, this reproduces our vertical rain result from the last post (the first term lives, the second term goes to zero, the third term lives).

I'm going to use the reasonable numbers I came up with in the last post. However, since we have wind, we'll have to modify our rain velocity. More specifically, we'll assume the rain has the same vertical component of velocity in all cases. Then the wind speed, v_w, will be what controls the angle. More exactly, the magnitude of the raindrop velocity will be $$v_r=\sqrt{(6 \text{ m/s})^2+v_w^2}$$ while the angle will be $$\theta=\tan^{-1}(v_w/ (6 \text{ m/s}))$$ Next we note that $$v_r\cos\theta = 6 \text{ m/s}$$ which is just the vertical component of our rain. Similarly, the other term is just the horizontal component of our rain. So we can write our water total as a function of our velocity and the wind speed (the angle and the wind speed are interchangeable): $$W = \rho A_f d\left( \left( \frac{A_t}{A_f}\right) \left(\frac{6 \text{ m/s}}{v}\right) +\left(\frac{v_w}{v}\right) + 1\right)$$ Using the reasonable numbers I came up with in my last post yields (with a distance of 100m) $$W = 0.2 \text{ liters} \left( \left(\frac{0.72 \text{ m/s}}{v}\right)+\left(\frac{v_w}{v}\right) + 1\right)$$ Once again, we have a least wet asymptote, which is the same as before. I've plotted this function for various values of theta, and, more intuitively, for various wind speeds (measured in mph, as we're used to here in the US); the plots are shown below. Unsurprisingly, you get the most wet when the rain is near horizontal, but interestingly enough you also get the biggest percentage change from a walk to a run when the rain is near horizontal. All angles are in degrees.
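If you'd like to play with these numbers yourself, here's a minimal Python sketch of that last formula (the 0.2 liter prefactor is rho A_f d, and 0.72 m/s is A_t/A_f times the 6 m/s vertical rain speed; the function name is my own):

```python
# Water absorbed running against the rain, per the formula above:
# W = 0.2 L * ( 0.72/v + v_w/v + 1 ), all speeds in m/s.
def water_against(v, v_w):
    """Liters absorbed over 100 m, running at v into a headwind v_w."""
    return 0.2 * (0.72 / v + v_w / v + 1.0)

MPH = 0.447  # meters per second per mph
for v_w_mph in (0, 5, 10, 15):
    row = [water_against(v, v_w_mph * MPH) for v in (1.5, 3.0, 6.0)]
    print(f"wind {v_w_mph:2d} mph -> walk/jog/run: "
          + ", ".join(f"{w:.2f} L" for w in row))
```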


Fig. 2 - How wet you get vs. how fast you run for various wind angles.



Fig. 3 - How wet you get vs. how fast you run for various wind speeds in mph.


Case 2: Running With The Rain

This is the potentially harder case. We've got two obvious limiting cases. If you run with the exact velocity of the rain and the rain is horizontal, you shouldn't get wet. If the rain is vertical, it should reduce to the result from my first post. We'll start with the stationary case. This should be identical to case 1; if you're stationary it doesn't matter whether the rain is blowing on your front or your back. That means that for v=0, we should have $$\Delta W = \rho A_t v_r \cos(\theta) \Delta t + \rho A_f v_r \sin(\theta) \Delta t$$ Now, let's use the same method as before, pausing the rain, advancing in x, then letting time run. First we'll deal with our front side. Consider figure 4.


Fig. 4 - Geometry for small delta x.


Note that in front of us there is a rainless area, which we'll be advancing into. Consider a delta x less than the length of the base of that triangle. If we advance that delta x, we'll carve out a triangle of rain as indicated, which, by some simple geometry, contains an amount of rain $$\rho w \frac{(\Delta x)^2}{2 \tan(\theta)} = \rho w \frac{v^2 (\Delta t)^2}{2 \tan(\theta)}$$ where w is the width of our front. Now, consider if delta x is longer than the base of the rainless triangle, as shown in figure 5.


Fig. 5 - Geometry for large delta x.


We'll carve out an amount of rain equal to the indicated triangle plus the rectangle. From the diagram we see this gives an amount of water $$A_f \rho (\Delta x - h \tan(\theta)) + A_f \rho h \tan(\theta)/2 = A_f \rho \left(\Delta x - \frac{h \tan(\theta)}{2}\right)$$ We could write two separate equations for these two cases, but that's rather inefficient notation. I'm going to use the Heaviside step function, H(x). This is a function that is zero whenever the argument is negative, and 1 whenever the argument is positive. That means that for our front side, $$\Delta W_f=\rho w \frac{v^2 (\Delta t)^2}{2 \tan(\theta)} H( h\tan(\theta) - \Delta x) $$ $$+A_f \rho \left(\Delta x - \frac{h \tan(\theta)}{2}\right)H(\Delta x - h \tan(\theta))$$ Note that I've written my step function in terms of the relative length of delta x and the base of the rainless triangle. We get the first term when delta x is less than the base length, and the second term when delta x is more than the base length. Now, let us consider the rain hitting our back. There are two cases here as well. First consider the case where we're running with a velocity less than that of the rain. See figure 6.


Fig. 6 - The back.


We get two terms. There's the triangle of rain that moves down and hits our back, shown above. Hopefully it is apparent that this is the same as the triangle of rain we carved out with our front, and so it will contribute a volume of water $$\rho w \frac{v^2 (\Delta t)^2}{2 \tan(\theta)}$$ There's also the rain that manages to catch up with us, $$A_f \rho (v_r \sin(\theta) \Delta t - \Delta x) =A_f \rho \Delta t (v_r \sin(\theta) - v)$$ In the case where we outrun the rain, we don't want this term, and our triangle gains a maximal length set by the horizontal and vertical components of the rain velocity times delta t. We can write this backside term using step functions as $$\Delta W_b =A_f \rho \Delta t \left(v_r \sin(\theta) - v +\frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)}\right)H( v_r \sin(\theta) - v)$$ $$+\rho w v_r^2 \Delta t^2 \frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta))$$ We can combine these terms, with our usual top term, to get $$\Delta W =A_f \rho \Delta t [ \left(v_r \sin(\theta) - v +\frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)}\right)H(v_r \sin(\theta) - v)$$ $$+ \frac{w}{A_f} v_r^2 \Delta t \frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta)) $$ $$+ \frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)} H( h\tan(\theta) - \Delta x)$$ $$+\left(\frac{\Delta x}{\Delta t} - \frac{h \tan(\theta)}{2 \Delta t}\right)H(\Delta x - h \tan(\theta))+\frac{A_t}{A_f} v_r \cos(\theta) ] $$ I'm sure this four-line equation looks intimidating (I'm also sure that it is the longest equation we've written here on the Virtuosi!). But it'll simplify when we take our limit as delta t goes to zero. Let's do this a little more carefully than usual. $$\lim_{\Delta t \to 0}\frac{\Delta W}{\Delta t} =\lim_{\Delta t \to 0}A_f \rho [ \left(v_r \sin(\theta) - v +\frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)}\right)$$ $$\times H(v_r \sin(\theta) - v)+ \frac{w}{A_f} v_r^2 \Delta t \frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta)) $$ $$+ \frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)} H( h\tan(\theta) - v \Delta t)$$ $$+\left(v - \frac{h \tan(\theta)}{2 \Delta t}\right)H(v \Delta t - h \tan(\theta))+\frac{A_t}{A_f} v_r \cos(\theta) ] $$ We'll take this term by term. On the left side of our equality, we recognize the definition of the derivative of W with respect to t. Any term on the right without a delta t we can leave alone. The first term with a delta t is $$\frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)}H(v_r \sin(\theta) - v)$$ In all cases except when theta = 0, this term goes to zero. Now, when theta = 0, tan(theta) = 0, so our limit gives zero over zero, which is a number (note, I'm not being extremely careful; if you'd like, tangent goes as its argument to leading order, so we have two things going to zero linearly, hence getting a number back out). But looking at the step function, when theta goes to zero we likewise require v to be zero to get a value. However, our term goes as v^2, so we conclude that in our limit, this term goes to zero. Next we have $$\frac{w}{A_f} v_r^2 \Delta t \frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta))$$ This obviously goes to zero, with no mitigating circumstances like a division by zero. The next term is $$\frac{w}{A_f} \frac{v^2 \Delta t}{2 \tan(\theta)} H( h\tan(\theta) - v \Delta t)$$ This term presents the same theta = 0 issues as the first term. The resolution is slightly more subtle and less mathematical than before.
Remember that this term physically represents the rain that hits us when we move forward through the section that our body hasn't shielded from the rain (see the drawing above). I argue from a physical standpoint that when the rain is vertical, this term would double count the rain we absorb with the next term (which doesn't go to zero). I'm going to send this term to zero on physical principles, even though the mathematics is not explicit about what should happen. Next we have $$vH(v \Delta t - h \tan(\theta))$$ The argument of the step function makes it clear that to have any chance at a non-zero value we need theta = 0. The mathematics isn't completely clear here, as the value of a step function at zero is usually a matter of convention (typically 0.5). Let's think physically about what this term represents. This is the rain we absorb beyond the shielded region (see above figure). This is the term I said the previous term would double count with when the rain is vertical, so we're required to keep it. However, only when theta = 0. I'm going to use another special function to write that mathematically, the Kronecker delta, which is 1 when the subscript is zero, and zero otherwise. This is a bit of an odd use of the Kronecker delta, because it's typically only used for integers, but for those purists out there, there is an integral definition which has the same properties for any (non-integer) value. Thus $$vH(v \Delta t - h \tan(\theta))=v\delta_{\theta}$$ The last term we have to concern ourselves with is $$- \frac{h \tan(\theta)}{2 \Delta t}H(v \Delta t - h \tan(\theta))$$ Again, there is some mathematical confusion when theta = 0, so we think physically again. This term represents the rain in the unblocked triangle (see above). Obviously, there is no rain in the triangle when theta is zero, because there is no triangle! We set this term to zero as well. This gives us a much simpler expression than before, $$\frac{dW}{dt} =A_f \rho \left[ (v_r \sin(\theta) - v)H(v_r \sin(\theta) - v)+v\delta_{\theta}+\frac{A_t}{A_f} v_r \cos(\theta) \right]$$ We can pull out a v and integrate with respect to t, giving $$W=A_f \rho v t \left[ \left(\frac{v_r \sin(\theta)}{v} - 1\right)H(v_r \sin(\theta) - v)+\delta_{\theta}+\frac{A_t}{A_f} \frac{v_r \cos(\theta)}{v} \right]$$ As before, we can write this in terms of the wind velocity and the vertical rain velocity, $$W=A_f \rho d \left[ \left(\frac{v_w}{v} - 1\right)H(v_w - v)+\delta_{v_w}+\frac{A_t}{A_f} \frac{v_{r,vert}}{v} \right]$$ This is a nice, simple expression that we can easily plot. There is one thing that bothers me: I feel like there should be another step function term that kicks in when your velocity exceeds the horizontal rain velocity, and you start getting more rain on your front. But I'm going to trust my analysis, and assert that such a term would be at least second order in our work. If someone does find it, let me know! Using the reasonable numbers from my last post gives $$W=0.2 \text{ liters} \left[ \left(\frac{v_w}{v} - 1\right)H(v_w - v)+\delta_{v_w}+\frac{0.72 \text{ m/s}}{v} \right]$$ Because this post is long enough already, I've gone ahead and plotted this only vs. wind velocity. I've also plotted the former least wet asymptote. Most interesting is that there no longer is a least wet asymptote! In theory, if you run fast enough you can stay as dry as you want.
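Here's the same kind of Python sketch for this result, with the step function and the Kronecker delta written out explicitly (again with my made-up function names and the 0.2 liter, 0.72 m/s numbers from above):

```python
def H(x):
    """Heaviside step function; the value exactly at zero won't matter here."""
    return 1.0 if x > 0 else 0.0

def water_with(v, v_w):
    """Liters absorbed over 100 m, running at v (m/s) with wind v_w at your back."""
    delta = 1.0 if v_w == 0 else 0.0  # Kronecker delta: the vertical-rain term
    return 0.2 * ((v_w / v - 1.0) * H(v_w - v) + delta + 0.72 / v)

print(water_with(3.0, 4.5))  # slower than a ~10 mph wind: it soaks your back
print(water_with(6.0, 4.5))  # outrun the wind and only the top term survives
```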


Fig. 7 - How wet you get vs. how fast you run for various wind speeds in mph.


Comparison

I will conclude with a comparison of the two results, to each other and to the vertical case. First, let's take the appropriate limits. $$W_{with}=A_f \rho d \left[ \left(\frac{v_w}{v} - 1\right)H(v_w - v)+\delta_{v_w}+\frac{A_t}{A_f} \frac{v_{r,vert}}{v} \right]$$ $$W_{against} = \rho A_f d\left( \left( \frac{A_t}{A_f}\right) \left(\frac{v_{r,vert}}{v}\right) +\left(\frac{v_w}{v}\right) + 1\right)$$ $$W_{stationary} = \rho t A_f \left(\frac{A_t}{A_f} v_{r,vert}+v_w\right)$$ $$W_{vert}= \rho d A_f \left(\frac{A_t}{A_f} \frac{v_r}{v} + 1 \right)$$ In the stationary limit, we have to break up the d in our equations into v t, and that gives $$\lim_{v \to 0}W_{with}= \lim_{v \to 0} W_{against}=\rho t A_f \left(\frac{A_t}{A_f} v_{r,vert}+v_w\right)$$ while in the vertical rain limit $$\lim_{v_w \to 0}W_{with}= \lim_{v_w \to 0} W_{against} =\rho d A_f \left(\frac{A_t}{A_f} \frac{v_r}{v} + 1 \right)$$ So our limits work. Finally, it's a little hard to tell the difference between the forward and backward cases, so I've plotted the two lines together for a few values of v_w. You'll notice that for zero wind speed they have the same result (which is good, since our limit was the same), but for the other wind speeds they are remarkably divergent, more so as you run faster!
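With the two sketch functions defined earlier in the post, reproducing this comparison numerically is a couple of lines:

```python
# Compare running with vs. against the wind (reuses water_with and
# water_against from the sketches above).
MPH = 0.447
for v_w_mph in (0, 5, 10):
    for v in (1.5, 3.0, 6.0):
        v_w = v_w_mph * MPH
        print(f"wind {v_w_mph:2d} mph, v={v:3.1f} m/s: "
              f"with {water_with(v, v_w):.2f} L, "
              f"against {water_against(v, v_w):.2f} L")
```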


Fig. 8 - Solid lines are running with the rain, dashed lines are running against the rain.


Conclusions

Hopefully this has been an interesting exercise for you. I know it certainly took me longer to work and write than I initially thought. While you can't see it in the post, there was a lot of scribbling and thinking going on before I came to these conclusions. Most of it went something like: "No, that can't be right, it doesn't have the right (zero velocity/zero angle) limit!" I think this concludes all of the running in the rain that I want to do, but if you have more follow-up questions, post them below, and I'll do my best to answer. Also, I admit that my analysis may be a bit rough, so if you have other approaches, let me know. Finally, note that everything I've found favors running in the rain, so get yourself some exercise and stay dry!

Beards and Pulsars


The bearded half of Hulse-Taylor


A few weeks ago I was on a bus going through Scranton and I read a super-awesome fun fact regarding the Hulse-Taylor binary pulsar in Black Holes, White Dwarfs and Neutron Stars. Sadly, I have since forgotten it and left the book a few thousand miles away. So, let's just make up our own!

First, we need a little background. What the heck is a pulsar? A pulsar is a rapidly rotating neutron star that beams electromagnetic radiation towards us, which is how we can see them. Typical rotation periods range from a millisecond to a few seconds. So each time the pulsar rotates, we observe a blip when the radiation beams towards us. Since these objects are additionally very stable rotators, they are essentially very accurate clocks with which we may make astronomical measurements. So what's the Hulse-Taylor binary pulsar? The Hulse-Taylor binary is almost exactly what it sounds like: it's a pulsar binary where one of the pulsars is pointed towards earth. It was the first binary of its kind discovered and offers a unique look into a very high gravity environment. It also provided a very nice test for General Relativity. General Relativity predicts that two orbiting massive bodies should emit gravitational waves. This emission of gravitational waves will then cause the orbit to decay and the two bodies to move closer together. So does the Hulse-Taylor binary show this? Take a look: [plot of the orbital decay data vs. the GR prediction] The data fit the prediction of general relativity perfectly! For this discovery Hulse and Taylor shared the 1993 Nobel prize in Physics.

Now that's all well and good, but I was promised some fun facts...? Ah, yes! Well, we mentioned that the Hulse-Taylor binary orbit is decaying. It turns out that the orbit is decaying at about 3.5 meters per year. That's pretty slow. Let's put it into a more conventional speed, like meters per second. So $$ 3.5 \text{ m/yr} = 3.5 \text{ m/yr} \times \frac{1 \text{ year}}{3.14 \times 10^7 \text{ s}} = 1.1 \times 10^{-7} \text{ m/s} $$ or, in less useful units, $$ 3.5 \text{ m/yr} = 110 \text{ nm/s} $$ Great, so what to compare this to? Well, all people who are in the know know that I am a manly man who gained the ability to grow facial hair sometime after my sophomore year of college. And since I have to pretend to be an upstanding member of society this week, I happen to know the last time I shaved. Thus, a few simple measurements and I can estimate how long hair takes to grow. The last time I shaved was three days ago, and a quick eyeball measurement (sadly I have no ruler) gives a facial hair length of about 2 mm. Thus, a beard grows at about 0.7 mm/day. $$ 0.7 \text{ mm/day} = 0.7 \text{ mm/day} \times \frac{10^{-3} \text{ m}}{\text{mm}} \times \frac{1 \text{ day}}{86400 \text{ s}} = 8 \text{ nm/s} $$ This is a universal speed constant, which we shall call the speed of beard. Or, bowing to our oppressive overlord sponsors, we shall call it "Gillette Mach 1."* So doing a quick division, we find that the rate at which the Hulse-Taylor binary's orbit is shrinking is roughly 14 times beard speed, or in our commercial units, Gillette Mach 14 (a razor close shave!).

"Well," I hear you cry (a bit disappointed...?), "that's a pretty useless unit, but can't we be more useless?" Yes, dear reader, we certainly can! We are currently at Snuggie levels of uselessness right now, but I think we can just about bump it up to Member of Congress useless if we try. A furlong is a unit of length about 200 meters long. A fortnight is a unit of time about 14 days long. Therefore, if we want a speed we just...
$$ \frac{\text{furlong}}{\text{fortnight}} = 1 \frac{\text{furlong}}{\text{fortnight}} \times \frac{200 \text{ m}}{\text{furlong}} \times \frac{1 \text{ fortnight}}{14 \times 86400 \text{ s}} = 1.6 \times 10^{-4} \frac{\text{m}}{\text{s}} $$ So the rate of decay of the Hulse-Taylor binary is: $$ 3.5 \frac{\text{m}}{\text{yr}} = 1.1 \times 10^{-7} \text{ m/s} \times \frac{1 \text{ furlong/fortnight}}{ 1.6 \times 10^{-4} \text{ m/s}} = 7 \times 10^{-4} \frac{\text{furlong}}{\text{fortnight}} $$ Hooray! So now we know the decay rate of the Hulse-Taylor binary orbit in two horrible units: either 700 microfurlongs per fortnight or 14 times the speed of beard (AKA Gillette Mach 14). Please write these in your copybooks now and forever commit them to memory.

* In no way is The Virtuosi affiliated with the wonderful Gillette Company, which makes the world's best razors. Since we aren't affiliated with this great Gillette company, we are not obligated to repeat their slogan that it's "The Best A Man Can Get" despite its self-evident truth. Nor is the author required to say that the silky smooth shave I get with a Mach 20 razor is the only reason I can even muster social interaction. Hooray!
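For the skeptics following along at home, all of the above in a quick back-of-the-envelope script (my variable names, same numbers as in the text):

```python
SECONDS_PER_YEAR = 3.14e7         # the physicist's "pi times ten to the seven"

decay = 3.5 / SECONDS_PER_YEAR    # Hulse-Taylor orbital decay, m/s
beard = 2e-3 / (3 * 86400)        # 2 mm of stubble in three days, m/s

furlong = 200.0                   # meters, roughly
fortnight = 14 * 86400            # seconds

print(f"decay: {decay * 1e9:.0f} nm/s")                    # ~110 nm/s
print(f"beard: {beard * 1e9:.0f} nm/s (Gillette Mach 1)")  # ~8 nm/s
print(f"Gillette Mach {decay / beard:.0f}")                # ~14
print(f"{decay / (furlong / fortnight) * 1e6:.0f} microfurlongs per fortnight")  # ~700
```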

Paradigm Shifts 2: Paradigm ShiftER

Last time, I presented reasons why it would be economically infeasible for the US to switch to the metric system. This time, I'd like to talk about a change that could relatively easily be brought about soon. A change that would barely cost a thing, but could improve efficiency dramatically in many jobs and in everyday life for many people. A change of this type would be very handy. Puns aside though, what I'm talking about is this: the DVORAK SIMPLIFIED KEYBOARD (again lots from Wikipedia).

Look down at your keyboard. Chances are very good that if you bought your keyboard in an English speaking country, you're using the QWERTY keyboard layout. You'll also probably know (or else you'll learn from me) that the letter "E" is the most common in the English language. You might wonder then, why it's not in the "home row" (the row of keys in the middle of the keyboard that would be right under your fingers if you're typing in the standard way). You might also wonder why other common letters like "T" were exiled to the top row while less common letters like "J" and "K" sit right under your fingertips as they rest idle. You may then think about how slow it is to type words like "December" which require you to use the same finger for consecutive letters. You may even get to thinking that a lot of words require using the same hand for consecutive letters, but it would be much nicer to alternate hands as you type.

The reasons for the slow speed of QWERTY are not entirely clear. There are stories floating around about the inventor of the typewriter deliberately laying out the keyboard this way to keep typing speeds down so that the mechanical keys wouldn't jam. The authenticity of these stories is disputed, but you'd be hard pressed to argue that QWERTY is the most efficient layout there could be. One competitor for the title that has stood out is called Dvorak, named after its creator. Some statistics from Wikipedia (sources given there):

  • "the Dvorak layout uses about 63% of the finger motion required by QWERTY"
  • "the vast majority of the Dvorak layout's key strokes (70%) are done in the home row" whereas QWERTY "has only 32% of the strokes done in the home row"
  • "The QWERTY layout has more than 3,000 words that are typed on the left hand alone and about 300 words that are typed on the right hand alone", but "with the Dvorak layout, only a few words can be typed on the left hand and not one syllable can be typed with the right hand alone, much less a word."
  • "On QWERTY keyboards, 56% of the typing strokes are done by the left hand. As the left hand is weaker for the majority of people, the Dvorak keyboard puts the more often used keys on the right hand side, thereby having 56% of the typing strokes done by the right hand."
  • "Because the Dvorak layout requires less finger motion from the typist compared to QWERTY, many users with repetitive strain injuries have reported that switching from QWERTY to Dvorak alleviated or even eliminated their repetitive strain injury."
  • "The fastest English language typist in the world, according to The Guinness Book of World Records" achieved "a peak speed of 212 wpm" using Dvorak

Okay, so maybe by now you see that Dvorak can be more efficient. So why hasn't it been implemented yet? Well, back in the typewriter era, it was one layout or the other, and people picked QWERTY. Look at Wikipedia for a summary. But now, assuming you're not using a typewriter or a 386 or something equivalently ancient, it's actually quite easy to switch back and forth between QWERTY and Dvorak. Check out your control panel if you're on a PC (or the equivalent on a Mac or Linux or what have you) and I bet you'll find the setting for changing the keyboard layout pretty easily, and I bet that Dvorak will be one of the layouts you can choose. So it's easy enough to switch all the devices to the new system, with pretty much no cost.

The problem is of course that hardly anybody can type with Dvorak. Everybody is used to QWERTY! And I'm sure there are people out there who are happy to try to learn Dvorak, but I'm not one of them. I'm used to QWERTY and I'm pretty sure if I tried to switch, I'd get confused and completely jumble the two systems. I just can't see the slight increase in my typing speed being worth the hassle. Maybe some can, but I'm sure that many feel the same way I do, especially those who, like me, don't do a lot of transcribing or the like, and so rarely need to type very fast.

The key? (Sorry, no more puns, I promise.) Get 'em while they're young. That's right, all the young typists out there can be trained on Dvorak instead of QWERTY. If they never learn QWERTY, they'll never be confused. And they can use the same keyboards as everyone else. All that we'd have to do is make it easier to switch the keyboard layout (put something a little closer than the control panel, maybe right on the keyboard), and print two sets of characters on the keyboards when they are made (note that I've neglected cell phones and other devices that also have keyboards, but they can be updated in the same way). It would be as easy as that to implement the change. Give it maybe 5-10 years to phase in keyboards and operating systems with the easy-switch built in (in this case, it's nice that computers have such short life spans), then have schools start teaching kids to use Dvorak in typing classes. In a generation, hardly anyone will even remember QWERTY except as a weird hiccup along the way to efficiency.
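If you want a crude feel for the home-row statistic without taking Wikipedia's word for it, here's a toy Python check (my quick hack, not the methodology behind the numbers quoted above): count what fraction of the letters in a sample text sit on each layout's home row.

```python
QWERTY_HOME = set("asdfghjkl")
DVORAK_HOME = set("aoeuidhtns")

def home_row_fraction(text, home_row):
    """Fraction of alphabetic characters that are typed on the given home row."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in home_row for c in letters) / len(letters)

sample = ("paste any long stretch of ordinary english prose here and "
          "the dvorak home row will soak up most of the keystrokes")
print(f"QWERTY home row: {home_row_fraction(sample, QWERTY_HOME):.0%}")
print(f"Dvorak home row: {home_row_fraction(sample, DVORAK_HOME):.0%}")
```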

Breaking Intuition

When I walked into my first day of physics class in high school, I carried with me a set of ideas which I had learned from simply observing and interacting with the world. In fact, everyone builds up what they believe to be intuitive concepts, whether in science, math, or any other field. Without any scientific training whatsoever, we begin to build intuition. If you let go of a ball in the air, what will happen? If you try to run on the ice of a frozen lake, will it be easier than running on the sidewalk? If you stand in the sun and on the ground you see a strange, dark, misshapen copy of yourself imitating your every move... who is following you? Unfortunately, we run into an issue when our intuition disagrees with experimental results or with someone else's intuition. At that point, it is essential to break down and analyze our intuition to find where any problems in our logic may exist. This process of continually breaking down and analyzing intuition is key to progressing in science.

Let's take a look at a simple dice game. The rules of the game dictate that you pick a die first, then I pick a die, then we roll together 100 times (we're really bored, apparently). The winner is the person who rolls a higher number more times in 100 rolls. The catch is that the numbers are not the standard 1-6 on each die, but a magic set of numbers which may repeat any number from 1-6 as many times as desired, for example {1, 2, 3, 4, 5, 5}. "Sounds easy," you say, as you pick up the yellow die. I choose blue. We roll, and I win 74 out of 100 times. "Obviously the blue die is better, give me that one," you say. I proceed to pick up the green die and, lo and behold, I win 63 out of 100 times. "Okay, okay, I've got the hang of it now. Clearly the green die is better than all the rest." I choose the yellow die and win 65 out of 100 times. In a fit of rage you proclaim "witchcraft" and storm off for your witch-hunt gear. There is no deception here, with the exception of logic, the younger sister of witchcraft. It is actually an interesting challenge to try to come up with a set of numbers which will yield the following result:

  • The probability that the value on the blue die is higher than the value on the yellow die is greater than 1/2.
  • The probability that the value on the green die is higher than the value on the blue die is greater than 1/2.
  • The probability that the value on the yellow die is higher than the value on the green die is greater than 1/2.

There are definitely a multitude of possible solutions, so I encourage you to attempt to find one using only the numbers 1-6 before scrolling down. Got a solution? Let's take a look at the following dice:

  • Yellow: {1, 4, 4, 4, 4, 4}
  • Blue: {2, 2, 2, 5, 5, 5}
  • Green: {3, 3, 3, 3, 3, 6}

I should note that my solution is set up to have no ties, which makes the analysis a bit more straightforward. It is certainly possible to come up with interesting solutions which allow ties. The chart on the right shows how each die compares to the others; the color of each square indicates the winner when the numbers of the same row and column are compared. We can see that blue beats yellow 21 out of 36 times, green beats blue 21 out of 36 times, and yellow beats green 25 out of 36 times. So this combination of dice shows the non-transitive effect we were looking for. I explain this "sorcery" to you before you try to burn me at the stake for being a witch, and you calm down. Now I tell you that I'd like to try a new game.
I select two dice of the same color, then you select two dice of the same color, then we roll both pairs 100 times. The winner this time is the person who rolls the higher total, the sum of their two dice, more times in 100 rolls. I select two yellow dice. After learning of my trick, you decide to pick two blues, and proceed to lose 60 out of 100 times. You declare, "'Tis but a statistical error, let's have another go!" I select two blues and you, two greens. I win again! Just to rub it in, I choose green and you choose yellow, and I win once again. Softly weeping, you listen as I explain that the probabilities have now switched! The chart on the right shows the different sums that are possible for a given pair of colored dice. When you look at the possible sum of 4 for the blue dice, you see that 4 can meet up with 2 a total of nine times, with blue winning each; 4 can meet up with 5 a total of 90 times, with yellow winning each; and 4 can meet up with 8 a total of 225 times, with yellow winning each. So the value in each cell is the number of ways each match-up can occur, with the color of the cell showing who wins that match-up. There are 6^4 = 1296 possibilities, so winning more than half corresponds to more than 648 of them. This dice trick is an example of non-transitive logic, which can certainly be a non-intuitive topic. (Stay tuned for some non-transitive logic involving coins!) In this case, you must break your intuition that there must be one "best" die. In science, it's a great idea to look for other examples of the behavior you are observing, to help reinforce what you've learned. It turns out that one of the most basic schoolyard games involves non-transitive logic! In the game of rock, paper, scissors, we find that rock crushes scissors, scissors cuts paper, and paper covers rock. This is analogous to the behavior of our special dice, and I believe it makes the logic much easier to understand. Compare against your intuition, break down and analyze, build up and reinforce. You can also check all of these numbers for yourself, as sketched below.
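Since the whole point is checking intuition against computation, here is a quick brute-force check of both games. This script is my own sketch, not anything from the original post; it simply enumerates every possible roll of the dice above and counts wins exactly.

```python
from itertools import product

# The three magic dice from the post
dice = {
    "yellow": (1, 4, 4, 4, 4, 4),
    "blue":   (2, 2, 2, 5, 5, 5),
    "green":  (3, 3, 3, 3, 3, 6),
}

def beats(a, b, n=1):
    """Exact fraction of outcomes where the sum of n copies of die a
    beats the sum of n copies of die b (these dice never tie)."""
    wins = total = 0
    for ra in product(dice[a], repeat=n):
        for rb in product(dice[b], repeat=n):
            wins += sum(ra) > sum(rb)
            total += 1
    return wins / total

# One die each: the cycle blue > yellow, green > blue, yellow > green
for a, b in [("blue", "yellow"), ("green", "blue"), ("yellow", "green")]:
    print(f"{a} beats {b}: {beats(a, b):.3f}")

# Two dice each: the cycle runs the other way around
for a, b in [("yellow", "blue"), ("blue", "green"), ("green", "yellow")]:
    print(f"two {a} beat two {b}: {beats(a, b, n=2):.3f}")
```

Running it reproduces the numbers above: 21/36, 21/36, and 25/36 for the single dice, and roughly 0.59, 0.59, and 0.52 for the pairs, confirming that doubling the dice reverses the cycle.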

Visualizing Quantum Mechanics

Or how I learned to stop worrying and love the computer. [Note: There's a neat video below the fold.]

A Confession

I was recently rereading the Feynman Lectures on Physics. If you haven't read them lately, I highly recommend them. Feynman is always a pleasure to read. As usual, I was surprised. This time the surprise came in Lecture 9 which, given how the course was laid out, meant it was something like the last lecture of the third week of university-level physics these students had ever received. The lecture is on Newton's laws of dynamics. The start is, of course, Newton's second law, $$ F = \frac{d}{dt}(mv) $$ which, provided the mass is constant, takes the more familiar form $$ F = ma $$ After discussing the meaning of the equation and how in general it can give you a set of equations to solve, he naturally uses an example to illustrate the kinds of problems you can solve. What system does he choose as the first illustration of a dynamical system? The Solar System. That's right. Let that settle for a second. The sad thing is that if you had caught me off guard before I read the lecture, caught me in an honest moment, and asked me how you would solve the solar system, I would probably have launched into a discussion of the N-body problem and how there is no closed-form solution to Newtonian gravity for three or more bodies. (Depending on who you are, I might then have mentioned the recent caveat, namely that there is a closed-form solution to the N-body problem, but that it involves a very, very, very slowly convergent series.) Now, how can Feynman use the Solar System as his first example of solving Newtonian dynamics when I have told you it's impossible as my first words on the subject? Well, the answer of course is that Feynman was much smarter than I am. Perhaps another way to say it is that in a lot of ways Feynman was a more contemporary physicist than I am.
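To spell out what "solving the solar system" means (my gloss, not Feynman's): Newton's second law applied to N gravitating bodies gives a set of coupled equations of motion, $$ m_i \ddot{\vec{r}}_i = \sum_{j \neq i} \frac{G m_i m_j (\vec{r}_j - \vec{r}_i)}{|\vec{r}_j - \vec{r}_i|^3}, $$ and it is these equations that have no closed-form solution once three or more bodies are involved.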

A Realization

Physics education has changed very little in the last 50 years or so. Now, in some ways this isn't a problem. The laws of nature also haven't changed in the last 50 years. What's unfortunate is that the tools available to physicists to answer their questions have changed remarkably. Namely, computers. Computers are great. They permeate daily life nowadays. They are capable of performing millions of computations per second. This is great for physics. You see, a lot of the time, as you all know, the way you get answers to specific questions about the evolution of a system is to do a lot of computation. So what did physicists do before computers? Well, a lot of the time they would have to do the calculations out by hand, but no one enjoys that, so often you had to make sacrifices, make assumptions, so that your analytical investigations were simple enough to yield tidy little equations. This is reflected in the kinds of problems we still solve in our physics classes. I never solved the solar system in my mechanics class. I never did it because there isn't a closed-form analytical solution to the solar system. But you know what... that doesn't matter. It doesn't matter in the least. Because while there doesn't exist a closed-form solution to the problem, it is very easy to come up with a numerical approximation scheme (see the Euler method). You see, the point of physics is to get answers to questions. And the fact of the matter is that those answers don't have to be 'exact'; they don't have to be perfect. They need to be good enough that we can't tell the difference between them being 'exact' and them being an approximation. To do this numerically with a pad of paper and a pencil is a heroic task. To do it with a computer takes a couple of lines of Python code and a couple of seconds.
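To make that concrete, here is a minimal sketch of the Euler method applied to a single planet orbiting the Sun. Everything here (the time step, the one-planet simplification, the variable names) is my choice for illustration, not code from this post:

```python
import numpy as np

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # mass of the Sun, kg

# Earth-like initial conditions: 1 AU out, moving at ~29.8 km/s
r = np.array([1.496e11, 0.0])    # position, m
v = np.array([0.0, 2.978e4])     # velocity, m/s

dt = 3600.0                      # one-hour time step
for _ in range(24 * 365):        # integrate for one year
    # Newton's law of gravitation gives the acceleration
    a = -G * M_sun * r / np.linalg.norm(r)**3
    # Euler method: step velocity and position forward
    v = v + a * dt
    r = r + v * dt

print("distance from the Sun after one year: %.3e m" % np.linalg.norm(r))
```

A full solar system just sums the accelerations from every other body, and a serious calculation would swap the Euler step for something that drifts less over long runs (leapfrog, say), but the loop above is the whole idea.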

An Example

As an example of the neat things you can do with a few lines of Python code and a few minutes on your hands, check out the videos below. The first video depicts time-dependent quantum mechanics. I set up a Gaussian wave packet inside a potential that has a hard wall on the sides and is proportional to x. That sounds fancy, but what it means is that this is the quantum mechanical equivalent of a bouncing ball. The height of the wave function corresponds to the probability of finding the particle at each location. That is, imagine picking one of the colored pixels at random and looking down at its x position; that is the result a measurement of the particle's position would give. But what are the colors? Quantum mechanical wave functions are complex. This means you can represent them either with a real and an imaginary part, or with a magnitude and a phase. Here it's the latter. Like I said, the magnitude is shown as the height (actually, the magnitude squared). The color corresponds to the phase, where the phase is mapped to a location on the color wheel, just like the one that pops up in Photoshop or GIMP. And there's sound too! The sound is what the wave function would sound like if it were making noise: it's the real part of the wave function played as a sound. As a result, in this video it is very low frequency, because I made the movie slow enough to see the colors changing well. It's fun to watch the video and listen to the sound. For this movie the sound correlates nicely with when the 'ball' reaches its maximum height. What's also cool is that you can hear the 'ball' delocalize after each bounce. The sound and the wave function start off nice and sharp, but after a few bounces they start to spread out. You can also see how momentum is encoded in quantum mechanics. The funny thing is that instead of being something separate that you need to specify, as in classical mechanics, in quantum mechanics the wave function is a complete description of the evolution of the system. I.e., if I showed you just one frame of this bouncing ball, you would be able to recreate the entire movie. If I showed you just one frame of a classical basketball, you'd have no idea what frame came next, since you'd know only its position, not its velocity. In quantum mechanics the momentum gets encoded in the wave function, and as you can tell, it's encoded as a complex twist. A phase gradient. A crazy rainbow. If you look closely, you can even tell the difference between the particle falling left and falling right. When it goes left, the rainbow pattern goes (reading left to right) blue, red, green. When it's moving right, it goes blue, green, red. It twists one way, then the other, in the complex plane. The colors are a little hard to see in the first video; they're a little easier to see in the second one, which I dressed up a bit: labelling the axes with units, adding a time counter, superimposing the potential I was talking about, and marking the expectation value of the position with a black tracer dot along the bottom.
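If you want to play with this yourself, here is a rough sketch of the approach: discretize the Hamiltonian on a grid, diagonalize it, expand the initial Gaussian packet in the eigenbasis, and evolve each coefficient by its phase. This is my reconstruction of the idea, not the script used for the videos, and the grid size, units, and plotting choices are all arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Natural units (hbar = m = 1) on a finite grid; truncating the grid
# acts as the hard walls, and V(x) = x is the "bouncing ball" potential.
N = 400
x = np.linspace(0.0, 20.0, N)
dx = x[1] - x[0]
V = x.copy()

# Finite-difference Hamiltonian: H = -(1/2) d^2/dx^2 + V(x)
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))
E, states = np.linalg.eigh(H)    # eigenvalues, eigenvectors as columns

# Initial condition: a Gaussian wave packet "dropped" from x = 10
psi0 = np.exp(-(x - 10.0)**2 / (2 * 0.5**2))
psi0 /= np.linalg.norm(psi0)

# Expansion coefficients c_n = <n|psi0>; time evolution just multiplies
# each one by exp(-i E_n t)
c = states.T @ psi0

t = 5.0
psi = states @ (c * np.exp(-1j * E * t))

# Height = probability density, color = phase mapped onto the color wheel
prob = np.abs(psi)**2
hue = (np.angle(psi) / (2 * np.pi)) % 1.0
rgb = hsv_to_rgb(np.stack([hue, np.ones(N), np.ones(N)], axis=1))
plt.scatter(x, prob, c=rgb, s=4)
plt.xlabel("x")
plt.ylabel(r"$|\psi(x)|^2$")
plt.show()
```

Looping t over many values and saving a frame each time gives the movie; sampling the real part of psi frame by frame gives the soundtrack.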

A Call to Arms

Any student who has taken a first course in quantum mechanics knows enough physics to make these movies. The physics isn't complicated. But the movies are really neat, right? More than neat. Making these videos taught me things about quantum mechanics I should have learned a long time ago. I really think computers are underestimated in physics education. They can be a great tool. A picture is worth a thousand words, so a movie must be worth millions (a stolen quote, I admit). More than just an illustrative tool: the fact that even students in a first introductory mechanics course can solve for something like the solar system shouldn't be hidden from them. Classical mechanics, after all, is the physics of pretty much every object we can see and touch, but classical mechanics classes only ever talk about Atwood machines and frictionless planes. Often the closest they come to realism is in discussing projectile motion, where the laws you learn in the book (neglecting air resistance) are very good at describing the trajectories of very dense, large objects (i.e. cannonballs). I can't remember the last time I fired a cannon. But air resistance poses little trouble to my computer. Or Rhett's (of Dot Physics, which has just moved to Wired). Basically, if you give a student an intro physics course and an intro programming course, suddenly you have a human being who is better equipped to answer questions about natural phenomena than 99% of the human beings that have ever lived. So let's take a tip from Feynman and teach physics students how to solve the solar system.
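To back up the air-resistance claim with the kind of code I mean, here is a projectile with quadratic drag. The mass, drag coefficient, and launch velocity are numbers I made up for illustration:

```python
import numpy as np

g = 9.81        # m/s^2
m = 0.145       # kg, roughly a baseball
b = 1e-3        # drag coefficient in F_drag = -b |v| v, kg/m (illustrative)

r = np.array([0.0, 0.0])         # position, m
v = np.array([30.0, 30.0])       # launched at 45 degrees, m/s
dt = 1e-3

while r[1] >= 0.0:
    # gravity plus quadratic air drag, stepped forward with the Euler method
    a = np.array([0.0, -g]) - (b / m) * np.linalg.norm(v) * v
    v = v + a * dt
    r = r + v * dt

print("range with drag: %.1f m" % r[0])
print("range in vacuum: %.1f m" % (2 * 30.0 * 30.0 / g))
```

The vacuum answer comes from the familiar closed form; the drag answer has no such form, and the computer doesn't care.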

Code

As per request, here is the Python code I used to generate the videos. It's rather messy, so I apologize in advance. schrod.py - a general script which finds the eigenvalues and eigenbasis for a 1D particle in an arbitrary potential. qmsolver-bouncy.py - code to generate the movie. You need to create a directory with the same name as the one given in the script, in the same folder as the script. The last two lines make the sound and the directory full of images. I used ffmpeg to wrap the two together.
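For the curious, that last wrapping step can be done with a single ffmpeg call. The file names and frame rate below are guesses for illustration, not what the script actually used:

```python
import subprocess

# Stitch numbered frames and a soundtrack into a movie with ffmpeg.
# "frames/%04d.png", "sound.wav", and the frame rate are assumptions.
subprocess.run([
    "ffmpeg",
    "-framerate", "30",        # playback rate of the image sequence
    "-i", "frames/%04d.png",   # numbered frames: frames/0001.png, ...
    "-i", "sound.wav",         # audio track
    "movie.mp4",
])
```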