On the Death of Karl Schwarzschild


image

Every once in a while, in the study of science, one comes across biographical snippets that momentarily breathe life into names that otherwise serve as shorthand for equations and eras. Thanks to the obvious selection bias involved in including such superfluous information in technical books, these snippets are bound to be pretty interesting. They range from the hilarious antics of Feynman [1] or Fermi [2], to the heartbreaking stories of Boltzmann and Oppenheimer, and even to the surprisingly scandalous life of Erwin Schrödinger. But my all-time favorite of these historical "fun facts" is that of the man who provided the first exact solution to the Einstein field equations while fighting in the First World War: Karl Schwarzschild (pictured left impersonating a surprised walrus [3]).

Karl Schwarzschild was an astronomer, physicist and mathematician; an across-the-board physical scientist with passions both abstract and practical. He published his first article regarding the orbits of double stars at the age of 16 while still in high school. He went on to work on the mathematical treatment of orbits, constructed several useful astronomical instruments, and put forward theoretical treatments of comet tails and stellar atmospheres. His creativity was admired by some of the greatest scientists of his era. Eddington, chief among them, offered his praise in a strangely brutal simile [4]:

"... his joy was to range unrestricted over the pastures of knowledge, and, like a guerrilla leader, his attacks fell where they were least expected."

But Schwarzschild's greatest contribution to science was finding the first exact solution to Einstein's field equations for general relativity in 1916. This solution, which takes the form of the Schwarzschild metric, describes the space-time surrounding a non-rotating spherically symmetric object (the solution for a rotating spherically symmetric object was found in 1963 by Roy Kerr). It was through this metric that I came to meet Schwarzschild, since every time it is introduced in a class the instructor is sure to drop the mother of all fun facts: Schwarzschild discovered his solution while in the army during World War I, a war that would eventually kill him (among others).
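For reference, the metric itself, written here in the modern coordinates and sign conventions rather than in Schwarzschild's original form, is

$$ ds^2 = -\left(1-\frac{2GM}{c^2 r}\right)c^2\,dt^2 + \left(1-\frac{2GM}{c^2 r}\right)^{-1}dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right) $$

where M is the mass of the central object; the Kerr solution mentioned above generalizes this to include the object's spin.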

In this age of Wikipedia, I find it hard to believe that I didn't immediately go home and check the full story behind this statement. I may very well have, but over the years I have constructed a myth about the death of Schwarzschild that I at least half believed until I finally looked up the full story for this post.

I imagined Schwarzschild as the noble and peaceful scientist immediately skeptical of the war, but eventually conscripted to fight. There he fought on the front lines, carrying both a Mauser and a notebook. He would spend the long lulls between suicidal assaults through no-man's-land huddled down in the mud in the trenches scribbling away like mad at what would eventually become the elegant solution that bears his name.

Then (and here I will blatantly plagiarize the All Quiet on the Western Front movie) Schwarzschild began to see the solution, everything started to fall in place. He became excited and no longer able to sit still. Now standing, he reached out towards the beauty of nature he saw not in a butterfly or bluebird, but in the fabric of space-time. Just as he finishes his solution, Schwarzschild's head briefly peeks above the trench and is caught by a sniper's bullet. Both he and his notebook fall to the mud unnoticed, an overly romanticized symbol for the futility of war or some such nonsense.

Now this version is obviously false (how did he get the solution to anyone?), but it is the one that has persisted in my mind grapes. So what really happened? When war were declared in 1914, Schwarzschild volunteered for the German army and manned weather stations and calculated missile trajectories in France, Belgium, and Russia. It was in Russia that he discovered and published his well-known results in relativity as well as a derivation of the Stark effect using the 'old' quantum mechanics. It was also in Russia that he began to struggle with pemphigus, an autoimmune disease where the body starts attacking its own cells. He was sent home, where he died in 1916 at the age of 42.

The reality is much more sensible than my version, and still a good tale in its own right: a brilliant scientist cut down in his prime, working until the very end. So why did I unconsciously elevate a respectable tale to one of mythical proportions? I think it has something to do with how we view the great scientists of the past. Since they were extraordinary at one thing (some scientific field), we assume they must have been extraordinary in every regard. Therefore, their fates must carry some deeper meaning or symbolism. But it turns out that the reality, while certainly less romantic, may be more satisfying (to some): these men and women whose names live on attached to equations were just regular people who happened to be very good at one thing, namely being scientists.

Or maybe it's just my inability to separate movies from real life. Who knows?

[1] Surely You're Joking, Mr. Feynman is essentially the gag reel for 20th century physics. It can be purchased with money if you're into the whole capitalism thing. Or for all you pinko commies and poor grad students there are two free options. One, assemble approximately one gaggle of undergrad physics majors or Virtuosi bloggers and proceed to give them the shakedown; at least one copy should pop out. Two, look into big government socialism.

[2] See Virtuosi blog post, Future.

[3] Some scholars use this picture as evidence of Schwarzschild's involvement as the nerdiest and most forgotten Marx Brother. There is no evidence to suggest he was, in fact, Poindexter Marx.

[4] Quote (and most facts) lifted from here, take that Wikipedia!

[5] Apologies for the excessive and irrelevant links! [6]

[6] Apologies for the excessive and unnecessary footnotes!

Something Bugging Me

image

Apparently July is a quiet month here at the Virtuosi. We're busy with research, travel, vacation, etc. I, myself, have been busy with only a few of those things, though I've also been studying for my qualifying exam, which is coming up in less than a month. However, that's not the question before us today. Today I'd like to think about the density of bugs in the air.

I was walking outside this past weekend, there was a fierce wind blowing, and twice in five minutes a bug hit my ear. That seemed like a lot. But in an hour of previous walking no bugs hit my ear. How many bugs would there have to be per cubic meter of air to achieve that rate? We'll restrict ourselves to small bugs, like gnats, nothing really mobile like house flies.

We have an observed bug impact rate of 2 bugs/hour. I'd estimate that my ear is \~1"x2", an area of 2 in^2. Let's convert to metric: that's about 13 cm^2. Next I need to know how fast the wind was blowing. I'd guess about 15 miles per hour; it was a good stiff wind off the lake. From here, we just need to write down an equation for the bug collision rate. The simplest theory I can imagine would go something like this: (Bug density)(ear cross section)(wind speed) = (collision rate). In the above I've assumed that the bugs are moving with the wind (hence my initial assumption that the bugs are small, and thus will more or less move with the wind). If you check the above, it has the right units, bugs per time on both sides of the equation.

Now we just need to solve for the bug density, which we'll call B, in terms of the rest of our knowns: ear cross section area A, wind speed v, and collision rate R. This gives $$B=\frac{R}{Av}=\frac{2 \text{ bugs/hour}}{(13 \text{ cm}^2)(15 \text{ mph})}$$ We've got a bit of a units problem with our wind speed in miles per hour and our ear area in cm^2, so I'll convert from mph to cm/hour and come out with a bug density of $$B=\frac{2 \text{ bugs/hour}}{(13 \text{ cm}^2)(2.4\times 10^{6} \text{ cm/hour})}=6.4\times 10^{-8} \text{ bugs/cm}^3$$

That seems like an absurdly small number. Let's convert from cm^3 to m^3. That gives us 0.064 bugs/m^3. This still seems rather low to me. Put another way, we'd need \~16 m^3 of air to find 1 bug (that's only \~160 bugs per Olympic swimming pool!). What might be amiss with the estimate? Well, I'm fairly happy with the ear area. I could be misremembering the length of my walk. More importantly, I'd wager that not all of my walk was perpendicular to the direction the wind was blowing. If I were at an angle to the wind I'd have a commensurately smaller ear area presented to the wind. Of course, this will at best probably gain us around a factor of two. I'm not very familiar with wind speeds, so maybe we could pick up another factor of two there.

Still, I imagine that the main problem with my calculation is that I assumed that all of the bugs were stationary and would be blown against my ear. Even little bugs are very mobile (as you well know if you've ever tried to swat them), and were probably actively trying to avoid my ear for the most part. Only the really weak, senseless (literally), or stupid bugs got blown against my ear, and apparently those aren't that common. How common are they? You tell me. Make an estimate for the bug density of air, compare it to my senseless bug density, and find the percent of bugs that fit my description! Or not, if you only come here to watch the rest of us toil over our calculations.
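If you'd like to fiddle with the numbers yourself, here is a minimal sketch of the estimate in Python; the rate, ear area, and wind speed are just my guesses from above.

```python
# Back-of-the-envelope bug density: B = R / (A * v)

R = 2.0                  # observed collision rate, bugs per hour
A = 13e-4                # ear cross section, ~13 cm^2 expressed in m^2
v = 15 * 1609.34         # wind speed, 15 mph in meters per hour

B = R / (A * v)          # bugs per cubic meter of air
print(f"bug density      : {B:.3f} bugs/m^3")      # ~0.064
print(f"air per bug      : {1 / B:.0f} m^3")       # ~16
print(f"per Olympic pool : {2500 * B:.0f} bugs")   # ~160 (pool ~2500 m^3)
```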

Zombpocalypse

Here at the Virtuosi, we're concerned. We are concerned that perhaps the world is really not ready for a zombie apocalypse. You know, the kind of zombie apocalypse that you may have seen in such classics as "Night of the Living Dead", "Shaun of the Dead", or perhaps the even more recent "Zombieland" (sweet cameo, by the way). The kind of zombpocalypse that could leave major cities void of life and the country plagued with the undead. Well, Alemi and I were curious as to how likely such a pandemic was to occur and what it would look like in the simplest of models. In typical Virtuosi fashion, we threw some physics at it and this is what we came up with.

The Model: Taking cues from the epidemiologists, we feel that the spread of zombie-ism might fall into a compartmental model much like the heavily studied SIR model. There, susceptibles, S, get infected, I, and can subsequently infect other susceptibles or recover, R, and not play a part in the disease dynamics anymore. Our analogous model is the SZR model, in which there are once again susceptibles, S, that can be bitten by a zombie to become another zombie, Z. However, there is one huge difference: a susceptible must then kill a zombie (by destroying the brain, of course) to make it removed, R. Hence, we have the SZR disease model. As with the compartmental models, one can write down a set of differential equations that govern the dynamics of the disease in a population. However, these might not be the best option for simulation, as the population is represented as a continuous variable. After all, there can't be 1/100th of a zombie roaming around biting ankles and causing a new epidemic after all of its partners have been killed off. To get around this, we set up a discrete contact network where each node is in one of the three states of the model and there are set probabilities for neighboring states interacting. In particular, if a susceptible has a zombie neighbor, then there is a probability b that the zombie will bite the susceptible, turning the node into a zombie. In the same time step, there is a probability k that the susceptible will kill the zombie, putting it in the removed class. Both of these actions can happen in one time step of the simulation. However, the two actions can't happen sequentially to the same node. That is, a newly bitten susceptible must be a zombie for at least one time step before it can be removed, adding a latent period to the infection.

Simulations: With these rules, we created networks with average connectivity k=8 (probably way too low) with up to N=40,000 susceptibles. Patient zero was seeded at a random location and the simulation run until there were no more susceptible-zombie neighbor pairs. A typical simulation looks something like: Here, the blue represent the susceptibles and the red zombies. More subtly, the removed are shown in black. You may also notice that the zombies actually wrap around as they spread, due to periodic boundary conditions. And so we ran many of these simulations for various values of N, b, and k to see what type of dynamics arise. As the ratio b/k is increased and zombies become proportionally better at biting susceptibles than susceptibles are at killing zombies, we see a transition from little to no infection to mass pandemic in what appears to be a sort of percolation transition.

image

In the figure, red is the fraction of zombies in the final population and green is the amount of time that the simulation took to run. The spike in time around the critical point shows critical slowing down, as also seen in the length of the movie above. Also, the ending configuration of zombies shows a fractal-like structure near the critical point. Using the box-counting method, we actually calculated the fractal dimension:

image

This helps us pinpoint the epidemic transition at a ratio of b/k = 1.4. This means, for our particular network type and assumptions about the infectious nature of zombie-ism, that zombies must be 40% better at biting than we are at lopping heads for an epidemic to take off. Even then, there must be a much larger ratio of b/k for more than 50% of the population to become part of the undead. I trust that we the living would be at least as capable as the undead, so any potential zombie scenario would seem to be stifled in our model. But then again, what about fast zombies? Now those things are scary.

Update: I had a quick thought that mobile zombies might be a bit more realistic. I made a quick movie where zombies pursue the closest susceptible while the susceptibles run away from the zombies at the same speed. You will see an area clear around the initial infection as the zombies give chase. By accident, two zombies will collide, providing the spark for an epidemic. An explosion of zombies results. More to follow!
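Our actual simulation code isn't shown here, but a minimal sketch of this kind of lattice SZR model might look like the following. It uses a square grid with 8 neighbors and periodic boundaries; the particular values of b, k, and the grid size are just placeholders.

```python
import numpy as np

S, Z, R = 0, 1, 2   # node states: susceptible, zombie, removed

# 8 nearest neighbors on a square lattice, so every node has connectivity 8.
SHIFTS = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def has_neighbor(grid, state):
    """Boolean array: does any of the 8 periodic neighbors have `state`?"""
    out = np.zeros(grid.shape, dtype=bool)
    for shift in SHIFTS:
        out |= np.roll(grid, shift, axis=(0, 1)) == state
    return out

def szr_step(grid, b, k, rng):
    """One synchronous time step of the SZR dynamics."""
    bitten = (grid == S) & has_neighbor(grid, Z) & (rng.random(grid.shape) < b)
    killed = (grid == Z) & has_neighbor(grid, S) & (rng.random(grid.shape) < k)
    new = grid.copy()
    new[bitten] = Z   # decided from the old grid, so a freshly bitten node
    new[killed] = R   # survives at least one step before it can be removed
    return new

def outbreak_fraction(n=200, b=0.5, k=0.35, max_steps=10000, seed=0):
    """Fraction of the population ever zombified, starting from one patient zero."""
    rng = np.random.default_rng(seed)
    grid = np.full((n, n), S)
    grid[n // 2, n // 2] = Z
    for _ in range(max_steps):
        if not ((grid == S) & has_neighbor(grid, Z)).any():
            break                      # no susceptible-zombie pairs left
        grid = szr_step(grid, b, k, rng)
    return ((grid == Z) | (grid == R)).mean()

if __name__ == "__main__":
    for ratio in (1.0, 1.4, 2.0):      # sweep b/k across the transition
        print(ratio, outbreak_fraction(b=0.35 * ratio, k=0.35))
```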

The Impossibility of Why

So I think we've all been rather busy here. Hence the lack of posts. I'm going to try to keep this one short but sweet.

A lot of people think that physics tells us why things happen. Why is the sky blue? Why does the earth orbit the sun? Why does copper transmit electricity so well? These all seem like perfectly reasonable questions to ask. Questions that we, as physicists, can answer. Yet I entitled this post the impossibility of why. In general, questions about why are not good questions for physicists. More accurately, we answer questions about how. Or what. What phenomenon causes us to see the sky as blue? What forces cause the earth to orbit the sun? How does copper transmit electricity so well? In general, we can't answer a question of why. A friend once asked me (at the end of a talk I gave, no less) 'why can't two electrons be in the same quantum state, while two bosons can?' That's an example of a question that we as physicists can't answer. We have no idea why that is. The best answer we can give is 'I don't know. However, experiment tells us that's what happens.'

In general, physics cannot be built in a vacuum. We cannot sit down and write down the laws of nature. Not without looking at nature. That is the difference between physics and mathematics. Mathematicians can construct arbitrary logical systems. Anything with a given set of logical axioms that is consistent can be a valid system of mathematics. Of course, only a few are useful systems. This allows mathematicians to generate exceedingly interesting playgrounds for the mind. Physics is meaningless without experiment. We have to test our theories against the world as we know it. We can no more explain why F = ma or why the Pauli exclusion principle is the way of the world than we can answer why the universe exists. We can (at least we hope) tell you how the world works, what to expect, cause and effect. But why we found this particular set of rules and not another, different but consistent system, is something we can't, and usually don't try to, answer.

In many ways, I think this reduces much of the perceived conflict between religion and science. As long as one doesn't read God's (or gods', depending on your religion) word literally, religion is an attempt to explain why. Physics is an attempt to explain how. There's no inherent conflict in that. At least, that's my thinking.

Life as an Experimenter - Reflections

I've been doing experimental physics for about three years. I started during my sophomore year in college, went straight to graduate school, and continued here (albeit not quite immediately). There are people out there who have been doing this for a lot longer than I have, but I've gained a few insights, by working with said people and through my own experiences in the lab. I thought I'd try to share a few of these, to help illuminate the past few days I blogged about.

Something Always Goes Wrong

It's the nature of experimental science. Something always goes wrong. It might be your equipment, it might be your sample, hell, it might be something personal. But something will go wrong. Wednesday was a prime example. Not only did I get soaked by rain and temporarily lose my bike lock, but it took us 9 hours to really start taking data due to equipment problems. I think this is endemic in what we're doing. We're trying to investigate something that no one has ever done (at least not quite exactly what we're doing). This means that we're usually pushing the bounds of our equipment and setup. Not just that, but most of the equipment is exceedingly complex, giving it many more places it can break down.

It's An Emotional Roller Coaster

Sometimes nothing works. At all. You try and try and you just can't make it work. Then sometimes everything is going well. You're getting data, and not just that, but the data looks good. Whether it's confirming or destroying your expectations, you're finding out something new about the world. It's exciting. You find something interesting and start chasing it. However, you have to be careful. Often the result is not what you think you're seeing. It's a false signal from your equipment, or attributable to the background material doing something strange. Or you've destroyed the sample, and that's why it's not doing what you expect. You go up and down quite a bit.

It's Exhausting

Part of it is the long hours. But really, 16 hour days are not that bad. Sleeping 6.5-7 hours. I can usually do that, no problem. So what's different when it's research? You have to be thinking all the time. For 16 hours. In general, you can't just push the button and wait. You need to look at the data while it comes in, to determine what data to take next. Combine that with the emotional up and down . . . even though you're not necessarily physically exhausted, you're mentally worn out.

You Get More Questions Than Answers

My undergraduate advisor would always say that when you're doing research you generate three questions for every question you answer. That's more or less true. It is very rare to go out and answer all the questions you have. It is even more rare to not get more questions. There's always more to learn. In many ways, that's great, though a little frustrating. If we ran out of questions, we'd probably run out of a job. But there's always another twist that you didn't expect nature to have, something else surprising you. That's why we do it. There's no finality in research. We can't get all the answers, ever.

A lot of our time isn't data taking, but making/setting up our equipment, analyzing data, and such things. Still, I hope that this series of posts has given you a taste of what we scientists may do when we're taking data.

Life as an Experimenter - Day Three

Today marks the third and final day in our beam time at CHESS. I think the circles under all of our eyes may take away from the glamor a bit, but the right makeup specialist could fix that. If they ever make a movie about us, which they should, I want to be played by some really awesome British actor. I think that would be about right. Someone with a strong jaw. In case it's not obvious, lack of sleep is getting to me a little bit. Read on for the final few hours of our experiment.

Friday 6/18

9:07am - Not sure if I didn't turn on my alarm, or turned it off without noticing. But somehow I manage to jerk awake not too much later than planned. Dragging a little bit. Get some food. No time for shower, do that in a few hours. Only have to get through to noon.

9:41 - Leave house. Hop in car. Realize as I drive through that there's a stop sign at a corner that I didn't see last night, mostly hidden by trees. Very glad there wasn't any traffic.

9:54 - Arrive back at CHESS. Set up laptop to stream the US game. Get a report from Matt and Ryan. Looks like the data that was exciting last night is actually repeatable, and though it is a small effect, probably real. This is a very nice bonus for our experiment; we were mostly expecting null results from that phase of things. Plan is to test that more, run some experiments to make sure we're not seeing other, more mundane, effects that would look the same with our collection methods.

10:25 - I take over crystal mounting. Matt is busy with data, and Ryan is not having luck with crystals this morning. That's how it goes. US is down a goal. Morale low for that reason.

10:45 - Data taking going okay. Still seeing the effect, though small. US down 2 goals. Not good at all.

11:00 - US down 1 goal! Oh yes. Some experiment. We're almost done, not paying too much attention any more. Everyone is slowing down.

11:20 - CHESS staff members keep on wandering into our area and staying to watch the game. Kind of fun. One got us an ethernet cable to eliminate some of the pauses we were getting in the streaming using the wireless. Set up a final run, to determine if the effect is from what we suspect (hope?) or from mundane causes. Need to pack up and be out in 40 minutes.

11:45 - Experiment finishes. US is robbed! Should have been a victory.

11:50 - Mad scramble to pack up equipment. Crystals, mounts, computers, etc.

11:58 - Off the beam.

12:05pm - Parking permit returned. On my way home. Taking the rest of the day off.

I hope that you, dear reader, have enjoyed this brief taste of how experiments go. I tried to let the emotions reflect what I actually was feeling at the time, but often for the most harrowing/exciting parts there was no time to write, so much of this has been upon reflection. Tomorrow I hope to offer a few insights into how experiments usually seem to go. A lot of people have never done experimental science, and experimentation in movies is very different from what we actually do. Now it's time to sleep. And maybe watch the England/Algeria game.

Life as an Experimenter - Day Two

Today I'm continuing my series on the life of an experimenter. Today is the longest day, since we have beam time for all 24 hours. And after the setbacks of yesterday, we feel compelled to use every last bit of it. Read on for more tantalizing glimpses of the grit behind the glamor of the rock-star-like lifestyle of an experimental physicist. Today had a very different feel. Things were working, and that meant a lot of down time waiting for data to collect. Despite the excitement of data coming in, I was a little bored at points. Lots of internet use.

Thursday 6/17

7:15am - Alarm goes off. 6.5 hours of sleep? Up quickly, and eat and shower. Run into a couple of my housemates that I normally don't. I don't feel too tired. Which is good, because I don't usually do caffeine, and if I had to now, it might well mess me up.

7:50 - Out of the house. Taking my car, since my bike isn't working.

8:05 - At the F1 station. Matt is there, awake, looking not too much worse for wear. Ryan comes in right behind me. I have to get a parking permit, which CHESS kindly provides for visiting researchers. Since my car isn't registered on campus, I qualify as a visiting researcher.

8:10 - Matt briefs us on the progress of the night. Some good data was taken, and we've switched from the helium to liquid nitrogen (LN) cooling. He lays out the set of experiments we should try to run today. Obvious that he needs sleep.

8:45 - Matt goes home. Ryan and myself are on our own. Start a data run.

9-10 - All the staff are checking in on us. After the problems of yesterday, they want to make sure everything is running smoothly. Which it is.

10:30 - Data running smoothly. When everything goes well, for this particular type of data set, there's not much to be done while it's coming in. Put on the Greece vs. Nigeria world cup game, streaming on my laptop.

11:30 - Data run finished. Load up a new crystal smooth as can be. Start another data set. Everything working like a charm.

12:30pm - Lunch break. Much more relaxed today.

2:00 - Data run done. Switching to a different type of data run.

2:15 - New run starts. Smooth sailing so far.

2:30 - Put on Mexico vs. France world cup game.

3:30 - Attempting to start a new run. Took us three or four tries to find a new crystal that was good. Run good to go.

4:15 - We're trying to run at 80K, but we're having problems with temperature stability. We talk to Ulrich, and he thinks that we've got a partial ice plug in the LN line, reducing the LN we can draw through. We can either waste two hours having it replaced, or run with it as it is. No guarantee that it won't get worse. We decide to run with it; we've lost too much beam time already. Our lowest temperature seems to be \~90K.

5:15 - Data run finishes. New crystal mounted. New data run started.

6:30 - Data run finishes. Two attempts before we get a good crystal. Data flowing smoothly.

7:10 - Matt returns. There is much rejoicing.

8:00 - Trying room temperature data taking. Matt wants us to learn all the tricks this run, it seems. Having trouble getting good crystals.

10:00 - Room temperature data giving us a few interesting results. Trying a full run, but we're going to kill the crystal long before that. X-rays will kill proteins (that's why we avoid them, ourselves). Faster at room temperature than at 100K.

10:45 - Dinner from the stuff I packed this morning. I do love microwaved leftovers. No rush, though. All three of us are in the lab, and Ryan and Matt can handle whatever comes.

11:25 - Lots of trouble getting good crystals (4 or 5 attempts?). 220K is a hard temperature. Finally gave up and went to 240K.

11:55 - Ryan goes home. He'll be back around 6 tomorrow. I'm sticking it out for a while. Interesting data coming in around 240K. If we're skilled and lucky, we'll get something around 220K also.

Friday 6/18

12:20am - Potentially exciting results!

12:45 - Reproducible potentially exciting results!

1:15 - My brain can't do simple calculations right now, but we're changing temperatures and trying out another look for our result at 240K. I hope it's there!

1:20 - Morale lower. Possibly the effect is from a perfectly reasonable, explainable thing. What a great hour though!

1:45 - Data inconclusive but leaning towards no. Heading home. Leave Matt all by his lonesome.

1:55 - Home. Sleep time. Alarm set for 9.

Life as an Experimenter - Day One

I'm an experimental physicist. If you think this sounds like a job second in glamour only to rock star, you would be right. Just like being a rock star, you have to deal with the people, the shows, the lights, the groupies . . . okay, maybe I'm lying about the groupies. Unless you're Brian Greene. Also similar to a rock star, no one really knows what it is we do behind the scenes (when we're not touring the nation or publishing papers). I'd like to pull back that curtain a little bit.

A bit of background. For this data run we're looking at the structure of protein crystals. The basic idea is that if you have a bunch of proteins, you can create a crystal out of them using various synthesis techniques. If they're formed right, they look similar to the crystals you are more familiar with: quartz, diamond, emerald, etc. We're interested in how the proteins are held in the crystal, that is, what kind of structure the protein crystal has. I'll talk more about this at some other point. Our method for examining this is using x-rays. Similar to a medical x-ray, we shoot x-rays at our sample and look at the transmitted images. Of course, instead of something like this: image we see something like this: image which is a little bit harder to interpret. Nevertheless, we've got some pretty nifty software that will do the job for us. The unfortunate part about using x-rays is that producing research-grade x-rays takes really big, expensive facilities, so we have a limited amount of time we get to use them. Right now, we've got 48 hours of beam time, so we want to take data for as many of those 48 hours as possible. For those interested, we're using CHESS at Cornell, station F1.

Wednesday, 6/16

9:30am - Get up. I was warned by the postdoc in my group to sleep in as much as I could. Our data run starts at noon, and goes for 48 hours. Who knows how much sleep we'll get in those 48 hours. I eat breakfast, and realize that I've left my lab notebook in the lab. I need to swing by there and pick it up.

10:52 - Out of the house. My goal is to be at CHESS by 11. As this is my first time using the facility, I need to get a safety tour before I can use the beam line. I hop on my bike and speed towards the physics building.

10:54 - Downpour. I get soaked. I'm wearing a rain jacket, but my jeans and shoes are soaked through.

10:55 - A pocket on my backpack comes unzipped, dumping my water bottle (and, as I find out later, my bike lock) into the road. I stop and retrieve the water bottle.

11:02 - Arrive at physics building. Hasten to the lab, dripping water. Grab notebook. Receive call from Ryan, another graduate student in the lab, saying he's at CHESS and ready to take the safety tour when I am.

11:12 - Arrive at CHESS. Realize I don't have my bike lock anymore. Call Ryan and tell him to take the tour without me; I'll get to it later. Hop on bike, ride route in reverse. Find lock right where the water bottle fell out.

11:25 - Back at CHESS. Check in. Get given safety tour, radiation badge. Meet up with Ryan. Good times.

11:59 - Matt, the postdoc in my lab, with experience at CHESS and with the samples and equipment, arrives.

12:00pm - Beam time starts. We're not ready. For this run, we're using liquid helium to cool our sample. We have to get that set up first.

1:45 - With the aid of Ulrich, one of the Research Associates at CHESS, we have the liquid helium stream set up to cool our sample.

2:00 - Matt trains Ryan and myself how to mount samples for the beam.

2:15 - Matt mounts a sample, and trains Ryan and myself how to operate the beam controls.

2:30 - Start taking data.

2:45 - Crystal turns out to be not very crystalline. Rinse and repeat.

4:00 - After 7 increasingly frustrating attempts with bad crystals, the 8th turns out to give us a good signal. Looks like we're back in business.

4:15 - Getting anomalous signals from our sample. Every 5th image has about half the intensity of the others. Also getting some weird tiling in the image. No one knows what's going on. We call in Ulrich and the beam operator.

4:30 - In the midst of trying to resolve the anomalous signals, the computer we're running the experimental control software on goes down. Requires calling in more tech support.

4:50 - I take a break to eat. First food since breakfast. Two sandwiches and a half-frozen banana (kept it near the back of the fridge in the CHESS lounge; I learned my lesson).

5:10 - Food eaten. Computer back up. Data taking ready. Still no resolution for the anomalous signal. While the experts continue to troubleshoot, Matt attempts to figure out if we can run any of our experiments with the data as bad as it currently is.

5:45 - Still no solution. Matt has determined that none of our planned experiments will work with the data as bad as it is. Suspicion of a bad detector (a million dollar piece of equipment). Morale is low.

8:00 - After continued troubleshooting, Ulrich is convinced that the problem is a bad shutter for the x-ray beam (just like a camera, you achieve a certain image exposure by letting the beam hit the sample/detector for a certain amount of time).

8:30 - Technician arrives to swap shutters. Operation successful! Data looks good.

9:00 - After some test runs, we start taking real data. Morale is high.

11:15 - Data coming in with no sign of stopping. Ryan and I head home. Matt has the night shift; we're to relieve him at 8 the next morning.

11:20 - Discover that my bike has somehow broken while sitting on the bike rack. The collar that keeps the handlebars in the frame is loose, and the handlebars no longer connect with the front wheel. Bike is unrideable.

11:25 - Decide bike is unfixable with current tools. I have to walk it home.

11:40 - Home. Attempt to fix bike. Discover none of my wrenches are quite large enough.

11:45 - Food. Microwave leftovers. Discover one of my housemates has left fresh chocolate chip cookies on the counter for the rest of the house. Bless her a thousand times over and eat two.

Thursday 6/17

12:15am - Set alarm for 7:15am, and hop into bed.

How Long Can You Balance A (Quantum) Pencil

image

Sorry for the delay between posts. Here in Virtuosi-land, we've all begun our summer research projects, and I think we've just become a little bogged down in the rush that is starting a summer research project. You feel as though you have no idea what the heck is going on and just try desperately to keep your head up as you hit the ground running, but that's a topic for another post. In this post I'd like to address a fun physics problem.

How long can you balance a pencil on its tip? I mean in a perfect world, how long?

No really. Think about it a second. Try and come up with an answer before you proceed. What this question will become by the end of this post is something like the following:

Given that Quantum Mechanics exists, what is the longest time you could conceivably balance a pencil, even in principle?

I will walk you through my approach to answering this question. I think it is a good problem to illustrate how to solve non-trivial physics problems. I will try and go into some detail about how I arrived at my solution. For most of you this will probably be quite boring, so feel free to skip ahead to the last section for some numbers and plots.

Finding an Equation of Motion

The first thing we need to do is find an equation of motion to describe our system. Let's consider the angle theta that the pencil makes with respect to the vertical, and let's treat this as a torque problem. Dealing with rotating systems is almost identical to dealing with free particles in Newtonian mechanics. Instead of Newton's second law, relating forces to acceleration, $$ F = m \ddot x $$ we just replace force with its rotational analogue, torque; acceleration with angular acceleration; and mass with the moment of inertia: $$ T = I \ddot \theta $$ (I've taken the usual physics notation here; dots represent time derivatives.)

We need to determine the torque on and the moment of inertia of our pencil. At this point I need to model the system. I need to break up the real world, rather complicated idea of a pencil, and turn it into an approximation that retains all of the important bits but enables me to actually proceed. So, I will model the pencil as a uniform rod with constant mass density. In doing so, I can proceed. The moment of inertia of a rod about its end is rather easy to calculate. If you are not familiar with the result, I recommend you try the integral yourself: $$ I = \int r^2 \ dm = \frac{1}{3} m l^2 $$ where m is the total mass of my pencil and l is its length. I will take a pencil's mass to be 5 g and its length to be 10 cm.

Now the torque. The only force that exerts a torque about the pivot is gravity, which acts from the center of mass, which for my model of a pencil sits at half its length. I additionally wish to express the torque in terms of the parameter I decided would be useful, namely theta, the angle the pencil makes with the vertical. I obtain $$ T = r \times F = \frac{1}{2} m g l \sin \theta $$

Great, putting the pieces together we obtain an equation of motion for our pencil: $$ \frac{1}{2} m g l \sin \theta = \frac{1}{3} m l^2 \ddot \theta $$ Rearranging, I get this into a nicer form: $$ \ddot \theta - \frac{3}{2} \frac{g}{l} \sin \theta = 0 $$ In fact, I'll utilize another time-honored physics trick of the trade and simplify my expression further by making up a new symbol. Since I've done these kinds of problems before, I can make a rather intelligent replacement, $$ \omega^2 = \frac{3}{2} \frac{g}{l} $$ obtaining finally $$ \ddot \theta - \omega^2 \sin \theta = 0 $$ And we've done it.
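Just to put numbers on these symbols, here's a throwaway sanity check of the quantities above for the 5 g, 10 cm pencil I chose:

```python
import numpy as np

m = 0.005                      # pencil mass, kg (5 g)
l = 0.10                       # pencil length, m (10 cm)
g = 9.81                       # gravitational acceleration, m/s^2

I = m * l**2 / 3               # moment of inertia about the tip
omega = np.sqrt(1.5 * g / l)   # omega^2 = (3/2) g / l

print(f"I     = {I:.2e} kg m^2")   # ~1.7e-05
print(f"omega = {omega:.1f} 1/s")  # ~12, so the natural timescale 1/omega ~ 0.08 s
```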

Looking at the equation of motion

Now that we've found the equation of motion, let's look at it a bit. First off, what does an equation of motion tell us? Well, it tells us all of the physics of our system of interest. That little equation contains all of the information about how our little model pencil can move. (Notice that, while I haven't yet been explicit about it, in my model of the pencil I also don't allow the tip to move at all; the pencil is only able to pivot about its tip.) Great.

A useful thing to do when confronting a new equation of motion is to try and find its fixed points, i.e. try to find states of your system that do not evolve in time. How can I do that? Sounds complicated. In fact, I'll sort of work backwards. I want to know the values that do not evolve in time, meaning of course that if I were to find such a solution, all of the terms that depend on time would be zero. So, if such a solution exists, for that solution the derivative term will vanish. So the solutions have to be solutions to the much simpler equation $$ \sin \theta = 0 $$ which we know the solutions to. In fact, let's be a little smart about things and only worry about theta = 0 and theta = pi. Thinking back to our model, this suggests a pencil being straight up (theta = 0) and straight down (theta = pi). These are the fixed points of the equation.

The second one you are familiar with. If, instead of balancing a pencil 'up', you try to balance it 'down', you know that if you start with the pencil pointing straight down it stays that way and doesn't do anything interesting. But what about that first solution, theta = 0? That indicates that if you could start this model pencil exactly straight up, it would stay that way forever, and also not do anything interesting. Oh no, you cry. It seems as though we've already answered the question. How long can you balance a pencil? It looks like you could do it forever if you did it perfectly. But you and I both know that is impossible. You can't ever balance a pencil forever. I've never done it, and tonight I've spent a lot of time trying. So what went wrong?

When your approximations fail

So what went wrong again? It seems like I've gotten an answer, namely that in my model you could, at least in principle, balance your pencil forever. But you and I both know you can't. Something is amiss. Hopefully, the first thought that occurs to you is something along the lines of the following.

Of course you dummy! You could in principle balance a pencil forever, but in the real world, you can't set the pencil up standing perfectly straight. Even if it's tilted just a little bit, it's going to fall. This is exactly the problem with you physicists, you don't live in the real world!

Whoa, let's not be so harsh there. I made some rather crude approximations in order to get such a simple equation. You are allowed to make approximations provided (1) they are still right to as many digits as you care about, and (2) you keep in mind the approximations you made, and think a bit about how they could go wrong. So, before we do anything too drastic, let's go with your gut. I agree, it seems like if the pencil were at any small angle, it ought to fall. Let's double check that our equation does that. So for the moment imagine theta being some small number. In fact, I will use the usual notation and call it epsilon. What does our equation say then? $$ \ddot \theta = \omega^2 \sin \epsilon $$ Let's make another approximation (I know, I know, we've already run into trouble, but bear with me). If epsilon is going to be a really small number, then we can simplify this equation even more. That sine being in there is really bugging me. Sines are hard. So let's fix it. Can we say something about how sine behaves when the angles are super small? In fact we can. Such an approach is super common in physics.

A short side comment on Taylor Series

Imagine a function. Any function. Picture the graph of the function, i.e. imagine it as a line on a graph. No matter what function you imagine, if you zoom in far enough at any point, that function ought to look like a line. Seriously. Zoom out a little bit and it will look like a line plus a parabola. Zoom out a little more and it will look like a cubic polynomial. You can make these statements precise, and that's the Taylor expansion. But the idea isn't much more complicated than what I've described. Taylor expanding the sine, we obtain $$ \ddot \theta \approx \omega^2 \left( \epsilon - \frac{\epsilon^3}{3!} + \cdots \right) $$ So if you are at really small angles, sin(x) looks just like x. What's really small? Well, as long as the x^3 term is too small for you to care about. For me, for the rest of the problem, that will be for angles less than about 0.1 radians, for which that second term is about 0.00017 radians or 0.01 degrees, which is too small for me to care about.
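You can check the "too small to care" claim numerically; the 0.1 rad entry below reproduces the 0.00017 rad correction quoted above (the other angles are just for comparison):

```python
import numpy as np

# How far does sin(x) stray from x at "small" angles?
for x in (0.5, 0.1, 0.01):   # radians
    print(f"x = {x:<5}  x - sin(x) = {x - np.sin(x):.2e}   x^3/6 = {x**3 / 6:.2e}")
```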

Coming back to the approximation bit

Anywho, for really small angles, our equation of motion is approximately $$ \ddot \theta = \omega^2 \theta $$ So, notice for a second that if theta is positive, then since omega^2 has to be positive, our angular acceleration is going to be positive. So your intuition was right. If your pencil ever gets to any positive angle, even the smallest of angles, then our angular acceleration is positive and our pencil will start to fall down.

So, the next question becomes: how can we capture this bit of reality? It looks like my model has this unphysical solution. How can I make it more real-worldy? Ah, this is the real fun of physics. You could go in any number of directions. Perhaps you could try to estimate how well you can actually prepare the angle of the pencil, perhaps you could ask whether air molecules bouncing into the pencil would make it fall, perhaps you could wonder whether adding more realism to the moment of inertia would make the pencil fall easier, maybe you could wonder whether the thermal motion of the pencil would make it fall? Maybe you could consider the pencil as an elastic object and consider it vibrating as well as pivoting. Maybe you could model the tip as being able to move? Maybe you could introduce the gravitational pull of the sun? Or the moon? Or you? Or the nearest mountain? The sky's the limit. So what am I going to do? Quantum mechanics. Seriously, bear with me a bit.

A little preliminaries

Before proceeding any further, let's actually solve the equation of motion we just got for the smallest angles. To remind you, the equation I got for my model of a pencil in the limit of the smallest angles was $$ \ddot \theta = \omega^2 \theta $$ This is an equation I can solve. It's a very common differential equation, one that we use and abuse in physics, so I know the solution by heart. So let's write it down. First, just to let you know: this sort of equation, with the second derivative of a thing linearly proportional to the thing itself, always gives solutions in pairs; depending on the sign of the constant, they are written either as sines and cosines or as decaying and growing exponentials. Naturally, in order to solve a second-order differential equation, we need to specify two initial conditions. In this case I will call them theta_0 and dot theta_0, representing the initial position and initial angular velocity. In this form the solution can most easily be written in terms of the exponential pair associated with sine and cosine, sinh and cosh (you can read more about them here; they are really neat functions). The solution is $$ \theta(t) = \theta_0 \cosh \omega t + \frac{\dot \theta_0}{\omega} \sinh \omega t $$ which, as you could probably convince yourself, grows exponentially for any positive theta_0 or dot theta_0.
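To see just how violent that exponential growth is, here is the solution evaluated for a made-up, absurdly small initial tilt (the 1e-15 radian starting angle is purely illustrative):

```python
import numpy as np

omega = np.sqrt(1.5 * 9.81 / 0.10)   # same 10 cm pencil as before
theta0, thetadot0 = 1e-15, 0.0       # hypothetical, absurdly small initial tilt (rad)

for t in (1.0, 2.0, 2.5, 3.0):
    theta = theta0 * np.cosh(omega * t) + (thetadot0 / omega) * np.sinh(omega * t)
    print(f"t = {t:.1f} s   theta = {theta:.2e} rad")
# By t ~ 3 s theta is already past pi/2: the pencil has long since fallen over.
```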

Abandoning Realism

At this point of considering the question, I turned down a different route. I don't really care about balancing pencils on my desk. You see, I know a curious fact. I know that in quantum mechanics there is an uncertainty principle which says that you cannot precisely know both the position and momentum of an object. This of course means that even in principle, since our world is governed by quantum mechanics, I could never actually balance even my model pencil forever, because I could never prepare it with perfect initial conditions. The uncertainty principle tells us that the best possible resolution I could have in the position and momentum of an object is set by Planck's constant: $$ \Delta x \Delta p \geq \frac{\hbar}{2} $$ This has to be true for our pencil as well. In fact, I can translate the uncertainty principle into its angular form $$ \Delta \theta \Delta J \geq \frac{\hbar }{2} $$ where theta is our theta and J is the angular momentum, which for our pencil we know is $$ J = I \dot \theta = \frac{1}{3} m l^2 \dot \theta $$ So the uncertainty principle for our pencil is $$ \Delta \theta \Delta \dot \theta \geq \frac{3 \hbar }{2 m l^2 } \approx 3.2 \times 10^{-30} \text{ Hz} $$ So what? So, I'm going to approximate the effect the uncertainty relation has on our pencil problem by saying that when I start off the classical mechanical pencil, I'm going to require that my initial conditions satisfy the uncertainty relation: $$ \theta_0 \dot\theta_0 = \frac{3 \hbar }{2 ml^2} $$ which we decided is going to mean that our pencil has to fall. The real question is: how long will it take this pseudo-quantum-mechanical pencil to fall? In other words, the question I am really trying to answer is:

Assuming a completely rigid pencil, which you place in a vacuum and cool down to a few millikelvin so that it is in its ground state, roughly how long will it take this pencil to fall?

Do it to it

So let's do it. This is going to be a bit quick, mostly because it's getting late and I want to go to bed. But the procedure is kind of straightforward now. I need to choose initial conditions subject to the above constraint, figure out how long a pencil with those initial conditions takes to get to theta = pi/2 (i.e. fall over), and then do it over and over again for different values of the initial conditions. So, the first thing to do is figure out how to pick initial conditions that satisfy the constraint. I'll do this systematically by parameterizing the problem in terms of the ratio of the initial conditions, i.e. let's define $$ \log_{10} \frac{\theta_0}{\dot \theta_0} = R $$ where I've taken the log for convenience. Now, figuring out how long the pencil takes to fall is in principle just a matter of numerically integrating forward the full equation of motion $$ \ddot \theta = \omega^2 \sin \theta $$ where I need to do it numerically because the sine makes this equation too hard to solve analytically. In order to do the numerical integration I implemented a Runge-Kutta algorithm in Python. The only problem is that I am dealing with really small numbers, and my algorithm can't play well with those in any reasonable amount of time. But I can solve the equation of motion analytically in the small angle limit, so I actually use the solution $$ \theta(t) = \theta_0 \cosh \omega t + \frac{\dot \theta_0}{\omega} \sinh \omega t $$ to evolve the system up to an angle of 0.1 radians, and then let the nonlinear equation and the Runge-Kutta algorithm take over. The full Python code for my problem is available here (if you want to run it, remove the .txt extension; I did that so that it would be previewable), and a stripped-down sketch of the procedure appears below, after the plots. And what do I obtain? First, looking over 20 orders of magnitude in the ratio of initial conditions:

image

And second, zooming into the interesting region.

image
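Here is the promised stripped-down sketch of the procedure (not my original script): evolve the linearized solution out to 0.1 radians, then hand off to a simple RK4 integration of the full nonlinear equation until the pencil reaches pi/2, scanning initial conditions that saturate the uncertainty constraint. The step sizes and the scan range are arbitrary choices of mine.

```python
import numpy as np

HBAR = 1.0546e-34                      # J s
m, l, g = 0.005, 0.10, 9.81            # pencil mass (kg), length (m), gravity (m/s^2)
omega = np.sqrt(1.5 * g / l)
C = 1.5 * HBAR / (m * l**2)            # uncertainty constraint: theta0 * thetadot0 = C

def fall_time(theta0, thetadot0, theta_lin=0.1, dt=1e-3):
    """Time for the pencil to reach theta = pi/2 from (theta0, thetadot0)."""
    # Phase 1: step along the exact linearized solution until theta = 0.1 rad.
    A, B = theta0, thetadot0 / omega
    t = 0.0
    while A * np.cosh(omega * t) + B * np.sinh(omega * t) < theta_lin:
        t += dt
    theta = A * np.cosh(omega * t) + B * np.sinh(omega * t)
    thetadot = omega * (A * np.sinh(omega * t) + B * np.cosh(omega * t))

    # Phase 2: RK4 on the full nonlinear equation theta'' = omega^2 sin(theta).
    def deriv(y):
        return np.array([y[1], omega**2 * np.sin(y[0])])

    y = np.array([theta, thetadot])
    while y[0] < np.pi / 2:
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dt * k1)
        k3 = deriv(y + 0.5 * dt * k2)
        k4 = deriv(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return t

# Scan initial conditions that saturate the constraint, parameterized by
# R = log10(theta0 / thetadot0), and keep the longest fall time.
times = [fall_time(np.sqrt(C * 10**R), np.sqrt(C / 10**R))
         for R in np.arange(-4, 4.01, 0.5)]
print(f"longest fall time: {max(times):.2f} s")   # lands in the few-second ballpark
```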

So, what is the best time you could balance a quantum mechanical pencil, i.e. what is the absolute longest time you could hope to balance a pencil in our universe? About 3.5 seconds. Seriously. Think about that for a second. Usually you hear about the uncertainty principle and it seems like a neat parlor trick, but something that couldn't influence your day-to-day life, and here is a remarkable problem where, even in the best case, the uncertainty principle puts a hard limit on championship pencil balancing which seems tantalizingly close. And there you have a graduate student working through a somewhat nontrivial problem. I probably went into way too much detail with the basics, but we are still trying to feel out who our audience is. Please leave comments and let me know if I either could have left things out, or should have gone into more detail at parts.

EDIT

As per request, here is how the max fall time scales with the length of the pencil assuming a pencil with uniform density.

image

Plotted on a log-log plot, that is a pretty darn good line. The power law dependence is $$ t \sim l^{0.514} $$ Neat. That exponent makes sense: the fall time is set mostly by 1/omega, which scales like the square root of l, with a slowly varying logarithmic factor from the initial conditions nudging the exponent just above 1/2. Strangely enough, if I trust my numbers here, the longest you could hope to balance a 'pencil' 1 km long would be about 6 minutes. That's a very strange mental picture.

My Pepsi* Challenge

image

The basement of the Physics building has a Pepsi machine. Over the course of two semesters, Alemi and I have deposited roughly the equivalent of the GDP of, say, Monaco into this very same Pepsi machine (see left, with most of Landau and Lifshitz for scale). It just so happens that Pepsi is now having a contest, called "Caps for Caps," in which it is possible to win a baseball hat. There are several nice things about this contest. Firstly, I drink a lot of soda. Secondly, I like baseball hats. So far so good. Lastly (and most importantly for this post), it is fairly straightforward to calculate the statistics of winning (or at least simulate them).

So how does the game work? Well, on each soda cap the name of a Major League Baseball team is printed. All thirty teams are (supposedly) printed with the same frequency, so the odds of getting any particular team are 1/30. You can win a hat by collecting three caps with the same team printed on them. So if I had five caps, the following would be a win: Phillies Cubs Tigers Phillies Phillies, whereas the following would not win me anything: Yankees Rays Blue Jays Orioles Royals, and I would also lose if I had: Mets Nationals Braves Braves Mets. In addition, one in eight caps gets you 15% off on some $50 or more purchase at MLB.com or something like that. For simplicity, I ignored these 15% off guys, but all they will do is push back the number of caps you need by one for every eight purchased. It should not be too difficult to factor these ones in, but I was lazy and I already made all these nice graphs, so...

The first thing I tried to do was just simulate the contest. I wrote a little Python script that randomly generates a team for each cap and counts my wins over a given number of caps purchased. Running this about 100,000 times for every number of caps between 1 and 61 (with 61 guaranteed to win) and averaging over the number of wins, I could determine both the expected number of wins per cap value and the probability of winning at least once. The results are shown below.

image

image

But we can also solve this game exactly. This turned out to take longer for me (I'm bad at probability) than just simulating the darn thing. I had initially included my derivation in the post, but it was long, muddled, and none too illuminating, so I took it out. But I super-duper promise I did it and can post it if you really really want to know. Otherwise, I have just plotted the predicted results below (as a red curve) along with the simulated data (blue dots). Turns out they agree pretty well!

image

Just eyeballing the graph, we see that after 18 or 19 sodas the chances of winning are about a half. Beyond about 25 or so, it appears to be almost 90% that you'll win at least once. In reality, these percentages would occur about 2 or 3 caps later to compensate for the 15% off thingies. So now that we have some numbers and can trust our model a bit, let's see how worth it this contest is for us.

First, we can ask: is this a good way to get a hat cheaper than retail value (about $15)? To quantify "worth it" I have chosen to find the value of winnings (price of hat times expected number of wins) minus the cost of caps (how much I spend on soda). I am fairly embarrassed to say that the cost of each soda is $1.75. See plot below.

image

From this plot, we see that it doesn't become "worth it" (that is, the value of winnings is greater than the cost of sodas) until about 40 sodas purchased. That's a lot! In fact, we see that just when I start feeling pretty confident I'll win something (around 20-25 sodas), I'm right in a big valley of "totally not worth it." So if I just want a baseball hat, I'm better off forking over the $15. Although, one does see from this plot that once I get above about 40 or so sodas, it becomes much more cost effective to just keep buying sodas and winning hats. However, Pepsi tries to stifle this a bit in the rules, stating that "Limit one (1) Official MLB® baseball cap per name, address or household." Unless I either make a lot of friends real soon or develop a creative definition of my address, it looks like I'm out of luck.

But what if I want a hat but I don't want to actually buy soda like a chump? This contest, like many others, needs to have a "No Purchase Necessary" clause for some legal reason or another (so they aren't lotteries or gambling or something). I had assumed they (the nameless overlords at Pepsi) would limit the number of caps possible from just mailing in, but it doesn't seem that way. From the Official Rules, Chapter Two, verses nine to twenty-one: "Limit one (1) free game piece per request, per stamped outer envelope." That sounds to me like you could get as many as you want, as long as you use different envelopes. So we can redo our cost analysis with the cost of getting one cap being the cost of a stamp. Putting the value of a cap at the cost of a stamp (44 cents), we get the following:

image

Zooming in:

image

Hey, that seems worth it! And it should, since from above we saw that the probability of winning after about 30 caps was in the high 90%'s. The cost of getting 30 caps this way is the cost of 30 stamps, which is less than the $15 that the hat is (supposedly) worth. So if I really wanted a hat from this contest and didn't feel like drinking all my money away, I'd just send away for the mail-in pieces. I may try this method, since it seems to be allowed under the rules. Although, even a strict constructionist reading of the contest rules pretty much allows Pepsi to do whatever the heck it wants. Either way, I'll be sure to update to see how well my model holds up!
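My original script isn't reproduced here, but a quick Monte Carlo sketch of the game (30 equally likely teams, one win for every three caps of the same team; the cap counts below are arbitrary examples) could look like this:

```python
import numpy as np

def simulate(n_caps, n_trials=20000, n_teams=30, seed=0):
    """Monte Carlo the cap game: one win for every three caps of the same team."""
    rng = np.random.default_rng(seed)
    wins = np.empty(n_trials)
    for i in range(n_trials):
        counts = np.bincount(rng.integers(n_teams, size=n_caps), minlength=n_teams)
        wins[i] = (counts // 3).sum()          # each complete triple is a hat
    return wins.mean(), (wins > 0).mean()      # expected wins, P(at least one win)

for n in (10, 19, 25, 40):                     # arbitrary example cap counts
    mean_wins, p_win = simulate(n)
    print(f"{n:2d} caps: expected wins = {mean_wins:.2f}, P(win) = {p_win:.2f}")
```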


*NOTE: In no way is The Virtuosi blog affiliated in any way with Pepsi. We may occasionally purchase Pepsi products (like sweet tasting Wild Cherry Pepsi!), but we don't do it because we think it makes us look "cool" or "hip" or "rad" (we KNOW it does). In fact, drinking too much soda can have certain adverse health effects (like making you stronger, faster, and in general more attractive). So if you want to have a Pepsi product (like sweet tasting Wild Cherry Pepsi!) every now and then (literally, EVERY INSTANT), go ahead. But drinking too many Pepsi products (like sweet tasting Wild Cherry Pepsi!) could make you sick (with awesome-itis).