The Physics Virtuosihttp://thephysicsvirtuosi.com/2014-03-20T20:00:00-04:00Special Brain2014-03-20T20:00:00-04:00DTCtag:thephysicsvirtuosi.com,2014-03-20:posts/special-brain.html<p>A colleague (and friend) of mine <a href="https://www.youtube.com/user/thephysicsfactor">(hereafter referred to as Katie Mack the Physics Hack)</a>
produced a fun video last year that tried to show how people sometimes react when she
tells them that she studies physics:</p>
<iframe width="560" height="315" src="//www.youtube.com/embed/AAA25XQKCbY" frameborder="0" allowfullscreen></iframe>
<p>I loved this video because I’ve had a number of experiences like this. My
favorite reaction that I’ve ever gotten happened in 2007. I was on a trip
during college with other college kids, and I was placed in a hotel room with
some guys who went to another school. We met for the first time while
unpacking, and naturally we asked each other what we were studying. Turns out
my new roommate was majoring in international business, something I knew
nothing about. Not wanting to alienate a total stranger I was going to be
sleeping in the same room with, I asked him questions and told him that his
chosen major sounded interesting and important. I told him that I studied
physics, and when he asked me what that meant I told him how I had worked on
modeling <a href="http://en.wikipedia.org/wiki/Cytokinesis">cell division</a>. My new
roommate responded, “You must have a special kind of brain for that.”</p>
<p align="center">
<img src = "/static/images/special_brain/profx.jpg" alt="A special brain." id="back_up1">
<p style="text-align: center; color: #999">A special brain. <a href="#footnote1"><sup>[1]</sup></a></p>
</p>
<p>This anecdote has stuck with me for a couple of reasons— first because
“special kind of brain” is a funny turn of phrase, and second because I think
it’s a perfect example of how an attempt at a flattering response can actually
create some uncomfortable social distance between people.</p>
<p><span class="dquo">“</span>Special kind of brain” was my roommate’s way of expressing how intelligent he
thought I was. (Or, as Zach put it in Katie’s video, “You must be soooooo
smart!”) His reaction hinged upon the assumption that what I was interested in
doing was so far beyond the understanding of ordinary folk that I could be set
apart as a member of an elite group. Instead of being merely complimentary,
his comment held me at arm’s length. His reaction wasn’t something I took
offense to, but it made me uncomfortable to hear that he considered me an
outsider of sorts based on my professed interests. </p>
<p>Speaking more generally, the notion that scientific professionals are set apart
from other people as members of a professional group isn’t so ridiculous. After
all, these days people’s lives are often defined by their careers. (And of
course scientists aren’t the only professionals with associated negative
stereotypes— anyone know any good lawyer jokes?). But to me, thinking of
scientists as some kind of inscrutable cabal of geniuses is an exaggeration.
The truth is, not every scientist is a rocket-powered superbrain. Quite the
opposite— scientists make silly mistakes all the time. Being a scientist is
a technical profession requiring years of training, like law, medicine, or
accounting: a few practitioners really are exceptionally smart,
while most of the rest of us aren’t. </p>
<p>The even more disappointing truth is that being a scientist is usually
pretty mundane. Don’t get me wrong— the long-term goals of making new
discoveries and developing new insights into the world around us are exactly
why I like my job. I just mean that the day-to-day labor involved can be as
tedious as any other profession. I sit in my cubicle and code (debug)
endlessly on my laptop, or I read books and research papers to learn new things
about my field. Most days don’t get much more action-packed than that. In a
lot of ways it’s like any other office job. Aside from the end goal of
research, working as a scientist is not so special.</p>
<p align="center">
<a href = "http://www.imdb.com/title/tt0021884/">
<img src = "/static/images/special_brain/frankenstein.jpg" alt="My office definitely does not look like this."
width="100%" height="auto" id="back_up2"/></a>
<p style="text-align: center; color: #999">I work in a cubicle. Not here.<a href="#footnote2"><sup>[2]</sup></a></p>
</p>
<p>Another reaction that I get when I say I study physics is one of apprehensive
disappointment. (Zach’s pronunciation of ‘ohhhhhh…’ combining equal parts
boredom and distaste was dead on.) I don’t think I need to dwell on this too
long— it is undeniably unpleasant for me when I hear this. Upon hearing that
I’m a scientist, otherwise polite, kind people will suddenly lose their cool
and be unable to hide the fact that my profession conjures up memories of
boredom and frustration. (“Oh, man. I <span class="caps">HATED</span> physics in high school.”) As
Katie Mack puts it at the end, “polite interest is the way to go.”</p>
<p align="center">
<img src="/static/images/special_brain/big-bang-theory5.jpg" width="304" height="228" id="back_up3">
<p style="text-align: center; color: #999"><span class="dquo">“</span>Have you ever seen the Big Bang Theory? <br/>Is that what physicists are really like?<br/> I bet it is. I mean, no offense.” </p>
</p>
<p>There’s another type of off-putting reaction that comes up sometimes, which is
commenting (jokingly or not) that I’m similar to the familiar caricatures of
scientists that appear in popular culture. “You’re just like Sheldon Cooper!”
is a comment I’ve heard more times than I care to say. I know that The Big Bang
Theory is a popular show, but frankly I dislike being associated with
characters that are cartoonishly depicted as condescending and socially
tone-deaf <a href="#footnote3"><sup>[3]</sup></a>. Now, I appreciate that some
people, when meeting others for the first time, like to demonstrate familiarity
with others’ jobs, but to me making pop culture references
to another person’s profession just seems like a bad way to go. I find avoiding them to be a
safe bet when meeting anyone, not just scientists, simply because popular
culture isn’t a great way to learn about anyone else’s job. Try telling the
next lawyer you meet that they remind you of <a href="https://www.youtube.com/watch?v=YPR9ORpwBEU">Saul
Goodman</a>, and see how they react.</p>
<p>So, what is there for physicists (and other scientists) to do when this
happens? The most facile answer to this question is for us to grow a thicker
skin and suck it up. Just ignore it when people have disparaging reactions
upon first meeting us, and find a way to get past this in conversation. The
thing is, I personally am not good enough at hiding my own negative reaction
upon hearing these kinds of obnoxious remarks. Ideally, I’d like to make
conversation easier by finding a way to avoid them altogether. </p>
<p>I can’t change the way other people react to learning about my profession, but
I can change how I present myself. Personally, I have given up on telling
people that I’m in the physics department. Instead, when asked “what do you
study in grad school?” I tell them exactly what I’m up to— I study how
infectious diseases spread through human and animal communities. I’ve found
that I get a much more relaxed reaction when I do this. The same people who may
have uncomfortable reactions to physics have enough familiarity with the idea
of epidemics to be a little more comfortable. And besides, everyone has some
amount of morbid curiosity about the next big plague that’s going to kill us
all. (I realize that this may not be a viable strategy for some of my
colleagues who study nanoscience, magnetic materials, high-energy particles, or
other mainstream physics topics. I’m interested to hear if anyone else who
works in the sciences has come up with a different technique for breaking
through the “I’m a physicist” ice.)</p>
<p>A friend of mine once chastised me for doing this. “Why should you have to hide
what you’re interested in?” he asked. “If they react badly to your profession,
is it really worth getting to know them?” To that I say that I’m still telling
them honestly what I’m interested in, I just sidestep the potentially negative
associations carried by the word “physics.” And besides, just because someone
has a bad or obnoxious reaction to finding out that I’m a physicist doesn’t
mean they aren’t worth meeting. The fact remains that I’ve found this to be a
great way to keep the getting-to-know-you conversation light when meeting new
people for the first time. I wish I could wave a magic wand and make it so that
everyone was comfortable with the idea of interacting with professional
scientists, but I can’t. While I’m waiting for <a href="http://www.theverge.com/2014/2/4/5379246/watch-this-bill-nye-debates-evolution-with-the-founder-of-the-creation-museum">Bill
Nye</a>
and <a href="http://www.cosmosontv.com/">Neil DeGrasse Tyson</a> and others to humanize
the profession for the public, this is how I’ll be introducing myself. </p>
<p>Katie Mack and Zach’s video really got me thinking about how to talk to other
people about their jobs with more empathy— avoiding flattery and
stereotyping, and doing my best to hide any negative visceral reactions evoked
by the thought of others’ jobs and interests. One question that has occurred
to me is whether people in completely different professions
experience similarly frustrating reactions when they say what their jobs are.
Programmers, actuaries, office administrators, copy editors, art dealers,
karate instructors, gravediggers, lion-tamers, etc.: whoever you are, I want
to hear about any difficulties you may have had with telling other people what
you do in the comments below.</p>
<hr>
<ol>
<li><p id="footnote1"><a href="#back_up1">^</a>
Image from New X-Men #121, written by Grant Morrison with art by Frank Quitely.
You can see some more of this particularly trippy story <a href="https://marswillsendnomore.wordpress.com/2011/09/04/inside-the-twisted-mind-of-the-professor/">
here</a>.
</p></li>
<li><p id="footnote2"><a href="#back_up2">^</a>
All of the imagery of Frankenstein’s monster being brought to life with electricity
comes from James Whale’s <a href="http://www.imdb.com/title/tt0021884/"><i>Frankenstein</i> from 1931</a>. Mary Shelley’s original book contained no mention of
electricity, and instead remained eerily vague about the mechanisms for creating life.
</p></li>
<li><p id="footnote3"><a href="#back_up3">^</a>
<a href="http://scitation.aip.org/content/aip/magazine/physicstoday/news/10.1063/PT.4.0293">
Here </a> is a really level-headed critique of The Big Bang Theory that I like a
lot. There isn’t a ton of hand-wringing, and the author does talk about what the
show might consider doing differently. It was written three years ago.
</p></li>
<li><p>
Watch what happens when <a href="https://www.youtube.com/watch?v=THNPmhBl-8I/">
a brain surgeon meets a rocket scientist for the first time</a>.
To justify my linking to this skit (outside of the fact that I love it so much),
I’ll just say that <i>nobody</i> is acting appropriately in this video.
</p></li>
<li><p>
(Sorry, lawyers.)
</p></li>
</ol>Quantum Mechanics: Trying to Sort the Physical from the Mystical2013-09-23T00:00:00-04:00DTCtag:thephysicsvirtuosi.com,2013-09-23:posts/quantum-mechanics-mysticism.html<p align="center">
<a href="http://xkcd.com/1240/">
<img src = "http://imgs.xkcd.com/comics/quantum_mechanics.png" title="You can also just ignore any science assertion where 'quantum mechanics' is the most complicated phrase in it." alt="You can also just ignore any science assertion where 'quantum mechanics' is the most complicated phrase in it." id="back_up1">
<a href="#footnote1"><sup>[1]</sup></a>
</p>
<p>A friend of mine (I’ll call him Ron), who knows that I study physics,
likes to talk to me about quantum mechanics. He’s an easy-going guy
and likes to joke around. “Hey, is it a particle or is it a wave today?”
he’ll say, or, “How many dimensions do we have now?” When the conversation
turns more serious, he tells me how he believes
in the “quantum universe,” which is greater than what we humans are
able to ordinarily perceive. He talks about consciousness, immortality,
spirits, and the great cosmic grandeur of the universe, all of which he
ties together with the label of “quantum.”</p>
<p>These conversations are strange to me. Both of us are using the same two words:
quantum mechanics. When Ron thinks about quantum mechanics, he associates it
with nonphysical concepts, like spirits. Through my time spent studying physics,
I’ve come to understand quantum mechanics as a theory describing the behavior of
atoms and subatomic particles.
</p>
<p>For example, one day our chatting turned to the topic of medicine and how the
human body heals itself. Ron told me that the biggest problem with modern
medicine is that doctors think of the body as a physical object only.
Healing, he said, was a “quantum” effect. I told him that I could make a
pretty strong physical argument for why that wasn’t the case. He responded
with this story: Once, when playing football, he severely injured his knee.
The injury was so bad that he couldn’t bend it or move it. He didn’t have
health insurance and didn’t have the cash on hand to pay for medical treatment.
One day, he prayed to the universe that he would get better and a “tornado of
light came down” and healed his leg. Since then, Ron says, he’s always believed
in and respected the quantum universe.</p>
<p>I can’t tell Ron that what he described in his story didn’t happen,
that his experience was wrong or incorrect in some way. I wasn’t there,
so I can’t comment on the accuracy of his narrative. And even the story
of how his body healed out of the blue isn’t problematic: as far as I can
tell, decades after the story took place, Ron is in good shape and his leg
is doing fine. What I found objectionable about the story was how, in the
end, Ron attributed his healing to the miraculous intervention of quantum mechanics. </p>
<p>Quantum mechanics, in all of its glorious strangeness, is only relevant on
inconceivably small scales and at very, very low temperatures. One of the
reasons it took humans so long to develop the theory of quantum mechanics is
that quantum effects don’t readily appear in everyday life. My <a href="http://ultracold.lassp.cornell.edu/">colleagues</a>
who work to observe quantum mechanics in their experiments use lasers to manipulate
<a href="http://en.wikipedia.org/wiki/Rubidium/">atoms</a>
(objects that are 1/10,000,000,000th of a meter in size and weigh around
1/10,000,000,000,000,000,000,000,000th of a kilogram) at temperatures less than
1 Kelvin (about -459 degrees Fahrenheit). At larger sizes and temperatures
quantum effects are negligible. The human body is more than a meter long,
usually weighs around 50-100 kilograms and, if healthy, maintains a toasty
98 degrees Fahrenheit. Quantum mechanics is important at the atomic level,
but on the scales at which people interact with the world it hardly shows up
at all. So, even if Ron’s leg heals as he said it did, I wouldn’t give credit to quantum mechanics.<a href="#footnote-1"><sup>[2]</sup></a><span id="back_up2"></span></p>
<p>I have been studying physics for years and still quantum mechanics
remains utterly baffling to me. The fact that such an abstract theory can tell us so
much about the world feels a little bit like a miracle. Quantum mechanics carries
with it a number of counterintuitive ideas like the
<a href="http://opinionator.blogs.nytimes.com/2013/07/21/nothing-to-see-here-demoting-the-uncertainty-principle/">uncertainty principle</a><a href="#footnote-1"><sup>[3]</sup></a><span id="back_up3"></span>,
entanglement, or <a href="http://en.wikipedia.org/wiki/Many-worlds_interpretation">parallel universes</a>.
These ideas are so abstracted from everyday life that the subject begins to take
on a supernatural quality. Physics no longer seems like physics—
it starts to sound like mysticism.</p>
<p>So it makes sense that contemporary culture has seized upon quantum mechanics
as a possible explanation for inexplicable things. The theory has so many
surprising results that it seems natural to extend it to encompass other things
that confuse us, like questions of consciousness. Furthermore, “quantum mechanics”
is a term that carries with it the weight of scientific legitimacy. If Ron had
said that he had been healed through witchcraft, laying on of hands, or alchemy
he would have sounded ridiculous, but attributing his experience to quantum effects
allows the story to borrow from the credible reputation of fact-based 20th century
science. What my friend doesn’t realize is that terminology is not what makes
quantum theory powerful: the scientific methodology supporting quantum mechanics
is what matters. </p>
<p>It’s important to keep quantum mechanics the physical theory separate from
quantum mechanics the mystical cosmic principle. Despite how confusing it
is, quantum mechanics is an empirically motivated and mature theory that gives
us a framework to understand physical phenomena like radiation and chemical
bonding. This is fundamentally different from applying quantum mechanical
concepts to the nature of reality or consciousness. To do so may be a fun
philosophical parlor game, but it is baseless speculation, resting on an assumed
connection between quantum mechanics and the supernatural that no
evidence motivates. This confusion is not restricted to scientific laymen: there are
trained researchers working at well-respected research institutions
who also <a href="http://www.quantumconsciousness.org/">make the same mistake</a>
my friend Ron does.</p>
<p>At the end of the day, speculation that the soul, the afterlife, <span class="caps">ESP</span>,
or whatever else are quantum effects is unscientific, but at least it
isn’t dangerous or harmful in the same way as climate change denial or
<a href="http://www.nbcnews.com/health/measles-surges-uk-years-after-vaccine-scare-6C9997438/">refusing to vaccinate your children</a>.
It’s closer to something like <a href="http://www.intelligentdesign.org/">intelligent design</a>, which is
<a href="http://en.wikipedia.org/wiki/Kitzmiller_v._Dover_Area_School_District/">fundamentally confused about what science is</a>.
(We physicists are truly thankful that there is no noisy political movement
to teach <a href="https://www.deepakchopra.com/blog/view/900/from_quanta_to_qualia:_the_mystery_of_reality/">Deepak Chopra</a>
alongside physics in high school classrooms.) So I won’t object to my
friends’ stories of sudden, unexpected recoveries from illness, but
I will react skeptically when I hear that healing has anything to do with quantum mechanics. </p>
<hr>
<ol>
<li><p id="footnote1"><a href="#back_up1">^</a> Credit where credit is due: I took this from <span class="caps">XKCD</span>. This may be one of
my favorite comics Randall Munroe has ever done. That it came out while
I was thinking about this piece was a great coincidence. (A cosmic
coincidence explainable through quantum entanglement? Probably not.) </p></li>
<li><p id="footnote-1"><a href="#back_up2">^</a> Quantum mechanics may be just as mundane as any other materialistic physical
theory, but that doesn’t make it any less amazing. My favorite example is
how quantum mechanics allows us to understand <a href ="http://www.youtube.com/watch?v=gS1dpowPlE8/"> why the sun works. </a> </p></li>
<li><p id="footnote-1"><a href="#back_up3">^</a> In case you didn’t take the time to click on the link: Seriously, do
yourself a favor and click on the <a href="http://opinionator.blogs.nytimes.com/2013/07/21/nothing-to-see-here-demoting-the-uncertainty-principle/"> link </a>. It’s an essay from The Stone that very elegantly describes
how the uncertainty principle is far less cosmically mind-blowing than you
may have come to believe. It does a beautiful job bringing us back down to
earth and carefully explaining the scope of the principle. I must give it credit
for having inspired this piece in no small way. </p> </li>
</ol>Tragedy of Great Power Politics? Modeling International War2013-06-23T23:48:00-04:00Briantag:thephysicsvirtuosi.com,2013-06-23:posts/modeling-international-war.html<div style="float: right;">
<img src="/static/images/tgpp.jpg">
</div>
<p>Recently I finished reading John Mearsheimer’s excellent political science book
The Tragedy of Great Power Politics. In this book, Mearsheimer lays out his
“offensive realism” theory of how countries interact with each other in the
world. The book is quite readable and well-thought-out — I’d recommend it to
anyone who has an inkling for political history and geopolitics. However, as I
was reading this book, I decided that there was a point of Mearsheimer’s
argument which could be improved by a little mathematical analysis.</p>
<p>The main tenet of the book is that states are rational actors who act to
maximize their standing in the international system. However, states don’t seek
to maximize their absolute power, but instead their relative power as compared
to the other states in the system. In other words, according to this logic the
United Kingdom’s position in the early 19th century — when its army and navy
could trounce most of the other countries on the globe — was better than it is
now — when many other countries’ armies and navies are comparable to those of
the <span class="caps">UK</span> — despite the <span class="caps">UK</span>’s current army and navy being much better than they
were in the early 19th century. According to Mearsheimer, the main determinant
of a state’s international actions is simply maximizing its relative power in its
region. All other considerations — capitalist or communist economy, democratic
or totalitarian government, even desire for economic growth — matter little in
a state’s choice of what actions it will take. (Perhaps it was this
simplification of the problem which made the book really appeal to me as a physicist.)</p>
<p>Most of Mearsheimer’s book is spent exploring the logical corollaries of his
main tenet, along with some historical examples. He claims that his idea has
three different predictions for three different possible systems. 1) A balanced
bipolar system (one where two states have roughly the same amount of power and
no other state has much to speak of) is the most stable. War will probably not
break out since, according to Mearsheimer, each state has little to gain from a
war. (His example is the Cold War, which didn’t see any actual conflict between
the <span class="caps">US</span> and the <span class="caps">USSR</span>.) 2) A balanced multipolar system (<mathjax>$N>2$</mathjax> states each share
roughly the same amount of power) is more prone to war than a bipolar system,
since a) there is a higher chance that two states are mismatched in power,
allowing the more powerful to push the less around, and b) there are more
states to fight. (One of his examples is Europe between 1815 and 1900, when
there were several great-power wars but nothing that involved the entire
continent at once.) 3) An unbalanced multipolar system (<mathjax>$N>2$</mathjax> states with power,
but one that has more power than the rest) is the most prone to war of all. In
this case, the biggest state on the block is almost able to push all the other
states around. The other states don’t want that, so two or more of them collude
to stop the big state from becoming a hegemon — i.e. they start a war.
Likewise, the big state is also looking to make itself more relatively
powerful, so it tries to start wars with the little states, one at a time, to
reduce their power. (His examples here are Europe immediately before and
leading up to the Napoleonic Wars, <span class="caps">WWI</span>, and <span class="caps">WWII</span>.) There is another case, which
is unipolarity — one state has all the power — but there’s nothing
interesting there. The big state does what it wants.</p>
<p>While I liked Mearsheimer’s argument in general, something irked me about the
statement about bipolarity being stable. I didn’t think that the stability of
bipolarity (corollary 1 above) actually followed from his main hypothesis.
After spending some extra time thinking in the shower, I decided how I could
model Mearsheimer’s main tenet quantitatively, and found that it actually suggested
that bipolarity was unstable!</p>
<p><a id="note1"></a>
Let’s see if we can’t quantify Mearsheimer’s ideas with a model. Each state in
the system has some power, which we’ll call <mathjax>$P_i$</mathjax>. Obviously in reality there are
plenty of different definitions of power, but in accordance with Mearsheimer’s
definition, we’ll define power simply in a way that if State 1 has power
<mathjax>$P_1 > P_2$</mathjax>, the power of State 2, then State 1 can beat State 2 in a
war<a href="#fnote1"><sup>[1]</sup></a>.
Each state does not seek to maximize its total power <mathjax>$P_i$</mathjax>, but instead its
relative power <mathjax>$R_i$</mathjax>, measured against the total power of all the states in the system. So
the relative power <mathjax>$R_i$</mathjax> would be</p>
<p><mathjax>$$ R_i = P_i / \left( \sum_{j=1}^N P_j \right) \qquad , $$</mathjax></p>
<p>where we take the sum over the relevant players in the system. If there was
some action that changed the power of some of the players in the system (say a
war), then the relative power would also change with time <mathjax>$t$</mathjax>:</p>
<p><mathjax>$$ \frac{dR_i}{dt} = \frac{dP_i}{dt} \times \left( \sum_{j=1}^N P_j \right)^{-1} - P_i \times \left( \sum_{j=1}^N P_j \right)^{-2} \times \left(\sum_{j=1}^N \frac{dP_j}{dt} \right) \qquad (1) $$</mathjax></p>
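<p>To make the definition of relative power concrete, here is a minimal Python sketch (the function name and the toy numbers are my own illustration, not part of the post):</p>

```python
def relative_power(powers):
    """R_i = P_i / sum_j P_j for each state i."""
    total = sum(powers)
    return [p / total for p in powers]

# A toy three-state system.
R = relative_power([4.0, 3.0, 1.0])
print(R)                          # [0.5, 0.375, 0.125]
print(abs(sum(R) - 1.0) < 1e-12)  # True: relative powers always sum to 1
```

<p>One immediate consequence of the definition: since the <mathjax>$R_i$</mathjax> sum to 1, a state can only gain relative power if another state loses it.</p>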
<p>A state will pursue an action that increases its relative power <mathjax>$R_i$</mathjax>. So if we
want to decide whether or not State A will go to war with State B, we need to
know how war affects a state’s individual powers. While this seems intractable,
since we can’t even precisely define power, a few observations will help us
narrow down the allowed possibilities to make definitive statements on when war
is beneficial to a state:</p>
<ol>
<li>War always reduces a state’s absolute power. This is simply a statement that
in general, war is destructive. Many people die and buildings are bombed,
neither of which is good for a state. Mathematically, this statement is that in
wartime, <mathjax>$dP_i/dt < 0$</mathjax> always. Note that this doesn’t imply that <mathjax>$dR_i/dt$</mathjax>
is always negative.</li>
</ol>
<p><a id="note2"></a></p>
<ol start="2">
<li>
<p>The change in power of two states A <span class="amp">&</span> B in a war should depend only on
how much power A <span class="amp">&</span> B have. In addition, it should be independent of the
labeling of states. Mathematically, <mathjax>$dP_a / dt = f(P_a, P_b)$</mathjax>, and
<mathjax>$dP_b/dt = f(P_b, P_a)$</mathjax> with the same function <mathjax>$f$</mathjax><a href="#fnote2"><sup>[2]</sup></a>.</p>
</li>
<li>
<p>If State A has more absolute power than State B, and both states are in a
war, then State B will lose power more rapidly than State A. This is almost a
re-statement of our definition of power. We defined power such that if State A
has more absolute power than State B, then State A will win a war against State
B. So we’d expect that power translates to the ability to reduce another
state’s power, and more power means the ability to reduce another state’s power
more rapidly.</p>
</li>
<li>
<p>For simplicity, we’ll also notice that the decrease of a State A’s absolute
power in wartime is largely dependent on the power of State B attacking it, and
is not so much dependent on how much power State A has.</p>
</li>
</ol>
<p>In general, I think that assumptions 1-3 are usually true, and assumption 4 is
pretty reasonable. But to simplify the math a little more, I’m going to pick a
definite form for the change of power. The simplest possible behavior that
captures all 4 of the above assumptions is:</p>
<p><mathjax>$$ \frac{dx}{dt} = -y \qquad \frac{dy}{dt} = -x \qquad (2) $$</mathjax></p>
<p><a id="note3"></a>
where <mathjax>$x$</mathjax> is the absolute power of State X and <mathjax>$y$</mathjax> is the absolute power of State
Y. (I’m switching notation because I want to avoid using too many
subscripts<a href="#fnote3"><sup>[3]</sup></a>). Here I’m assuming that the rate of
change of State X’s power is directly proportional to State Y’s power, and
depends on nothing else (including how much power State X itself has). <a id="note4"></a>
We’ll also call <mathjax>$r$</mathjax> the relative power of State
X, and <mathjax>$s$</mathjax> the relative power of State Y<a href="#fnote4"><sup>[4]</sup></a>.
Now we’re equipped to see when war
is a good idea, according to our hypotheses.</p>
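<p>Equation (2) is simple enough to integrate numerically. The following Python sketch (a forward-Euler integrator of my own devising, not from the post) steps the system forward and shows that two states starting with exactly equal power stay locked together:</p>

```python
def simulate_war(x, y, dt=1e-3, steps=1000):
    """Euler-integrate Eq. (2): dx/dt = -y, dy/dt = -x.

    x, y are the absolute powers of States X and Y.
    """
    for _ in range(steps):
        # Update both powers simultaneously, each losing power at a rate
        # set by the opponent's current power.
        x, y = x - y * dt, y - x * dt
    return x, y

# Symmetric start: both states lose absolute power at the same rate,
# so x(t) == y(t) for all t and the relative powers never move.
x, y = simulate_war(1.0, 1.0)
print(x == y)  # True
```

<p>This is just the symmetric case from the argument above: when <mathjax>$x = y$</mathjax>, war changes neither state’s relative standing.</p>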
<p>Let’s examine the case that was bothering me most — a balanced bipolar system.
Now we have only two states in the system, X and Y. For starters, let’s address
the case where both states start out with equal power <mathjax>$(x = y)$</mathjax>. If State X goes
to war with State Y, how will the relative powers <mathjax>$r =x/(x+y)$</mathjax> <span class="amp">&</span> <mathjax>$s=y/(x+y)$</mathjax>
change? Looking at Eq. (1), we see that by symmetry both states have to lose
absolute power equally, so <mathjax>$x(t) = y(t)$</mathjax> always, and thus <mathjax>$r(t) = s(t)$</mathjax> always. In
other words, from a relative power perspective it doesn’t matter whether the
states go to war! For our system to be stable against war, we’d expect that a
state will get punished if it goes to war, which isn’t what we have! So our
system is a neutral equilibrium at best.</p>
<p>But it gets worse. For a real balanced bipolar system, both states won’t have
exactly the same power, but will instead be approximately equal. Let’s say that
the relative power between the two states differs by some small (positive)
number <mathjax>$e$</mathjax>, such that <mathjax>$x(0) = x_0$</mathjax> and <mathjax>$y(0) = x_0 + e$</mathjax>. Now what will happen? Looking
at Eq. (2), we see that, at <mathjax>$t=0$</mathjax>,</p>
<p><mathjax>$$ \frac{dr}{dt} = -(x_0 + e) / (2x_0 + e) + x_0(2x_0 + e) / (2x_0 + e)^2 = -e/(2x_0 + e) $$</mathjax></p>
<p><mathjax>$$ \frac{ds}{dt} = -(x_0) / (2x_0 + e) + (x_0+e)(2x_0 + e) / (2x_0 + e)^2 = + e/(2x_0 + e) \qquad . $$</mathjax></p>
<p>In other words, if the power balance is slightly upset, even by an
infinitesimal amount, then the more powerful state should go to war! For a
balanced bipolar system, peace is unstable, and the two countries should always
go to war according to this simple model of Mearsheimer’s realist world.</p>
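<p>As a quick numerical check of this instability (again my own sketch, not from the post), we can evaluate the instantaneous rates of change of the relative powers for a slightly unbalanced start:</p>

```python
def relative_rates(x, y):
    """d(r)/dt and d(s)/dt for r = x/(x+y), s = y/(x+y),
    with dx/dt = -y and dy/dt = -x as in Eq. (2)."""
    dx, dy = -y, -x
    total = x + y
    # Quotient rule applied to r = x/(x+y) and s = y/(x+y), i.e. Eq. (1)
    # specialized to two states.
    dr = dx / total - x * (dx + dy) / total**2
    ds = dy / total - y * (dx + dy) / total**2
    return dr, ds

# State Y starts ahead by a small amount e.
x0, e = 1.0, 0.01
dr, ds = relative_rates(x0, x0 + e)
print(dr < 0 < ds)           # True: war helps the stronger state
print(abs(dr + ds) < 1e-12)  # True: one state's relative gain is the other's loss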
<p>Of course, we’ve just considered the simplest possible case — only two states
in the system (whereas even in a bipolar world there are other, smaller states
around) who act with perfect information (i.e. both know the power of the other
state) and can control when they go to war. Also, we’ve assumed that relative
power can change only through a decrease of absolute power, and in a
deterministic way (as opposed to something like economic growth). To really say
whether bipolarity is stable against war, we’d need to address all of these in
our model. A little thought should convince you which of these a) makes
a bipolar system stable against war, and b) makes a bipolar system more or less
stable compared to a multipolar system. Maybe I’ll address these, as well as
balanced and unbalanced multipolar systems, in another blog post if people are interested.</p>
<p><a id="fnote1"></a>
1. <a href="#note1">^</a> <mathjax>$P_i$</mathjax> has some units (not watts). My definition of power is strictly
comparative, so it might seem that any new scale of power <mathjax>$p_i = f(P_i)$</mathjax> with an
arbitrary monotonic function <mathjax>$f(x)$</mathjax> would also be an appropriate definition.
However, we would like a scale that facilitates power comparisons if multiple
states gang up on another. So we would need a new scale such that </p>
<p><mathjax>$$ p_{i+j} = f(P_i + P_j) = f(P_i) + f(P_j) = p_i + p_j $$</mathjax> </p>
<p>for all <mathjax>$P_i, P_j$</mathjax> . The only function that behaves like this is a linear function of
<mathjax>$P(p_i) = A \times P_i $</mathjax>, where A is some constant. So our definition of power is
basically fixed up to what “units” we choose. Of course, defining <mathjax>$P_i$</mathjax> in terms
of tangibles (e.g. army size or <span class="caps">GDP</span> or population size or number of nuclear warheads)
would be a difficult task. Incidentally, I’ve also implicitly assumed here that there is a power scale,
such that if <mathjax>$P_1 > P_2$</mathjax>, and <mathjax>$P_2 > P_3$</mathjax>, then <mathjax>$P_1 > P_3$</mathjax>. But I think
that’s a fairly benign assumption.</p>
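As a purely illustrative check of the additivity constraint <mathjax>$f(P_i + P_j) = f(P_i) + f(P_j)$</mathjax>, here is a small Python sketch probing a few candidate rescalings on sample pairs (the pairs and functions are arbitrary choices):

```python
def additive_on_samples(f, samples):
    """Check f(a + b) == f(a) + f(b) on a few sample pairs."""
    return all(abs(f(a + b) - (f(a) + f(b))) < 1e-9 for a, b in samples)

pairs = [(1.0, 2.0), (3.5, 0.25), (10.0, 10.0)]

assert additive_on_samples(lambda P: 7.0 * P, pairs)       # linear: additive
assert not additive_on_samples(lambda P: P ** 2, pairs)    # square: fails
assert not additive_on_samples(lambda P: P ** 0.5, pairs)  # sqrt: fails
```

Only the linear rescaling survives, consistent with the footnote's claim that the power scale is fixed up to a choice of units.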
<p><a id="fnote2"></a>
2. <a href="#note2">^</a> This implicitly assumes that it doesn’t matter which state attacked the
other, or where the war is taking place, or other things like that.</p>
<p><a id="fnote3"></a>
3. <a href="#note3">^</a> Incidentally this form for the rate-of-change of the power also has the
advantage that it is scale-free, which we might expect since there is no
intrinsic “power scale” to the problem. Of course there are other forms with
this property that follow some or all of the assumptions above. For instance,
something of the form <mathjax>$dx/dt = -xy = dy/dt$</mathjax> would also be i) scale-invariant, and
ii) in line with assumptions 1 <span class="amp">&</span> 2 and partially in line with assumption 3.
However I didn’t use this since a) it’s nonlinear, and hence the resulting
differential equations are a little harder to solve analytically, and b) the rate of
decrease of both states’ power is the same, in contrast to my intuitive feeling
that the state with less power should lose power more rapidly.</p>
<p><a id="fnote4"></a>
4. <a href="#note4">^</a> Homework for those who are not satisfied with my assumptions: Show that any
functional form for <mathjax>$dP_i/dt$</mathjax> that follows assumptions 1-3 above does not change
the stability of a balanced bipolar system.</p>Trigonometric Derivatives2013-02-22T16:52:00-05:00Alemitag:thephysicsvirtuosi.com,2013-02-22:posts/trigonometric-derivatives.html<p>I was recently reading <a href="http://www.johndcook.com/blog/2013/02/11/differentiating-bananas-and-co-bananas/">The Endeavour</a>
where John D. Cook responded to a post over at
<a href="http://mathmamawrites.blogspot.com/2013/02/derivatives-of-sine-and-cosine.html">Math Mama Writes</a>
about teaching the derivatives of the trigonometric functions.</p>
<p>I decided to weigh in on the issue.</p>
<p>In my experience,
<a href="http://en.wikipedia.org/wiki/Calculus">Calculus</a> is always best taught
in terms of infinitesimals, as in
<a href="http://books.google.com/books?id=BrhBAAAAYAAJ&printsec=frontcover&dq=calculus+made+easy&hl=en&sa=X&ei=vu8nUZ-MGcW20AHknICgCw&ved=0CD4Q6AEwAA">Thompson’s Book</a>,
(which I’ve <a href="http://thephysicsvirtuosi.com/posts/four-fantastic-books-3-of-which-are-free-.html">already talked about</a> )
and <a href="http://en.wikipedia.org/wiki/Trigonometry">Trigonometry</a> is best taught using
the <a href="http://tricochet.com/math/pdfs/completetriangle.pdf">complete triangle</a>.
Marrying these two together, we can give a simple geometric proof of the basic trigonometric derivatives:</p>
<p><mathjax>$$ \frac{ d }{dx } \sin x = \cos x \qquad \frac{d}{dx} \cos x = -\sin x $$</mathjax></p>
<p>Summed up on one diagram:</p>
<p><a href="/static/images/trigdiff.pdf">
<img src="/static/images/trigdiff.png" width=500px alt="Trigonometric Derivatives">
</a></p>
<h3>Short version</h3>
<p>By looking at how the lines <mathjax>$\sin \alpha$</mathjax> and <mathjax>$\cos \alpha$</mathjax> change when we change <mathjax>$\alpha$</mathjax> a little bit (<mathjax>$d\alpha$</mathjax>) and noting that we form a similar triangle, we know exactly what those changes in length are.</p>
<h3>Long version</h3>
<p>You’ll notice I’ve drawn a unit circle in the bottom right, chosen an angle <mathjax>$\alpha$</mathjax>, and shown both <mathjax>$\sin \alpha$</mathjax> and <mathjax>$\cos \alpha$</mathjax> on the plot.</p>
<p>We are interested in how <mathjax>$\sin \alpha$</mathjax> changes when we make a very small change in <mathjax>$\alpha$</mathjax>, so I’ve done just that. I’ve moved the blue line from an angle of <mathjax>$\alpha$</mathjax> to the dotted line at an angle of <mathjax>$\alpha + d\alpha$</mathjax>. Don’t get caught up on the <mathjax>$d$</mathjax> symbol here; it just means ‘a little bit of’.</p>
<p>Since we’ve only moved the angle a little bit, I’ve included a zoomed in picture in the upper right so that we can continue. Here, we see the solid and dashed lines again where they meet our unit circle. Notice that since we’ve zoomed in quite a bit the circle’s edge doesn’t look very circley anymore, it looks like a straight line.</p>
<p>In fact that is the first thing we’ll note, namely that the arc of the circle we trace when we change the angle a little bit has the length <mathjax>$d\alpha$</mathjax>. We know this is the case because we know that we’ve only gone an angle <mathjax>$d\alpha$</mathjax>, which is a small fraction <mathjax>$d\alpha/2\pi$</mathjax> of the total circumference of the circle. The total circumference is itself <mathjax>$2\pi$</mathjax> so at the end of the day, the length of that little bit of arc is just:</p>
<p><mathjax>$$ \frac{ d\alpha }{2\pi} 2\pi = d\alpha $$</mathjax></p>
<p>which we may have remembered anyway from our trig classes. What is important here is that even though <mathjax>$d \alpha$</mathjax> is the length of the arc, when we are this zoomed in,
we can treat the arc as a straight line. In fact if we imagine taking our change <mathjax>$d\alpha$</mathjax> smaller and smaller,
approximating the segment of arc as a line gets better and better. [Technically it should be noted that what is important is that the correction between the arc length and line length is higher order in <mathjax>$d\alpha$</mathjax>, so it can be ignored to linear order]</p>
<p>You’ll notice that in the zoomed in picture, we can see the yellow and green segments,
which correspond to the changes in the length of the dotted yellow and green segments
from the zoomed out picture. These are the segments I’ve marked <mathjax>$d(\sin \alpha)$</mathjax> and <mathjax>$-d(\cos \alpha)$</mathjax>, because they represent the change in the length of the <mathjax>$\sin \alpha$</mathjax> line
and <mathjax>$\cos \alpha$</mathjax> line respectively. The green segment is marked <mathjax>$-d(\cos \alpha)$</mathjax> because the <mathjax>$\cos \alpha$</mathjax> line actually shrinks when we increase <mathjax>$\alpha$</mathjax> a little bit.</p>
<p>Now for the kicker. Notice the right triangle formed by the green, yellow and red segments? That is similar to the larger triangle in the zoomed out picture. I’ve marked the similar angle in red. If you stare at the picture for a bit, you can convince yourself of this fact. If all else fails, just compute all of the angles involved in the intersection of the circle with the blue line; they can all be resolved.</p>
<p>Knowing that the two triangles are similar, we know that the lengths of their sides are equal up to a common scale factor, in particular:</p>
<p><mathjax>$$ \frac{ d(\sin \alpha) }{\cos \alpha} = \frac{ d\alpha }{ 1} $$</mathjax>
or
<mathjax>$$ d(\sin \alpha) = \cos \alpha \ d\alpha $$</mathjax>
And we’ve done it! Shown the derivative of <mathjax>$\sin \alpha$</mathjax> with a little picture.<br />
In particular, the change in the sine of the angle (<mathjax>$d(\sin \alpha)$</mathjax>) is equal to the cosine of that angle <mathjax>$\cos \alpha$</mathjax> times the amount we change it. In the limit of very tiny angle changes, this tells us the derivative of <mathjax>$\sin \alpha$</mathjax>:
<mathjax>$$ \frac{d}{d\alpha} \sin \alpha = \cos \alpha $$</mathjax></p>
<p>Doing the same for the <mathjax>$d(\cos \alpha)$</mathjax> segment gives
<mathjax>$$ d(\cos \alpha) = -\sin\alpha \ d\alpha $$</mathjax>
and we even get the sign right. </p>
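Both results can also be checked numerically by shrinking <mathjax>$d\alpha$</mathjax> and watching the difference quotients converge. A small Python sketch (the angle and step sizes are arbitrary choices):

```python
import math

alpha = 0.7  # any angle will do

for dalpha in [1e-2, 1e-4, 1e-6]:
    dsin = math.sin(alpha + dalpha) - math.sin(alpha)
    dcos = math.cos(alpha + dalpha) - math.cos(alpha)
    # d(sin a)/da -> cos a  and  d(cos a)/da -> -sin a  as da -> 0
    assert abs(dsin / dalpha - math.cos(alpha)) < dalpha
    assert abs(dcos / dalpha + math.sin(alpha)) < dalpha
```

The error of each difference quotient shrinks along with <mathjax>$d\alpha$</mathjax>, which is exactly the "higher order in <mathjax>$d\alpha$</mathjax>" correction mentioned above.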
<p>From here, the other trigonometric derivatives are easy to obtain, either by making similar pictures à la the <a href="http://tricochet.com/math/pdfs/completetriangle.pdf">complete triangle</a>,
or by using the regular rules relating all of the trigonometric functions to one another.</p>Re-evaluating the values of the tiles in Scrabble™2013-01-20T22:52:00-05:00DTCtag:thephysicsvirtuosi.com,2013-01-20:posts/re-evaluating-the-values-of-the-tiles-in-scrabble.html<p>
<img src="/static/images/scrabble/scrabble.jpg" width="100%" alt="Scrabble Tiles" style="float:center">
</p>
<p>Recently I have seen quite a few blog posts written about re-evaluating
the points values assigned to the different letter tiles in the
Scrabble™ brand Crossword Game. The premise behind these posts is that
the creator and designer of the game assigned point values to the
different tiles according to their relative frequencies of occurrence in
words in English text, supplemented by information gathered while
playtesting the game. The points assigned to different letters reflected
how difficult it was to play those letters: common letters like E, A,
and R were assigned 1 point, while rarer letters like J and Q were
assigned 8 and 10 points, respectively. These point values were based on
the English lexicon of the late 1930’s. Now, some 70 years later, that
lexicon has changed considerably, having gained many new words (e.g.:
<span class="caps">EMAIL</span>) and lost a few old ones. So, if one were to repeat the analysis
of the game designer in the present day, would one come to different
conclusions regarding how points should be assigned to various letters?
<a id="note1"></a></p>
<p>I’ve decided to add my own analysis to the recent development because I
have found most of the other blog posts to be unsatisfactory for a
variety of reasons<a href="#fnote1"><sup>[1]</sup></a>.<br />
One <a href="http://deadspin.com/5975490/h-y-and-z-as-concealed-weapons-we-apply-google+inspired-math-to-scrabbles-flawed-points-system">article</a>
calculated letters’ relative frequencies by counting the number of times
each letter appeared in each word in the Scrabble™ dictionary. But this
analysis is faulty, since it ignores the probability with which
different words actually appear in the game. One is far less likely to
draw <span class="caps">QI</span> than <span class="caps">AE</span> during a Scrabble™ game (since there’s only one Q in the
bag, but many A’s and E’s). Similarly, very long words like
<span class="caps">ZOOGEOGRAPHICAL</span> have a vanishingly small probability of appearing in the
game: the A’s in the long words and the A’s in the short words cannot be
treated equally. A second <a href="http://blog.useost.com/2012/12/30/valett/">article</a> I saw calculated
letter frequencies based on their occurrence in the Scrabble™ dictionary
and did attempt to weight frequencies based on word length. The author
of this second article also claimed to have quantified the extent to
which a letter could “fit well” with the other tiles given to a player.
Unfortunately, some of the steps in the analysis of this second article
were only vaguely explained, so it isn’t clear how one could replicate
the article’s conclusions. In addition, as far as I can tell, neither of
these articles explicitly included the distribution of letters (how many
A’s, how many B’s, etc) included in a Scrabble™ game. Also, neither of
these articles accounted for the fact that there are blank tiles (that
act as wild cards and can stand in for any letter) that appear in the game.</p>
<p>So, what does one need to do to improve upon the analyses already
performed? We’re given the Scrabble™ dictionary and bag of <a href="http://upload.wikimedia.org/wikipedia/commons/b/b8/Scrabble_tiles_en.jpg">100
tiles</a>
with a set distribution, and we’re going to try to determine what a good
pointing system would be for each letter in the alphabet. We’re also
armed with the knowledge that each player is given 7 letters at a time
in the game, making words longer than 8 letters very rare indeed. Let’s
say for the sake of simplicity that words 9 letters long or shorter
account for the vast majority of words that are possible to play in a
normal game.</p>
<p>Based on these constraints, how can one best decide what points to
assign the different tiles? As stated above, the game is designed to
reward players for playing words that include letters that are more
difficult to use. So, what makes an easy letter easy, and what makes a
difficult letter difficult? Sure, the number of times the letter appears
in the
<a href="http://scrabblehelper2.googlecode.com/svn-history/r3/trunk/src/scrabble/dictionary.txt">dictionary</a>
is important, but this does not account for whether or not, on a given
rack of tiles (a rack of tiles is to Scrabble™ as a hand of cards is to
poker), that letter actually can be used. The letter needs to combine
with other tiles available either on the rack or on the board in order
to form words. The letter Q is difficult to play not only because it is
used relatively few times in the dictionary, but also because the
majority of Q-words require the player to use the letter U in
conjunction with it.</p>
<p>So, what criterion can one use to say how useful a particular tile is?
Let’s say that letters that are useful have more potential to be used in
the game: they provide more options for the players who draw them. Given
a rack of tiles, one can generate a list of all of the words that are
possible for the player to play. Then, one can count the number of times
that each letter appears in that list. Useful letters, by this
criterion, will combine more readily with other letters to form words
and so appear more often in the list than un-useful letters.</p>
<p>(I would also like to take a moment to preempt <a href="http://scrabbleplayers.org/w/Valett">criticism from the
competitive Scrabble™ community</a> by
saying that strategic decisions made by the players need not be brought
into consideration here. The point values of tiles are an engineering
constraint of the game. Strategic decisions are made by the players,
given the engineering constraints of the game. Words that are “available
to be played” are different from “words that actually do get played.”
The potential usefulness of individual letter tiles should reflect
whether or not it is even possible to play them, not whether or not a
player decides that using a particular group of tiles constitutes an
optimal move.)</p>
<p><a id="note2"></a>
To give an example, suppose I draw the rack <span class="caps">BEHIWXY</span>. I can
generate<a href="#fnote2"><sup>[2]</sup></a>
the full list of words available to be played given this rack: <span class="caps">BE</span>, <span class="caps">BEY</span>,
<span class="caps">BI</span>, <span class="caps">BY</span>, <span class="caps">BYE</span>, <span class="caps">EH</span>, <span class="caps">EX</span>, <span class="caps">HE</span>, <span class="caps">HEW</span>, <span class="caps">HEX</span>, <span class="caps">HEY</span>, <span class="caps">HI</span>, <span class="caps">HIE</span>, <span class="caps">IBEX</span>, <span class="caps">WE</span>, <span class="caps">WEB</span>, <span class="caps">WHEY</span>,
<span class="caps">WHY</span>, <span class="caps">WYE</span>, <span class="caps">XI</span>, <span class="caps">YE</span>, <span class="caps">YEH</span>, <span class="caps">YEW</span>. Counting the number of occurrences of each
letter, I see that the letter E appears 18 times, while the letter W
only appears 7 times. This example tells me that the letter E is
probably much more potentially useful than the letter W.</p>
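This counting is easy to reproduce in code. Here's a short Python sketch that verifies each word in the list above can actually be built from the rack <span class="caps">BEHIWXY</span> and then tallies the letters:

```python
from collections import Counter

rack = "BEHIWXY"
playable = ["BE", "BEY", "BI", "BY", "BYE", "EH", "EX", "HE", "HEW", "HEX",
            "HEY", "HI", "HIE", "IBEX", "WE", "WEB", "WHEY", "WHY", "WYE",
            "XI", "YE", "YEH", "YEW"]

def fits(word, rack):
    """A word fits a rack if it needs no letter more often than the rack has it."""
    need, have = Counter(word), Counter(rack)
    return all(need[c] <= have[c] for c in need)

# Every listed word really is available from this rack...
assert all(fits(w, rack) for w in playable)

# ...and the letter tallies match the counts quoted in the text.
tally = Counter("".join(playable))
assert tally["E"] == 18 and tally["W"] == 7
```

The same `fits` subset check is the core of generating the playable-word list in the first place: filter a dictionary by it.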
<p>The example above is only one of the many, many possible racks that one
can see in a game of Scrabble™. I can use a
<a href="http://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo</a>-type simulation
to estimate the average usefulness of the different letters by drawing
many example racks.
<a id="note3"></a>
Monte Carlo is a technique used to estimate
numerical properties of complicated things without explicit calculation.
For example, suppose I want to know the probability of drawing a
straight flush in poker.<a href="#fnote3"><sup>[3]</sup></a> I can calculate that probability
explicitly by using combinatorics, or I can use a Monte Carlo method to
deal a large number of hypothetical possible poker hands and count the
number of straight flushes that appear. If I deal a large enough number
of hands, the fraction of hands that are straight flushes will converge
upon the correct analytic value. Similarly here, instead of explicitly
calculating the usefulness of each letter, I use Monte Carlo to draw a
large number of hypothetical racks and use them to count the number of
times each letter can be used. Comparing the number of times that each
tile is used over many, many possible racks will give a good
approximation of how relatively useful each tile is on average. Note
that this process accounts for the words acceptable in the Scrabble™
dictionary, the number of available tiles in the bag, as well as the
probability of any given word appearing.</p>
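The straight-flush example can itself be sketched in a few lines of Python. The exact answer is known (40 straight flushes, royal flushes included, out of 2,598,960 five-card hands, about <mathjax>$1.5 \times 10^{-5}$</mathjax>); the Monte Carlo estimate below converges to it slowly because the event is rare. The card encoding is an arbitrary choice:

```python
import random

RANKS = list(range(2, 15))  # 11 = J, 12 = Q, 13 = K, 14 = A
SUITS = "shdc"
DECK = [(r, s) for r in RANKS for s in SUITS]

def is_straight_flush(hand):
    ranks = sorted(r for r, _ in hand)
    suits = {s for _, s in hand}
    if len(suits) != 1:
        return False
    if ranks == [2, 3, 4, 5, 14]:  # ace-low straight (the "wheel")
        return True
    return all(b - a == 1 for a, b in zip(ranks, ranks[1:]))

def estimate(n_hands, seed=0):
    """Deal n_hands random 5-card hands and count the straight flushes."""
    rng = random.Random(seed)
    hits = sum(is_straight_flush(rng.sample(DECK, 5)) for _ in range(n_hands))
    return hits / n_hands

print(estimate(200_000))
```

With only a few hundred thousand hands the estimate is noisy; the point is that dealing and counting replaces the combinatorics entirely.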
<p>In my simulation, I draw 10,000,000 racks, each with 9 tiles
(representing the 7 letters the player actually draws plus two tiles
available to be played through to form longer words). I perform the
calculation two different ways: once with a 98-tile pool with no blanks,
and once with a 100-tile pool that does include blanks. In the latter
case, I make sure to not count the blanks used to stand in for different
letters as instances of those letters appearing in the game. The results
are summarized in the table below.</p>
<p>
<img src="/static/images/scrabble/scrabble_tiles_table.jpg" width="80%" alt="Scrabble Tiles" style="float:center">
</p>
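<p>A stripped-down version of this simulation can be sketched in a few lines of Python. To be clear, this is not the code used to produce the table above; the word list here is a tiny toy stand-in for the full Scrabble™ dictionary, and only the 98-tile no-blank distribution is used. It just shows the shape of the Monte Carlo: draw a rack, find which words fit, and credit every letter of each playable word.</p>

```python
import random
from collections import Counter

# English Scrabble tile distribution, blanks excluded (98 tiles).
TILES = Counter({'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3,
                 'H': 2, 'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6,
                 'O': 8, 'P': 2, 'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4,
                 'V': 2, 'W': 2, 'X': 1, 'Y': 2, 'Z': 1})
BAG = list(TILES.elements())

# Toy stand-in for the Scrabble dictionary -- swap in the real word list.
WORDS = ["AE", "BE", "HE", "WE", "QI", "JO", "ZA", "OX", "UM", "NO", "ON",
         "AN", "IT", "IS", "ARE", "EAR", "RATE", "TEASE", "LINER"]

def fits(word, rack_count):
    need = Counter(word)
    return all(need[c] <= rack_count[c] for c in need)

def usefulness(n_racks, rack_size=9, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_racks):
        rack = Counter(rng.sample(BAG, rack_size))
        for word in WORDS:
            if fits(word, rack):
                counts.update(word)  # credit every letter in a playable word
    return counts

scores = usefulness(2000)
# Common, flexible letters (E) should dwarf hard-to-use ones (Q).
assert scores['E'] > scores['Q']
```

Scaling this up means swapping in the real dictionary, drawing millions of racks, and (optionally) handling blanks as described in the text.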
<p>There are two key observations to be made here. First, it does not seem
to matter whether or not there are blanks in the bag! The results are
very similar in both cases. Second, it would be completely reasonable to
keep the tile point values as they are. Only the Z, H, and U appear out
of order. It’s only if one looks very carefully at the differences
between the usefulness of these different tiles that one might
reasonably justify re-pointing the different letters.</p>
<p>For fun, I have included in the table my own suggestions for what these
tiles’ values might be changed to based on the simulation results.
(<strong>Note</strong>: here’s where any pretensions of scientific rigor go out the
window.) I have kept the scale of points between 1 and 10, as in the
current pointing system. I have assigned groups of letters the same
number of points based on whether they have a similar usefulness score.
Here are the significant changes: L and U, which are significantly less
useful than the other 1-point tiles may be bumped up to 2 points,
comparable to the D and G. The letter V is clearly less useful than any
of the other three 4-point tiles (W, Y, and F, all of which may be used
to form 2-letter words while the V forms no 2-letter words), and so is
undervalued. The H is comparable to the 3-point tiles, and so is
currently overvalued. Similarly, the Z is overvalued when one considers
how close to the J it is. Unlike in the previous two articles that I
mentioned, I don’t find any strong reason to change the value of the
letter X compared to the other 8 point tiles. I suppose one could lower
its value from 8 points to 7, but I have (somewhat arbitrarily) chosen
not to do so.</p>
<p>One may also ask whether or not the fact that a letter
forms 2- or 3-letter words is unfairly biasing that letter. In
particular, is the low usefulness of the C and V compared to
comparably-pointed tiles due to the fact that they form no 2-letter
words? Performing the simulation again without 2-letter words, I found
no changes in the results in any of the letters except for C, which
increased in usefulness above the B and the H. The letter V’s ranking,
however, did not change at all, indicating that unlike the C the V is
difficult to use even when combining with letters to make longer words.
Repeating the simulation yet again without 2- or 3-letter words yielded
the same results.</p>
<p>As a final note, I would like to respond directly to Stefan Fatsis’s
<a href="http://www.slate.com/articles/sports/gaming/2013/01/scrabble_tile_values_why_it_s_a_mistake_to_change_the_point_value_of_the.single.html">excellent article</a>
about the so-called controversy surrounding re-calculating tile values
and say that I am fully aware that this is indeed a “statistical
exercise,” motivated mostly by my desire to do the calculation made by
others in a way that made sense in the context of the game of Scrabble.
Similarly, I realize that these recommendations are unlikely to actually
change anything. Given that the original points values of the tiles are
still justifiably appropriate by my analysis, it’s not like anybody at
Hasbro is going to jump to “fix” the game. Lastly, my calculations have
nothing to do with the strategy of the game whatsoever, and cannot be
used to learn how to play the game any better. (If anything, I’ve only
confirmed some things that many experienced Scrabble players already
know about the game, such as that the V is a tricky tile, or that the H,
X, and Z tiles, in spite of their high point values, are quite flexible.)</p>
<hr />
<p><strong>Notes</strong></p>
<p><a id="fnote1"></a>
1. <a href="#note1">^</a> To state my own credentials, I have played Scrabble™ competitively for
4 years, and am quite familiar with the mechanics of the game, as well
as contemporary strategy.</p>
<p><a id="fnote2"></a>
2. <a href="#note2">^</a> Credit where credit is due: Alemi provided the code used to
generate the list of available words given any set of tiles. Thanks Alemi!</p>
<p><a id="fnote3"></a>
3. <a href="#note3">^</a> Monte Carlo has a long history of being used to estimate the
properties of games. As recounted by George Dyson in <em>Turing’s
Cathedral</em>, in 1948 while at Los Alamos the mathematician Stanislaw Ulam
suffered a severe bout of encephalitis that resulted in an emergency
trepanation. While recovering in the hospital, he played many games of
solitaire and was intrigued by the question of how to calculate the
probability that a given deal could result in a winnable game. The
combinatorics required to answer this question proved staggeringly
complex, so Ulam proposed the idea of generating many possible solitaire
deals and merely counting how many of them resulted in victory. This
proved to be much simpler than an explicit calculation, and the rest is
history: Monte Carlo is used today in a wide variety of applications.</p>
<hr />
<p><strong>Additional References:</strong></p>
<p>The photo of a Scrabble™ board at the top was taken during the 2012 National
Scrabble™ Championship. Check out the 9-letter double-blank <span class="caps">BINOCULAR</span>.</p>
<p>For anyone interested in learning more about the fascinating world of
competitive Scrabble™, check out <em>Word Freak</em>, also by Stefan Fatsis.
This book has become more or less the definitive documentation of this
subculture. If you don’t have enough time to read, check out <a href="http://en.wikipedia.org/wiki/Word_Wars">Word
Wars</a>, a documentary that
follows many of the same people as Fatsis’s book. (It still may be
available streaming on Netflix if you hurry.)</p>The Skeleton Supporting Search Engine Ranking Systems2013-01-01T19:09:00-05:00DTCtag:thephysicsvirtuosi.com,2013-01-01:posts/the-skeleton-supporting-search-engine-ranking-systems.html<p>
<img src="/static/images/skeleton-search/Skeleton_image_1.jpg" width="100%" alt="octopus google" title="Octosearch!" style="float:center">
</p>
<p>A lot of the research I’m interested in relates to networks – measuring
the properties of networks and figuring out what those properties mean.
While doing some background reading, I stumbled upon some discussion of
the algorithm that search engines use to rank search results. The
automatic ranking of the results that come up when you search for
something online is a great example of how understanding networks (in
this case, the World Wide Web) can be used to turn a very complicated
problem into something simple.</p>
<p>Ranking search results relies on the assumption that there is some
underlying pattern to how information is organized on the <span class="caps">WWW</span>: there are
a few core websites containing the bulk of the sought-after information
surrounded by a group of peripheral websites that reference the core.
Recognizing that the <span class="caps">WWW</span> is a network representation of how information
is organized and using the properties of the network to detect where
that information is centered are the key components to figuring out what
websites belong at the top of the search page.</p>
<p>Suppose you look something up on Google (looking for YouTube videos of
your favorite band, <a href="http://thephysicsvirtuosi.com/author/corky.html">looking for edifying science
writing</a>, tips on octopus pet care,
etc): the search service returns a whole spate of results. Usually, the
pages that Google recommends first end up being the most useful. How on
earth does the search engine get it right?</p>
<p>First I’ll tell you exactly how Google does <em>not</em> work. When you type in
something into the search bar and hit enter, a message is <em>not</em> sent to
a guy who works for Google about your query. That guy does <em>not</em> then
look up all of the websites matching your search, does not visit each
website to figure out which ones are most relevant to you, and does
<em>not</em> rank the pages accordingly before sending a ranked list back to
you. That would be a very silly way to make a search engine work! It
relies on an individual human ranking the search results by hand with
each search that’s made. Maybe we can get around having to hire
thousands of people by finding a clever way to automate this process.</p>
<p>So here’s how a search engine <em>does</em> work. Search engines use robots
that crawl around the World Wide Web (sometimes these robots are
referred to as “spiders”) finding websites, cataloguing key words that
appear on those webpages, and keeping track of all the other sites that
link into or away from them. The search engine then stores all of these
websites and lists of their keywords and neighbors in a big database.</p>
<p>Knowing which websites contain which keywords allows a search engine to
return a list of websites matching a particular search. But simply
knowing which websites contain which keywords is not enough to know how
to order the websites according to their relevance or importance.
Suppose I type “octopus pet care” into Google. The search yields 413,000
results, far too many for me to comb through at random looking for the
web pages that best describe what I’m interested in.</p>
<p>Knowing the ways that different websites connect to one another through
hyperlinks is the key to how search engine rankings work. Thinking of a
collection of websites as an ordinary list doesn’t say anything about
how those websites relate to one another. It is more useful to think of
the collection of websites as a network, where each website is a node
and each hyperlink between two pages is a directed edge in the network.
In a way, these networks are maps that can show us how to get from one
website to another by clicking through links.</p>
<p>Here is an example of what a network visualization of a website map of a
large portion of the <span class="caps">WWW</span> looks like. (Original full-size image
<a href="http://upload.wikimedia.org/wikipedia/commons/d/d2/Internet_map_1024.jpg">here</a>.)</p>
<p>
<img src="/static/images/skeleton-search/Internet_map_1024.jpg" width="100%" alt="internet map" style="float:center">
</p>
<p>Here is a site map for a group of websites that connect to the main page
of English Wikipedia. (Original image from
<a href="http://en.wikipedia.org/wiki/Site_map">here</a>.) This smaller site map is
closer to the type of site map used when making a search using a search engine.</p>
<p>
<img src="/static/images/skeleton-search/Main_Page_Usability.png" width="100%" alt="internet map" style="float:center">
</p>
<p>So, how does knowing the underlying network of the search results help
one to find the best website on octopus care (or any other topic)? The
search engine assumes that behind the seemingly random, hodgepodge
collection of files on the <span class="caps">WWW</span>, there is some organization in the way
they connect to one another. Specifically, the search engine assumes
that finding the websites most central to the network of search results
is the same as finding the search results with the best information.
Think of a well-known, trusted source of information, like the New York
Times. The <span class="caps">NY</span> Times website will have many other websites referencing it
by linking to it. In addition, the <span class="caps">NY</span> Times website, being a trusted
news source, is likely to refer to the best references for other sources
that it wants to refer to, such as Reuters. High-quality references will
also probably have many incoming links from websites that cite them. So
not only does a website like the <span class="caps">NY</span> Times sit at the center of many
other websites that link to it, but it also frequently connects to other
websites that themselves are at the center of many other websites. It is
these most central websites that are probably the best ones to look at
when searching for information.</p>
<p>When I search for “octopus pet care” using Google I am necessarily
assuming that the search results are organized according to this
core-periphery structure, with a group of important core websites
central to the network surrounded by many less important peripheral
websites that link to the core nodes. The core websites may also connect
to one another. There may also be websites disconnected from the rest,
but these will probably be less important to the search simply because
of the disconnection. Armed with the knowledge of the connections
between the different relevant websites and the core-periphery network
structure assumption, we may now actually find which of the websites are
most central to the network (in the core), and therefore determine which
websites to rank highly.</p>
<p><a id="note1"></a>
Let’s begin by assigning a quantitative “centrality” score to each of
the nodes (websites) in the network, initially guessing that all of the
search results are equally important. (This, of course, is probably not
true. It’s just an initial guess.) Each node then transfers all of its
centrality score to its neighbors, dividing it evenly between
them<a href="#fnote1"><sup>[1]</sup></a>.
(Starting with a centrality score of 1 with three neighbors, each of
those neighbors receives 1/3.) Each node also receives some centrality
from each neighbor that links in to it. Following this first step, we
find that nodes with many incoming edges will have higher centrality
than nodes with few incoming edges. We can repeat this process of
dividing and transferring centrality again. Nodes with many incoming
links will have more centrality to share with their neighbors, and nodes
linked to by such well-endowed nodes will themselves also receive more centrality.</p>
<p>After repeating this process many times, we begin to see a difference
between which nodes have the highest centrality scores: nodes with high
centrality are the ones that have many incoming links, or have links to
other central nodes, or both. This algorithm therefore differentiates
between the periphery and the core of the network. Core nodes receive
lots of centrality because they link to one another and because they
have lots of incoming links from the periphery. Peripheral nodes have
fewer incoming links and so receive less centrality than the nodes in
the core. Knowing the centrality scores of search results on the <span class="caps">WWW</span>
makes it pretty straightforward for us to quantitatively rank which of
those websites belongs at the top of the list.</p>
<p>Of course, there are more complex ways that one can add to and improve
this procedure. Google’s algorithm PageRank (named for founder Larry
Page, not because it is used to rank web pages) and the <span class="caps">HITS</span> algorithm
developed at Cornell are two examples of more advanced ways of ranking
search engine results. We can go even further: a search engine can keep
track of the links that users follow whenever a particular search is
made. (This is almost like the company hiring someone to put the
most sought-after web pages in order whenever a search is made, except
that the company lets its users do it for free.) Over time, search engines
can improve their methods for helping us find what we need by learning
directly from the way users themselves prioritize which search results
they pursue. These ranking systems may differ in their details, but
all of them depend on understanding the list of search results within
the context of a network.</p>
<hr />
<p><strong>Notes</strong></p>
<p><a id="fnote1"></a>
1. <a href="#note1">^</a> It&#8217;s not always all of it: in some variations, nodes
transfer only a fraction of their centrality score at each step.</p>
<p><strong>Sources (and further reading)</strong></p>
<p>I deliberately included no mathematics in
this post, simply because I cannot explain the mathematics behind these
algorithms and their convergence properties better than my sources can.
For those of you who want to see the mathematical side of the argument
for yourselves (which involves treating the network adjacency matrix as
a Markov process and finding its nontrivial steady state eigenvector),
do consult the following two textbooks:</p>
<p>Easley, David, and Jon Kleinberg. <em>Networks, Crowds, and Markets:
Reasoning about a Highly Connected World</em>. Cambridge University Press,
2010 (<a href="http://www.cs.cornell.edu/home/kleinber/networks-book/networks-book-ch14.pdf">Chapter 14</a>
in particular) </p>
<p>Newman, Mark. <em>Networks: An Introduction</em>. Oxford
University Press, 2010 (Chapter 7 in particular)</p>
<p>A popular book on the early development of network science that contains
a lot of information on the structure of the <span class="caps">WWW</span>:</p>
<p>Barabasi, Albert-Laszlo. <em>Linked: How Everything is Connected to
Everything Else and What It Means</em>. Plume, 2003.</p>
<p>A book on the history of modern computing that contains an interesting
passage on how search engines learn adaptively from their users, and so
deserves a shout-out in this blog post:</p>
<p>Dyson, George. <em>Turing’s Cathedral</em>. Pantheon, 2012.</p>When will the Earth fall into the Sun?2012-11-29T22:58:00-05:00Briantag:thephysicsvirtuosi.com,2012-11-29:posts/when-will-the-earth-fall-into-the-sun-.html<p style="float: center;">
<figure>
<img src="/static/images/earth-fall-sun/BrianWastesHisTime.png" alt="the sun giveth, the sun taketh away" width="50%">
<figcaption>The time I spent making this poster could have been spent doing research</figcaption>
</figure>
</p>
<p>Since December 2012 is coming up, I thought I’d help the Mayans out with
a look at a possible end of the world scenario. (I know, it’s not Earth
Day yet, but we at the Virtuosi can only go so long without fantasizing
about wanton destruction.) As the Earth zips around the Sun, it moves
through the <a href="http://en.wikipedia.org/wiki/Heliosphere">heliosphere</a>,
which is a collection of charged particles emitted by the Sun. Like any
other fluid, this will exert drag on the Earth, slowly causing it to
spiral into the Sun. Eventually, it will burn in a blaze of glory, in a
bad-action-movie combination of Sunshine meets Armageddon. Before I get
started, let me preface this by saying that I have no idea what the hell
I’m talking about. But, in the spirit of being an arrogant physicist,
I’m going to go ahead and make some back-of-the-envelope calculations,
and expect that this post will be accurate to within a few orders of
magnitude. Well, how long will the Earth rotate around the Sun before
drag from the heliosphere stops it? This seems like a problem for fluid
dynamics. How do we calculate what the drag is on the Earth? Rather than
solve the fluid dynamics equations, let’s make some arguments based on
dimensional analysis. What can the drag of the Earth depend on? It
certainly depends on the speed of the Earth v — if an object isn’t
moving, there can’t be any drag. We also expect that a wider object
feels more drag, so the drag force should depend on the radius of the
Earth R. Finally, the density of the heliosphere &#961; might have something to
do with it. If we fudge around with these, we see that there is only one
combination that gives units of force: </p>
<p><mathjax>$$ F_{drag} \sim \rho v^2 R^2 $$</mathjax> </p>
<p>Now that we have the force, the energy dissipated from the Earth
to the heliosphere after moving a distance <mathjax>$d$</mathjax> is <mathjax>$E_\textrm{lost} = F\times d$</mathjax>. If
the Earth moves with speed v for time t, then we can write
<mathjax>$E_\textrm{lost} = F v t$</mathjax>. So we can get an idea of the time scale over which the Earth
starts to fall into the Sun by taking </p>
<p><mathjax>$E_\textrm{lost} = E_\textrm{Earth} \sim 1/2 M_\textrm{Earth} v^2$</mathjax>.
Rearranging and dropping factors of 1/2 gives </p>
<p><mathjax>$$ T_\textrm{Earth burns} \sim M_{Earth} v^2 / (F_{drag}\times v) \\
\qquad \sim M_{Earth} / (\rho R^2 v) $$</mathjax> </p>
<p>Using the velocity of the Earth as <mathjax>$2\pi \times 1 \mbox{Astronomical unit/year}$</mathjax>,
Googlin’ for some numbers, and taking the
<a href="http://web.mit.edu/space/www/helio.review/axford.suess.html">density of the heliosphere</a>
to be <mathjax>$10^{-23}$</mathjax> g/cc we get… </p>
<p><mathjax>$$ T \approx 10^{19} \textrm{ years} $$</mathjax></p>
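For anyone who wants to check the arithmetic, here is the estimate in a few lines of Python. The inputs are the same rough, Googled numbers used above, so trust the output to an order of magnitude only:

```python
import math

# Rough inputs (SI units) -- ballpark values, good to an order of magnitude
M_earth = 6.0e24              # mass of the Earth, kg
R_earth = 6.4e6               # radius of the Earth, m
rho = 1e-23 * 1e3             # heliosphere density: 1e-23 g/cc -> 1e-20 kg/m^3
AU = 1.5e11                   # astronomical unit, m
year = 3.15e7                 # seconds per year
v = 2 * math.pi * AU / year   # orbital speed, about 3e4 m/s

# T ~ M / (rho R^2 v), from the dimensional-analysis drag force above
T_years = M_earth / (rho * R_earth**2 * v) / year
print(f"T ~ {T_years:.0e} years")
```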
<p>Looks like this won’t be the cause of the Mayan apocalypse. (By comparison, the
<a href="http://en.wikipedia.org/wiki/Sun#Life_cycle">Sun will burn out</a>
after only <mathjax>$\sim10^9$</mathjax> years.)</p>Creating an Earth2012-10-27T19:07:00-04:00Briantag:thephysicsvirtuosi.com,2012-10-27:posts/creating-an-earth.html<div style="float: center;">
<figure>
<img src="/static/images/creating-an-earth/116.png" alt="GAH!" width="50%">
<figcaption style="text-align: center;"><span class="caps">GAAAAAAAAH</span></figcaption>
</figure>
</div>
<p>A while ago I decided I wanted to create something that looks like the
surface of a planet, complete with continents <span class="amp">&</span> oceans and all. Since
I’ve only been on a small handful of planets, I decided that I’d
approximate this by creating something like the Earth on the computer
(without cheating and just copying the real Earth). Where should I
start? Well, let’s see what the facts we know about the Earth tell us
about how to create a new planet on the computer. </p>
<p><strong>Observation 1</strong>:
Looking at a map of the Earth, we only see the heights of the surface.
So let’s describe just the heights of the Earth’s surface. </p>
<p><strong>Observation 2</strong>:
The Earth is a sphere. So (wait for it) we need to describe the
height on a spherical surface. Now we can recast our problem of making
an Earth more precisely mathematically. We want to know the heights of
the planet&#8217;s surface at each point on the Earth. So we&#8217;re looking for a
field (the height of the planet) defined on the surface of a sphere (the
different spots on the planet). Just like a function on the real line
can be expanded in terms of its Fourier components, almost any function
on the surface of a sphere can be expanded as a sum of spherical
harmonics <mathjax>$Y_{lm}$</mathjax>. This means we can write the height <mathjax>$h$</mathjax> of our planet&#8217;s
surface as </p>
<p><mathjax>$$ h(\theta, \phi) = \sum_{l,m} A_{lm}Y_l^m(\theta, \phi) \quad (1) $$</mathjax> </p>
<p>If we figure out what the coefficients <mathjax>$A$</mathjax> of the sum should
be, then we can start making some Earths! Let’s see if we can use some
other facts about the Earth&#8217;s surface to get a handle on what
coefficients to use. </p>
<p><strong>Observation 3</strong>:
I don’t know every detail of the Earth’s surface, whose history
is impossibly complicated. I’ll capture
this lack-of-knowledge by describing the surface of our imaginary planet
as some sort of random variable. Equation (1) suggests that we can do
this by making the coefficients <mathjax>$A$</mathjax> random variables. At some point we
need to make an executive decision on what type of random variable we’ll
use. <a id="note1"></a>For various reasons,<a href="#fnote1"><sup>[1]</sup></a>
I decided I’d use a Gaussian
random variable with mean 0 and standard deviation <mathjax>$a_{lm}$</mathjax>: </p>
<p><mathjax>$$ A_{lm} = a_{lm} N(0,1) $$</mathjax> </p>
<p>(Here I’m using the notation that <mathjax>$N(m,v)$</mathjax> is a normal
or Gaussian random variable with mean <mathjax>$m$</mathjax> and variance <mathjax>$v$</mathjax>. If you
multiply a Gaussian random variable by a constant <mathjax>$a$</mathjax>, it’s the same as
multiplying the variance by <mathjax>$a^2$</mathjax>, so <mathjax>$a N(0,1)$</mathjax> and <mathjax>$N(0,a^2)$</mathjax> are
the same thing.) </p>
<p><strong>Observation 4</strong>:
The heights of the surface of the
Earth are more-or-less independent of their position on the Earth. In
keeping with this, I’ll try to use coefficients <mathjax>$a_{lm}$</mathjax> that will give me
a random field that is isotropic on average. This seems hard at
first, so let’s just make a hand-waving argument. Looking at some
<a href="http://en.wikipedia.org/wiki/Spherical_harmonics">pretty pictures</a>
of spherical harmonics, we can see that each spherical harmonic of degree <mathjax>$l$</mathjax>
has about <mathjax>$l$</mathjax> stripes on it, independent of <mathjax>$m$</mathjax>. <a id="note2"></a>
So let’s try using <mathjax>$a_{lm}$</mathjax>‘s
that depend only on <mathjax>$l$</mathjax>, and are constant if just
<mathjax>$m$</mathjax> changes<a href="#fnote2"><sup>[2]</sup></a>. Just for convenience,
we’ll pick this constant to be <mathjax>$l$</mathjax> to some power <mathjax>$-p$</mathjax>: </p>
<p><mathjax>$$ a_{lm} = l^{-p} \quad \textrm{ or} $$</mathjax></p>
<p><mathjax>$$ h(\theta, \phi) = \sum_{l,m} N_{lm}(0,1) l^{-p} Y_l^m(\theta, \phi) \quad (2) $$</mathjax> </p>
<p>At this point I got bored <span class="amp">&</span> decided to see what a
planet would look like if we didn’t know what value of <mathjax>$p$</mathjax> to use. So
below is a movie of a randomly generated “planet” with a fixed choice of
random numbers, but with the power <mathjax>$p$</mathjax> changing.</p>
<p><a id="note3"></a>
As the movie starts (<mathjax>$p=0$</mathjax>), we see random uncorrelated heights on the
surface.<a href="#fnote3"><sup>[3]</sup></a> As the movie continues and <mathjax>$p$</mathjax> increases, we see
the surface smooth out rapidly. Eventually, after <mathjax>$p=2$</mathjax> or so, the planet
becomes very smooth and doesn’t look at all like a planet. So the
“correct” value for p is somewhere above 0 (too bumpy) and below 2 (too
smooth). Can we use more observations about Earth to predict what a good
value of <mathjax>$p$</mathjax> should be? </p>
<p><strong>Observation 5</strong>:
The elevation of the Earth’s
surface exists everywhere on Earth (duh). So we’re going to need our sum
to exist. How the hell are we going to sum that series though! Not only
is it random, but it also depends on where we are on the planet! Rather
than try to evaluate that sum everywhere on the sphere, I decided that
it would be easiest to evaluate the sum at the “North Pole” at
<mathjax>$\theta=0$</mathjax>. Then, if we picked our coefficients right, this should be
statistically the same as any other point on the planet. Why do we want
to look at <mathjax>$\theta = 0$</mathjax>? Well, if we look back at the
<a href="http://en.wikipedia.org/wiki/Spherical_harmonics">wikipedia entry</a>
for spherical harmonics, we see that </p>
<p><mathjax>$$ Y_l^m = \sqrt{ \frac{2l +1}{4\pi}\frac{(l-m)!}{(l+m)!}} e^{im\phi}P^m_l(\cos\theta) \quad (3)$$</mathjax> </p>
<p>That doesn’t look too helpful — we’ve just picked up
another special function <mathjax>$P_l^m$</mathjax> that we need to worry about. But there is a
trick with these special functions <mathjax>$P_l^m$</mathjax>: at <mathjax>$\theta = 0$</mathjax>, <mathjax>$P_l^m$</mathjax> is 0 if <mathjax>$m$</mathjax>
isn’t 0, and <mathjax>$P_l^0$</mathjax> is 1. So at <mathjax>$\theta = 0$</mathjax> this is simply: </p>
<p><mathjax>$$ Y_l^m(\theta = 0) = \bigg \{ ^{\sqrt{(2l+1)/4\pi},\,m=0}_{0,\,m \ne 0} $$</mathjax> </p>
<p>Now we just have, from every equation we’ve written down: </p>
<p><mathjax>$$ h(\theta = 0) = \sum_l l^{-p} \times \sqrt{(2l+1)/4\pi }\times N(0,1) $$</mathjax></p>
<p><mathjax>$$ \quad \qquad = \frac{1}{\sqrt{4\pi}} \times \sum_l N(0,l^{-2p}(2l+1)) $$</mathjax></p>
<p><mathjax>$$ \quad \qquad = \frac{1}{\sqrt{4\pi}} \times N(0,\sum_l l^{-2p}(2l+1) ) $$</mathjax> </p>
<p><mathjax>$$ \quad \qquad = \frac{1}{\sqrt{4\pi}} \sqrt{\sum_l l^{-2p}(2l+1)} \times N(0,1) $$</mathjax> </p>
<p><mathjax>$$ \quad \qquad \sim \sqrt{\sum_l l^{-2p+1}} N(0,1) \qquad (4) $$</mathjax> </p>
<p>So for the surface of our imaginary planet to exist, we had better have that sum
converge, or <mathjax>$-2p+1 < -1 ~ (p > 1)$</mathjax>. And we’ve also learned something
else!!! Our model always gives back a Gaussian height distribution on
the surface. Changing the coefficients changes the variance of
distribution of heights, but that’s all it does to the distribution.
Evidently if we want to get a non-Gaussian distribution of heights, we’d
need to stretch our surface after evaluating the sum. Well, what does
the height distribution look like from my simulated planets? Just for
the hell of it, I went ahead and generated <mathjax>${\sim}400$</mathjax> independent surfaces at
<mathjax>${\sim}40$</mathjax> different values for the exponent <mathjax>$p$</mathjax>, looking at the first 22,499
terms in the series. From these surfaces I reconstructed the measured
distributions; I’ve combined them into a movie which you can see below.</p>
<p>As you can see from the movie, the distributions look like Gaussians.
The fits from Eq. (4) are overlaid in black dotted lines. (Since I can’t
sum an infinite number of spherical harmonics with a computer, I’ve
plotted the fit I’d expect from just the terms I’ve summed.) As you can
see, they are all close to Gaussians. Not bad. Let’s see what else we
can get. </p>
<p><strong>Observation 6</strong>:
According to some famous people, the Earth’s surface is
<a href="http://en.wikipedia.org/wiki/How_Long_Is_the_Coast_of_Britain%3F_Statistical_Self-Similarity_and_Fractional_Dimension">probably a fractal</a>
whose coastlines are non-differentiable.
This means that we want a value of <mathjax>$p$</mathjax> that will make our surface rough
enough so that its gradient doesn’t exist (the derivative of the sum of
Eq. (2) doesn’t converge). At this point I’m getting bored with writing
out calculations, so I’m just going to make some scaling arguments. From
Eq. (3), we know that each of the spherical harmonics <mathjax>$Y_l^m$</mathjax> is related to
a polynomial of degree <mathjax>$l$</mathjax> in <mathjax>$\cos \theta$</mathjax>. So if we take a derivative, I’d
expect us to pick up another factor of <mathjax>$l$</mathjax> each time. Following through
all the steps of Eq. (4) we find </p>
<p><mathjax>$$ \vec{\nabla}h \sim \sqrt{\sum_l l^{-2p+3}}\vec{N}(0,1) \quad , $$</mathjax> </p>
<p>which converges for <mathjax>$p > 2$</mathjax>. So for our planet to be “fractal,” we want <mathjax>$1<p<2$</mathjax>.
Looking at the first movie, this seems reasonable. </p>
<p><strong>Observation 7</strong>:
70% of the Earth’s surface is under water. On Earth, we can think of the points
underwater as all those points below a certain threshold height. So
let’s threshold the heights on our sphere. If we want 70% of our
generated planet’s surface to be under water, Eq (4) and the
<a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cumulative distribution function</a>
of a
<a href="http://en.wikipedia.org/wiki/Normal_distribution">Gaussian distribution</a>
tells us that we want to pick a critical height <mathjax>$H$</mathjax> such that </p>
<p><mathjax>$$ \frac{1}{2} \left[ 1 + \textrm{erf}(H / \sqrt{2\sigma^2}) \right] = 0.7 \quad \textrm{or} $$</mathjax> </p>
<p><mathjax>$$ H = \sqrt{2\sigma^2}\textrm{erf}^{-1}(0.4) $$</mathjax> </p>
<p><mathjax>$$\sigma^2 = \frac 1 {4\pi} \sum_l l^{-2p}(2l+1) \quad (5)\, , $$</mathjax></p>
<p>where <mathjax>$\textrm{erf}()$</mathjax> is a special function called the error function,
and <mathjax>$\textrm{erf}^{-1}$</mathjax> is its inverse. We can evaluate these numerically (or by using some
<a href="http://en.wikipedia.org/wiki/Error_Function#Asymptotic_expansion">dirty tricks</a>
if we’re feeling especially masochistic). So for our generated planet,
let’s call all the points with a height larger than <mathjax>$H$</mathjax> “land,” and all
the points with a height less than <mathjax>$H$</mathjax> “ocean.” Here is what it looks like
for a planet with <mathjax>$p=0$</mathjax>, <mathjax>$p=1$</mathjax>, and <mathjax>$p=2$</mathjax>, plotted with the same
<a href="http://en.wikipedia.org/wiki/Sinusoidal_projection">Sanson projection</a>
as before.</p>
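Numerically, the sea level <mathjax>$H$</mathjax> of Eq. (5) takes only a couple of lines, using scipy&#8217;s built-in inverse error function (a sketch; the cutoff `lmax=150` mirrors the series truncation used elsewhere in the post):

```python
import numpy as np
from scipy.special import erfinv

def sea_level(p, lmax=150, water_fraction=0.7):
    """Threshold height H (Eq. 5) putting `water_fraction` of the surface underwater."""
    l = np.arange(1, lmax + 1)
    sigma2 = (l ** (-2.0 * p) * (2 * l + 1)).sum() / (4 * np.pi)  # variance of h
    # Solve (1/2) [1 + erf(H / sqrt(2 sigma^2))] = water_fraction for H
    return np.sqrt(2 * sigma2) * erfinv(2 * water_fraction - 1)

print(sea_level(1.3))  # positive: most of the surface sits below sea level
```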
<p>
<img src="/static/images/creating-an-earth/allContinents.png" width="50%" alt="allContinents" align="center">
</p>
<p><sub>
Top to bottom: p=0, p=1, and p=2. I’ve colored all the “water” (positions with heights < <mathjax>$H$</mathjax> as given in Eq. (5) ) blue and all the land (heights > <mathjax>$H$</mathjax>) green.
</sub></p>
<p>You can see that the total amount of land area is roughly constant
among the three images, but we haven’t fixed how it’s distributed.
Looking at the map above for <mathjax>$p=0$</mathjax>, there are lots of small “islands”
but no large contiguous land masses. For <mathjax>$p=2$</mathjax>, we see only one
contiguous land mass (plus one 5-pixel island), and <mathjax>$p=1$</mathjax> sits somewhere
in between the two extremes. None of these look like the Earth, where
there are a few large landmasses but many small islands. From our
previous arguments, we’d expect something between <mathjax>$p=1$</mathjax> and <mathjax>$p=2$</mathjax> to look
like the Earth, which is in line with the above picture. But how do we
decide which value of p to use? </p>
<p><strong>Observation 8</strong>:
The Earth has 7 continents. This one is more vague than the others, but I think it&#8217;s the
coolest of all the arguments. How do we compare our generated planets to
the Earth? The Earth has 7 continents that comprise 4 different
contiguous landmasses. In order, these are 1) Europe-Asia-Africa, 2)
North and South America, 3) Antarctica, and 4) Australia, with
Greenland barely missing out as a 5th. In terms of fractions of the Earth&#8217;s
surface, Google tells us that Australia covers 0.15% of the Earth’s
total surface area, and Greenland covers 0.04%. So let’s define a
“continent” as any contiguous landmass that accounts for more than 0.1%
of the planet’s total area. Then we can ask: What value of <em>p</em> gives us
a planet with 4 continents? I have no idea how to calculate exactly what
that number would be from our model, but I can certainly measure it from
the simulated results. I went ahead and counted the number of continents
in the generated planets.</p>
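My actual continent-counting code isn&#8217;t shown, but the idea can be sketched with scipy&#8217;s connected-component labeling. This flat-map version ignores the projection&#8217;s area distortion and the wrap-around at the map edges, which a proper sphere-aware count would handle:

```python
import numpy as np
from scipy import ndimage

def count_continents(height, sea_level, min_fraction=0.001):
    """Count contiguous landmasses covering more than 0.1% of the map."""
    land = height > sea_level
    labels, n = ndimage.label(land)                     # connected-component labeling
    sizes = ndimage.sum(land, labels, range(1, n + 1))  # pixels in each landmass
    return int((sizes > min_fraction * land.size).sum())

# A toy map: two continent-sized landmasses and one tiny sub-threshold island
height = np.zeros((100, 100))
height[10:40, 10:40] = 1    # 900 pixels: a continent
height[50:60, 70:80] = 1    # 100 pixels: still above the 10-pixel cutoff
height[80, 80] = 1          # 1 pixel: too small to count
print(count_continents(height, sea_level=0.5))
```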
<p>
<img src="/static/images/creating-an-earth/numContinents.png" width="50%" alt="allContinents" align="center">
</p>
<p>The results are plotted above. The solid red line is the median values
of the number of continents, as measured over 400 distinct worlds at 40
different values of <mathjax>$p$</mathjax>. The red shaded region around it is the band
containing the upper and lower quartiles of the number of continents.
For comparison, in black on the right y-axis I’ve also plotted the log
of the total number of landmasses at the resolution I’ve used. The
number of continents has a resonant value of <mathjax>$p$</mathjax> — if <mathjax>$p$</mathjax> is too small,
then there are many landmasses, but none are big enough to be
continents. Conversely, if <mathjax>$p$</mathjax> is too large, then there is only one huge
landmass. Somewhere in the middle, around <mathjax>$p=0.5$</mathjax>, there are about 20
continents, at least when only the first <mathjax>${\sim}23000$</mathjax> terms in the series are
summed. Looking at the curve, we see that there are roughly two places
where there are 4 continents in the world — at <mathjax>$p=0.1$</mathjax> and at <mathjax>$p=1.3$</mathjax>.
Since the series doesn&#8217;t converge for <mathjax>$p=0.1$</mathjax>, and since a <mathjax>$p=0.1$</mathjax> world has way too many
landmasses anyway, it looks like a generated Earth will look best if we use
a value of <mathjax>$p=1.3$</mathjax>. And that&#8217;s it. <a id='note4'></a>
For your viewing pleasure, here is a video of three of these planets below,
complete with water, continents, and mountains.<a href="#fnote4"><sup>[4]</sup></a></p>
<hr />
<p><strong>Notes</strong></p>
<p><a id="fnote1"></a>
1. <a href="#note1">^</a> Since I wanted a random surface, I wanted to make the mean of each
coefficient 0. Otherwise we’d get a deterministic part of our surface
heights. I picked a distribution that’s symmetric about 0 because on
Earth the bottom of the oceans seem roughly similar in terms of changes
in elevation. I wanted to pick a stable distribution <span class="amp">&</span> independent
coefficients because it makes the sums that come up easier to evaluate.
Finally, I picked a Gaussian, as opposed to another stable distribution
like a Lorentzian, because the tallest points on Earth are finite, and I
wanted the variance of the planet’s height to be defined.</p>
<p><a id="fnote2"></a>
2. <a href="#note2">^</a> We could make this rigorous by showing that a rotated spherical
harmonic is orthogonal to other spherical harmonics of a different
degree <mathjax>$l$</mathjax>, but you don’t want to see me try.</p>
<p><a id="fnote3"></a>
3. <a href="#note3">^</a> Actually <mathjax>$p=0$</mathjax> should correspond to completely uncorrelated
delta-function noise. (You can convince yourself by looking at the
spherical harmonic expansion for a delta-function.) The reason that the
bumps have a finite width is that I only summed the first 22,499 terms
in the series (<mathjax>$l=150$</mathjax> and below). So the size of the bumps gives a rough
idea of my resolution.</p>
<p><a id="fnote4"></a>
4. <a href="#note4">^</a> For those of you keeping score at home, it took me more than 6 days
to figure out how to make these planets.</p>A Curious Footprint2012-08-04T04:56:00-04:00Corkytag:thephysicsvirtuosi.com,2012-08-04:posts/a-curious-footprint.html<figure style="float:center; margin:0px 0px 10px 0px" width=50%>
<img src="/static/images/a-curious-footprint/msl_laser.jpg" alt="msl laser" />
<figcaption> Lasers! <i>Credit: <span class="caps">JPL</span>/Caltech</i> </figcaption>
</figure>
<p>In less than two days, <span class="caps">NASA</span>’s Mars Science Laboratory (<span class="caps">MSL</span>) / <em>Curiosity</em>
rover will begin its harrowing descent to the Martian
surface. If everything goes according to the
kind-of-crazy-what-the-heck-is-a-sky-crane plan, this process will be
referred to as “landing” (otherwise, more crashy/explodey gerunds will
no doubt be used). The <span class="caps">MSL</span> mission is run through <span class="caps">NASA</span>’s Jet Propulsion
Laboratory where, by coincidence, I happen to be at the moment. Now, I’m
not working on this project, so I don’t have a lot to add that
<a href="http://mars.jpl.nasa.gov/msl/index.cfm">isn’t</a>
<a href="http://blogs.discovermagazine.com/badastronomy/2012/08/02/curiositys-chem-lab-on-mars/">available</a>
<a href="http://scienceblogs.com/startswithabang/2012/07/20/43-years-later-were-seven-minutes-away-from-a-second-great-step-forward/">elsewhere</a>.
<span class="caps">BUT</span> I do feel an authority-by-proximity kind of fallacy kicking in, so
how about a post why not?</p>
<h3>Preliminaries</h3>
<p>Before we get started, I feel obligated to link to <span class="caps">NASA</span>’s
<em><a href="http://www.youtube.com/watch?v=Ki_Af_o9Q9s">Seven Minutes of Terror</a></em>
video. If you haven’t seen it yet, I highly recommend watching it right now (my
favorite part is the subtitles). It has over a million views on YouTube
and seems to have done a pretty good job at generating interest in the
mission. Although, it’s a shame they had to interview the first guy in
what appears to be a police interrogation room. Oh well.</p>
<h3>About the Rover</h3>
<p>This thing is <em>big</em>. It’s the size of a car and is jam-packed with
<a href="http://mars.jpl.nasa.gov/msl/mission/instruments/">scientific equipment</a>.
There’s a couple different spectrometers, a bunch of cameras, a drill for
collecting rock samples, and radiation detectors. Probably the coolest
instrument onboard <em>Curiosity</em> is called the ChemCam. The ChemCam uses a
laser to vaporize small regions of rock, which allows it to study the
composition of things about 20 feet away.</p>
<p>In addition to the scientific payload, <em>Curiosity</em> also needs some way
to generate power. Previous rovers had been powered by solar panels, but
there don’t appear to be any here. Instead, <em>Curiosity</em> is
<a href="http://www.ne.doe.gov/pdfFiles/MMRTG_Jan2008.pdf">powered</a>
by the heat released from the radioactive decay of about 10 pounds of plutonium
dioxide. This source will power the rover for
<strike>about a Martian year</strike>
well beyond the currently planned mission duration of one Martian year
(about 687 Earth days) [Thanks to Nathan in the comments for pointing
this out!].</p>
<p>To summarize, the rover is a nuclear-powered lab-on-wheels that
<em>shoots lasers out of its head</em>. This is pretty cool.</p>
<figure style="width:70%">
<img src="/static/images/a-curious-footprint/msl2.jpg" width="100%" alt="msl laser 2" align="center" style="margin:0px 0px 0px 0px">
<figcaption>In non-<span class="caps">SI</span> units, the <span class="caps">MSL</span> is roughly one handsome man (1 hm) tall</figcaption>
</figure>
<h3>A Curious Footprint</h3>
<p>There’s been a lot of preparation at <span class="caps">JPL</span> this week for the upcoming
landing. All the shiny rover models have been taken out of the visitor’s
center and put in a tent outside, presumably so there will be a pretty
backdrop for press reports and the like.</p>
<p>Anyway, I was out taking pictures of the rovers at the end of the day
today when someone pointed out something cool about the tires on
<em>Curiosity</em>.</p>
<p>Here’s a close-up:</p>
<figure style="width:70%">
<img src="/static/images/a-curious-footprint/MSL_tire.jpg" width="100%" alt="msl tire" align="center" style="margin:0px 0px 0px 0px">
<figcaption> Hole-y Tires </figcaption>
</figure>
<p>Each tire on the rover has &#8220;<span class="caps">JPL</span>&#8221; punched out in
<a href="http://en.wikipedia.org/wiki/File:International_Morse_Code.svg">Morse code</a>!
Makes sense, though. If you’re going to spend $2.5 billion on something,
you might as well put your name on it.</p>
<h3>Watch the Landing</h3>
<p>If you want to watch the landing, check out the
<a href="http://www.nasa.gov/multimedia/nasatv/index.html"><span class="caps">NASA</span> <span class="caps">TV</span> stream</a>.
The landing is scheduled for Sunday night at 10:31 pm <span class="caps">PDT</span> (1:31 am <span class="caps">EDT</span>). Until then,
it looks like they are showing a lot of interviews and other cool
behind-the-scenes kind of stuff.</p>A Homemade Viscometer I2012-07-24T22:32:00-04:00Briantag:thephysicsvirtuosi.com,2012-07-24:posts/a-homemade-viscometer-i.html<p>Stirring a bowl of honey is much more difficult than stirring a bowl of
water. But why? The mass density of the honey is about the same as that
of water, so we aren’t moving more material. If we were to write out
Newton&#8217;s equation, <mathjax>$ma$</mathjax> would be about the same, yet we still need
to put in much more force. Why? And can we measure it? </p>
<p>The reason that
honey is harder to stir is of course that the drag on our spoon depends
on more than just the density of the fluid. The drag also depends on the
viscosity of the fluid — loosely speaking, how thick it is — and the
viscosity of honey is about 400 times that of water, depending on the
conditions. In fact, a quick perusal of the Wikipedia article on
<a href="http://en.wikipedia.org/wiki/Viscosity">viscosity</a>
shows that viscosities can vary by a fantastic amount — some 13 orders of
magnitude, from easy-to-move gases to
<a href="http://en.wikipedia.org/wiki/Pitch_drop_experiment">thick pitch</a>
that behaves like a solid except on
long time scales. The situation is even more complicated than this, as
<a href="http://en.wikipedia.org/wiki/Non-Newtonian_fluid">some fluids</a>
can have a viscosity that changes depending on the flow. I wanted to
find a way to measure the viscosities of the stuff around me, so I made
the <a href="http://en.wikipedia.org/wiki/Viscometer">viscometer</a> pictured below
for about $1.75 (the vending machines in Clark Hall are pretty expensive).</p>
<div style="float: center">
<figure>
<img src="/static/images/viscometer1/visc_fig1.jpg" alt="A PLOT???" width="40%">
<figcaption> My homemade viscometer, taking data on the viscosity of water. </figcaption>
</figure>
</div>
<p>To do this, I </p>
<ol>
<li>
<p>Enjoyed the crisp, refreshing taste of Diet Pepsi from a 20 oz
bottle (come on, sponsorships).</p>
</li>
<li>
<p>Cut the top and bottom off the bottle, so all that was left was a
straight tube.</p>
</li>
<li>
<p>Mounted the bottle on top of a small piece of flat plastic.</p>
</li>
<li>
<p>Mounted a single-tubed coffee stirrer horizontally out of the bottle
(I placed the end towards the middle of the bottle to avoid end effects).</p>
</li>
<li>
<p>Epoxied or glued the entire edge shut.</p>
</li>
<li>
<p>Marked evenly-spaced lines on the side of the bottle.</p>
</li>
</ol>
<p>I can load my “sample” fluid in the top of the Pepsi bottle, and time
how long it takes for the sample level to drop to a certain point. A
more viscous fluid will take more time to leave the bottle, with the
time directly proportional to the viscosity. (This is a consequence of
Stokes flow and the equation for flow in a pipe. It will always be true,
as long as my fluid is viscous enough and my apparatus isn’t too big.)</p>
<p>So we’re done! All we need to do is calibrate our viscometer with one
sample, measure the times, and then we can go out and measure stuff in
the world! No need to stick around for the boring calculations! We can
do some fun science over the next few blog posts! </p>
<p>But this is a physics
blog written by a bunch of grad students, so I’m assuming that a few of
you want to see the details. (I won’t judge you if you don’t though.) If
we think about the problem for a bit, we basically have flow of a liquid
through a pipe (i.e. the coffee stirrer), plus a bunch of other crap
which hopefully doesn’t matter much. </p>
<p>We first need to think about how
the fluid moves. We want to find the velocity of the fluid at every
position. This is best cast in the language of vector calculus — we
have a (vector) velocity field <mathjax>$\vec{u}$</mathjax> at a position <mathjax>$x$</mathjax>.
There are two things we know: 1) We don’t (globally) gain or lose any
fluid, and 2) Newton’s laws <mathjax>$F=ma$</mathjax> hold. We can write these equations as
the Navier-Stokes equations: </p>
<p><mathjax>$$ \vec{\nabla}\cdot \vec{u} = 0 \quad (1) $$</mathjax> </p>
<p><mathjax>$$ \rho \left( \frac {\partial \vec{u}} {\partial t} + (\vec{u}\cdot\vec{\nabla})\vec{u} \right) = - \vec{\nabla}p + \eta \nabla^2 \vec{u} \quad (2) $$</mathjax> </p>
<p>The first equation basically
says that we don’t have any fluid appearing or disappearing out of
nowhere, and the second is basically <mathjax>$m \vec{a}=\vec{F}$</mathjax>, except written per
unit volume. (The fluid’s mass-per-unit-volume is <mathjax>$\rho$</mathjax>, the rate of
change of our velocity is <mathjax>$\frac{d\vec{u}}{dt}$</mathjax>, and our force per unit volume is
the pressure gradient <mathjax>$-\vec{\nabla}p$</mathjax> plus a viscous term <mathjax>$\eta \nabla^2 \vec{u}$</mathjax>.) The only
complication is that <mathjax>$\frac{d\vec{u}}{dt}$</mathjax> is a total derivative, which we need to
write as </p>
<p><mathjax>$$ \frac{d\vec{u}}{dt} = \frac{\partial \vec{u}}{\partial t} + \frac{d x_i}{d t}\frac{\partial \vec{u}}{\partial x_i} = \frac{\partial \vec{u}}{\partial t} + (\vec{u}\cdot\vec{\nabla})\vec{u}$$</mathjax></p>
<p>I won’t drag you through the
<a href="http://www.4shared.com/office/y9ay-fNh/Homemade_viscometer_-gory_sect.html?refurl=d1url">gory details</a>,
unless you want to see them, but it turns out that for my system the
height of the fluid <mathjax>$h$</mathjax> (measured from the coffee stirrer) versus time
<mathjax>$t$</mathjax> is </p>
<p><mathjax>$$ h(t) = h(0)e^{-t/T}, \quad T= 60.7 \textrm{sec} \times [\eta / \textrm{1 mPa s}] \times [\textrm{ 1 g/cc} / \rho] $$</mathjax> </p>
<p>[For my viscometer, the coffee stirrer has length 13.34 cm and inside
diameter 2.4 mm, and the Pepsi bottle has a cross-sectional area of 36.3
square centimeters (3.4 cm inner radius). You can see how the timescale
scales with these properties in the
<a href="http://www.4shared.com/office/y9ay-fNh/Homemade_viscometer_-gory_sect.html?refurl=d1url">gory details section</a>.]</p>
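<p>The 60.7-second prefactor can be sanity-checked from Poiseuille flow through the
stirrer: the flux is <mathjax>$Q = \pi r^4 \Delta p / 8 \eta L$</mathjax> with <mathjax>$\Delta p = \rho g h$</mathjax>,
and <mathjax>$dh/dt = -Q/A$</mathjax> then gives <mathjax>$T = 8 \eta L A / \pi \rho g r^4$</mathjax>. A minimal sketch
plugging in the dimensions quoted above (with water’s properties assumed):</p>

```python
import math

def drain_timescale(eta, rho, L=0.1334, r=1.2e-3, A=36.3e-4, g=9.81):
    """Exponential drain time T = 8*eta*L*A / (pi*rho*g*r**4), SI units:
    eta in Pa*s, rho in kg/m^3, stirrer length L and radius r in m,
    bottle cross-section A in m^2 (defaults are my viscometer's dimensions)."""
    return 8 * eta * L * A / (math.pi * rho * g * r**4)

# Water: eta ~ 1 mPa*s, rho ~ 1 g/cc
T = drain_timescale(eta=1e-3, rho=1000.0)
print(T)   # ~60.6 s, right at the quoted 60.7 s
```

<p>Note the timescale is linear in <mathjax>$\eta$</mathjax>, which is why relative viscosities survive the geometric uncertainty.</p>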
<div style="text-align: center;">
<figure>
<img src="/static/images/viscometer1/visc_fig2.png" alt="A PLOT" width="70%">
<figcaption style="font-size:small;">
A run with measured heights vs times <span class="amp">&</span> error bars. The
majority of the uncertainty turns out to come from not knowing the
exact proportions of the viscometer. I don’t know exactly why the
heights are systematically deviating from the fit, but I suspect it’s
that my gridlines aren’t perfectly lined up with the bottom of my
viscometer (it looks like <mathjax>$\sim 5$</mathjax> mm off would do it, which I can totally
believe looking at the picture of my viscometer). However, because of
the linearity of the equations for steady flow in a pipe, we know that
the time scales linearly with the viscosity, so we should be able to
accurately measure relative viscosities.
</figcaption>
</figure>
</div>
<p>Well, how well does it work? Above is a plot of the height of water in
my viscometer versus time, with a best-fit value from the equations
above. To get a sense of my random errors (such as how good I am at
timing the flow), I measured this curve 5 separate times. If I take into
account the uncertainties in my apparatus setup as systematic errors, I
find a value for my viscosity as </p>
<p><mathjax>$$ \eta \approx 1.429~\textrm{mPa s} \pm 0.5\%~\textrm{Rand.} \pm 55\%~\textrm{Syst.} $$</mathjax> </p>
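<p>If you’re curious how <mathjax>$T$</mathjax> comes out of the stopwatch readings: since
<mathjax>$h(t) = h(0)e^{-t/T}$</mathjax>, a straight-line fit of <mathjax>$\ln h$</mathjax> against <mathjax>$t$</mathjax> has slope
<mathjax>$-1/T$</mathjax>. A sketch with made-up readings (illustrative numbers, not my actual data):</p>

```python
import numpy as np

# Hypothetical stopwatch data: heights (cm) at evenly spaced times (s)
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
h = np.array([20.0, 12.2, 7.4, 4.5, 2.75])

# ln h = ln h(0) - t/T, so the slope of a linear fit gives -1/T
slope, intercept = np.polyfit(t, np.log(h), 1)
T = -1.0 / slope                # drain timescale, s
eta_over_rho = T / 60.7         # in (mPa s)/(g/cc), from the formula above
print(T, eta_over_rho)
```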
<p>The actual value of the viscosity of water at room temperature (T=25 C) is about
<mathjax>$0.89~\textrm{mPa s}$</mathjax>, which is more-or-less within my systematic errors. So it
looks like I won’t be able to measure absolute values of viscosity
accurately without a more precise apparatus. But if I look at the
variation of my measured viscosity, I see that I should probably be able
to measure changes in viscosity to 0.5% !! That’s pretty good! Hopefully
over the next couple weeks I’ll try to use my viscometer to measure some
interesting physics in the viscosity of fluids.</p>Batman, Helicopters, and Center of Mass2012-06-26T11:10:00-04:00DTCtag:thephysicsvirtuosi.com,2012-06-26:posts/batman-helicopters-and-center-of-mass.html<div style="float: right; margin: 0px 0px 0px 10px">
<img src="/static/images/batman/wiki_batman.jpg">
</div>
<p>A couple weeks ago, I came home after a long day at work looking for a
break. I thought to myself, “What’s more fun than physics?” </p>
<p><a id="note1"></a>
Batman.<a href="#fnote1"><sup>[1]</sup></a></p>
<p>I sat down to play the <a href="http://en.wikipedia.org/wiki/Arkham_City">latest Batman
videogame</a>, in which Batman’s
current objective was to use his grappling hook to jump onto an enemy
helicopter to steal an electronic MacGuffin. As awesome as this was, it
occurred to me that something was very wrong about the way the
helicopter moved while Batman zipped through the air. </p>
<p><a href="http://youtu.be/81qN-PHucqM?t=3m12s">See if you can spot it too</a>. (Watch for about 5
seconds after the video starts. Ignore the commentary. Note: The
grunting noises are the sounds that Batman makes if you shoot him with bullets.) </p>
<p>What occurred to me was this: If the helicopter’s rotors
provided enough lift to balance the force of gravity, wouldn’t Batman’s
sudden additional weight cause the helicopter to fall out of the sky?
Also, to get lifted up into the air, the helicopter must be pulling up
on Batman: shouldn’t Batman also be pulling down on the helicopter? By
how much should we expect to see the helicopter’s altitude change? </p>
<p>To address the first question, let’s go to Newton’s second law: </p>
<p><mathjax>$$ \sum \vec{F} = m\vec{a} $$</mathjax> </p>
<p>Let’s assume that the helicopter is hovering
stationary, minding its own business, when Batman jumps onto it. Let’s
also assume the helicopter pilots are totally oblivious to Batman and
make no flight corrections after Batman jumps onto it. In order to
hover, the lift from the helicopter’s rotors exactly matched the pull of gravity. </p>
<p><mathjax>$$ \sum \vec{F} = \vec{F}_{rotors} - \vec{F}_{gravity} = 0 $$</mathjax> </p>
<p>Batman’s sudden additional weight would cause the helicopter to
start falling, as the forces would no longer balance: </p>
<p><mathjax>$$ \sum \vec{F} = \vec{F}_{rotors} - \vec{F}_{gravity} - \vec{F}_{Batman} < 0 $$</mathjax></p>
<p>So the helicopter does accelerate (and move) when Batman jumps onto it.
How much does it move? Let’s assume there are no crazy winds or other
external forces acting on the helicopter or Batman while Batman grapples
onto the helicopter. “No external forces” means that the momentum of
helicopter + Batman does not change during Batman’s flight. </p>
<p>Let’s make
things a little simpler and assume that neither Batman nor the
helicopter had any vertical momentum before Batman used his grappling
hook. (I can choose to approach this problem from a reference frame
where the center of mass is stationary. Choosing a frame where the
center of mass moves won’t change the results, it will just make the
calculation more complicated.) Because the momentum of helicopter +
Batman does not change, then the center of mass does not move while
Batman zips through the air: </p>
<p><mathjax>$$ \frac{d}{dt} y_{COM} = \frac{d}{dt} \frac{m_{Bat} y_{Bat} + m_{Copter} y_{Copter}}{m_{Bat} + m_{Copter}} = \frac{p}{m_{Bat} + m_{Copter}} = 0 $$</mathjax></p>
<p>The center of mass must remain stationary, so we can find how much the
helicopter’s height changes by if Batman starts on the ground (y = 0)
and both end up at the same height with Batman hanging from the helicopter: </p>
<p><mathjax>$$ y_{COM} = \frac{m_{Copter} y_{Copter} + m_{Bat} (0)}{m_{Bat} + m_{Copter}} = \frac{m_{Copter} y_{final} + m_{Bat} y_{final}}{m_{Bat} + m_{Copter}} $$</mathjax> </p>
<p><mathjax>$$ \Delta y = y_{Copter} - y_{final} = \frac{m_{Bat}}{m_{Bat} + m_{Copter}} y_{Copter}$$</mathjax></p>
<p>Now, some numbers: The police helicopters in the game are pretty small,
probably about
<a href="http://en.wikipedia.org/wiki/Bell_206">1500 kg</a>. </p>
<p>Batman is a big guy who works out and probably weighs around 100 kg (220 lb).
Plus, he’s wearing body armor (hence surviving when bullets hit him) and
a utility belt and all of those other Bat-gadgets, which probably adds
about 30 kg (<mathjax>$\sim 30$</mathjax> lb for the gadgets,
<mathjax>$\sim30$</mathjax> lb for the <a href="http://www.nationaldefensemagazine.org/archive/2011/February/Pages/ManufacturersAnswerMilitary%E2%80%99sCalltoReduceBodyArmorWeight.aspx">armor</a>).
If Batman has to grapple onto a helicopter 30 meters above him, then the
helicopter should drop out of the air by about 2.4 m. This is greater
than the height of Batman himself, and would be noticeable if the
helicopter physics in the game were perfect. Of course, if the
helicopters appearing in the game were the giant army helicopters (they
do carry rockets, after all), their mass would be much larger
<mathjax>$(\sim 5000-10000~{\rm kg})$</mathjax> so the effect of Batman’s additional weight would be
much smaller. None of these considerations detracted from the fun I had
playing the game, but it did seem odd that the helicopters appeared to
be nailed to the sky instead of moving freely through the air. I’ll be
writing the game developers a strongly-worded letter directly. </p>
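<p>The center-of-mass bookkeeping above fits in a few lines (the masses and the grapple height are the rough guesses from the text):</p>

```python
def helicopter_drop(m_bat, m_copter, y_copter):
    """Distance a hovering helicopter descends when Batman (starting at
    y = 0) grapples up to it, holding the pair's center of mass fixed."""
    return m_bat / (m_bat + m_copter) * y_copter

print(helicopter_drop(130.0, 1500.0, 30.0))   # small police copter: ~2.4 m
print(helicopter_drop(130.0, 7500.0, 30.0))   # big army copter: ~0.5 m
```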
<hr />
<p><strong>Notes</strong></p>
<p><a id="fnote1"></a>
1. <a href="#note1">^</a> The <span class="caps">DC</span> superhero, not the <a href="http://en.wikipedia.org/wiki/Batman,_Turkey">city</a>
or the
<a href="http://www.newcritters.com/2007/01/23/the-batman-fish-otocinclus-batmani/">fish</a>.</p>Tales from the Transit of Venus2012-06-05T00:50:00-04:00Corkytag:thephysicsvirtuosi.com,2012-06-05:posts/tales-from-the-transit-of-venus.html<div style="float: center;">
<figure>
<img src="/static/images/venus-transit/sad_transit.png" alt="sad old sun" width="30%">
<figcaption>Sad Old Sun</figcaption>
</figure>
</div>
<p>Today is the transit of Venus, which, aside from being a totally rad
astronomical event, is also the perfect excuse to tell my favorite story
of an unlucky Frenchman (I have many). This is by no means new and, if
you’ve ever taken an astronomy course, you’ve probably already heard it.
It is perhaps the closest thing Astronomy has to a ghost story, told
through the glow of a flashlight on moonless nights to scare the
children. This is the story of Guillaume Le Gentil, a dude that just
couldn’t catch a break.</p>
<p>Guillaume Joseph Hyacinthe Jean-Baptiste Le Gentil de la Galaisière was
a Frenchman with an incredibly long name. He was also an astronomer,
though he hadn’t started out that way. Monsieur Le Gentil (as his
friends called him and so, then, shall we) had originally intended to
enter the priesthood. However, he soon began sneaking away to hear
astronomy lectures and quickly switched from studies of Heaven to those
heavens more readily observed in a telescope. Le Gentil happened to get
into the astronomy game at a very exciting time. The next pair of Venus
transits was imminent and astronomers were giddy with anticipation.
Though the previous transit of 1639 had been predicted, it was met with
little fanfare and only a few measurements. But the transits of 1761 and
1769 would be different. People would be ready. And the stakes were
higher this time, too. Decades after the 1639 transit, Edmond Halley (he of
the-only-comet-people-can-name fame) calculated that with enough
simultaneous measurements, the distance from the Earth to the Sun (the
so-called astronomical unit, or <span class="caps">AU</span>) could be measured fairly accurately.
Since almost all other astronomical distances were known in terms of the
<span class="caps">AU</span>, knowing its precise value would essentially set the scale for the
cosmos. Brand new telescopes in hand, the astronomers of Europe set sail
for locations all over the world.</p>
<p>Le Gentil had been assigned to observe the transit from Pondicherry, a
French holding on the eastern side of India. On March 26th, 1760, he
began his long sea voyage around the Cape of Good Hope towards India.</p>
<p>The voyage from France to India was a bit too long for the ship Le
Gentil hitched a ride on and he only made it as far as Mauritius (a
small island off Madagascar). Dropped off with all his equipment, Le
Gentil was left waiting for any ship at all to take him to Pondicherry.</p>
<p>Perhaps it was the Curse of the
<a href="http://en.wikipedia.org/wiki/Dodo">Dodo</a> or perhaps it was just bad
luck, but while he was waiting, Le Gentil learned that
<a href="http://en.wikipedia.org/wiki/Seven_Years'_War">war</a> had broken out
between the French and the British, making a trip to British India very
difficult for a Frenchman.</p>
<p>Then the monsoon season started, meaning that even if he could find a
ship, it would have to take a much longer route to India than initially
planned and that it would be very difficult to make the journey before
the transit occurred.</p>
<p>Then, he caught dysentery for the first time.</p>
<p>Finally, after months of waiting, Le Gentil (barely recovered from his
illness) left Mauritius for India in February of 1761. Though time
appeared to be running out, the captain of the ship he was on promised
he would be there to observe the transit in June. About halfway to
India, the winds switched directions and the ship was forced to turn
back to Mauritius.</p>
<p>Le Gentil dutifully observed the transit of Venus in 1761 from a rocky
ship in the middle of the Indian Ocean. The data were useless and he
never attempted any analysis.</p>
<p>Although he missed the first transit, these things come in pairs
separated by eight years. There was still another chance. And with all
this time to prepare, there was no way he was going to miss the second one.</p>
<p>In fact, there was a bit <em>too much</em> time. But as a world-traveling 18th
century man of science, Le Gentil had plenty of other interests to fill
his days. He was particularly interested in surveying the region around Madagascar.</p>
<p>So he made a really nice map of Madagascar. And then he ate some bad
meat from some kind of animal and came down with a terrible sickness. He
describes this illness and its “cure” in his journals:</p>
<blockquote>
<p><em>This sickness was a sort of violent stroke, of which several very
copious blood-lettings made immediately on my arm and my foot, and
emetic administered twelve hours afterwards, rid me of it quite
quickly. But there remained for seven or eight days in my optic nerve
a singular impression from this sickness; it was to see two objects in
the place of one, beside each other; this illusion disappeared little
by little as I regained my strength…</em></p>
</blockquote>
<p>After recovering from both his sickness and the treatment, Le Gentil
decided to begin his preparations for the 1769 transit of Venus. He
calculated that either Manila or the Mariana Islands would be the ideal
spot to observe. The Sun would be relatively high in the sky at both
places when Venus passed by, meaning that the view would be through less
atmosphere with a reduced chance of clouds passing through the line of
sight. Le Gentil packed up his stuff and headed off to Manila, where he
could catch another ship to get to the Mariana Islands. Arriving in
Manila in 1766, the astronomer found himself exhausted from months of
sickness and sea-voyage. So, when he was offered passage on a ship
heading to the Mariana Islands, he quickly declined. That he chose not
to depart Manila at that time was perhaps his one stroke of good luck in
the entire journey. The ship sank. Writing in his journal, Le Gentil
appears to have developed that particular sense of humor that generally
accompanies constant disappointment:</p>
<blockquote>
<blockquote>
<p><em>It is true that only three or four people were drowned, those who
were the most eager to save themselves, which is what almost always
happens in shipwrecks. I cannot answer that I would not have
increased the number of persons eager to save themselves.</em></p>
</blockquote>
</blockquote>
<p>In any case, Le Gentil was in Manila with plenty of time to prepare for
the next transit. Unfortunately, the astronomer may have over-prepared.
Having arrived three years before the event, he now had three years to
worry and second-guess his decision. It didn’t help that the Spanish
governor of Manila was kind of a crazy person. Not wanting to miss the
observation of a lifetime owing to the whims of a mildly insane strongman,
Le Gentil packed up his stuff and headed to Pondicherry. Finally in
Pondicherry, Le Gentil worked tirelessly to construct his observatory
and make plenty of astronomical observations in preparation for the
event. He had state-of-the-art equipment and had fully calibrated and
double checked everything. It was now nine years since his journey began
and only a few days until the transit was scheduled to occur at sunrise
on June 4th. The entire month of May was beautiful weather and pristine
observing conditions, as were the first few days of June. Le Gentil
likely went to bed on the 3rd of June fully confident that the next
morning would be no different. He woke up early in the morning to begin
preparations for his sunrise observations only to find clouds on the
horizon. The clouds remained, obscuring the sun, all through the
duration of the transit. A few hours after the end of the transit, the
sun broke through the clouds and remained visible for the rest of the
day. Le Gentil had missed his second transit in Pondicherry. He sums it
up in his journal:</p>
<blockquote>
<blockquote>
<p><em>That is the fate which often awaits astronomers. I had gone more
than ten thousand leagues; it seemed that I had crossed such a great
expanse of seas, exiling myself from my native land, only to be the
spectator of a fatal cloud which came to place itself before the sun
at the precise moment of my observation, to carry off from me the
fruits of my pains and of my fatigues</em></p>
</blockquote>
</blockquote>
<p>In Manila, the Sun rose in perfectly clear skies. Distraught, Le Gentil
remained in bed for some weeks afterward. He soon caught a fever and
missed the ship that was supposed to take him home. He recovered, but
then came down with dysentery again. Barely recovered from his various
illnesses, he managed to get a ride back to Mauritius. He caught a ship
leaving the island in November of 1770. The ship was struck by a
hurricane and almost completely destroyed. It managed to limp back to
Mauritius. The second attempt proved more successful and Le Gentil
finally “set foot on France at nine o’clock in the morning, after eleven
years, six months and thirteen days of absence.” Though he had finally
made it home, he was not out of the woods quite yet. In his absence, Le
Gentil’s heirs had tried to declare him dead to gain their inheritance,
his accountant had mishandled (and lost) a large chunk of his holdings,
and the Academy of Sciences, which had sent him on his 11 year mission,
had given his seat to someone else. It was not quite the welcome home he
had hoped for. Despite his seemingly never-ending misfortune, things did
turn around for Le Gentil. He married, had a daughter, and was
reinstated into the Academy of Sciences. Presumably, he lived out the
rest of his days in relative happiness. Le Gentil died in 1792. Keeping
true to his style, this man who missed two of the most important
astronomical events of his time fortunately managed to also miss the
most important (and violent) <a href="http://en.wikipedia.org/wiki/Reign_of_Terror">political
event</a> of his time.</p>
<hr />
<p><strong>References:</strong></p>
<p>I have mainly used a very nice series of historical papers of Le
Gentil’s misadventures with the transit of Venus written by Helen Sawyer
Hogg. The papers were originally published in the <em>Journal of the Royal
Astronomical Society of Canada</em> and can be accessed through <span class="caps">NASA</span>’s <span class="caps">ADS</span>
(<a href="http://adsabs.harvard.edu/abs/1951JRASC..45...37S">Part 1</a>,
<a href="http://adsabs.harvard.edu/abs/1951JRASC..45...89S">Part 2</a>,
<a href="http://adsabs.harvard.edu/abs/1951JRASC..45..127S">Part 3</a>,
<a href="http://adsabs.harvard.edu/abs/1951JRASC..45..173S">Part 4</a>).</p>
<p><strong>More Transit of Venus:</strong></p>
<p>If you want to see the Transit of Venus without having to go on an
eleven year voyage (or even leaving your room), check out the <span class="caps">NASA</span>
<a href="http://sunearthday.nasa.gov/transitofvenus/">live-feed</a> from Mauna Kea.</p>How Cold is the Ground II2012-05-26T21:28:00-04:00Briantag:thephysicsvirtuosi.com,2012-05-26:posts/how-cold-is-the-ground-ii.html<div style="float: right; margin: 0px 0px 0px 10px">
<figure>
<img src="/static/images/how-cold-is-the-ground-ii/mainImage.png" alt="GAH!" width="40%">
<figcaption style="text-align: center;">
Images <a href="http://en.wikipedia.org/wiki/File:Ithaca_Hemlock_Gorge.JPG">from</a>
<a href="http://en.wikipedia.org/wiki/File:Mercury_in_color_-_Prockter07_centered.jpg"> Wikipedia
</a>
</figcaption>
</figure>
</div>
<p><a href="http://thephysicsvirtuosi.com/posts/how-cold-is-the-ground-.html">Last week</a>
(ok, it was a little more than a few days ago…) I used
dimensional analysis to figure out how the ground’s temperature changes
with time. But although dimensional analysis can give us information
about the length scales in the problem, it doesn’t tell us what the
solution looks like. From dimensional analysis, we don’t even know what
the solution does at large times and distances. (Although we can usually
see the asymptotic behavior directly from the equation.) So let’s go
ahead and solve the heat equation exactly: </p>
<p><mathjax>$$ \frac {\partial T}{\partial t} = a \frac {\partial ^2 T}{\partial x^2} \quad (1)$$</mathjax> </p>
<p>Well, what type of solution do we want to this equation? We want the
temperature at the Earth’s surface <mathjax>$x=0$</mathjax> to change with the days or the
seasons. So let’s start out modeling this with a sinusoidal dependence
— we’ll look for a solution of the form </p>
<p><mathjax>$$ T(x,t) = A(x)e^{i \omega t} $$</mathjax></p>
<p>for some function <mathjax>$A(x)$</mathjax>, then we can take the real part for our
solution. Plugging this into Eq. (1) gives
<mathjax>$A^{\prime\prime} = i\omega/a \times A$</mathjax>, or </p>
<p><mathjax>$$ A(x) = e^{ \pm \sqrt{\omega/2a} \, (1+i) x} $$</mathjax> </p>
<p>Since we have a second-order
ordinary differential equation for <mathjax>$A$</mathjax>, we have two possible solutions,
which are like <mathjax>$\exp(+x)$</mathjax> or <mathjax>$\exp(-x)$</mathjax>. Which one do we choose? </p>
<p><a id="note1"></a>
Well, we want the temperature very far away from the surface of the ground to be
constant, so we need the solution that decays with distance,
<mathjax>$A\exp(-x)$</mathjax>. Taking the real part of this solution, we
find<a href="#fnote1"><sup>[1]</sup></a> </p>
<p><mathjax>$$ T(x,t) = T_0 \cos (\omega t - \sqrt{\omega/2a}\,x )\, e^{-\sqrt{\omega/2a}\,x} \quad (2) $$</mathjax> </p>
<p>Well, what does this solution <em>say</em>?
As we expected from our scaling arguments last week, the distance scale
depends on the <em>square root</em> of the time scale — if we decrease our
frequency by 4 (say, looking at changes over a season vs over a month),
the temperature variation penetrates only <mathjax>$2{\times}$</mathjax> deeper. We also see that the temperature
oscillation drops off quite rapidly as we go deeper into the ground, and
that there is a “lag” the farther you go into the ground. In particular,
we see that deep into the ground, the temperature settles to
the average value at the surface. You can see this all in the pretty
plot below (generated with Python):</p>
<div style="text-align: center;">
<figure>
<img src="/static/images/how-cold-is-the-ground-ii/SingleFrequency.png" alt="GAH!" width="60%">
<figcaption style="text-align: center;">
Single frequency plot
</figcaption>
</figure>
</div>
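<p>The plot above comes from evaluating the decaying single-frequency solution on a
grid; a stripped-down sketch of such a script (plotting niceties omitted, and
assuming the <mathjax>$a=0.5~{\rm mm^2/s}$</mathjax> value from last time with a one-year period):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # render off-screen
import matplotlib.pyplot as plt

a = 0.5e-6                       # thermal diffusivity, m^2/s
P = 365.25 * 86400.0             # period: one year, in s
w = 2 * np.pi / P                # angular frequency
k = np.sqrt(w / (2 * a))         # inverse decay length, 1/m

x = np.linspace(0.0, 20.0, 200)  # depth, m
t = np.linspace(0.0, P, 200)     # time, s
X, T = np.meshgrid(x, t)

# The oscillation decays and lags with depth
temp = np.cos(w * T - k * X) * np.exp(-k * X)

plt.pcolormesh(X, T / 86400.0, temp, shading="auto")
plt.xlabel("depth (m)"); plt.ylabel("time (days)")
plt.colorbar(label="T / T0")
plt.savefig("single_frequency.png")
```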
<p>Let’s recap. To model the temperature of the ground, we looked for a
solution to the heat equation which had a sinusoidally oscillating
temperature at <mathjax>$x=0$</mathjax>, and decayed to 0 at large <mathjax>$x$</mathjax>. We found
such a solution, and it shows that the temperature oscillation decays rapidly as we
go far into the ground. At this point, there are two questions that pop
into mind: </p>
<p>1) Is the solution that we found <em>unique</em>? Or are there other
possible solutions? </p>
<p>2) This is all well and good, but what if our days
or seasons <em>aren’t perfect sines</em>? Can we find a solution that describes
this behavior? </p>
<p><a id="note2"></a>
I’ll give one (1) VirtuosiPoint to the first commenter
who can prove to what extent the above solution is
unique<a href="#fnote2"><sup>[2]</sup></a>. But how about the second point? Can we solve this
for non-sinusoidal time variations? Well, at this point most of the
readers are rolling their eyes and shouting “Use a
<a href="http://en.wikipedia.org/wiki/Fourier_series">Fourier series</a> and move on.” So I
will. Briefly, it turns out that (more or less) <em>any</em> periodic function
can be written as a sum of sines <span class="amp">&</span> cosines. So we can just add a bunch
of sines and cosines together and construct our final solution. So just
for fun, here is a plot of the temperature of the ground in Ithaca (data
from <a href="http://en.wikipedia.org/wiki/Ithaca,_New_York">Wikipedia</a>) over a
year. (I used a discrete Fourier transform to compute the coefficients.)</p>
<div style="text-align: center;">
<figure>
<img src="/static/images/how-cold-is-the-ground-ii/IthacaTemp.png" alt="ithaca temp!" width="60%">
<figcaption style="text-align: center;">
The temperature (colorbar) is in degrees C, assuming a=0.5 mm^2/s from <a href="http://thephysicsvirtuosi.com/posts/how-cold-is-the-ground-.html">before</a>.
</figcaption>
</figure>
</div>
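<p>The recipe behind that plot: Fourier-transform the surface temperatures, then let
each mode propagate into the ground with its own decay length and lag (mode
<mathjax>$n$</mathjax> picks up a factor <mathjax>$e^{-(1+i)k_n x}$</mathjax> with
<mathjax>$k_n = \sqrt{n\omega/2a}$</mathjax>). A sketch with made-up monthly means standing in
for the Wikipedia data:</p>

```python
import numpy as np

a = 0.5e-6                          # thermal diffusivity, m^2/s
P = 365.25 * 86400.0                # one year, s

# Illustrative monthly mean surface temperatures (C) -- not the real Ithaca data
surface = np.array([-5.0, -4.0, 1.0, 8.0, 14.0, 19.0,
                    22.0, 21.0, 17.0, 10.0, 4.0, -2.0])
coeffs = np.fft.rfft(surface) / len(surface)

def T_ground(x, t):
    """Temperature at depth x (m) and time t (s): damp and phase-shift
    each Fourier mode of the surface temperature."""
    total = np.real(coeffs[0])      # the annual mean survives at depth
    for n in range(1, len(coeffs)):
        k = np.sqrt(n * 2 * np.pi / (P * 2 * a))
        mode = coeffs[n] * np.exp(2j * np.pi * n * t / P - (1 + 1j) * k * x)
        # rfft keeps half the spectrum: double every mode except Nyquist
        total += (1.0 if 2 * n == len(surface) else 2.0) * np.real(mode)
    return total

print(T_ground(0.0, 0.0))    # reproduces the first month's surface temperature
print(T_ground(20.0, 0.0))   # deep down: essentially the annual mean
```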
<p>Looks pretty boring, but I swear that all the frequencies are in that
plot. It just turns out that the seasons in Ithaca are pretty
sinusoidal. So about 20 meters below Ithaca, the temperature is a pretty
constant 8 C. While I was postponing writing this, I wondered what the
temperature on Mercury’s rocks would be. If we dig deep enough, can we
find an area with habitable temperatures? Some
<a href="http://hypertextbook.com/facts/2000/OlesyaNisanov.shtml">quick</a>
<a href="http://en.wikipedia.org/wiki/Mercury_%28planet%29#Surface_conditions_and_.22atmosphere.22_.28exosphere.29">Googlin</a>‘
shows that the daytime and nighttime temperatures on Mercury are
<mathjax>${\sim}550-700~{\rm K}$</mathjax> and <mathjax>${\sim}110~{\rm K}$</mathjax> at the “equator.”
While I don’t think that Mercury’s temperature varies symmetrically, let’s assume so for lack of
better data.<a href="#fnote3"><sup>[3]</sup></a> Then we’d expect that deep into the
surface, the temperature would be fairly constant in time, at the
average of these two extremes. Plugging in the numbers
(assuming <mathjax>$a\approx0.52~{\rm mm}^2 / s$</mathjax> and taking Mercury’s solar day as 176 Earth days), we get</p>
<p><mathjax>$T=94~{\rm C}$</mathjax> at 2.75 meters into the surface.</p>
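<p>For the record, here’s that arithmetic (same midpoint-of-extremes assumption; the
depth is the dimensional-analysis length scale <mathjax>$\sqrt{wa}$</mathjax> from the previous post):</p>

```python
import math

a = 0.52e-6                     # thermal diffusivity, m^2/s
P = 176 * 86400.0               # Mercury's solar day, s

T_day = (550.0 + 700.0) / 2.0   # middle of the quoted daytime range, K
T_night = 110.0                 # quoted nighttime temperature, K

T_deep_C = (T_day + T_night) / 2.0 - 273.15   # steady deep-ground temp, C
depth = math.sqrt(a * P)                      # length scale, m

print(T_deep_C, depth)          # ~94 C at ~2.8 m
```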
<p><a id="fnote1"></a>
1. <a href="#note1">^</a> More precisely, since the heat equation is linear and real, if
<mathjax>$T(x,t)$</mathjax> is a solution to the equation, then so are <mathjax>$\frac{1}{2}(T+T^{*})$</mathjax> or
<mathjax>$\frac{1}{2i}(T-T^{*})$</mathjax>.</p>
<p><a id="fnote2"></a>
2. <a href="#note2">^</a> Hint: It’s not unique. For instance, here is another solution that
satisfies the constraints, with no internal heat sources or sinks
(I’ll call it the “freshly buried” solution):</p>
<div style="text-align: center;">
<figure>
<img src="/static/images/how-cold-is-the-ground-ii/buriedAlive.png" alt="buried alive" width="60%">
<figcaption style="text-align: center;">
Freshly buried.
</figcaption>
</figure>
</div>
<p>Can you prove that all the other solutions decay to the original
solution? Or is there a second or even a spectrum of steady state solutions?</p>
<p><a id="fnote3"></a>
3. <a href="#note3">^</a> If someone provides me with better data on the time variation of
Mercury’s surface at some specific latitude, I’ll update with a full
plot of the temperature as a function of depth and time.</p>How Cold is the Ground?2012-05-18T00:20:00-04:00Briantag:thephysicsvirtuosi.com,2012-05-18:posts/how-cold-is-the-ground-.html<p>It snowed in Ithaca a few weeks ago. Which sucked. But fortunately, it
had been warm for the previous few days, and the ground was still warm
so the snow melted fast. Aside from letting me enjoy the absurd
arguments against global warming that snow in April birthed, this got me
thinking: How cold is the ground throughout the year? At night vs.
during the day? And the corollary: How cold is my basement? If I dig a
deeper basement, can I save on heating and cooling? (I’m very cheap.)</p>
<p>Well, we want to know the temperature distribution <mathjax>$T$</mathjax> of the ground as
a function of time <mathjax>$t$</mathjax> and position <mathjax>$x$</mathjax>. So some googlin’ or previous
knowledge shows that we need to solve the
<a href="http://en.wikipedia.org/wiki/Heat_equation">heat equation</a>. </p>
<p>For our purposes,
we can treat the Earth as flat (I don’t plan on digging a basement deep
enough to see the curvature of the Earth), so we can assume the
temperature only changes with the depth into the ground <mathjax>$x$</mathjax>: </p>
<p><mathjax>$$ \frac{\partial T}{\partial t} = a \frac{\partial^2 T}{\partial x^2}\qquad (1) $$</mathjax> </p>
<p>where <mathjax>$a$</mathjax> is the thermal diffusivity of the material, in
units of square meters per second. It looks like we’re going to have to
solve some partial differential equations! Or will we? We can get a very
good estimate of how much the temperature changes with depth just by
dimensional analysis. </p>
<p>Let’s measure our time <mathjax>$t$</mathjax> in terms of a
characteristic time of our problem <mathjax>$w$</mathjax>
(it could be 1 year if we were trying to see the change in the ground’s temperature from summer to winter,
or 1 day if we were looking at the change from day to night).
Then we can write: </p>
<p><mathjax>$$ \frac{\partial T }{\partial t} = \frac{1}{w} \frac {\partial T} {\partial t/w} $$</mathjax> </p>
<p>plugging this in Eq. (1),
rearranging, and calling <mathjax>$l= \sqrt{wa}$</mathjax> gives…. </p>
<p><mathjax>$$ \frac{\partial T}{\partial (t/w)} = \frac{\partial ^2 T}{\partial (x/l )^2} $$</mathjax> </p>
<p>Now let’s say we didn’t know how to or didn’t want to solve
this equation. (Don’t worry, we do <span class="amp">&</span> we will). From rearranging this
equation, we see right away there is only one “length scale” in the
problem, <mathjax>$l$</mathjax>. So if we had to guess, we could guess that the ground
changes temperature a distance <mathjax>$l$</mathjax> into the ground. A quick look at
Wikipedia for
<a href="http://en.wikipedia.org/wiki/Thermal_diffusivity">thermal diffusivities</a>
gives us the following table, for materials we’d find in the ground:</p>
<table>
<thead>
<tr>
<th>Material</th>
<th align="center"><mathjax>$a$</mathjax> (<mathjax>${\rm mm}^2 / {\rm s}$</mathjax>)</th>
<th align="center"><mathjax>$l$</mathjax> (<mathjax>$\rm{cm}$</mathjax>, <mathjax>$1~{\rm day}$</mathjax>)</th>
<th align="center"><mathjax>$l$</mathjax> (<mathjax>$\rm{m}$</mathjax>, <mathjax>$1~{\rm year}$</mathjax>)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Polycrystalline Silica (glass, sand)</td>
<td align="center">0.83</td>
<td align="center">27</td>
<td align="center">5.1</td>
</tr>
<tr>
<td>Crystalline Silica (quartz)</td>
<td align="center">1.4</td>
<td align="center">35</td>
<td align="center">6.6</td>
</tr>
<tr>
<td>Sandstone</td>
<td align="center">1.15</td>
<td align="center">32</td>
<td align="center">6.0</td>
</tr>
<tr>
<td>Brick</td>
<td align="center">0.52</td>
<td align="center">21</td>
<td align="center">4.0</td>
</tr>
<tr>
<td><a href="http://soilphysics.okstate.edu/software/SoilTemperature/document.pdf">Soil</a></td>
<td align="center">0.3-1.25</td>
<td align="center">16-33</td>
<td align="center">3.1-6.3</td>
</tr>
</tbody>
</table>
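<p>The length scales in that table follow directly from <mathjax>$l = \sqrt{wa}$</mathjax>; a quick sketch to reproduce them (assuming 1 day = 86,400 s and 1 year ≈ 3.2 × 10^7 s):</p>

```python
import math

def diffusion_length(a_mm2_per_s, period_s):
    """Thermal diffusion length l = sqrt(w * a), in meters."""
    a = a_mm2_per_s * 1e-6          # convert mm^2/s to m^2/s
    return math.sqrt(a * period_s)

DAY = 86400.0                        # seconds in one day
YEAR = 365.25 * DAY                  # seconds in one year

# Polycrystalline silica (sand), a = 0.83 mm^2/s
print(100 * diffusion_length(0.83, DAY))   # daily length scale in cm (~27)
print(diffusion_length(0.83, YEAR))        # yearly length scale in m (~5.1)
```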
<p>So we would expect that the temperature of the ground doesn’t change
much on a daily basis a foot or so below the surface, and doesn’t change
at all 15-20 feet down. Just to pat ourselves on the back
for our skills at dimensional analysis, a quick check shows that
<a href="http://en.wikipedia.org/wiki/Permafrost#Time_to_form_deep_permafrost">permafrost</a>
penetrates 14.6 feet into the ground after 1 year. So our dimensional
estimates look pretty good! In the next few days I’ll solve this
equation exactly and throw up a few pretty graphs, and maybe talk a
little about <span class="caps">PDE</span>s and Fourier series in the process.</p>End of the Earth VII: The Big Freeze2012-04-22T19:34:00-04:00Jessetag:thephysicsvirtuosi.com,2012-04-22:posts/end-of-the-earth-vii-the-big-freeze.html<hr />
<p><a href="http://1.bp.blogspot.com/-c8vJR4CVwZc/T5R7_62SLuI/AAAAAAAAAHU/POCT5Fhx-CQ/s1600/Space_Scene_Frozen_Earth_WP_BG_by_PimArt.jpg"><img alt="image" src="http://1.bp.blogspot.com/-c8vJR4CVwZc/T5R7_62SLuI/AAAAAAAAAHU/POCT5Fhx-CQ/s320/Space_Scene_Frozen_Earth_WP_BG_by_PimArt.jpg" /></a> http://tinyurl.com/7rdj996</p>
<hr />
<p>It is traditional here at The Virtuosi to
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-physics-i.html">plot</a>
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-ii-blaze-of-glory.html">the</a>
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-physics-iii-asteroids.html">destruction</a>
<a href="http://thevirtuosi.blogspot.com/2011/04/end-of-earth-iv-shocking-destruction.html">of</a>
<a href="http://thevirtuosi.blogspot.com/2011/04/end-of-earth-v-there-goes-sun.html">the</a>
<a href="http://thevirtuosi.blogspot.com/2011/04/end-of-earth-vi-nanobot-destruction.html">earth</a>.
We also are making secret plans for our volcano lair and death ray.
However, since it is earth day, we will only share with you the plans
for the total doom of the earth, not the cybernetically enhanced guard
dogs we’re building for our <a href="http://thevirtuosi.blogspot.com/2012/04/earth-day-2012-escape-to-moon.html">moon
base</a>.
The plan I reveal today is elegant in its simplicity. I intend to alter
the orbit of the earth enough to cause the earth to freeze, thus ending
life as we know it. According to the internet at large, the average
surface temperature of the earth is ~15 °C. This average surface
temperature is directly related to the power output of the sun. More
precisely, it is directly related to the radiated power from the sun
that the earth absorbs. Assuming that the earth’s temperature is not
changing (true enough for our purposes), then the power radiated by the
earth must be equal to the power absorbed from the sun. More precisely
<mathjax>$$ P_{rad,earth}=P_{abs,sun}$$</mathjax> Now, the radiated power goes as
<mathjax>$$P_{rad}=\epsilon \sigma A_{earth} T^4 $$</mathjax> where A_earth is the
surface area of the earth, T is the temperature of the earth, and
epsilon and sigma are constants. I’ll be conservative and say that I
want to cool the temperature of the earth down to 0 C. The ratio of the
power the earth will emit is
<mathjax>$$\frac{P_{new}}{P_{old}}=\frac{T_{new}^4}{T_{old}^4} \approx
.81$$</mathjax> Note that the temperature ratio must be done in Kelvin. The
power received from the sun (or any star) drops off as the inverse square of the
distance from the sun to the point of interest: <mathjax>$$P_{sun} \sim
\frac{1}{r^2} $$</mathjax> To reduce the power the earth receives from the sun
to 81% of the current value would require
<mathjax>$$\frac{P_{sun,new}}{P_{sun,old}}=\frac{r_{old}^2}{r_{new}^2}=.81
$$</mathjax> This tells us that the new earth-sun distance must be larger than the
old (a good sanity check). In fact, it gives <mathjax>$$r_{new}=1.11 r_{old} $$</mathjax>
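<p>Both the 0.81 and the 11% are easy to check numerically; a quick sketch (assuming <mathjax>$T_{old} = 288$</mathjax> K, i.e. 15 °C, and <mathjax>$T_{new} = 273$</mathjax> K):</p>

```python
# Ratio of radiated powers when cooling the earth from 15 C to 0 C
T_old = 288.15   # K, current average surface temperature (~15 C)
T_new = 273.15   # K, target temperature (0 C)

power_ratio = (T_new / T_old) ** 4        # Stefan-Boltzmann: P ~ T^4
r_new_over_r_old = power_ratio ** -0.5    # received power ~ 1/r^2

print(round(power_ratio, 2))        # ~0.81
print(round(r_new_over_r_old, 2))   # ~1.11
```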
So I’ll need to move the earth by 11% of the current distance from the
earth to the sun. No small task! The earth is in a circular orbit (or
close enough). To change to a circular orbit of larger radius requires
two applications of thrust at opposite points in the orbit. It turns out
that the required boost in speed (the ratio of the speeds just before
and after applying thrust) for the first boost of an object changing
orbits is given by
<mathjax>$$\frac{v_{f}}{v_{i}}=\sqrt{\frac{2R_{f}}{R_i+R_f}}=1.026$$</mathjax> To
move from the transfer orbit to the final circular orbit requires
<mathjax>$$\frac{v_{f}}{v_{i}}=\sqrt{\frac{R_{i}+R_f}{2R_i}}=1.027$$</mathjax> Note
that despite the fact that we boost the velocity at both points, the
velocity of the final orbit is less than that of the initial. Now, how
could we apply that much thrust? Well, the change in momentum for the
earth from each stage is roughly (ignoring the slight velocity increase
of the transfer orbit) <mathjax>$$\Delta p = .03M_E v_E $$</mathjax> The mass of the
earth is \~6<em>10^24 kg, the orbital velocity is \~30 km/s, so <mathjax>$$\Delta
p = 5\cdot 10^{27} kg*m/s$$</mathjax> A solid rocket booster (the booster
rocket used for shuttle launches, when those still happened) can apply
about 12 <span class="caps">MN</span> of force for 75 s (thank you wikipedia). That’s a net
momentum change of \~900 </em>10^9 kg*m/s (900 billion!). So we would
only need <mathjax>$$\frac{2*5\cdot 10^{27}}{9\cdot 10^{11}}=12 \cdot
10^{15}$$</mathjax> That’s right, only 12 million billion booster rockets! With
those I can freeze the earth. I assure you that this plan is proceeding
on schedule, and will be ready shortly after we have constructed our
volcano lair.</p>Earth Day 2012: Escape to the Moon2012-04-22T15:12:00-04:00Briantag:thephysicsvirtuosi.com,2012-04-22:posts/earth-day-2012-escape-to-the-moon.html<p>It is now Earth Day 2012, and, according to the Mayan predictions, <a href="http://thevirtuosi.blogspot.com/search/label/end%20of%20the%20earth">The
Virtuosi will destroy the
earth</a>.
In a futile attempt to fight my own mortality, I decided to send
something to the Moon. It seems, for a poor graduate student trying to
get to the Moon, the most difficult part is the Earth holding me back.
So first I’ll focus on escaping the Earth’s gravitational potential
well, and if that’s possible, then I’ll worry about more technical
problems, such as actually hitting the moon. Moreover, in honor of the
destructive spirit of The Virtuosi near Earth Day, I’ll try to do this
in the most Wile-E-Coyote-esque way possible. <strong>Preliminaries</strong> If we
want to get to the Moon, we need to first figure out how much energy
we’ll need to escape the Earth’s gravitational pull. “That’s easy!” you
say. “We need to escape a gravitational well, and we know from Newton’s
law that the potential from a spherical mass <em><span class="caps">ME</span></em> ‘s gravity for a test
mass <em>m</em> is : \begin{equation} \Phi = - \frac {G M_{E} m}{r}
\label{eqn:gravpot} \end{equation} We’re currently sitting at the
radius of the Earth <em><span class="caps">RE</span></em>, so we simply need to plug this value in and
we’ll find out how much energy we need.” This is all well and good, but
i) I can never remember what the gravitational constant <em>G</em> is, and ii)
I have no idea what the mass of the Earth <em><span class="caps">ME</span></em> is. So let’s see if we
can recast this in a form that’s easier to do mental arithmetic in.
Well, we know that the force of gravity is the related to the potential
by: <mathjax>$$ \vec{F}(r) = - \vec{\nabla} \Phi = - \frac {d\Phi}{dr}
\hat{r} \\ \vec{F} = - \frac {G m M_E } {r^2}
\label{eqn:gravforce} $$</mathjax> Moreover, we all know that the force of
gravity at the Earth’s surface is <em>F(r=<span class="caps">RE</span>)=-mg</em>. Substituting this in
gives: <mathjax>$$ \frac {G m M_E} {R_E^2} = m g \quad \textrm{, or} $$</mathjax>
\begin{equation} \frac {G m M_E}{R_E} = m g {R_E} \quad .
\label{eqn:betterDef} \end{equation} So the depth of the Earth’s
potential well at the Earth’s surface is <em>mgRE</em>. If we use <em>g</em> = 9.8
m/s^2 ~ 10 m/s^2 and <em><span class="caps">RE</span></em> = 6378 km ~ 6x10^6 m, then we can write
this as \begin{equation} \Delta \Phi = m g {R_E} \approx m \times
6 \times 10^7 \textrm{m}^2/\textrm{s}^2 \quad \textrm{(1)},
\end{equation} give or take. How fast do we need to go if we’re going
to make it to the Moon? Well, at the minimum, we need the kinetic energy
of our object to be equal to the depth of the potential well
<a href="#footnote-1">[1]</a>, or <mathjax>$$ \frac 1 2 m v^2 = 6 m \times 10^7
\textrm{m}^2/\textrm{s}^2 \quad \textrm{or} \\ v \approx 1.1
\times 10^4 \textrm{ m/s (2)} . $$</mathjax> So we need to go pretty fast —
this is about Mach 33 (33 times the speed of sound in air). At this
speed, we’d get from <span class="caps">NYC</span> to <span class="caps">LA</span> in under 7 minutes. Looks difficult, but
let’s see just how difficult it is. <strong>Attempt I: Shoot to the Moon</strong>
What goes fast? Bullets go fast. Can we shoot our payload to the moon?
Let’s make some quick estimates. First, can we shoot a regular bullet to
the moon? Well, we said that we need to go about Mach 33, and a fast
bullet only goes about Mach 2, so we won’t even get close. Since energy
is proportional to velocity squared, we’ll only have (2/33)^2 ~ 0.4 %
of the necessary kinetic energy. <a href="#footnote-2">[2]</a> So let’s make a
bigger bullet. How big does it need to be? Well, loosely speaking, we
have the chemical potential energy of the powder being converted into
kinetic energy of the bullet. Let’s assume that the kinetic energy
transfer ratio of the powder is constant. If a bullet receives kinetic
energy <mathjax>$\frac 1 2 m_b v_b^2$</mathjax> from a mass <mathjax>$m_P$</mathjax> of powder, then for our payload to
have kinetic energy <mathjax>$\frac 1 2 M V^2$</mathjax>, we need a mass of powder <mathjax>$M_P$</mathjax> such
that \begin{equation} \frac {M_P} {m_P} = \frac M {m_b} \times
\frac {V^2}{v_b^2} \end{equation} A quick reference to Wikipedia
for a <a href="http://en.wikipedia.org/wiki/7.62%C3%9751mm_NATO">7.62x51mm <span class="caps">NATO</span>
bullet</a> shows that
~25 grams of powder propels a ~10 gram bullet at a speed of ~Mach
2.5. We need to get our payload moving at Mach 33, so (<em>V/vb</em>)^2 ~
175. If we send a 10 kg payload to the Moon, we have <em>M/mb</em> ~ 1000. So
we’ll need about 1.75 x 10^5 times the amount of powder of a bullet to get us
to the Moon, or about 4400 kg, which is 4.8 tons (English) of powder.
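<p>Putting the scaling argument into numbers (a sketch, assuming Mach 1 ≈ 340 m/s and the round values used in this post):</p>

```python
import math

g, R_E = 10.0, 6.0e6                 # m/s^2 and m, round numbers from the post
v_escape = math.sqrt(2 * g * R_E)    # ~1.1e4 m/s, about Mach 33

m_powder, m_bullet = 0.025, 0.010    # kg: powder and bullet of a 7.62x51mm round
v_bullet = 2.5 * 340.0               # Mach 2.5 in m/s
M_payload = 10.0                     # kg, our bullet-to-the-Moon

# Assuming the powder-to-kinetic-energy transfer ratio is constant:
M_P = m_powder * (M_payload / m_bullet) * (v_escape / v_bullet) ** 2
print(round(M_P), "kg of powder")    # ~4 tons, in line with the ~4400 kg above
```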
That’s a lot of gunpowder to get us to the Moon. For comparison, if we
are going to construct a tube-like “case” for our 10 kg
bullet-to-the-Moon, it will have to be about half a meter in diameter
and 17 feet tall. So I’m not going to be able to shoot anything to the
Moon anytime soon. <strong>Attempt <span class="caps">II</span>: Charge to the Moon</strong> <span class="caps">OK</span>, shooting
something to the Moon is out. Can we use an electric field to propel
something to the Moon? Well, we would need to pass a charged object
through a potential difference such that \begin{equation} q \Delta
\Phi_E = m g R_E = 6 m \times 10^7 \textrm{m}^2/\textrm{s}^2
\quad . \label{eqn:chargepot} \end{equation} After the humiliation of
the last section, let’s start out small. Can we send an electron to the
Moon? We could plug numbers into this equation, but I’m too lazy to look
up all those values. Instead, we know that we need to get our electron
(rest mass 511 keV) to a speed which is (Eq. 2) <mathjax>$$v \approx 1.1 \times
10^4 \textrm{m/s} \approx 4 \times 10^{-5} c. $$</mathjax> So an electron
moving at this velocity will have a kinetic energy of <mathjax>$$ \textrm{KE} =
m c^2 \times \frac 1 2 \frac {v^2}{c^2} = 511 \textrm{ keV}
\times \frac 1 2 \frac {v^2}{c^2} \\ \qquad \approx 511
\textrm{ keV} \times 0.8 \times 10^{-9} \approx 0.4 \times
10^{-3} eV. $$</mathjax> So we can give an electron enough kinetic energy to get
to the moon with a voltage difference of 0.4 mV, assuming it doesn’t hit
anything on the way up (it will). We can send an electron to the Moon!
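<p>The required voltage follows from <mathjax>$q \Delta \Phi_E = \frac 1 2 m v^2$</mathjax>; a quick sketch for the electron (and, scaling by mass, for heavier particles):</p>

```python
m_e = 9.109e-31      # kg, electron mass
m_p = 1.673e-27      # kg, proton mass
e   = 1.602e-19      # C, elementary charge
v   = 1.1e4          # m/s, the escape speed from Eq. (2)

def volts_needed(mass):
    """Potential drop giving a charge e the kinetic energy (1/2) m v^2."""
    return 0.5 * mass * v**2 / e

print(volts_needed(m_e) * 1e3)   # millivolts for an electron (a fraction of a mV)
print(volts_needed(m_p))         # volts for a proton (under a volt)
```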
How about a proton? Well, the mass of a proton is 1836x that of an
electron, but with the same charge, so we’d need 1836 * 0.4 mV ~ 0.73
V to get a proton to the Moon — again, pretty easy. Continuing this
logic, we can send a singly-charged ion with mass 12 amu (<em>i.e.</em> C-)
with a 9V battery, and a singly-charged ion with mass 150 amu (something
like caprylic acid) using a 110V voltage drop. (Again, assuming these
don’t hit anything on the way up.) How about our 10 kg object? Let’s say
we can miraculously charge it with 0.01 C of charge. <a href="#footnote-3">[3]</a>
Then from Eq. (1), we’d need <mathjax>$$ 0.01 C \times \Delta \Phi_E \approx
6 \times 10^8 \textrm{ J ,} $$</mathjax> or a potential difference of <mathjax>$$
\Delta \Phi_E = 6 \times 10^{10} \textrm{ V. } $$</mathjax> That is a <span class="caps">HUGE</span>
potential drop. For comparison, if we have 2 parallel plates with a
surface charge of 0.01 C/m^2 (again, a huge charge density), they’d
have to be a distance <mathjax>$$ d = 6 \times 10^{10} \textrm{V} \times
\epsilon_0 / (0.01 \textrm{C/m}^2) \approx 53 \textrm{ meters
apart} $$</mathjax> It looks like I won’t be able to send something to the Moon
using tools from my basement anytime soon.
[1] We’ll ignore both air resistance and the Moon’s gravitational
attraction for simplicity.
[2] Since the potential <em>U \~ - 1/r</em>, if we increase our potential
energy by 0.4%, this is (to 1st order) the same as increasing <em>r</em> by
0.4%. So we’ll get 0.004 * 6378 km ~ 25 km above the Earth’s surface.
Of course <a href="http://scienceblogs.com/dotphysics/2009/09/how-high-does-a-bullet-go.php">air resistance slows it down a
lot</a>.
[3] According to Wikipedia, this is <a href="http://en.wikipedia.org/wiki/Orders_of_magnitude_%28charge%29">0.04% of the total charge of a
thundercloud</a>.
And if our object is uniformly charged with a radius of 1 m, it will
have an electrical self-energy of <mathjax>$$ U = \frac 1 2 \int
\epsilon_0 E^2 dV \approx 36 \textrm{kJ} $$</mathjax></p>Money for (almost) Nothing2012-03-29T01:27:00-04:00Corkytag:thephysicsvirtuosi.com,2012-03-29:posts/money-for-almost-nothing.html<hr />
<p><a href="http://4.bp.blogspot.com/-j1vgDeAaElw/T3O5K09X6XI/AAAAAAAAAW0/b3uIqmdtPl4/s1600/mega_millions.png"><img alt="image" src="http://4.bp.blogspot.com/-j1vgDeAaElw/T3O5K09X6XI/AAAAAAAAAW0/b3uIqmdtPl4/s320/mega_millions.png" /></a>
Five Hundred Mega Dollars, to be precise. (Image from Wikipedia)</p>
<hr />
<p>I am not typically interested in lotteries. They seem silly and I am
seriously beginning to question their usefulness in bringing about a
<a href="http://en.wikipedia.org/wiki/The_Lottery">good harvest</a>. But this
morning I read in the news that the Mega Millions lottery currently has
a <a href="http://en.wikipedia.org/wiki/Mega_Millions#Record_jackpots_.28listed_by_cash_value.29">world
record</a>
jackpot up for grabs. In fact, the jackpot is so big…</p>
<p>Tonight Show Audience: <em><span class="caps">HOW</span> <span class="caps">BIG</span> <span class="caps">IS</span> <span class="caps">IT</span>?</em></p>
<p>It is <em>so big</em> that I decided to do a little bit of analysis on the
expected returns. Zing!</p>
<h2>Some Background</h2>
<p>First, a little background. The Mega Millions lottery is an aptly named
lottery in which numbered ping pong balls are pulled from a giant
rotating tub of randomization. Five of these are drawn from one tub of
56 balls, with no replacement. The sixth ball (the so-called “Mega
Ball”) is drawn from a <em>separate</em> tub of 46 balls. To play, one picks 5
different numbers (1-56) for the regular draws and one number (1-46) for
the Mega Ball. The first five can match in any order, but the last ball
has to match with the Mega Ball. Prizes are given out based on how many
numbers you match. Stolen from the Mega Millions website, the prizes and
odds are given in the table below. The current jackpot is listed at $500
million (if taken as an annuity) or $359 million if taken in an up-front
lump sum. It costs $1 to play.</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-Zi3rSfNqtuI/T3PHTEaXmvI/AAAAAAAAAXE/y9Hs7fqWrgs/s1600/crummy_table.gif"><img alt="image" src="http://3.bp.blogspot.com/-Zi3rSfNqtuI/T3PHTEaXmvI/AAAAAAAAAXE/y9Hs7fqWrgs/s320/crummy_table.gif" /></a>
Don’t worry about the asterisk. It just says <span class="caps">CA</span> is lame. (Source: <a href="http://www.megamillions.com/howto/">Mega Millions</a>)</p>
<hr />
<p>Hot diggity daffodil, we’re ready to get going!</p>
<h2>Expected Winnings</h2>
<p>Alright, so it costs $1 to play and we could potentially win $500
million. It sure <em>feels</em> like it is worth it to play (what’s the harm?).
But we can do better than feelings, we have… <span class="caps">MATH</span>!</p>
<p>Since we have an exhaustive list of outcomes and their probabilities
(which is just the inverse of the big number in the “chances” column),
we can calculate the expectation value for our winnings. The expectation
value is just the sum over all the possible prize values times the
probability of winning that prize. In other words,</p>
<p><mathjax>$$\langle W \rangle = \sum_i W_i \times p_i, $$</mathjax></p>
<p>where we denote our expected winnings in angled brackets.</p>
<p>In essence, this value represents the average prize you would win if you
played this lottery over and over and over again (or played all the
combinations of numbers).</p>
<p>Setting the jackpot to $500 million, we can now compute the expected
winnings as</p>
<p><mathjax>$$ \langle W \rangle = \frac{\$ 500,000,000}{175,711,536} +
\frac{\$ 250,000}{3,904,701} +\frac{\$ 10,000}{689,065} + \frac{\$
150}{15,313} + \frac{\$ 150}{13,781} $$</mathjax></p>
<p><mathjax>$$+ \frac{\$ 10}{844} + \frac{\$ 7}{306} + \frac{\$ 3}{141} +
\frac{\$ 2}{75}$$</mathjax></p>
<p>A few flicks of the abacus later, we find that the expectation value of
our prize is</p>
<p><mathjax>$$\langle W \rangle = \$ 3.02,$$</mathjax></p>
<p>which means that after we subtract the dollar we paid for the ticket,
our expected return is $2.02.</p>
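<p>That sum is easy to check; a sketch using the prize and odds values quoted above:</p>

```python
# (prize in dollars, 1-in-N chance) pairs from the Mega Millions table
prizes = [
    (500_000_000, 175_711_536),  # jackpot, annuity value
    (250_000, 3_904_701),
    (10_000, 689_065),
    (150, 15_313),
    (150, 13_781),
    (10, 844),
    (7, 306),
    (3, 141),
    (2, 75),
]

expected = sum(w / odds for w, odds in prizes)
print(round(expected, 2))         # about $3, matching the $3.02 above
print(round(expected - 1.00, 2))  # net of the $1 ticket
```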
<p>But what if we had chosen to take our winnings as a lump sum of $359
million instead of the $500 million paid out over a span of 26 years? In
that case we find</p>
<p><mathjax>$$\langle W \rangle = \$ 2.22,$$</mathjax></p>
<p>which results in a $1.22 gain when we subtract the dollar we paid for
the ticket.</p>
<p>At least in a statistical sense for this particular jackpot, one is
better off playing than not playing. But are we forgetting anything?</p>
<h2>The Taxman</h2>
<p>If you win a $500 million jackpot, do you <em>really</em> get a $500 million
jackpot? Well, no. For winnings in a lottery over $5000, the <span class="caps">IRS</span>
<a href="http://www.irs.gov/instructions/iw2g/ar02.html#d0e401">withholds</a> 25%
in federal income taxes. Additionally, the winnings are subject to state
taxes as well. For example, if I were to win, the great state of New
York would be entitled to about 6.8% (apparently also just for winnings
above $5000).</p>
<p>After applying federal and state taxes to the prizes above $5000, we now
have an expected winnings of</p>
<p><mathjax>$$ \langle W \rangle = \left[1-(0.25 +
0.068)\right]\times\left(\frac{\mbox{Jackpot}}{175,711,536} +
\frac{\$ 250,000}{3,904,701} +\frac{\$ 10,000}{689,065}\right) $$</mathjax></p>
<p><mathjax>$$+ \frac{\$ 150}{15,313} + \frac{\$ 150}{13,781}+ \frac{\$
10}{844} + \frac{\$ 7}{306} + \frac{\$ 3}{141} + \frac{\$
2}{75},$$</mathjax></p>
<p>which gives an expected net win (minus the <mathjax>$1 for the ticket) of $</mathjax>1.10
for the <mathjax>$500 million annuity prize and $</mathjax>0.55 for the $359 million
up-front lump sum.</p>
<p>We’re still in the black, but it’s slowly slipping away. Is there
anything else we need to factor in? Well, yes. For one thing, winning
the jackpot qualifies us for the top tax bracket, so most of the
winnings would be taxed at the top marginal tax rate of 35%. Welcome to
the 1%, kids! <a href="#note">[1]</a>.</p>
<p>Changing the federal tax rate on the jackpot from 25% to 35% and
recalculating, we find net expected winnings of $0.81 for the $500
million annuity and $0.34 for the $359 million lump sum. Surprisingly,
it is still worth it in a statistical sense.</p>
<h2>Is it always like this?</h2>
<p>One thing to keep in mind as we make these estimates is that this is a
<em>historically large</em> jackpot. So even though it may be favorable to play
this time, this will not always be the case. In fact, we can find the
minimum jackpot value for which this is the case.</p>
<p>The condition in which our expected return is a gain (rather than a
loss) is</p>
<p><mathjax>$$ \langle W \rangle - \$1.00 > 0. $$</mathjax></p>
<p>For simplicity, let’s ignore the top marginal tax rate and just factor
in the 25% withholding and the 6.8% state tax. Solving for the minimum
jackpot using the expression we found in the last section, we see that
<p><mathjax>$$ \mbox{Jackpot}_{min} = \$217~\mbox{million}.$$</mathjax></p>
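<p>We can check that threshold numerically: with the 31.8% haircut on the three big prizes, break-even works out to about $217 million (a sketch, reusing the odds from the table above):</p>

```python
tax = 0.25 + 0.068            # federal withholding + NY state tax
jackpot_odds = 175_711_536

# expected value of the taxed non-jackpot prizes
taxed_small = 250_000 / 3_904_701 + 10_000 / 689_065
# expected value of the small, untaxed prizes
untaxed = 150/15_313 + 150/13_781 + 10/844 + 7/306 + 3/141 + 2/75

# Solve (1 - tax) * (J / odds + taxed_small) + untaxed = 1 for J:
J_min = ((1.00 - untaxed) / (1 - tax) - taxed_small) * jackpot_odds
print(J_min / 1e6)            # ~217 (million dollars)
```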
<p>Technically, this would have to be the amount actually awarded by the
payment method of your choice. The <em>stated</em> jackpot is always the
annuity method (because it looks higher). The lump sum offering is <em>at
most</em> about 70% of the stated jackpot. So if you want to take the lump
sum offering the <em>stated</em> jackpot will need to be</p>
<p><mathjax>$$ \mbox{Jackpot}_{min} = \$217~\mbox{million} / 0.7 =
\$310~\mbox{million}.$$</mathjax></p>
<p>In fact, these values are likely a bit low, since we have not included
the increase to the marginal tax rate, nor have we included other
effects like having to split a prize (which seems to happen a lot) or
inflation effects if you take the prize in yearly installments.</p>
<p>In any case, a quick look through the <a href="http://www.megamillions.com/winners/jackpothistory.asp">jackpot
history</a> shows
that these threshold values are only met occasionally. An eyeball
estimate puts about one jackpot per year that exceeds the (absolute)
minimum $217 million threshold.</p>
<h2>So am I going to win?</h2>
<p>No. No, you will not. <span class="caps">BUT</span> if you played record setting lotteries
hundreds of millions of times, you might see decent (~10%) returns.
Although, it may just be easiest to, you know, <em>invest</em> that money.</p>
<h4>Only One Useless Footnote</h4>
<p>[1] Although, to be fair, the top marginal tax rate is currently at
<a href="http://en.wikipedia.org/wiki/File:MarginalIncomeTax.svg">historical
lows</a>. It could
always be worse… <a href="#back">[back]</a></p>Pi storage2012-03-14T15:13:00-04:00Alemitag:thephysicsvirtuosi.com,2012-03-14:posts/pi-storage.html<p><a href="http://4.bp.blogspot.com/-4x2fD-exJns/T2DAEJqroqI/AAAAAAAAAbI/8_9quiDP4p0/s1600/floppies.jpg"><img alt="image" src="http://4.bp.blogspot.com/-4x2fD-exJns/T2DAEJqroqI/AAAAAAAAAbI/8_9quiDP4p0/s320/floppies.jpg" /></a></p>
<p>Let me share my worst “best idea ever” moment. Sometime during my
undergraduate I thought I had solved all the world’s problems. You see,
on this fateful day, my hard drive was full. I hate it when my hard
drive fills up, it means I have to go and get rid of some of my stuff. I
hate getting rid of my stuff. But what can someone do? And then it hit
me, I had the bright idea:</p>
<blockquote>
<p>What if we didn’t have to <em>store</em> things, what if we could just
<em>compute</em> files whenever we wanted them back?</p>
</blockquote>
<p>Sounds like an awesome idea, right? I know. But how could we compute our
files? Well, as you may know pi is conjectured to be a <a href="http://en.wikipedia.org/wiki/Normal_number">normal
number</a>, meaning its digits
are probably random. We also know that it is irrational, meaning pi
never ends…. Since its digits are random, and they never end, in
principle any sequence you could ever imagine should show up in pi
eventually. In fact there is a nifty website
<a href="http://pi.nersc.gov/">here</a> that will let you search for arbitrary
strings (using a 5-bit format) in first 4 billion digits, for example
“alemi” <a href="http://pi.nersc.gov/cgi-bin/pi.cgi?word=alemi&format=char">seems to show
up</a> at around
digit 3149096356. So in principle, I could send you just an index, and a
length, and you could compute the resulting file. But wait, you cry,
isn’t computing digits of pi hard? Don’t people work really hard to
compute pi farther and farther? Hold on, I claim: first of all, I’m
imagining a future where computation is cheap. Secondly, there is a
really neat algorithm, the <a href="http://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula"><span class="caps">BBP</span>
algorithm</a>,
that enables you to compute the kth binary digit of pi without knowing
any of the preceding digits. In other words, in principle if you wanted
to know the 4 billionth digit of pi, you can compute it without having
to first compute the first 4 billion other digits. Cool, this is
beginning to sound like a really good idea. What’s the catch? Perhaps
you’ve already gotten a taste of it. Let’s try to estimate just how far
along in pi we would have to look before our message of interest shows
up. Let’s assume we have written our file in binary, and are computing
pi in binary e.g.</p>
<blockquote>
<p>11. 00100100 00111111 01101010 10001000 10000101 10100011 00001000 11010011</p>
</blockquote>
<p>etc. So, if the sequence is random, there is a 1/2 chance that at any
point we get the right starting bit of our file, and then a 1/2 chance
we get the next one, etc. So the chance that we would create our file if
we were randomly flipping coins would be <mathjax>$$ P = \left( \frac{1}{2}
\right)^N = 2^{-N} $$</mathjax> if our file was N bits long. So where do we
expect this sequence to first show up in the digits of pi? Well, this
turns out to be a <a href="http://mathworld.wolfram.com/CoinTossing.html">subtle
problem</a>, but we can get
a feel for it by assuming that we compute N digits of pi at a time and
see if it’s right or not. If it’s not, we move on to the next group of N
digits; if it’s right, we’re done. If this were the case, we should
expect to have to draw about <mathjax>$$ \frac{1}{P} = 2^N $$</mathjax> times until we
have a success, and since each trial ate up N digits, we should expect
to see our file show up after about <mathjax>$$ N 2^N $$</mathjax> digits of pi. Great, so
instead of handing you the file, I could just hand you the index the
file is located. But how many bits would I need to tell you that index.
Well, just like we know that 10^3 takes 4 digits to express in decimal,
and 6 x 10^7 takes 8 digits to express, in general it takes <mathjax>$$ d =
\log_b x + 1 $$</mathjax> digits to express a number in base b, in this case it
takes <mathjax>$$ d = \log_2 ( N 2^N ) + 1= \log_2 2^N + \log_2 N + 1 = N
+ \log_2 N + 1 $$</mathjax> digits to express this index in binary. And there’s
the rub. Instead of sending you N bits of information contained in the
file, all my genius compression algorithm has managed to do is replace N
bits of information in the file with a number that takes (~ N +
log_2 N) bits to express. I’ve actually managed to make the files
larger not smaller! You may have noticed above, that even for the simple
case of “alemi”, all I managed to do was swap the binary message</p>
<blockquote>
<p>alemi -> 0000101100001010110101001 with the index 3149096356 ->
10111011101100110110010110100100</p>
</blockquote>
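<p>Spelling out that comparison (a sketch, assuming the 5-bit-per-character format used by the search site):</p>

```python
message = "alemi"
message_bits = 5 * len(message)   # 5 bits per character -> 25 bits

index = 3149096356                # digit where "alemi" first shows up
index_bits = index.bit_length()   # bits needed to write the index

# The "compressed" index is longer than the message itself:
print(message_bits, index_bits)   # 25 vs 32
```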
<p>which is longer in binary! As an aside, you may have felt uncomfortable
with my estimation for how long we have to wait to see our message, and
you would be right. Just because all N digits I draw at a time don’t
match up doesn’t mean that the second half isn’t useful. For instance if
I was looking for 010, let’s say some of the digits are 101,010. While
neither of those groups matched, if I had been looking at every digit,
I would have found a match. <a href="http://www.cs.elte.hu/~mori/cikkek/Expectation.pdf">Smarter people
than I</a> have
computed just how long you should have to wait, and end up with the
better estimation <mathjax>$$ \text{wait time} \sim 2^N N \log 2 $$</mathjax> which is
pretty darn close to our silly estimate.</p>Calculator Pi2012-03-14T14:16:00-04:00Alemitag:thephysicsvirtuosi.com,2012-03-14:posts/calculator-pi.html<p>There is a very fast converging algorithm for computing pi that you can
do on a desktop calculator.</p>
<ul>
<li>Set x = 3</li>
<li>Now set x = x + sin(x)</li>
<li>Repeat</li>
</ul>
<p>This converges ridiculously fast, after 1 step you get 4 digits right,
after 2 steps you get 11 correct, in general we find:</p>
<hr />
<table>
<thead>
<tr>
<th># steps</th>
<th align="center">Digits right</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td align="center">4</td></tr>
<tr><td>2</td><td align="center">11</td></tr>
<tr><td>3</td><td align="center">33</td></tr>
<tr><td>4</td><td align="center">100</td></tr>
<tr><td>5</td><td align="center">301</td></tr>
<tr><td>6</td><td align="center">903</td></tr>
<tr><td>7</td><td align="center">2708</td></tr>
<tr><td>8</td><td align="center">8124</td></tr>
</tbody>
</table>
<hr />
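<p>In ordinary double precision the iteration converges almost immediately; a minimal sketch (using Python’s math module instead of a pocket calculator):</p>

```python
import math

# x -> x + sin(x) is Newton's method for the root of sin(x) at pi
x = 3.0
for step in range(3):
    x = x + math.sin(x)
    print(step + 1, x)

# after three steps x agrees with pi to near double precision
print(abs(x - math.pi))
```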
<p>of course on a pocket calculator, you only need to do 2 steps to have an
accuracy greater than the calculator can display. To make this chart I
had to trick a computer into doing high precision arithmetic, the code
is <a href="https://gist.github.com/2038329">here</a>. Granted, this approximation
is really cheating, since sin is a hard function to compute, and
basically being able to compute sin means you know what pi is already.
Really, this is just <a href="http://en.wikipedia.org/wiki/Newton's_method">Newton’s
method</a> for computing the
root of sin(x) in disguise.</p>A Clarification2012-03-14T12:20:00-04:00Yarivtag:thephysicsvirtuosi.com,2012-03-14:posts/a-clarification.html<p>As there seems to be some
<a href="http://en.wikipedia.org/wiki/Date_format#Date_format">confusion</a> among
my fellow Virtuosi, I wanted to point out that Pi day occurs on July
22nd or, in the year 4159, on January 3rd. Today is in fact <a href="http://en.wikipedia.org/wiki/Waring%27s_problem">Seventh
Power Day</a>.</p>Pi-rithmetic2012-03-14T11:52:00-04:00Alemitag:thephysicsvirtuosi.com,2012-03-14:posts/pi-rithmetic.html<p><a href="http://2.bp.blogspot.com/-7rfL9Iby34A/T2C3LhSj_6I/AAAAAAAAAa0/rXTR30c77bk/s1600/IMAG0200.jpg"><img alt="image" src="http://2.bp.blogspot.com/-7rfL9Iby34A/T2C3LhSj_6I/AAAAAAAAAa0/rXTR30c77bk/s320/IMAG0200.jpg" /></a></p>
<p>Fun fact: pi squared is very close to 10. How close? Well, <a href="http://www.wolframalpha.com/input/?i=%2810+-pi%5E2+%29%2Fpi%5E2">Wolfram
Alpha</a>
tells me that it is only about 1% off. I first realized this fact when
looking at my slide rule, pictured to the left (click to embiggen), just
another reason why slide rules are awesome. It turns out I use this fact
all of the time. How’s that, you ask? Well, I use this fact to enable me
to do very quick mental arithmetic. It goes like this. For every number
you come across in a calculation, drop all of the information save two
parts, first, what’s its order of magnitude, that is, how many digits
does it have, and second, is it closest to 1, pi, or 10? The first part
amounts to thinking of every number you come across as it looks in
scientific notation, so a number like 2342 turns into 2.342 x 10^3, so
that I’ve captured its magnitude in a power of 10. As for the next part,
the rules I usually use are:</p>
<ul>
<li>If the remaining bit is between 1 and 2, make it 1</li>
<li>If it’s between 2 and 6.5, make it pi</li>
<li>If it’s bigger than 6.5, make it another 10</li>
</ul>
<p>Another way to think of this is to estimate every number as a power
of ten times either 1, a few, or 10. I choose pi for the “few” because
then I know how the rest of the arithmetic should work from just a few
rules, and because when I use this to estimate answers to physics
formulae, the pis that show up tend to cancel the natural pis already
sitting in the formulae.</p>
<p><mathjax>$$ \pi \times \pi \sim10 \qquad \frac{1}{\pi} \sim
\frac{\pi}{10} \qquad \sqrt{10} \sim \pi $$</mathjax></p>
<p>You might notice that these are just the same approximation written
in three different ways.</p>
<p>Let’s work an example</p>
<p><mathjax>$$ \begin{align*} 23 \times 78 / 13 \times 2133 &= ? \\ \pi
\times 10 \times 100 / 10 \times \pi \times 10^3 &= ? \\ \pi^2
\times 10^5 &\sim 10^6 \\ \end{align*}$$</mathjax></p>
<p>Of course, the <a href="http://www.wolframalpha.com/input/?i=23+*+78%2F13+*+2133">real
answer</a> is
294,354, so you’ll notice I got the answer wrong, but only by a factor
of 3, which is pretty good for mental arithmetic, and in particular for
mental arithmetic that takes almost no time at all.</p>
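<p>The rounding rule is mechanical enough to write down in code. Here is a minimal sketch of it (my own illustration, not part of the original post):</p>

```python
import math

def pi_round(x):
    """Round a positive number to 1, pi, or 10 times its power of ten."""
    exponent = math.floor(math.log10(x))
    mantissa = x / 10 ** exponent      # leading part in scientific notation
    if mantissa < 2:
        mantissa = 1
    elif mantissa < 6.5:
        mantissa = math.pi
    else:
        mantissa = 10
    return mantissa * 10 ** exponent

# The worked example from the post: 23 * 78 / 13 * 2133
approx = pi_round(23) * pi_round(78) / pi_round(13) * pi_round(2133)
print(approx)   # pi**2 * 1e5, roughly 1e6; the exact product is 294,354
```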
<p>In fact, the average error I introduce by using this approximation is
just 30% or so for each number, which I’ve shown below [the script that
produced this plot for those interested is
<a href="https://gist.github.com/2037431">here</a>].</p>
<p><a href="http://3.bp.blogspot.com/-uwGlV6y_pps/T2C90lPhmQI/AAAAAAAAAbA/k_Hl8H-y2ys/s1600/pierr.png"><img alt="image" src="http://3.bp.blogspot.com/-uwGlV6y_pps/T2C90lPhmQI/AAAAAAAAAbA/k_Hl8H-y2ys/s320/pierr.png" /></a></p>
<p>So, there you go: now you can impress all of your friends with some
simple mental arithmetic that gets you within a factor of 3 or so on average.</p>Moving Pi-ctures2012-03-14T08:35:00-04:00Corkytag:thephysicsvirtuosi.com,2012-03-14:posts/moving-pi-ctures.html<hr />
<p><a href="http://4.bp.blogspot.com/-YJBJw29pxZk/T1ribOQLS-I/AAAAAAAAAVA/xsJ-KAOJt5w/s1600/tv_pi.jpg"><img alt="image" src="http://4.bp.blogspot.com/-YJBJw29pxZk/T1ribOQLS-I/AAAAAAAAAVA/xsJ-KAOJt5w/s320/tv_pi.jpg" /></a>
My <span class="caps">TV</span> celebrates without me.</p>
<hr />
<p>Today, as I’m sure you’re aware, is Pi Day - a day for the festive
consumption of pies and quiet self-reflection. In the spirit of the
holiday, I’d like to present a point for discussion: <em>Everyone has a
great talent for at least one thing.</em> That this is true for at least
<em>some</em> people is seen through even a cursory glance at a history book:
George Washington was really good at leading revolutions, Michelangelo
was an outstanding ceiling painter <a href="#note1">[1]</a>, and Batman was the
best at solving complex riddles (especially in English, pero
<em>especialmente</em> <a href="http://www.youtube.com/watch?v=RY1U_pXUxUo&feature=related">en
español</a>).
But I’m certain that this holds for everyone. What’s your talent? Mine,
as those of you who read this blog should know very well by now, is
certainly <em>not</em> doing physics. Nope, my talent is watching <span class="caps">TV</span>. Seriously
guys, I watch <span class="caps">TV</span> like a boss <a href="#note2">[2]</a>. In light of this talent, I
thought I would describe a few instances in which I have seen pi
represented (for better or for worse) in <span class="caps">TV</span> and movies. Over the last
few months, I have been re-watching a lot of the <span class="caps">TV</span> show
<em><a href="http://en.wikipedia.org/wiki/Psych">Psych</a></em> with my good friend and
fellow Virtuosi contributor, Matt “<span class="caps">TT</span>” Showbiz <a href="#note3">[3]</a>. For the
uninitiated, <em>Psych</em> is a detective show where the main characters
(Shawn and Gus) run a (fake) psychic detective agency, which allows them
to solve mysteries, engage in various shenanigans, and make an
inordinate number of references to <em>Tears for Fears</em> frontman Curt Smith
<a href="#note4">[4]</a>. In one of the episodes, Shawn and Gus enter a room where
a long train of digits is written across the top of the wall. It soon
becomes evident that these are the digits of pi and the camera is sure
to zoom in on the famous first few digits to reassure us. But there are
hundreds of digits written out and I have very little faith in <span class="caps">TV</span> prop
people when it comes to background mathematical expressions. So I
decided to check it out.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-sIqwUS9wlog/T17KQ7mU7nI/AAAAAAAAAV4/1hiiMuN-yxA/s1600/pi_psych1.png"><img alt="image" src="http://4.bp.blogspot.com/-sIqwUS9wlog/T17KQ7mU7nI/AAAAAAAAAV4/1hiiMuN-yxA/s400/pi_psych1.png" /></a>
Pi on the Wall (click to enhance for texture)</p>
<hr />
<p>Using a neat little pi <a href="http://www.angio.net/pi/piquery">searcher</a>, I
checked to see if (and where) this sequence appeared in pi. Turns out
it’s legit and (almost!) correct. The first 105 digits of pi (counting
after the three) are:</p>
<blockquote>
<p>3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706<strong>798</strong>2148</p>
</blockquote>
<p>where I have bolded the 99th, 100th, and 101st digits. Looking back
at the writing on the wall, we see that the 100th digit has been duplicated.</p>
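<p>If you’d rather not trust an online searcher, the digits themselves are easy to generate. A small stdlib-only sketch (my own, using Machin’s formula with integer arithmetic; the pi searcher site presumably does something fancier):</p>

```python
def arctan_inv_scaled(x, scale):
    """scale * arctan(1/x) via the alternating Taylor series, in integers."""
    total, term, k, sign = 0, scale // x, 1, 1
    while term:
        total += sign * (term // k)
        term //= x * x
        k += 2
        sign = -sign
    return total

def pi_digits(n):
    """Return '3' followed by the first n decimal digits of pi."""
    scale = 10 ** (n + 10)   # carry 10 guard digits
    # Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
    pi_scaled = 4 * (4 * arctan_inv_scaled(5, scale)
                     - arctan_inv_scaled(239, scale))
    return str(pi_scaled)[:n + 1]

digits = pi_digits(105)
print(digits[99:102])   # the 99th-101st digits after the 3: "798"
```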
<hr />
<p><a href="http://1.bp.blogspot.com/-6vQ2gmfNULw/T17Nb29jAvI/AAAAAAAAAWA/2hMW9vdU3S0/s1600/pi_psych.png"><img alt="image" src="http://1.bp.blogspot.com/-6vQ2gmfNULw/T17Nb29jAvI/AAAAAAAAAWA/2hMW9vdU3S0/s400/pi_psych.png" /></a>
Very Almost Pi</p>
<hr />
<p>So close! Oh well, nobody is perfect. Even though there is an error
here, I very much appreciate that whoever was doing the set design
decided to use the <em>actual</em> digits of pi. All too often I see
nonsensical equations in the background of <span class="caps">TV</span> shows and movies when it
would take <em>exactly the same</em> amount of work to put real equations
there. So congratulations to you, O nameless prop-making intern!, for
giving an accurate (well, to a part in 10^100) value of pi. Neat, so
are there any other <span class="caps">TV</span> shows or movies that have pi in them? Well,
there’s <a href="http://www.youtube.com/watch?v=jo18VIoR2xU">Pi</a>. <em>Pi</em> is a film
by Darren Aronofsky (<em>Requiem for a Dream</em>, etc.) about a mathematician
looking for patterns in the stock market. It’s a pretty good movie with
a really cool
<a href="http://www.youtube.com/watch?v=9Cq_QO_4Cx4&feature=player_embedded">soundtrack</a>
by Clint Mansell. It also appears to display the digits of pi in the
opening credits. But does it? To the Youtube-mobile! You can watch the
opening credits <a href="http://www.youtube.com/watch?v=L61x5mbE-jc">here</a> if
you like and here is a still image of the relevant section.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-pEbkgRx-PFw/T17UHYdfhwI/AAAAAAAAAWI/vsyzWd139pg/s1600/sad_pi.png"><img alt="image" src="http://4.bp.blogspot.com/-pEbkgRx-PFw/T17UHYdfhwI/AAAAAAAAAWI/vsyzWd139pg/s400/sad_pi.png" /></a> Pi?</p>
<hr />
<p>Looks pretty cool, huh? But once we get past the slick aesthetics, we
see that something doesn’t seem right. This number they are showing
appears at first glance to be our good friend pi, but after the 8th
digit the cover is blown and we see that this is actually some impostor number!</p>
<hr />
<p><a href="http://2.bp.blogspot.com/-5afmhwZOhWk/T17U-d1okrI/AAAAAAAAAWQ/KB7NHwwOvQg/s1600/verysadpi.png"><img alt="image" src="http://2.bp.blogspot.com/-5afmhwZOhWk/T17U-d1okrI/AAAAAAAAAWQ/KB7NHwwOvQg/s400/verysadpi.png" /></a>
More like Darren Aron-wrong-sky.</p>
<hr />
<p>Now, I fully understand that this has no bearing whatsoever on the film
and, in the grand scheme of things, is not a Big Deal. But it would have
been just as easy to put the <em>real</em> digits of pi here instead of just
random filler. The only way that this could possibly be better than the
real deal would be if it is actually a secret code. I have not yet ruled
this out, as the movie is entirely about looking for meaning in
seemingly random numbers. Unfortunately, the difficulty in transcribing
the numbers from the screen greatly outweighs the very small chance that
this isn’t just gibberish. Four hundred Quatloos to anyone who can tell
me if this is a code or not!</p>
<p>[1] And an above average Ninja Turtle to boot. <a href="#back1">[back]</a></p>
<p>[2] Yes, I am putting my <span class="caps">TV</span> watching skills on par with the talents of
George Washington. In fact, the stoic way in which I persevered through
the entirety of <em><a href="http://en.wikipedia.org/wiki/Terminator:_The_Sarah_Connor_Chronicles">The Sarah Connor
Chronicles</a></em> in
under two weeks was described by historian David McCullough as
“Washingtonian.” These are simply facts. <a href="#back2">[back]</a></p>
<p>[3] The extra “T” is for extra talent. <a href="#back3">[back]</a></p>
<p>[4] A duo can <em>absolutely</em> have a frontman. For evidence, feel free to
ask the not-George-Michael-guy from <em>Wham!</em> or the not-Paul-Simon-guy
from <em>Simon <span class="amp">&</span> Garfunkel</em>. <a href="#back4">[back]</a></p>A Very Small Slice of Pi2012-03-14T08:22:00-04:00Corkytag:thephysicsvirtuosi.com,2012-03-14:posts/a-very-small-slice-of-pi.html<hr />
<p><a href="http://3.bp.blogspot.com/-q37nqPUh_t0/T2AJgUmFLKI/AAAAAAAAAWY/0rvkqKzmDBs/s1600/rhubarb.JPG"><img alt="image" src="http://3.bp.blogspot.com/-q37nqPUh_t0/T2AJgUmFLKI/AAAAAAAAAWY/0rvkqKzmDBs/s320/rhubarb.JPG" /></a>
Rhubarb pie (Source: Wikipedia)</p>
<hr />
<p>Some people know a <em>suspiciously</em> large number of the digits of pi.
Perhaps you have met one of these people. They can typically be found
hiding behind bushes and under the counters at pastry shops, just…
<em>waiting</em>. At the slightest hint of a mention of pi, they will jump out
and start reciting the digits like there’s a prize at the end. After
rattling off numbers for a few minutes they abruptly come to an end,
grin like an idiot, and walk away. It is an unpleasant encounter. The
sheer uselessness of this kind of thing has always bothered me, so I’d
like to set a preliminary upper bound on the number of digits of pi that
could ever possibly potentially kind of be useful (maybe). For those
following along at home, now would be a good time to put on your
numerology hats. Alright, so I hear this thing pi is fairly useful when
dealing with circles. Let’s say we want to make a really big circle and
have its diameter only deviate by a very small amount from the correct
value. To do this successfully, we will have to know pi fairly well.
Let’s take this to extremes now. Suppose I want to put a circle around
the <em>entire visible universe</em> such that the uncertainty in the diameter
is the size of a <em>single proton</em>. What would be the fractional
uncertainty in the circumference in this case? If we know pi exactly,
then we have that <mathjax>$$\delta C = \frac{\partial C}{\partial d} \delta
d = \pi \delta d = C \frac{\delta d}{d}, $$</mathjax> where d is the diameter
and C is the circumference. In other words, the fractional uncertainty
in the circumference is just <mathjax>$$\frac{\delta C}{C} = \frac{\delta
d}{d}. $$</mathjax> Using a femtometer for the size of a proton and 90 billion
light years for the size of the Universe <a href="#note">[1]</a>, we get
<mathjax>$$\frac{\delta C}{C} = \frac{\delta d}{d} =
\frac{10^{-15}\mbox{m}}{(90\times10^9)(3\times10^7\mbox{s})(3\times10^8\mbox{m
s}^{-1})} \sim\frac{10^{-15}}{10^{27}}\sim10^{-42}.$$</mathjax> Alright, so
how well do we need to know pi to get a similar fractional uncertainty?
Well, we have that <mathjax>$$\frac{\delta \pi}{\pi} = \frac{\delta C}{C} =
10^{-42}, $$</mathjax> so we can afford an uncertainty in pi of <mathjax>$$ \delta \pi =
\pi \times 10^{-42}$$</mathjax> and thus we’ll need to know pi to about 42
digits. How’s that for an
<a href="http://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life.2C_the_Universe.2C_and_Everything_.2842.29">answer</a>?
So if we have a giant circle the size of the <em>entire visible universe</em>,
we can find its diameter to within the size of a <em>single proton</em> using
pi to 42 digits. Therefore, I adopt this as the maximal number of digits
that could ever prove useful in a physical sense (albeit under a
somewhat bizarre set of circumstances). If reciting hundreds of digits
is what makes you happy, go for it. But 42 digits is more than enough pi
for me.</p>
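<p>For the skeptical, the arithmetic above is quick to check numerically, using the same rough inputs as in the text:</p>

```python
import math

proton = 1e-15                            # ~1 femtometer, in meters
meters_per_lightyear = 3e7 * 3e8          # ~seconds/year times c (m/s)
d_universe = 90e9 * meters_per_lightyear  # diameter in meters, ~1e27

frac = proton / d_universe                # fractional uncertainty delta_d / d
digits = math.ceil(-math.log10(frac))     # decimal digits of pi this demands
print(f"delta d / d = {frac:.1e}, so about {digits} digits of pi")
# prints: delta d / d = 1.2e-42, so about 42 digits of pi
```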
<p>[1] “But I thought the Universe was only 13.7 billion years! What voodoo
is this!?” Yeah, I know. See
<a href="http://scienceblogs.com/startswithabang/2011/01/q_a_how_is_the_universe_so_big.php">here</a>
for a nice explanation.<a href="#back">[back]</a></p>Primes in Pi2012-03-14T03:14:00-04:00Alemitag:thephysicsvirtuosi.com,2012-03-14:posts/primes-in-pi.html<p><a href="http://1.bp.blogspot.com/-zr3Ex0CiRk4/T2DBK9RBX-I/AAAAAAAAAbQ/7SA_87njptE/s1600/repunit.png"><img alt="image" src="http://1.bp.blogspot.com/-zr3Ex0CiRk4/T2DBK9RBX-I/AAAAAAAAAbQ/7SA_87njptE/s320/repunit.png" /></a></p>
<p>Recently, I’ve been concerned with the fact that I don’t know many large
primes. Why? I don’t know. This has led to a search for easy-to-remember
prime numbers. I’ve found a few good ones, namely:</p>
<ul>
<li>867-5309 - <a href="http://en.wikipedia.org/wiki/867-5309/Jenny">Jenny’s number</a></li>
<li>the digit 1 repeated 1031 times, in the style of the picture above,
which is the largest known <a href="http://en.wikipedia.org/wiki/Repunit_prime#Repunit_primes">repunit prime</a></li>
<li>1987 (my birth year), 2011 (last year), 1999 (<a href="http://en.wikipedia.org/wiki/1999_(song)">the party
year</a>)</li>
</ul>
<p>But then I remembered that I already know 50 digits of pi, memorized one
boring day in grade school, so this got me wondering whether there were
any primes among the digits of pi.</p>
<p>Lo and behold, I wrote a <a href="https://gist.github.com/2033970">little
script</a> and found a few:</p>
<ul>
<li>Found one with 1 digit: 3</li>
<li>Found one with 2 digits: 31</li>
<li>Found one with 6 digits: 314159</li>
<li>Found a rounded one with 12 digits: 314159265359</li>
<li>Found one with 38 digits: 31415926535897932384626433832795028841</li>
</ul>
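<p>For the curious, here is a self-contained sketch of the same search (my own rewrite, using a Miller-Rabin probable-prime test; the linked gist may do it differently):</p>

```python
import random

def is_probable_prime(n, rounds=25):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness that n is composite
    return True

PI_DIGITS = "31415926535897932384626433832795028841"
for n in range(1, len(PI_DIGITS) + 1):
    if is_probable_prime(int(PI_DIGITS[:n])):
        print(f"Found one with {n} digits: {PI_DIGITS[:n]}")
```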
<p>I think it’s typical for most science geeks to know pi to at least
3.14159. If you’re one of those people, you now know a 6-digit prime.
For free!</p>F-91 Revisited2012-03-12T00:57:00-04:00Corkytag:thephysicsvirtuosi.com,2012-03-12:posts/f-91-revisited.html<hr />
<p><a href="http://3.bp.blogspot.com/-05FSvjIalzs/T11JuY0apyI/AAAAAAAAAVI/hHndDNPO5CE/s1600/dst_sam.jpeg"><img alt="image" src="http://3.bp.blogspot.com/-05FSvjIalzs/T11JuY0apyI/AAAAAAAAAVI/hHndDNPO5CE/s400/dst_sam.jpeg" /></a>
Farmer Uncle Sam…with a rifle. (Image Credit: Wikipedia)</p>
<hr />
<p>Today was a sunny exception to the grey overcast rule of weather in
Ithaca. I <em>should</em> be overjoyed by this anomaly, spending the day
outside flying a kite or playing frisbee with a border collie in a
bandanna. Unfortunately, today was also the beginning of Daylight
Savings Time (<span class="caps">DST</span>) - my least favorite day of the year. For my
colleagues unfamiliar with this temporal travesty (I’m looking at you
Arizona), let me briefly explain <span class="caps">DST</span>. Once a year, the time lords steal
a single hour from us and place it in an escrow account for future
disbursement, presumably in some elaborate scheme to gain the favor of
hat-throwing farmer-clock hybrids (see image left). The details are a
bit murky, but the net result is that today I had one less hour to do my
very favorite thing in the whole wide world - sleep. It also means that
I have to set my watch, so I figured I’d check in and see how well my
<a href="http://thevirtuosi.blogspot.com/2012/02/time-keeps-on-slippin.html">previous
model</a>
for time-loss in my watch has held up. About a month ago, I looked at
how my watch slowly deviated from the <a href="http://nist.time.gov/timezone.cgi?Eastern/d/-5/java">official
time</a> (the original
post can be found
<a href="http://thevirtuosi.blogspot.com/2012/02/time-keeps-on-slippin.html">here</a>
and a helpful clarification by Tom can be found
<a href="http://blogs.scienceforums.net/swansont/archives/11014">here</a>). Based
on a little over 50 days worth of data, I found that my watch lost about
0.35 seconds per day against the official time. About 50 days have
passed since my last measurement and today when I set the watch, so I
thought it would be interesting to see how well my model fit the new
data. The old data are presented in Figure 1 in blue, the old best fit
line is in red, and the new data point (taken this morning) is in green.
As always, click through the plot for a larger version.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-HqpuhvxXl8M/T1115Tz6lnI/AAAAAAAAAVQ/I1AUjQLXres/s1600/dst_update_long.png"><img alt="image" src="http://4.bp.blogspot.com/-HqpuhvxXl8M/T1115Tz6lnI/AAAAAAAAAVQ/I1AUjQLXres/s400/dst_update_long.png" /></a>
Figure 1</p>
<hr />
<p>The new data point appears to be in fair agreement with the old best-fit
model, but it’s a little hard to see here. Zooming in a bit, though, we
see that the model lies outside the error bar of the new data point.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-1euutOYSIrs/T1125Hi_XNI/AAAAAAAAAVY/tnWZ6N3mJJs/s1600/dst_update_short.png"><img alt="image" src="http://4.bp.blogspot.com/-1euutOYSIrs/T1125Hi_XNI/AAAAAAAAAVY/tnWZ6N3mJJs/s400/dst_update_short.png" /></a>
Figure 2</p>
<hr />
<p>So is this a big deal? Not really. But if it will help you sleep at
night, we can redo the fitting with the new data point included to see
how much things change. The plots with the model updated to include
all data points are provided below, with the old data in blue and the new
point in green.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-oIeLFmzJgKI/T115TGf3laI/AAAAAAAAAVg/RHPJuP9UrKk/s1600/dst_newfit_long.png"><img alt="image" src="http://4.bp.blogspot.com/-oIeLFmzJgKI/T115TGf3laI/AAAAAAAAAVg/RHPJuP9UrKk/s400/dst_newfit_long.png" /></a>
Figure 3</p>
<hr />
<p>The new model looks a whole lot like the old one, except the best fit
line now appears to go through the new data point. Zooming in a little,
we see that it does indeed fall within the error bars of our new point.</p>
<hr />
<p><a href="http://1.bp.blogspot.com/-EUf50O2-jjQ/T11514CpS_I/AAAAAAAAAVo/Ryt2v1LOeVk/s1600/dst_newfit_short.png"><img alt="image" src="http://1.bp.blogspot.com/-EUf50O2-jjQ/T11514CpS_I/AAAAAAAAAVo/Ryt2v1LOeVk/s400/dst_newfit_short.png" /></a>
Figure 4</p>
<hr />
<p>Alright, so the new model fits with our new point, but how much did the
model have to change? Well, the fit to just the old data gave an offset
of 0.36 seconds and a loss rate of 0.35 seconds per day. The new model
has an offset of 0.40 seconds and a loss rate of 0.348 seconds per day.
Overall, not a significant change.
It looks as though I may continue to not worry about the accuracy of my
watch. I have set it to match the official time and have no intention of
fiddling with it until I have to set it again at the end of Daylight
Savings Time - my favorite day of the year.</p>Proofiness: A look into how mathematics relates to American political life2012-03-06T14:37:00-05:00DTCtag:thephysicsvirtuosi.com,2012-03-06:posts/proofiness-a-look-into-how-mathematics-relates-to-american-political-life.html<p>Dearest readers,
This is my first post on The Virtuosi, so I thought I’d take a moment to
introduce myself. I’m a first-year physics graduate student at Cornell,
recently joined after two years working as an engineer, first at a private
firm and then at a national lab. I have had lots of fun following
the exploits of my estimable colleagues here on The Virtuosi, and I
thought I could bring a new angle to the content here. I would like to
use this space to discuss how science interacts with everyday life in a
cultural sense. How does science appear in popular culture? How do
political or social issues relate back to science? Those sorts of
questions. (I understand that there are plenty of other resources
elsewhere that offer far more intelligent insight into these matters
than I can, but at the very least this will give people a chance to
point them out to me as they yell at me in the forum below.)</p>
<p>Enough intro; here begins my very first blog post.</p>
<p>Being interested in how science is communicated to the public, I am an
avid reader of popular science. While academic types sometimes dismiss
this kind of writing as shallow or otherwise uninteresting, I think
science writers perform a very important function, serving as a way to
convey information about conceptually challenging topics to a general
audience. At their best, I find that these books serve as examples of
how I can communicate my own ideas better, and in addition they
challenge my understanding of how science relates back to society in
general.</p>
<p>That being said, I cannot recommend Charles Seife’s <em>Proofiness</em> enough.
The basic premise of this book is to explore the way that good
mathematics is hijacked, twisted, or ignored in everyday life, and the
ugly consequences of the tendency to misunderstand numbers and
measurements.</p>
<p>Seife gives a number of fascinating examples of the ways in which
numbers and math connect to American democracy. American government
functions through representation, and so the
“<a href="http://www.archives.gov/exhibits/charters/constitution_transcript.html">enumeration</a>”
of citizens and their opinions through the Census and elections is an
essential part of the democratic process. This “enumeration” is a
counting measurement, subject to errors like any other. And yet, the
laws that govern how Censuses and elections are run ignore this fact.
Seife’s discussion of elections (and in particular <em>Bush v. Gore</em>) is
fascinating, but I won’t spoil that here. Here’s my take on the
discussion of the Census that appears in <em>Proofiness</em>.</p>
<p>Consider a (vague) physics experiment. I want to know how many
particles, N, are inside a box. To figure this out, I have a detector
that goes <em>ping</em> every time a particle passes through it. I set up my
detector inside the box and count the number of times that it goes
<em>ping</em> in a certain amount of time. I can then use that count to
estimate N to within some margin of error. This process is perhaps
unnecessary if I have only five particles in my box (in which case, I
might just open the box and count what I see inside), but if I have 300
million particles in my box, it would be totally impractical for me to
reach into the box 300 million times and count each one individually.</p>
<p>We can consider the Census to be just like this physics experiment. I
have N inhabitants (particles) living in my country (box), and I can use
my detector (census replies) to count a certain number of people. In
principle, using well-understood statistical techniques of regression
and error analysis, I can estimate to within a very good margin of error
how many people live in each region of the country. Instead, what the
Census requires is that we reach inside the box (send representatives to
every household that doesn’t reply by mail) and count every single
person. The whole process ignores the fact that even if we send a
representative to every single household, there will still be some
margin of error in our counting measurements. No such measurement can be
made without errors.</p>
<p>The consequences of ignoring these errors, says Seife, can be that we
waste money in attempting the impossible and trying to count everybody.
From a civic-minded perspective, this attitude towards the perfection of
the Census can backfire. For example, if undercounting occurs (i.e.,
certain households do not respond for some reason), the Census has no
mechanism for correcting that miscount. Counter-intuitively, the Census
laws actually prohibit the use of any statistical techniques to correct
miscounting. The result is that those slow to respond are ignored and
not taken into account when allotting seats in the legislature to
represent them.</p>
<p><em>Proofiness</em> is a fascinating book and a fun read, and I recommend you
all look it up. In addition, it serves as an excellent example of
science writing that helped me to rethink how scientific ideas relate to
everyday life. I hope to invite consideration of these topics here and
in future posts. If you want to know more about the inspiration for this
post, go <a href="http://journalism.nyu.edu/faculty/charles-seife/">here</a>.</p>Time Keeps On Slippin’2012-02-12T22:00:00-05:00Corkytag:thephysicsvirtuosi.com,2012-02-12:posts/time-keeps-on-slippin-.html<hr />
<p><a href="http://3.bp.blogspot.com/-rtzREUF0wYM/TyX0orGHkpI/AAAAAAAAASk/hsRZAwtg7H4/s1600/casi_f91w.jpg"><img alt="image" src="http://3.bp.blogspot.com/-rtzREUF0wYM/TyX0orGHkpI/AAAAAAAAASk/hsRZAwtg7H4/s320/casi_f91w.jpg" /></a>
This is picture of a watch. (Source: Wikipedia)</p>
<hr />
<p>A couple of months ago, the Virtuosi Action Team (<span class="caps">VAT</span>) assembled for
lunch and the discussion quickly and unexpectedly turned to watches. As
Nic and Alemi argued over the finer parts of fancy-dancy watch
ownership, I looked down at my watch: the lowly <a href="http://en.wikipedia.org/wiki/Casio_F91W">Casio
F-91W</a>. Though it certainly
isn’t fancy, it is inexpensive, durable, and could potentially win me an
all-expense paid trip to the
<a href="http://www.guardian.co.uk/world/2011/apr/25/guantanamo-files-casio-wristwatch-alqaida">Caribbean</a>.
But how <em>good</em> of a watch is it? To find out, I decided to time it
against the official <span class="caps">U.S.</span> time for a couple of months. Incidentally,
about half-way in I found out that Chad over at <a href="http://scienceblogs.com/principles/">Uncertain
Principles</a> had done essentially
the same thing
<a href="http://scienceblogs.com/principles/2011/05/the_testing_of_time_measuring.php">already</a>.
No matter, science is still fun even if you aren’t the first person to
do it. So here’s my “new-to-me” analysis. Alright, so how do we go about
quantifying how “good” a watch is? Well, there seem to be two main
things we can test. The first of these is accuracy. That is, how close
does this watch come to the <em>actual</em> time (according to some time
system)? If the official time is 3:00 pm and my watch claims it is 5:00
am, then it is not very accurate. The second measure of “good-ness” is
precision or, in watch parlance, stability. This is essentially a
measure of the consistency of the watch. If I have a watch that is
consistently off by 5 minutes from the official time, then it is not
accurate but it is still stable. In essence, a very consistent watch
would be just as good as an accurate one, because we can always just
subtract off the known offset. To test any of the above measures of how
“good” my cheap watch is, we will need to know the actual time. We will
adopt the official <span class="caps">U.S.</span> time as provided on the <a href="http://nist.time.gov/timezone.cgi?Eastern/d/-5/java"><span class="caps">NIST</span>
website</a>. This time
is determined and maintained by a collection of really impressive atomic
clocks. <a href="http://en.wikipedia.org/wiki/NIST-F1">One of these</a> is in
Colorado and the other is secretly guarded by an ever-vigilant Time Lord
(see Figure 1).</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-EXMf5o9a4GI/TzCRcaMjjVI/AAAAAAAAASs/FlqyjHbJS9Q/s1600/flavor_flav.jpg"><img alt="image" src="http://4.bp.blogspot.com/-EXMf5o9a4GI/TzCRcaMjjVI/AAAAAAAAASs/FlqyjHbJS9Q/s320/flavor_flav.jpg" /></a>
Figure 1: Flavor Flav, Keeper of the Time</p>
<hr />
<p>At 9:00:00 am <span class="caps">EST</span> on November 30th, I synchronized my watch with the
time displayed on the <span class="caps">NIST</span> website. For the next 54 days, I kept track
of the difference between my watch and the <span class="caps">NIST</span> time. On the 55th day, I
forgot to check the time and the experiment promptly ended. The results
are plotted below in Figure 2 (and, as with all plots, click through for
a larger version).</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-hscjUIehrHQ/TzCYqNhcsYI/AAAAAAAAATE/eAz3O5KlAcY/s1600/time_fit_short.png"><img alt="image" src="http://3.bp.blogspot.com/-hscjUIehrHQ/TzCYqNhcsYI/AAAAAAAAATE/eAz3O5KlAcY/s400/time_fit_short.png" /></a>
Figure 2: Best-fit to time difference</p>
<hr />
<p>As you can see from Figure 2, the amount of time the watch lost over the
timing period appears to be fairly linear. There does appear to be a
jagged-ness to the data, though. This is mainly caused by the fact that
both the watch and the <span class="caps">NIST</span> website only report times to the nearest
second. As a result, the finest time resolution I was willing to report
was about half a second. Adopting an uncertainty of half a second, I did
a least-squares fit of a straight line to the data and found that the
watch loses about 0.35 seconds per day. As far as accuracy goes, that’s
not bad! No matter what, I’ll have to set my watch at least twice a year
to appease the Daylight Savings Gods. The longest stretch between
resetting is about 8 months. If I synchronize my watch with the <span class="caps">NIST</span>
time to “spring forward” in March, it will only lose about <mathjax>$$ t_{loss}
= 8\~\mbox{months} \times 30\frac{\mbox{days}}{\mbox{month}}
\times 0.35 \frac{\mbox{sec}}{\mbox{day}} = 84\~\mbox{sec} $$</mathjax>
before I have to re-synchronize to “fall back” in November. Assuming the
loss rate is constant, I’ll never be more than about a minute and a half
off the “actual” time. That’s good enough for me. Furthermore, if the
watch is <em>consistently</em> losing 0.35 seconds per day and I know how long
ago I last synchronized, I can always correct for the offset. In this
case, I can always know the official time to within a second (assuming I
can add). But <em>is</em> the watch consistent? That’s a good question. The
simplest means of finding the stability of the watch would be to look at
the timing residuals between the data and the model. That is, we will
consider how “off” each point is from our constant rate-loss model. A
plot of the results is shown below in Figure 3.</p>
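<p>A fit like this needs nothing fancy. Here is a stdlib-only sketch of the procedure, run on synthetic stand-in data (the raw measurements aren’t reproduced in the post, so the numbers here are hypothetical, chosen to mimic a 0.35 s/day loss with the half-second read-off quantization):</p>

```python
import math

# Hypothetical cumulative seconds lost, quantized to the nearest half second.
days = list(range(55))
lost = [round(0.35 * d * 2) / 2 for d in days]

# Ordinary least-squares fit of a straight line, lost = slope*day + intercept.
n = len(days)
xm, ym = sum(days) / n, sum(lost) / n
slope = (sum((x - xm) * (y - ym) for x, y in zip(days, lost))
         / sum((x - xm) ** 2 for x in days))
intercept = ym - slope * xm

# Residuals and their standard deviation (the "typical miss" of the model).
residuals = [y - (slope * x + intercept) for x, y in zip(days, lost)]
sigma = math.sqrt(sum(r * r for r in residuals) / n)
print(f"loss rate ~ {slope:.2f} s/day, residual sigma ~ {sigma:.2f} s")
```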
<hr />
<p><a href="http://1.bp.blogspot.com/-13ULtvOlXuU/TzCpp6LGvfI/AAAAAAAAATM/8rybD1lxZOo/s1600/residuals.png"><img alt="image" src="http://1.bp.blogspot.com/-13ULtvOlXuU/TzCpp6LGvfI/AAAAAAAAATM/8rybD1lxZOo/s400/residuals.png" /></a>
Figure 3: Timing residuals</p>
<hr />
<p>From Figure 3, we see that the data fit the model pretty well. There’s a
little bit of a wiggle going on there and we see some strong short-term
correlations (the latter is an artifact of the fact that I could only
get times to the nearest second). To get some sense of the timing
stability from the residuals, we can calculate the standard deviation,
which will give us a figure for how “off” the data typically are from
the model. The standard deviation of the residuals is <mathjax>$$ \sigma_{res}
= 0.19\~\mbox{sec}. $$</mathjax> A good guess at the fractional stability of the
watch would then just be the standard deviation divided by the sampling
interval, <mathjax>$$ \frac{\sigma_{res}}{T} = 0.19\~\mbox{sec} \times
\frac{1}{1\~\mbox{day}} \times
\frac{1\~\mbox{day}}{24\times3600\~\mbox{sec}} \approx
2\times10^{-6}.$$</mathjax> In words, this means that each “tick” of the watch
is consistent with the average “tick” value to about 2 parts in a
million. That’s nice…but isn’t there something <em>fancier</em> we could be
doing? Well, I have been wanting to learn about <a href="http://en.wikipedia.org/wiki/Allan_variance">Allan
variance</a> for some time
now, so let’s try that. The Allan variance (refs: <a href="http://www.ino.it/~azavatta/References/Allan.pdf">original
paper</a> and a
<a href="http://tf.boulder.nist.gov/general/pdf/118.pdf">review</a>) can be used to
find the fractional frequency stability of an oscillator over a wide
range of time scales. Roughly speaking, the Allan variance tells us how
averaging our residuals over different chunks of time affects the
stability of our data. The square root of the Allan variance,
essentially the “Allan standard deviation,” is plotted against various
averaging times for our data below in Figure 4.</p>
<hr />
<p><a href="http://2.bp.blogspot.com/-_zhCroHDoNE/TzC7XYkDZ1I/AAAAAAAAATU/mCaspxYhlxA/s1600/allan_variance.png"><img alt="image" src="http://2.bp.blogspot.com/-_zhCroHDoNE/TzC7XYkDZ1I/AAAAAAAAATU/mCaspxYhlxA/s400/allan_variance.png" /></a>
Figure 4: Allan variance of our residuals</p>
<hr />
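The overlapping Allan deviation itself is only a few lines to compute from the timing residuals. The sketch below runs on synthetic white phase noise at the 0.19 s level in place of the actual log:

```python
import numpy as np

def allan_deviation(x, tau0, m):
    """Overlapping Allan deviation at averaging time tau = m * tau0,
    computed from time (phase) residuals x sampled every tau0 seconds."""
    x = np.asarray(x, dtype=float)
    tau = m * tau0
    # Second differences of the phase data at lag m
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * tau ** 2))

# Synthetic example: white phase noise at the 0.19 s level, sampled daily
rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.19, 120)
adev_1day = allan_deviation(res, 86400.0, 1)    # a few parts in 10^6
adev_10day = allan_deviation(res, 86400.0, 10)  # smaller: averaging helps
```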
<p>From Figure 4, we see that as we increase the averaging time from one
day to ten days, the Allan deviation decreases. That is, the averaging
reduces the amount of variation in the frequency of the data, making it
more stable. However, at around 10 days of averaging time it seems as
though we hit a floor in how low we can go. Since the error bars get
really big here, this may not be a real effect. If it is real, though,
this would be indicative of some low-frequency noise in our oscillator.
For those who prefer colors, this would be
“<a href="http://en.wikipedia.org/wiki/Colors_of_noise">red</a>” noise. Since the
Allan deviation gives the fractional frequency stability of the
oscillator, we have that <mathjax>$$\sigma_A = \frac{\delta f}{f} =
\frac{\delta(1/t)}{1/t} = \frac{\delta t}{t}. $$</mathjax> Looking at the
plot, we see that with an averaging time of one day, the fractional time
stability of the watch is <mathjax>$$\frac{\delta t}{t} \approx
2\times10^{-6}, $$</mathjax> which corresponds nicely to our previously
calculated value. If we average over chunks that are ten days long
instead, we get a fractional stability of <mathjax>$$\frac{\delta t}{t}
\approx 10^{-7}, $$</mathjax> which would correspond to a deviation from our
model of about 0.008 seconds. Not bad. The initial question that started
this whole ordeal was “How good is my watch?” and I think we can safely
answer that with “as good as I’ll ever need it to be.” Hooray for cheap
and effective electronics!</p>The Stars Fell on Abe and Frederick2012-01-02T20:02:00-05:00Corkytag:thephysicsvirtuosi.com,2012-01-02:posts/the-stars-fell-on-abe-and-frederick.html<hr />
<p><a href="http://3.bp.blogspot.com/-p3-DTUn3sJ0/TwJnlct7XLI/AAAAAAAAASQ/1nZ0xtbwXHo/s1600/leonids_pic.jpg"><img alt="image" src="http://3.bp.blogspot.com/-p3-DTUn3sJ0/TwJnlct7XLI/AAAAAAAAASQ/1nZ0xtbwXHo/s320/leonids_pic.jpg" /></a>
The 1833 Leonids (Source: Wikipedia)</p>
<hr />
<p>Word on the street is there’s a meteor shower set for late Tuesday
night, peaking at 2 am <span class="caps">EST</span> on January 4th <a href="#footnote-1">[1]</a>. The
meteors in question are the
<a href="http://en.wikipedia.org/wiki/Quadrantids">Quadrantids</a>, which often go
unnoticed for two good reasons. Reason the first: apparently
<a href="#footnote-2">[2]</a>, they are usually pretty awful. Unlike the “good”
meteor showers, the Quadrantids are bright and pretty for only a few
hours (instead of a few days). This means that a lot of the time, we
just miss them. Reason the second: they have a lame name
<a href="#footnote-3">[3]</a>. But this year, they should be pretty good if the
weather is right. Now, there’s lots of neat physics to talk about with
meteors, but that’s not why I bring it up. This has all just been flimsy
pretext so I could share a historical anecdote about a meteor shower.
Trickery, indeed. Those who feel cheated are free to leave now with
<a href="http://www.youtube.com/watch?v=apu_585SW18">heads held high</a>. Those
still around (Hi, Mom!) will hear about the night in 1833 when the stars
fell on <a href="http://www.youtube.com/watch?v=6ibV3tCDvd8">Alabama</a> (and the
rest of the country, too). The
<a href="http://en.wikipedia.org/wiki/Leonids">Leonids</a> typically put on a
pretty good show, but their showing in 1833 was so dramatic that the
term “meteor shower” was coined to describe what was happening. The 1833
Leonids were truly one for the ages and made such an impression that
people were often able to remember when events happened by their
relation to the night when “the stars fell.” It was in this use as a
“calendar anchor” that I first heard of this particular meteor shower.
While home for the holiday I was reading <em>Life and Times of Frederick
Douglass</em>, one of the later autobiographies written by the former slave
and noted abolitionist. Recounting when he was moved from Baltimore to a
plantation on the Eastern Shore of Maryland, Douglass writes:</p>
<blockquote>
<p>I went to St. Michaels to live in March, 1833. I know the year,
because it was the one succeeding the first cholera in Baltimore, and
was also the year of that strange phenomenon when the heavens seemed
about to part with their starry train. I witnessed this gorgeous
spectacle, and was awe-struck. The air seemed filled with bright
descending messengers from the sky. It was about daybreak when I saw
this sublime scene. I was not without the suggestion, at the moment,
that it might be the harbinger of the coming of the Son of Man; and in
my then state of mind I was prepared to hail Him as my friend and
deliverer. I had read that the “stars shall fall from heaven,” and
they were now falling. I was suffering very much in my mind. It did
seem that every time the young tendrils of my affection became
attached they were rudely broken by some unnatural outside power; and
I was looking away to heaven for the rest denied me on earth.</p>
</blockquote>
<p>Douglass wrote these words almost 50 years after the fact, and it is
evident that the meteor shower had a lasting effect on him. By this
time (at age 15), Douglass had already made up his mind to escape from
slavery. Three years later, he made a failed attempt. Two years after
that, in 1838, Frederick Douglass escaped to the North and became an
influential abolitionist. After reading the above passage from Douglass,
I wondered who else may have seen the 1833 Leonids. After a bit of
research, I found a paper by <a href="http://ecommons.txstate.edu/cgi/viewcontent.cgi?article=1004&context=physfacp&sei-redir=1&referer=http%3A%2F%2Fwww.google.com%2Furl%3Fsa%3Dt%26rct%3Dj%26q%3Dolson%2Blincoln%2Bleonids%26source%3Dweb%26cd%3D1%26ved%3D0CB4QFjAA%26url%3Dhttp%253A%252F%252Fecommons.txstate.edu%252Fcgi%252Fviewcontent.cgi%253Farticle%253D1004%2526context%253Dphysfacp%26ei%3DaogCT_2TB4rv0gGZ_5HoBw%26usg%3DAFQjCNE8HE4-k_Zcl2PzK2shdtMCi6ZyEQ#search=%22olson%20lincoln%20leonids%22">Olson <span class="amp">&</span> Jasinski
(1999)</a>
which provides an excerpt from Walt Whitman recounting a story told by
Abraham Lincoln. Whitman writes:</p>
<blockquote>
<p>In the gloomiest period of the war, he [Lincoln] had a call from a
large delegation of bank presidents. In the talk after business was
settled, one of the big Dons asked Mr. Lincoln if his conﬁdence in the
permanency of the Union was not beginning to be shaken — whereupon the
homely President told a little story. “When I was a young man in
Illinois,” said he, “I boarded for a time with a Deacon of the
Presbyterian church. One night I was roused from my sleep by a rap at
the door, <span class="amp">&</span> I heard the Deacon’s voice exclaiming ‘Arise, Abraham, the
day of judgment has come!’ I sprang from my bed <span class="amp">&</span> rushed to the
window, and saw the stars falling in great showers! But looking back
of them in the heavens I saw all the grand old constellations with
which I was so well acquainted, ﬁxed and true in their places.
Gentlemen, the world did not come to an end then, nor will the Union now.”</p>
</blockquote>
<p>Abraham Lincoln witnessed the 1833 meteor shower and was still telling
stories about it 30 years later.</p>
<p>So what’s the point of this whole story? Is there any significance to
the fact that the man who escaped slavery to tell the world of its evils
and “The Great Emancipator” both saw the same meteor shower? Probably
not. Tons of people saw it.</p>
<p>Regardless, it is interesting to think about. Though these men would
cross paths several times over the next 30 years, the earliest memory
they shared was of a night in 1833, when a 15 year old slave in Maryland
and a 24 year old boarder in Illinois watched the stars fall from the sky.</p>
<p>[1] I use “Tuesday night” here to mean, of course, “Wednesday morning.”
<a href="#back-1">[back]</a></p>
<p>[2] I say “apparently” because I have never heard of these guys before,
so this is all Wikipedia, baby! <a href="#back-2">[back]</a></p>
<p>[3] Like other meteor showers, the Quadrantids take their name from the
constellation from which the meteors seem to emerge. In this case,
<a href="http://en.wikipedia.org/wiki/Quadrans_Muralis">Quadrans Muralis</a>: The
Mural Quadrant. Unfortunately for Quadrans Muralis, the constellations
dumped it like the planets dumped Pluto. <a href="#back-3">[back]</a></p>How Long Will a Bootprint Last on the Moon?2012-01-01T20:46:00-05:00Corkytag:thephysicsvirtuosi.com,2012-01-01:posts/how-long-will-a-bootprint-last-on-the-moon-.html<hr />
<p><a href="http://3.bp.blogspot.com/-gEyp9bylwcQ/Tv5v7PlugjI/AAAAAAAAARU/OnVdFTPRGkI/s1600/bootprint_buzz.jpg"><img alt="image" src="http://3.bp.blogspot.com/-gEyp9bylwcQ/Tv5v7PlugjI/AAAAAAAAARU/OnVdFTPRGkI/s320/bootprint_buzz.jpg" /></a>
Buzz Aldrin’s bootprint (source: Wikipedia)</p>
<hr />
<p>A couple of months ago, I stumbled across a bunch of
<a href="http://www.nasa.gov/mission_pages/apollo/revisited/index.html">pictures</a>
of Apollo landing sites taken by one of the cameras onboard the Lunar
Reconnaissance Orbiter. The images have a resolution high enough that
you can resolve features on the surface down to about a meter. Looking
at the Apollo 17 <a href="http://www.nasa.gov/images/content/584392main_M168000580LR_ap17_area.jpg">landing
site</a>,
you can see the trails of both astronauts and a moon buggy. It’s pretty
cool. It also got me thinking about how long the landing sites would be
preserved. More specifically, I want to know how long Buzz Aldrin’s
right bootprint (shown, incidentally, to the left) will last on the
Moon. Since the Moon has no atmosphere, the wind and rain that would
weather away a similar bootprint here on Earth are not present and it
seems as though the print would last a really long time. But how long?
Let’s try to quantify it <a href="#footnote-yariv">[1]</a>.</p>
<p><strong>Pick Your Poison</strong></p>
<p>
Before we get going, we need to figure out what physical process would
be most important in erasing a bootprint from the Moon. Although the
Moon lacks the conventional “weathering” we experience on Earth (due to
wind, rain, etc), it does experience something called “<a href="http://en.wikipedia.org/wiki/Space_weathering">space
weathering</a>.” Space
weathering is the changing of the lunar surface due to cosmic rays,
micrometeorite collisions, regular meteorite collisions, and the solar
wind <a href="#footnote-wind">[2]</a>. Of these phenomena, the most apparent
and well-studied would be the meteorites which have covered the Moon in
craters. We adopt the meteorite impact as our primary means of wiping
out a bootprint and restate our question as follows: “How long would it
take for a meteorite to hit the Moon such that the resulting crater
wipes out Aldrin’s right bootprint?”</p>
<p><strong>Background</strong></p>
<p>As it is currently stated, we could answer our question if we knew the rate of formation and
size distribution of the craters on the Moon. We could count up all the
craters on the Moon (or a particular region of interest) and tabulate
their sizes. This would give us the size distribution. It would also
give us a headache and potentially drive us to lunacy
<a href="#footnote-forced">[3]</a>. Luckily, someone has beaten us to it. <a href="http://adsabs.harvard.edu/abs/1966MNRAS.134..245C">Cross
(1966)</a> used images
from the Ranger 7 and 8 missions to count craters and determine the size
distribution of craters in three regions of the Moon. The data for the
crater distribution in the Sea of Tranquility (where Apollo 11 landed)
are given in the figure below. Cross found that in the Sea of
Tranquility, the number of craters with diameters greater than X meters
(per million square kilometers) is given by: <mathjax>$$ N(d>X) =
10^{10}\left(\frac{X}{1~\mbox{m}}\right)^{-2}, $$</mathjax> which holds for
craters with diameters between 1 meter and 10 kilometers (see figure below).</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-kHxcn_PTd5o/TwDg3XGAcqI/AAAAAAAAARg/jl-kb22JD-Y/s1600/fig2.png"><img alt="image" src="http://3.bp.blogspot.com/-kHxcn_PTd5o/TwDg3XGAcqI/AAAAAAAAARg/jl-kb22JD-Y/s400/fig2.png" /></a>
Figure 2 from Cross (1966)</p>
<hr />
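This cumulative power law is easy to sample with the inverse-transform trick. A quick check in Python (the 10 km upper cutoff is ignored here, since draws that large occur with probability of order 10<sup>-8</sup>):

```python
import random

def crater_diameter(rng, d_min=1.0):
    """Diameter (m) drawn from the cumulative law N(>X) ~ X^-2.
    Inverse transform: P(D > X) = (d_min/X)**2  =>  D = d_min / sqrt(U)."""
    u = 1.0 - rng.random()   # uniform in (0, 1], avoids division by zero
    return d_min / u ** 0.5

rng = random.Random(42)
diams = [crater_diameter(rng) for _ in range(100_000)]

# The X^-2 law predicts a quarter of craters above 2 m and 1% above 10 m
frac_gt_2 = sum(d > 2.0 for d in diams) / len(diams)
frac_gt_10 = sum(d > 10.0 for d in diams) / len(diams)
```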
<p>We can also estimate the rate at which craters are formed from this
data. If we assume that the craters formed at a constant rate over the
age of the Moon (about 4 billion years), then we get about 2.5 craters
with diameters above 1 meter formed in a million square kilometer area
every year. This is a “crater flux” for the Moon. Written another way,
the crater flux in the Sea of Tranquility is <mathjax>$$F \approx
1~{\mbox{km}}^{-2}\, \frac{1}{4\times10^5~\mbox{yr}}, $$</mathjax> so we get
that roughly one crater with diameter greater than 1 meter is formed on
a square kilometer of the Moon once every 400,000 years or so. We now
have enough information to do some simulations.</p>
<p><strong>Simulation</strong></p>
<p>I wrote up a
code that simulates craters being formed on a 1 square kilometer patch
of the Moon. A crater is randomly placed in the 1 square kilometer
region with a diameter pulled from the above distribution. The bootprint
is placed at the center of the grid and craters are formed until we get
a “hit.” At that point, the time is recorded and the run stops. As a
sanity check, I thought it would be fun to just let the simulation run
without caring if the boot was hit or not. By simulating the craters in
this way for 4 billion years, I should get something that looks like the
Moon at the present day. Here’s a 200 m square from my simulation:
<a href="http://3.bp.blogspot.com/-DoFPG23CGwg/TwDqo-7GoQI/AAAAAAAAARs/QJW3RxeXFWI/s1600/mymoon_wticks.png"><img alt="image" src="http://3.bp.blogspot.com/-DoFPG23CGwg/TwDqo-7GoQI/AAAAAAAAARs/QJW3RxeXFWI/s400/mymoon_wticks.png" /></a>
and here’s a picture of the same-sized region on the surface of the Moon:</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-qWW5dU-aEN8/TwDrDLPy93I/AAAAAAAAAR4/XGOywC79NFU/s1600/200metersquare.jpg"><img alt="image" src="http://3.bp.blogspot.com/-qWW5dU-aEN8/TwDrDLPy93I/AAAAAAAAAR4/XGOywC79NFU/s320/200metersquare.jpg" /></a>
Cropped from <a href="http://www.nasa.gov/images/content/584398main_M168353795RE_25cm_AP12_area.jpg">this image</a> (Source: <span class="caps">LRO</span>)</p>
<hr />
<p>Just eyeballing it, things look pretty good. Now it’s time for the
actual simulation. I ran the simulation 10,000 times and tabulated the
amount of time needed before the bootprint was hit. The figure below
gives the
<a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function"><span class="caps">CDF</span></a> for
the hit times in the simulation. That is, for each time T, we find the
fraction of simulations in which the bootprint got hit in a time less
than or equal to T. The dashed lines in the plot indicate the amount of
time needed to pass for half of the simulations to have recorded a hit.
This time turns out to be about 24 billion years.</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-L2WLnCepUiw/TwDs7X_njkI/AAAAAAAAASE/jS_-I0Eem4k/s1600/hit_cdf.png"><img alt="image" src="http://3.bp.blogspot.com/-L2WLnCepUiw/TwDs7X_njkI/AAAAAAAAASE/jS_-I0Eem4k/s400/hit_cdf.png" /></a>
(Click for larger, actually readable version)</p>
<hr />
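The simulation described above can be condensed to a few lines. This is a sketch rather than the original code: it treats the bootprint as a point at the patch center, draws exponential waiting times at the Cross (1966) rate, and samples diameters from the inverse-square law.

```python
import random

RATE = 1.0 / 4.0e5   # craters (diameter > 1 m) per km^2 per year, from the text
SIDE = 1000.0        # side of the simulated patch in meters (1 km x 1 km)

def crater_radius(rng, d_min=1.0):
    """Radius (m) of a crater whose diameter follows N(>X) ~ X^-2."""
    return 0.5 * d_min / (1.0 - rng.random()) ** 0.5

def years_until_hit(rng):
    """Years until some crater covers a point-like bootprint at the center."""
    t = 0.0
    while True:
        t += rng.expovariate(RATE)  # exponential waiting time between craters
        # Random crater center, measured in meters from the bootprint
        x = (rng.random() - 0.5) * SIDE
        y = (rng.random() - 0.5) * SIDE
        if x * x + y * y <= crater_radius(rng) ** 2:
            return t

rng = random.Random(0)
times = sorted(years_until_hit(rng) for _ in range(31))
median_years = times[len(times) // 2]  # of order 10^10 years, as in the post
```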
<p><strong>Conclusions and Caveats</strong></p>
<p>Based on the simulations, the bootprint on the
Moon would have about even odds of lasting at least 20 billion years
<em>if</em> the primary means of destruction is through the formation of a
crater from a meteorite. However, there are a few caveats that should be
addressed. These deal with either the details of the simulation or the
assumptions we have made. In the simulation, we just took a 1 km square
patch of the moon and scaled back the “crater flux” accordingly.
However, this does not fully account for all possible craters that can
form. For example, our simulation would miss an event that hit 50 km
away from the target, but had a diameter of 100 km. Obviously this would
hit the target, but we are only seeding craters in the 1 square km
region. This would mean that the actual lifetime of the bootprint would
be less than our 24 billion year figure. Re-running with a 10km by 10km
square region, we find a lifetime of 18 billion years. Thus, an increase
in area by a factor of 100 only reduces the age by 25%. Considering
areas much larger than this makes the simulation prohibitively slow, but
the order unity effect does not seem too significant. Additionally, we
have made a number of assumptions. The big one is that we have assumed
that the craters currently seen on the Moon were formed uniformly in
time. In fact, a large fraction of the craters may have been formed when
the Moon was still very young (see <a href="http://en.wikipedia.org/wiki/Late_Heavy_Bombardment">Late Heavy
Bombardment</a>). If
this were the case, we would have greatly overestimated the rate of
crater formation and thus underestimated the time needed to hit the
bootprint. In spite of these caveats, let’s take our value of 20 billion
years to be accurate. What else can we say? Well, if we are right then
we are wrong because the Moon may not last that long (and it’s hard to
have bootprints on the Moon without a Moon). Current
<a href="http://en.wikipedia.org/wiki/Sun#Life_cycle">estimates</a> suggest that the
Sun will expand into a red giant and (potentially) destroy the Earth
(and the Moon) in about 5 billion years. So a record of the Apollo
astronauts’ boot sizes could potentially last as long as the Moon
<a href="#footnote-nixon">[4]</a>. Not bad.</p>
<p><strong>Footnotes and Such</strong></p>
<p>[1] Now with linked footnotes so Yariv doesn’t have to scroll!
<a href="#back-yariv">[back]</a></p>
<p>[2] There was a fairly recent press release about Coronal Mass Ejections
from the Sun “sandblasting” the lunar surface. For more info, check
<a href="http://www.nasa.gov/topics/solarsystem/features/dream-cme.html">here</a>,
and note the acronymic acrobatics needed to make them the “<span class="caps">DREAM</span> team.”
But it’s totally worth it. <a href="#back-wind">[back]</a></p>
<p>[3] A horribly forced pun. But it’s totally worth it.
<a href="#back-forced">[back]</a></p>
<p>[4] Also, <a href="http://en.wikipedia.org/wiki/Lunar_plaque">Nixon</a>
<a href="#back-nixon">[back]</a></p>Report from the Trenches: A CMS Grad Student’s Take on the Higgs2011-12-13T17:36:00-05:00Nic Eggerttag:thephysicsvirtuosi.com,2011-12-13:posts/report-from-the-trenches-a-cms-grad-student-s-take-on-the-higgs.html<p><img alt="Mmmm run172822 evt2554393033
3d" src="http://lh6.ggpht.com/-hPHBh1UVJic/TuvD-aa1MYI/AAAAAAAAAX4/BQgsVZulLkw/mmmm-run172822-evt2554393033-3d.jpg?imgmax=800" title="mmmm-run172822-evt2554393033-3d.jpg" />
Hi folks. It’s been an embarrassingly long time since I last posted, but
today’s news on the Higgs boson has brought me out of hiding. I want to
share my thoughts on today’s announcement from the <span class="caps">CMS</span> and <span class="caps">ATLAS</span>
collaborations on their searches for the Higgs boson. I’m a member of
the <span class="caps">CMS</span> collaboration, but these are my views and don’t represent those
of the collaboration.</p>
<p>The upshot is that <span class="caps">ATLAS</span> sees a 2.3 sigma signal
for a Higgs boson at 126 GeV. <span class="caps">CMS</span> sees a 1.9 sigma excess around 124
GeV. <span class="caps">CERN</span> is being wishy-washy about whether or not this is actually a
discovery. After all the media hype leading up to the announcement, this
is somewhat disappointing, but maybe not too surprising.</p>
<p>First of all,
what does a 2 sigma signal mean? The significance corresponds to the
probability of seeing a signal as large or larger than the observed one
given only background events. That is, what’s the chance of seeing what
we saw if there is no Higgs boson? You can think of the significance in
terms of a Normal distribution. The probability of the observation
corresponds to the integral of both tails of the Normal distribution
beyond the significance. For those of you in the know, this is just
twice (1 minus the <span class="caps">CDF</span>) evaluated at the significance. For a 2 sigma
observation, this corresponds to about 5%. For both experiments, there
was a 5% chance of observing the signal they observed or bigger if the
Higgs boson doesn’t exist. In medicine, this would be considered an
unqualified success.</p>
<p>So why is <span class="caps">CERN</span> being so cagey? In particle physics
we require at least 3 sigma before we even consider something
interesting, and 5 sigma to consider it an unambiguous discovery. The
reasons why the burden of proof is so much higher in particle physics
than in other fields aren’t entirely clear to me. I suspect it has to do
with the relative ease of running the collider a little longer compared
to recruiting more human test subjects, to use medicine as an example.
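The sigma-to-probability conversion is a one-liner to check (using the two-tailed convention that matches the 5% figure above). The quadrature combination shown is the same rough estimate as in the text, not the experiments' actual combination procedure.

```python
from math import erfc, sqrt

def two_sided_p(sigma):
    """Two-tailed Gaussian p-value: the area in both tails beyond +/- sigma,
    i.e. 2 * (1 - CDF(sigma))."""
    return erfc(sigma / sqrt(2.0))

p_2sigma = two_sided_p(2.0)  # ~0.046, the "about 5%" quoted above
p_3sigma = two_sided_p(3.0)  # ~0.0027, the "interesting" bar
p_5sigma = two_sided_p(5.0)  # ~6e-7, the discovery threshold

# Naive quadrature combination of the ATLAS and CMS excesses
combined = sqrt(2.3 ** 2 + 1.9 ** 2)  # just under 3 sigma
```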
</p>
<p>Given that we need at least 3 sigma in particle physics, why is
everyone so excited about a couple of 2 sigma
results? Well, the first reason is that both results show bumps at
approximately the same Higgs mass. Although it’s not rigorous, you can
get a rough idea of the combined significance by adding the individual
significances in quadrature. This gives us just under 3 sigma.
Higher, but still not quite at the magic number of 3.</p>
<p>The explanation for
the excitement that is most compelling brings us to Bayesian statistics.
The paradigm of Bayesian statistics says that our belief in something
given new information is the product of our prior beliefs and a term
which updates them based on the new information. Physicists have long
expected to find a Higgs boson with a mass around 120 GeV. So our prior
degree of belief is pretty high. Thus, it doesn’t take as much to
convince us (or me anyway) that we have observed the Higgs boson. In
contrast, consider the <span class="caps">OPERA</span> collaboration’s measurement of neutrinos
going faster than the speed of light. This claims to be a 6 sigma
result, but no one expected to find superluminal neutrinos, so our (or
at least my) prior for this is much lower. (Aside: If the <span class="caps">OPERA</span> result
is wrong, it is likely due to a systematic effect rather than a
statistical one. Nevertheless, I stand by my point.)</p>
<p>The final thing
that excites me about this observation is that what we’ve seen is
completely consistent with what we would expect to see from the Standard
Model. Forgetting about significances for the moment, when the <span class="caps">CMS</span>
experiment fits for the Higgs boson mass, they find a cross section that
agrees very well with that predicted by the Standard Model. In the plot
below, you’re interested in the masses where the black line is near 1.
The <span class="caps">ATLAS</span> experiment actually sees more signal than one would expect.
This is likely just a statistical fluctuation, and explains why the
<span class="caps">ATLAS</span> result has a higher significance.</p>
<p><img alt="GUIDO HIGGS CERN SEMINAR pdf page 43 of 60" src="http://lh6.ggpht.com/-tHKpXH_FDfM/TuvD_jbM8wI/AAAAAAAAAYA/-LWjE0AqDog/GUIDO_HIGGS_CERN_SEMINAR.pdf%252520%252528page%25252043%252520of%25252060%252529-1.png?imgmax=800" title="GUIDO_HIGGS_CERN_SEMINAR.pdf (page 43 of 60)-1.png" />
<img alt="ATLAS Higgs pdf page 34 of 68" src="http://lh6.ggpht.com/-dLvDz4KoVuU/TuvEBK3mg3I/AAAAAAAAAYI/4x-6m2b-g0M/ATLAS-Higgs.pdf%252520%252528page%25252034%252520of%25252068%252529.jpg?imgmax=800" title="ATLAS-Higgs.pdf (page 34 of 68).jpg" /></p>
<p>
In conclusion, while <span class="caps">CERN</span> is being non-committal, in my opinion, we have
seen the first hints of the Higgs boson. This is mostly due to my high
personal prior that the Higgs boson exists around the observed
mass. Unfortunately, Bayesian priors are for the most part a qualitative
thing. Thus, <span class="caps">ATLAS</span> and <span class="caps">CMS</span> are sticking to the hard numbers, which say
that what we have looks promising, but is not yet anything to get
excited about.</p>
<p>I’ll close by reminding you all to take this with a
grain of salt. There is every possibility that this is just a
fluctuation. I’ll remind you that at the end of last summer, <span class="caps">CMS</span> and
<span class="caps">ATLAS</span> both showed a <a href="http://resonaances.blogspot.com/2011/07/higgs-wont-come-out-of-closet.html">3 sigma
excess</a>
around 140 GeV, which went away just a month later at the next
conference. So let’s cross our fingers that next year’s data will give
us a definitive answer on this question. By the way, if anyone wants to
know more, fire away in the comments. I’ll do my best.</p>Physics Challenge Award Show II2011-11-06T23:22:00-05:00Corkytag:thephysicsvirtuosi.com,2011-11-06:posts/physics-challenge-award-show-ii.html<hr />
<p><a href="http://3.bp.blogspot.com/-KNDW4alRDDI/Tra8NaWqlcI/AAAAAAAAAQo/6REV_1iTv2o/s1600/time_machine.jpg"><img alt="image" src="http://3.bp.blogspot.com/-KNDW4alRDDI/Tra8NaWqlcI/AAAAAAAAAQo/6REV_1iTv2o/s320/time_machine.jpg" /></a>
Not a DeLorean. You’re doing it wrong.</p>
<hr />
<p>[<em>Update: Prize Update / Added link to full solutions</em>] Welcome to the
second Physics Challenge Award show!
[<span class="caps">APPLAUSE</span>]
Our judges have deliberated for several units of time and I now have in
my hands the envelope holding our list of winners. I could easily just
tell you who won right now and save everyone some time, but award shows
need some suspense to work effectively, so let’s first give some tedious
background information!
[<span class="caps">APPLAUSE</span>]
You may recall that the winner of the first <a href="http://thevirtuosi.blogspot.com/2011/03/physics-challenge-award-show.html">Physics
Challenge</a>
contest won a <a href="http://en.wikipedia.org/wiki/CRC_Handbook_of_Chemistry_and_Physics"><span class="caps">CRC</span>
Handbook</a>.
We will not be giving out CRCs this time around. We felt that such a
prize was far too <del>expensive</del> impersonal, so we have opted this year
for something <del>much cheaper</del> from the heart. The following prizes will
be awarded to our top three solutions:
<strong>First Prize:</strong> Our first prize winner will receive an actual
back-of-an-envelope used in one of our posts (gasp!) signed by all of
the members of the Virtuosi that I can find at colloquium tomorrow.
But that’s not all! Alemi will also salute in your general direction.
<strong>Second Prize:</strong> For our second prize winner, we appear to have run out
of envelopes… but Alemi will still salute in your general direction.
You will not see him do this, but you will feel a major disturbance in
the Awesome Force (mediated, of course, through the midi-chlorian
boson).
<strong>Third Prize:</strong> You will receive no material prize, but on your
deathbed you will receive <em>total consciousness</em>. So you’ve got that
going for you, which is nice.
Let’s first remind everyone what the Challenge problem was. The full
text of the problem can be found
<a href="http://pages.physics.cornell.edu/~aalemi/challenge/timemachine.php">here</a>,
but the gist is basically this: You’ve created a time machine and your
biggest fear is that you’ll be stuck back in the past without any way to
communicate to the future that your design worked and you deserve all
kinds of Nobel prizes. The solution should be able to last long periods
of time (who knows how far back in time you’ll go?), should maximize the
chances of modern people finding it, and be able to convince people that
you have in fact gone back in time.
Alright, let’s get to some solutions already!
<strong>First Place:</strong> The first place solution comes from Christian, who uses
some biological wrangling to solve the time traveller conundrum. With
some information from the
<a href="http://www.ted.com/talks/craig_venter_unveils_synthetic_life.html">announcement</a>
of “synthetic life” and some bio how-to from an entity known only as
“steve,” Christian plans to implant a message into the <span class="caps">DNA</span> of bacteria.
The message will contain his name, identifying information, and the url
of a website which will (presumably) contain a video of him with one
hand outstretched saying “Nobel prize please.”
Let’s see how this solution satisfies our criteria for a successful
solution. <em>Does it work for an arbitrary amount of time?</em> It appears to,
so long as the bacteria manage to survive and the message doesn’t become
too garbled over time (perhaps some error-correction might be useful).
Additionally, if one is worried about introducing non-native bacteria to
the wild you could bring back a bunch of bacteria that were known to
exist over wide periods of time and just release those alive at the
time. <em>Will modern humans find it?</em> It seems that geneticists are
decoding just about any genome they can get their hands on, so this is a
strong possibility. <em>Would it convince people that someone travelled in
time?</em> If the bacteria have dispersed enough, show enough variation over
geographic regions, and contain specific identifying information about
a missing person who has allegedly created a time machine, I think
that’s pretty strong evidence. Neato, gang!
<strong>Second Place:</strong> The second place solution comes from Kyle, who offers
a space-based answer. Kyle suggests etching detailed plans of the time
travel mechanism (flux capacitor) onto a durable metal and putting that
bad boy into space. He suggests that anyone capable of building a fully
functional time machine should have no problem launching a small
satellite. Fair enough. Additionally, the satellite would use some kind
of solar power or the like to produce a low-power radio signal. In fact,
this signal would only need to spit something out once every year or ten
years or something. Since radio communication precedes space
exploration, the detection of an artificial satellite sending a message
would attract a fair deal of attention. The plans and successful
reproduction of the time machine would then seal the deal.
Does this solution satisfy the necessary conditions? I think so.
Assuming all goes according to plan, this would easily be detected by
modern people and, assuming the time machine plans are accurate, would
provide indisputable proof. My main concern would be whether the satellite
could be launched and survive to the present. Modern satellites need
constant boosts to stay in orbit, without which they fall back onto
Earth and burn up. One potential solution would be to put it on the
Moon. This is technically much more difficult, but hey, you just created
a time machine! Also, putting it on the Moon then allows for a totally
rad recreation of the
<a href="http://en.wikipedia.org/wiki/Monolith_(Space_Odyssey)">Monolith</a> scene
in <em>2001: A Space Odyssey</em>.
<strong>Third Place:</strong> The third place solution comes from Yariv. Though Yariv
did not submit a solution through the proper channels (he follows no
one’s rules, not even his own), he was overheard to give a solution.
While the Physics Challenge planning committee was discussing the
problem over lunch, Yariv flippantly dismissed the entire premise as
“trivial” and suggested a two-word solution: “radioactive paint.”
Personally, I like the idea of bewildered archaeologists finding a cave
painting of Yariv riding a dinosaur done using a variety of radioactive
paints which all date back 200 million years. For this amusement, I
award Yariv the third place prize for this contest. As a member of the
Virtuosi, however, Yariv is ineligible to receive a prize and instead
receives 5 demerits on his record for his willful disregard of our
institution’s rules and excessive flippancy. One more slip-up and you’ll
lose your badge!
<a href="http://pages.physics.cornell.edu/~aalemi/challenge/timemachinesol.php">Full
solutions</a>
are up on the Challenge website. Thanks for joining us for this episode
of Physics Challenge Award Show, and thanks to everyone who submitted a
response! <strong>First and Second Prize Winners:</strong> We present the following
in partial fulfillment of our prize offer.</p>
<hr />
<p><a href="http://1.bp.blogspot.com/-VdXMZhn5g6Q/TryZqc0wbVI/AAAAAAAAAQw/NBVdnG2oOrw/s1600/alemi_salute.png"><img alt="image" src="http://1.bp.blogspot.com/-VdXMZhn5g6Q/TryZqc0wbVI/AAAAAAAAAQw/NBVdnG2oOrw/s640/alemi_salute.png" /></a>
For those who solve problems (he <a href="http://www.youtube.com/watch?v=xMUgmU_Hsjc&feature=related">salutes</a> you)</p>
<hr />Betelgeuse, Betelgeuse, Betelgeuse!2011-11-05T01:25:00-04:00Corkytag:thephysicsvirtuosi.com,2011-11-05:posts/betelgeuse-betelgeuse-betelgeuse-.html<hr />
<p><a href="http://3.bp.blogspot.com/-6s6rSrItOdk/TrS6st3tdVI/AAAAAAAAAPk/GpGfjY5KZ3Y/s1600/betelgeuse.png"><img alt="image" src="http://3.bp.blogspot.com/-6s6rSrItOdk/TrS6st3tdVI/AAAAAAAAAPk/GpGfjY5KZ3Y/s320/betelgeuse.png" /></a>
A very cold person points out Betelgeuse</p>
<hr />
<p><em>Betelgeuse is a massive star at the very end of its life and could
explode any second now!</em> Every time I hear that I get really <em>really</em>
excited. Like a kid in a candy store that’s about to see a star blow up
like nobody’s business. This giddiness will last for a solid minute
before I realize that “any second now” is taken on astronomical
timescales and roughly translates to “sometime in the next million years
maybe possibly.” Then I feel sad. But you know what always cheers me up?
Calculating things! Hooray! So let’s take a look at the ways Betelgeuse
could end its life (even if it’s not going to happen tomorrow) and how
these would affect Earth. First, a little background. Betelgeuse is the
bright orangey-red star that sits at the head/armpit of Orion. It is one
of the <a href="http://en.wikipedia.org/wiki/List_of_brightest_stars">brightest</a>
stars in the night sky. Its distance has been measured by the
<em><a href="http://en.wikipedia.org/wiki/Hipparcos">Hipparcos</a></em> satellite to be
about 200 parsecs [1] from Earth (about 600 light years). Betelgeuse is
at least 10 times as massive as our Sun and has a diameter that would
easily accommodate the orbit of Mars. In fact, the star is big enough and
close enough that it can actually be spatially resolved by the Hubble
Space Telescope! Being so big and bright, Betelgeuse is destined to die
young, going out with a bang as a
<a href="http://en.wikipedia.org/wiki/Supernova#Core_collapse">core-collapse</a>
supernova. This massive explosion ejects a good deal of “star stuff”
into interstellar space [2] and leaves behind either a <a href="http://en.wikipedia.org/wiki/Neutron_star">neutron
star</a> or a <a href="http://en.wikipedia.org/wiki/Black_hole">black
hole</a>. Alright, now that we’re
all caught up, let’s turn our focus on this “massive explosion” bit.
What kind of energy scale are we talking about if Betelgeuse blows up?
Well, a pretty good upper bound would be if all of the star’s mass (10
solar masses worth!) were converted directly to energy, so <mathjax>$$ E_{max} =
mc^2 =
10M_{\odot}\times\left(\frac{2\times10^{30}\,\mbox{kg}}{1\,M_{\odot}}\right)\times
\left(3\times10^8\,\mbox{m/s}\right)^2 $$</mathjax> which is about <mathjax>$$
E_{max} \sim 10^{48}\,\mbox{J} $$</mathjax> and that’s nothing to shake a
stick at. But remember, this is if the <em>entire star</em> were converted
directly to energy, and that would be hard to do. Typical fusion
efficiencies are about ~1% [3], so let’s say a reasonable estimate for
the total nuclear energy available is <mathjax>$$ E_{nuc} \sim \eta_{f}
\times E_{max} \sim 10^{-2} \times 10^{48}\,\mbox{J} \sim
10^{46}\,\mbox{J}. $$</mathjax> This is the total energy released by a typical
supernova. As it turns out though, 99% of this energy is carried away in
the form of neutrinos and only about 1% is carried away in photons.
Since we are mainly concerned with how this explosion will affect Earth,
and the neutrinos will just pass on by, we will only consider the 1% of
energy released in photons that would reasonably interact with Earth.
That gives us <mathjax>$$ E_{ph} \sim 0.01 \times E_{nuc} \sim
10^{44}\,\mbox{J}. $$</mathjax> Neato, so that’s the total amount of energy
released in a supernova in the form of photons. How much of this energy
would be deposited at the Earth if Betelgeuse exploded? Well, if the
energy is deposited isotropically (that is, the same in all directions),
then the fluence (or time integrated energy flux) is given by <mathjax>$$ F_{ph}
= \frac{E_{ph}}{4\pi d^2}. $$</mathjax> All this is saying is that the total
energy release by the supernova spreads out uniformly over a sphere of
radius d, so the fluence will give us the amount of energy deposited in
each square meter of that sphere (the units of fluence here are J/m^2).
The total energy deposited on Earth is then <mathjax>$$ E_{\oplus} = F_{ph}
\times \pi R^2_{\oplus}. $$</mathjax> Hot dog! Let’s plug in some numbers,
already. The total energy deposited on the Earth by a symmetrically
exploding Betelgeuse at a distance of d = 200 pc (where 1 pc = 3 ×
10^16 m) is <mathjax>$$E_{\oplus}=\frac{E_{ph}}{4\pi d^2}\times\pi
R^2_{\oplus}\sim
10^{19}\,\mbox{J}\left(\frac{E_{ph}}{10^{44}\,\mbox{J}}\right)\left(\frac{d}{200\,\mbox{pc}}\right)^{-2}.$$</mathjax>
Well, 10^19 J certainly <em>seems</em> like a lot of energy. In fact, it is
roughly the amount of energy contained in the entire nuclear arsenal of
the United States [4]. But it is spread over the entire atmosphere. Is
there a way to gauge how this would affect life on Earth? We could see
how much it would heat up the atmosphere using specific heats: <mathjax>$$ E =
m_{atm}c_{air}\Delta T $$</mathjax> where c is the specific heat of air
(~10^3 J per kg per K). Oops, looks like we need to know the mass of
the atmosphere. But we can figure this out, the answer is pushing right
down on our heads! We know the pressure at the surface of the Earth (1
atm = 101 kPa) and that pressure is just the result of the weight of the
atmosphere pushing down on us. Since pressure is just force / area, we
have <mathjax>$$ P = F/A = m_{atm}g / A_{\oplus} $$</mathjax> So <mathjax>$$ m_{atm} =
\frac{P\times4\pi
R^2_{\oplus}}{g}=\frac{10^5\,\mbox{Pa}\times4\pi
(6\times10^6\,\mbox{m})^2}{9.8\,\mbox{m/s}^2}\approx4\times10^{18}\,\mbox{kg}.$$</mathjax>
Neato, gang. So we could see a temperature rise of about <mathjax>$$ \Delta T =
\frac{E_{\oplus}}{m_{atm}c_{air}}=\frac{10^{19}\,\mbox{J}}{4\times10^{18}\,\mbox{kg}\times10^3\,\mbox{J/
kg K}}\approx0.003\,\mbox{K}, $$</mathjax> or three one-thousandths of a degree.
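</p>
<p>If you want to check these numbers yourself, here is a quick order-of-magnitude sketch in Python. This is just a back-of-the-envelope redo of the estimates above, not any kind of official calculation, and all the inputs are the rounded values used in the text:</p>

```python
# Order-of-magnitude estimate: atmospheric heating from a
# symmetric Betelgeuse supernova, using the numbers in the text.
from math import pi

M_sun = 2e30               # kg
c     = 3e8                # m/s
pc    = 3e16               # m

E_max = 10 * M_sun * c**2  # all 10 solar masses as energy, ~1e48 J
E_nuc = 1e-2 * E_max       # ~1% fusion efficiency, ~1e46 J
E_ph  = 1e-2 * E_nuc       # ~1% of that escapes as photons, ~1e44 J

d       = 200 * pc         # distance to Betelgeuse
R_earth = 6e6              # m
fluence = E_ph / (4 * pi * d**2)     # J/m^2 at Earth's distance
E_earth = fluence * pi * R_earth**2  # intercepted by Earth, ~1e19 J

# Mass of the atmosphere from surface pressure: P = m_atm * g / (4 pi R^2)
P, g  = 1e5, 9.8
m_atm = P * 4 * pi * R_earth**2 / g  # ~4e18 kg

c_air   = 1e3                        # J/(kg K), specific heat of air
delta_T = E_earth / (m_atm * c_air)  # of order 1e-3 to 1e-2 K,
                                     # same ballpark as the estimate above
print(f"E_earth ~ {E_earth:.1e} J, dT ~ {delta_T:.1e} K")
```

<p>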
Remember, too, that this will be an upper bound since we are assuming
that all this energy is deposited into the atmosphere before it has a
chance to cool. In fact, if the energy is deposited over the course of
hours or days, this value will be much less. So it looks like we’ve
wrapped this thing up: Betelgeuse exploding will most certainly not put
the Earth in any danger. Or have we? We have considered the case of a
symmetric supernova, but there’s more than one way to blow up a star.
Massive stars can also end their lives in a fantastic explosion called a
<a href="http://en.wikipedia.org/wiki/Gamma-ray_burst">gamma-ray burst</a> (GRBs to
the hep cats that study them, some fun facts relegated to [5]). GRBs are
still an intense area of current study, but the current picture (for one
type of <span class="caps">GRB</span>, at least) is that they are the result of a star blowing up
with the energy of the explosion focussed into two narrow beams (see
picture below). Since the flux isn’t distributed over the whole sphere,
GRBs can be seen at much greater distances than a typical supernova.</p>
<hr />
<p><a href="http://2.bp.blogspot.com/-siXXiFnQWHY/TrITJA8z_sI/AAAAAAAAAPU/qStD4_9qLBI/s1600/grb.jpg"><img alt="image" src="http://2.bp.blogspot.com/-siXXiFnQWHY/TrITJA8z_sI/AAAAAAAAAPU/qStD4_9qLBI/s320/grb.jpg" /></a>
Example of a gamma-ray burst, with the explosion in two beams.</p>
<hr />
<p>So how will this change our answer? Well, it’s going to change the
fluence we calculated above. Instead of spreading the energy out over
the whole sphere, it’s only going to go to some fraction of the 4pi
steradians. So we get <mathjax>$$ F_{ph} = \frac{E_{ph}}{4\pi f_{\Omega}
d^2}, $$</mathjax> where f_{omega} is called the “beaming fraction” and tells us
what fraction of the sphere the energy goes through. Typical <span class="caps">GRB</span> beams
range from 1 to 10 degrees in radius. Converting this to radians, we can
find the beaming fraction as <mathjax>$$ f_{\Omega} = \frac{2 \times \pi
\theta^2}{4\pi} \approx
10^{-4}\left(\frac{\theta}{1^\circ}\right)^2,$$</mathjax> so the beaming
fraction is 10^-4 and 10^-2 for a beam angle of 1 degree and 10
degrees, respectively. Alright, so now we can redo the calculations we
did for the supernova case, but keeping this beaming fraction around.
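</p>
<p>Redoing the estimate with the beaming fraction kept is a one-line change. Here is a sketch in the same spirit as before, with the fixed inputs being the rounded values from above:</p>

```python
# Atmospheric heating if a gamma-ray burst from Betelgeuse is
# beamed directly at Earth, for a given beam half-angle.
from math import pi, radians

E_ph    = 1e44          # J, photon energy of the explosion
d       = 200 * 3e16    # m, distance to Betelgeuse
R_earth = 6e6           # m
m_atm   = 4e18          # kg, mass of the atmosphere
c_air   = 1e3           # J/(kg K)

def delta_T_grb(theta_deg):
    """Temperature rise for a burst beamed into two cones of
    half-angle theta (small-angle approximation)."""
    theta = radians(theta_deg)
    f_omega = 2 * pi * theta**2 / (4 * pi)       # beaming fraction
    fluence = E_ph / (4 * pi * f_omega * d**2)   # J/m^2 at Earth
    return fluence * pi * R_earth**2 / (m_atm * c_air)

for th in (1, 10):
    print(f"theta = {th:2d} deg: dT ~ {delta_T_grb(th):.2f} K")
```

<p>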
The total amount of energy that would hit Earth is then about
<mathjax>$$E_{\oplus}=\frac{E_{ph}}{4\pi f_{\Omega} d^2}\times\pi
R^2_{\oplus}\sim
10^{23}\,\mbox{J}\left(\frac{E_{ph}}{10^{44}\,\mbox{J}}\right)\left(\frac{d}{200\,\mbox{pc}}\right)^{-2}\left(\frac{\theta}{1^\circ}\right)^{-2}.$$</mathjax>
Holy sixth-of-a-moley! Continuing as we did above, we find that this
could potentially heat up the atmosphere by <mathjax>$$ \Delta T =
\frac{E_{\oplus}}{m_{atm}c_{air}}=\frac{10^{23}\,\mbox{J}}{4\times10^{18}\,\mbox{kg}\times10^3\,\mbox{J/
kg
K}}\approx30\,\mbox{K}\left(\frac{\theta}{1^\circ}\right)^{-2},
$$</mathjax> which is certainly non-negligible. Now, this won’t destroy the planet
[6], but it could make things really uncomfortable. This will be
especially true when you realize that a fair amount of the energy
carried away from a gamma-ray burst is in the form of (wait for it…)
gamma-rays, which will wreak havoc on your <span class="caps">DNA</span>. Remember, though, that
this is an absolute worst-case scenario since we have assumed the
smallest beaming angle. But this may still make us a little nervous, so
is there any way to figure out if Betelgeuse could, in fact, beam a
gamma-ray burst towards Earth? Yes, yes there is. Jets and beams like
those in GRBs typically point along the rotation axis of the star [7].
If we could determine the rotational axis of Betelgeuse, then we could
say whether or not there’s a chance it’s pointed towards us. It just so
happens that Betelgeuse is the only star (aside from our Sun) that is
spatially resolved. If you could measure spectra along the star, you
could look for Doppler shifting of absorption lines and say something
about the velocity at the surface of the star. Luckily, this has already
been done for us (see, for example <a href="http://adsabs.harvard.edu/abs/1998AJ....116.2501U">Uitenbroek et al.
1998</a>). These
measurements are hard to do since the star is only a few pixels wide,
but it appears as though the rotation axis is inclined to the
line-of-sight by about 20 degrees (see figure below). That means this
would require a beam with at least a 20 degree radius to hit the Earth.
This appears to be outside the typical ranges observed. So even if
Betelgeuse were to explode in a gamma-ray burst, the beam would miss
Earth and hit some dumb other planet nobody cares about.</p>
<hr />
<p><a href="http://2.bp.blogspot.com/-xYILwh0eLEM/TrSTBxflbxI/AAAAAAAAAPc/Amma1aQtmPU/s1600/rotation.png"><img alt="image" src="http://2.bp.blogspot.com/-xYILwh0eLEM/TrSTBxflbxI/AAAAAAAAAPc/Amma1aQtmPU/s400/rotation.png" /></a>
Figure reproduced from Uitenbroek et al. (1998)</p>
<hr />
<p>Alright, so the moral of the story is that Betelgeuse is completely
harmless to people on Earth. When it does explode, it will be a
brilliant supernova that would likely be visible at least a little bit
during the day. It will be the coolest thing that anyone alive (if there
are people…) will ever see. Sadly, this explosion could take place at
just about any time during the next million years. Assuming a uniform
distribution over this time period and a human lifetime of order 100
years, there is something like a 1 in 10,000 chance you’ll see this in
your life [8]. Feel free to hope for a spectacular astronomical sight, but
don’t lose sleep worrying about being hurt by Betelgeuse!
<strong>Semi-excessive Footnotes:</strong> [1] This has nothing to do with the Kessel
Run. For a description of the actual distance unit see
<a href="http://en.wikipedia.org/wiki/Parsec">Wikipedia</a>. For a circuitous
retconning to correct for one throwaway line in <em>Star Wars</em>, see
<a href="http://starwars.wikia.com/wiki/Kessel_Run">Wookieepedia</a>. [2] This is
how anything heavier than helium gets distributed throughout the
universe. The hydrogen and helium formed after the Big Bang gets fused
into heavier elements in stars and then dispersed out through
supernovae. In fact, most things heavier than Iron actually <em>require</em>
supernovae to even exist. If you have any gold on you right now (I’m
looking at you Mr. T), that only exists <em>because a star exploded!</em> [3]
Let’s consider the case of turning 4 protons into a Helium nucleus.
<a href="http://en.wikipedia.org/wiki/Helium-4">Helium-4</a> has a binding energy
of about 28 MeV, which means that the total energy of a bound He-4
nucleus is 28 MeV less than its free protons and neutrons (in other
words, we need to put in 28 MeV to break it up). So the process of
turning 4 protons into a Helium nucleus gives off 28 MeV worth of
energy. But we had a total of 4 times 1000 MeV worth of matter we could
have turned into energy. Thus, the process was 28 MeV/4000 MeV ~ 0.7%
efficient at turning matter into energy. [4] Sometime last year, the
United States
<a href="http://www.defense.gov/npr/docs/10-05-03_Fact_Sheet_US_Nuclear_Transparency__FINAL_w_Date.pdf">disclosed</a>
that its nuclear arsenal as of Sept 2009 was something like 5000
warheads. Assume these to be Megaton warheads. A Megaton is about 4 ×
10^15 J, so the total energy in the <span class="caps">US</span> arsenal is about 5000 × 4 ×
10^15 J = 2 × 10^19 J. [5] A fun fact about GRBs: They were
discovered by a <a href="http://en.wikipedia.org/wiki/Vela_(satellite)">military
satellite</a> looking for
illegal nuclear tests, which would emit some gamma-rays. Instead of
seeing a signal on Earth, they saw bursts coming from space. I really
really hope that someone’s first thought was that the Russians were
testing nukes on the Moon or something. <em>We must not allow a moon-nuke
gap!</em> [6] We here at the Virtuosi are contractually obligated to only
destroy the Earth in our posts on Earth day. I apologize for any
inconvenience this may cause. [7] I am not exactly sure why this is the
case. It is certainly observed, and I thought there was a
straightforward explanation for it, but I don’t
really have a good one. Although, maybe there just <a href="http://en.wikipedia.org/wiki/Polar_jet">isn’t a good
one yet.</a> [8] For comparison,
there is about a 1 in 3000
<a href="http://www.lightningsafety.com/nlsi_pls/probability.html">chance</a>
you’ll be struck by lightning in your lifetime.</p>Physics Challenge Update2011-10-31T22:30:00-04:00Corkytag:thephysicsvirtuosi.com,2011-10-31:posts/physics-challenge-update.html<hr />
<p><a href="http://1.bp.blogspot.com/-mQtqC-5MmYk/Tq9MkU56O9I/AAAAAAAAAPE/iaFRwtMKwJ0/s1600/mary_mcwatch.jpg"><img alt="image" src="http://1.bp.blogspot.com/-mQtqC-5MmYk/Tq9MkU56O9I/AAAAAAAAAPE/iaFRwtMKwJ0/s320/mary_mcwatch.jpg" /></a>
Marty McFly realizes he is running out of time to submit his solution</p>
<hr />
<p>Did you know that our <a href="http://pages.physics.cornell.edu/~aalemi/challenge/timemachine.php">Physics
Challenge</a>
problem contest thingy is still up and going? It is! The contest will be
open until the end of the day this <strong>Friday, November 4th</strong>. And, unlike
last time, the winning solution will be chosen and posted by the end of
the weekend. So even if you don’t submit your own solution (though you
<em>totally</em> should), check back here Monday morning for the winning entry.
Why should you submit a solution to our problem? Lots of reasons! The
top ten reasons as decided by a random sample of me are given below the
break. <strong>Top Ten Reasons to Submit a Solution:</strong> 1) You will win
super-awesome prizes! What kinds of prizes? Well how does a greeting
card with kittens on the front and a collection of encouraging haikus
from the entire Virtuosi staff written inside sound? It sounds awesome,
awesome to the max. 2) You get to show off your totally rad physics
skills! 3) You can put it on your resume/cv! The semi-annual Virtuosi
Physics Challenge problem winners are held to the same esteem as Fields
Medalists and Nobel Prize winners! 4) You will earn the respect and
admiration of your peers! Winners of this contest develop an aura that
all other people can see, fear, and respect. 5) You will become
stronger, faster, and more productive than ever before! This is a
scientifically proven fact perhaps. 6) You may gain the ability to talk
to animals! Have you ever wanted to discuss espionage-related topics
with a platypus (perhaps this
<a href="http://www.youtube.com/watch?v=ONgjXrOShlc">platypus</a>)? Of course you
have! Winners can! 7) You will be in Presidential company! Did you know
William Henry Harrison won the very first Virtuosi physics challenge
contest shortly before becoming the ninth President of the United
States? Yep! Tippecanoe and so can you! 8) Alemi will salute in your
general direction. No questions asked. 9) You will give me work to do on
the weekend! This will give my dull and uninteresting life a small
glimmer of meaning! Hooray! 10) There’s like a 50% chance that you will
get three wishes from a magical genie named David Bowie (no relation).
Seriously. Like 50%. That should get you properly motivated! So check
out the Challenge Problem
<a href="http://pages.physics.cornell.edu/~aalemi/challenge/timemachine.php">website</a>
and when you have a solution, send it to our email address (given up in
the sidebar). Good luck!</p>The Linear Theory of Battleship2011-10-03T00:48:00-04:00Alemitag:thephysicsvirtuosi.com,2011-10-03:posts/the-linear-theory-of-battleship.html<p><a href="http://3.bp.blogspot.com/-_6JxttjO_hA/Tm_XKW58W_I/AAAAAAAAAXM/n2xdgZCvVAc/s1600/battleship.png"><img alt="image" src="http://3.bp.blogspot.com/-_6JxttjO_hA/Tm_XKW58W_I/AAAAAAAAAXM/n2xdgZCvVAc/s400/battleship.png" /></a></p>
<p>Recently I set out to hold a
<a href="http://en.wikipedia.org/wiki/Battleship_(game)">Battleship</a> programming
tournament here among some of the undergraduates. Naturally, I myself
wanted to win. So, I got to thinking about the game, and developed what
I like to call “the linear theory of battleship”. A demonstration of the
fruits of my efforts can be found
<a href="http://pages.physics.cornell.edu/~aalemi/battleship/">here</a>. Below, my
aim is to guide you through how I developed this theory, as an exercise
in using physics to solve an interesting unknown problem. This is one of
the things I really love about physics, the fact that obtaining an
education in physics is essentially an education in reasoning and
thinking through complicated problems, along with an honestly short list
of tips and tricks that have proven successful for tackling a wide range
of problems. So, how do we develop the linear theory of battleship?
First we need to quantify what we know, and what we want to know.</p>
<h3>The Goal</h3>
<p>So, how does one win Battleship? Since the game is about sinking your
opponent’s ships before they can sink yours, it would seem that a good
strategy would be to try to maximize your probability of getting a hit
every turn. Or, if we knew the probabilities of there being a hit on
every square, we could guess each square with that probability, to keep
things a little random. So, let’s try to represent what we are after. We
are after a whole set of numbers <mathjax>$$ P_{i,\alpha} $$</mathjax> where i ranges
from 0 to 99 and denotes a particular square on the board, and alpha can
take the values C,B,S,D,P for carrier, battleship, submarine, destroyer,
and patrol boat respectively. This matrix should tell us the probability
of there being the given ship on the given square. E.g. <mathjax>$$ P_{53,B} $$</mathjax>
would be the probability of there being a battleship on the 53rd square.
If we had such a matrix, we could figure out the probability of there
being a hit on every square by summing over all of the ships we have
left, i.e. <mathjax>$$ P_i = \sum_{\text{ships left}} P_{i, \alpha } $$</mathjax></p>
<h3>The Background</h3>
<p>Alright, we seem to have a goal in mind, now we need to quantify what we
have to work with. Minimally, we should try to measure the probabilities
for the ships to be on each square given a random board configuration.
Let’s codify that information in another matrix <mathjax>$$ B_{i,\alpha} $$</mathjax>
where B stands for ‘background’, i runs from 0 to 99, and alpha is
either C,B,S,D, or P again, and stands for a ship. This matrix should
tell us the probability of a particular ship being on a particular spot
on the board assuming our opponent generated a completely random board.
This is something we can measure. In fact, I wrote a little code to
generate random Battleship boards, and counted where each of the ships
appeared. I did this billions of times to get good statistics, and what
I ended up with is a little interesting. You can see the results for
yourself over at my <a href="http://pages.physics.cornell.edu/~aalemi/battleship/">results exploration
page</a> by changing
the radio buttons for the ship you are interested in, but I have some
screen caps below. Click on any of them to embiggen.</p>
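<p>For the curious, the measurement itself is easy to sketch. The following Python snippet is not my actual tournament code, but it does the same thing on a smaller scale: lay every ship down at random, weed out boards with overlaps, and count where each ship lands (ship letters C, B, S, D, P as above):</p>

```python
import random
from collections import Counter

SHIPS = {"C": 5, "B": 4, "S": 3, "D": 3, "P": 2}  # ship letter -> length

def random_placement(length):
    """Squares (0-99) covered by one ship laid uniformly at random."""
    if random.random() < 0.5:  # horizontal
        row, col = random.randrange(10), random.randrange(10 - length + 1)
        return [10 * row + col + k for k in range(length)]
    row, col = random.randrange(10 - length + 1), random.randrange(10)
    return [10 * (row + k) + col for k in range(length)]

def random_board():
    """Assign all ships at once; weed out boards with collisions."""
    while True:
        board = {name: random_placement(n) for name, n in SHIPS.items()}
        squares = [sq for cells in board.values() for sq in cells]
        if len(squares) == len(set(squares)):  # no overlaps -> valid board
            return board

def background(n_boards):
    """Estimate B[ship][i]: probability that the ship covers square i."""
    counts = {name: Counter() for name in SHIPS}
    for _ in range(n_boards):
        for name, cells in random_board().items():
            counts[name].update(cells)
    return {name: [counts[name][i] / n_boards for i in range(100)]
            for name in SHIPS}

B = background(2000)
```

<p>Even a couple thousand boards is enough to see the central bulge for the Carrier; the plots below just used far more samples to come out smooth.</p>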
<h4>All</h4>
<p>First of all, let’s look at the sum of all of the ship probabilities, so
that we have the probability of getting a hit on any square for any ship
given a random board configuration, or in our new parlance <mathjax>$$ B_i =
\sum_{\alpha=\{C,B,S,D,P\} } B_{i,\alpha} $$</mathjax> The results:</p>
<p><a href="http://2.bp.blogspot.com/-G-vGF0DUOgM/Tokf-JE6AAI/AAAAAAAAAXU/Oyk1qlj3tKQ/s1600/all.png"><img alt="image" src="http://2.bp.blogspot.com/-G-vGF0DUOgM/Tokf-JE6AAI/AAAAAAAAAXU/Oyk1qlj3tKQ/s200/all.png" /></a></p>
<p>shouldn’t be too surprising. Notice first that we can see that my
statistics are fairly good, because our probabilities look more or less
smooth, as they ought to be, and show nice left/right and up/down symmetry,
which they ought to have. But as you’ll notice, on the whole there is
greater probability to get a hit near the center of the board than near
the edges, with an especially low probability of getting a hit in the
corners. Why is that? Well, there are a lot more ways to lay down a ship
such that there is a hit in a center square than there are ways to lay a
ship so that it gives a hit in a corner. In fact, for a particular ship
there are only two ways to lay it so that it registers a hit in the
corner. But, for a particular square in the center, for the Carrier for
example there are 5 different ways to lay it horizontally to register a
hit, and 5 ways to lay it vertically, or 10 ways total. Neat. We see
entropy in action.</p>
<h4>Carrier</h4>
<p>Next let’s look just at the Carrier:</p>
<p><a href="http://1.bp.blogspot.com/-CPYGjQCZbgA/Tokf-e0oKPI/AAAAAAAAAXk/hjfU3YgFkQk/s1600/carrier.png"><img alt="image" src="http://1.bp.blogspot.com/-CPYGjQCZbgA/Tokf-e0oKPI/AAAAAAAAAXk/hjfU3YgFkQk/s200/carrier.png" /></a></p>
<p>Woah. This time the center is very heavily favored versus the edges.
This reflects the fact that the Carrier is a large ship, occupying 5
spaces, basically no matter how you lay it, it is going to have a part
that lies near the center.</p>
<h4>Battleship</h4>
<p>Now for the Battleship:</p>
<p><a href="http://1.bp.blogspot.com/-6On4gLpSBUM/Tokf-EyNHZI/AAAAAAAAAXc/lp5mxbYeAo0/s1600/battleship.png"><img alt="image" src="http://1.bp.blogspot.com/-6On4gLpSBUM/Tokf-EyNHZI/AAAAAAAAAXc/lp5mxbYeAo0/s200/battleship.png" /></a></p>
<p>This is interesting. This time, the most probable squares are not the
center ones, but the not quite center ones. Why is that? Well, we saw
that for the Carrier, the probability of finding it in the center was
very large, and so, correspondingly, our battleship cannot be in the center
as often, as a lot of the time it would collide with the Carrier. Now,
this is not because I lay down the Carrier first; my board generation
algorithm assigns all of the ships at once, and just weeds out invalid
boards; this is a real entropic effect. So here we begin to see some
interesting Ship-Ship interactions in our probability distributions. But
notice again that on the whole, the battleship should also be found near
the center as it is also a large ship.</p>
<h4>Sub / Destroyer</h4>
<p>Next let’s look at the sub / destroyer. First thing to note is that our
plot should be the same for both of these ships as they are both the
same length.</p>
<p><a href="http://3.bp.blogspot.com/-hF3iyCrPVq8/Tokf-p_R5_I/AAAAAAAAAXs/FxaAiGmzq4Q/s1600/sub.png"><img alt="image" src="http://3.bp.blogspot.com/-hF3iyCrPVq8/Tokf-p_R5_I/AAAAAAAAAXs/FxaAiGmzq4Q/s200/sub.png" /></a></p>
<p>Here we see an even more pronounced effect near the center. The Subs and
Destroyers are ‘pushed’ out of the center because the Carriers and
Battleships like to be there. This is a sort of entropic repulsion.</p>
<h4>Patrol Boat</h4>
<p>Finally, let’s look at the patrol boat:</p>
<p><a href="http://2.bp.blogspot.com/-i8FNLK7mPII/Tokf-vkEtHI/AAAAAAAAAX0/FcIr6D9zNCo/s1600/patrol.png"><img alt="image" src="http://2.bp.blogspot.com/-i8FNLK7mPII/Tokf-vkEtHI/AAAAAAAAAX0/FcIr6D9zNCo/s200/patrol.png" /></a></p>
<p>The patrol boat is a tiny ship. At only two squares long, it can fit in
just about anywhere, and so we see it being strongly affected by the
affection the other ships have for the center. Neat stuff. So, we’ve
experimentally measured where we are likely to find all of the
battleship ships if we have a completely random board configuration.
Already we could use this to make our game play a little more effective,
but I think we can do better.</p>
<h3>The Info</h3>
<p>In fact, as a game of battleship unfolds, we learn a good deal of
information about the board: on every turn we get a great deal
of information about a particular spot on the board, our guess. Can we
incorporate this information into our theory of battleship? Of course we
can, but first we need to come up with a good way to represent this
information. I suggest we invent another matrix! Let’s call this one <mathjax>$$
I_{j,\beta} $$</mathjax> Where I is for ‘information’, j goes from 0 to 99 and
beta marks the kind of information we have about a square, let’s let it
take the values M,H,C,B,S,D,P, where M means a miss, H means a hit, but
we don’t know which ship, and <span class="caps">CBSDP</span> mark a particular ship hit, which we
would know once we sink a ship. This matrix will be a binary one, where
for any particular value of j, the elements will all be 0 or 1, with
only one 1 sitting at the spot marking our information about the square,
if we have any. That was confusing. What do I mean? Well, let’s say it’s
the start of the game and we don’t know a darn thing about spot 34 on
the board, then I would set <mathjax>$$
I_{34,M}=I_{34,H}=I_{34,C}=I_{34,B}=I_{34,S}=I_{34,D}=I_{34,P}=0
$$</mathjax> that is, all of the columns are zero because we don’t have any
information. Now let’s say we guess spot 34 and are told we missed, now
that row of our matrix would be <mathjax>$$ I_{34,M} = 1 \quad
I_{34,H}=I_{34,C}=I_{34,B}=I_{34,S}=I_{34,D}=I_{34,P}=0 $$</mathjax> so that
we put a 1 in the column we know is right, instead, if we were told it
was a hit, but don’t know which ship it was: <mathjax>$$ I_{34,H} = 1 \quad
I_{34,M}=I_{34,C}=I_{34,B}=I_{34,S}=I_{34,D}=I_{34,P}=0 $$</mathjax> and
finally, let’s say a few turns later we sink our opponent’s sub, and we
know that spot 34 was one of the spots the sub occupied, we would set:
<mathjax>$$ I_{34,S} = 1 \quad
I_{34,M}=I_{34,H}=I_{34,C}=I_{34,B}=I_{34,D}=I_{34,P}=0 $$</mathjax> This
may seem like a silly way to codify the information, but I promise it
will pay off. As far as my <a href="http://pages.physics.cornell.edu/~aalemi/battleship/">Battleship Data
Explorer</a> goes,
you don’t have to worry about all this nonsense, instead you can just
click on squares to set their information content. Note: shift-clicking
will let you cycle through the particular ships; if you just regular
click, it will let you shuffle between no information, hit, and miss.</p>
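<p>If the prose version of this bookkeeping is hard to follow, here is the same thing as a tiny Python sketch. A plain list-of-lists stands in for the binary matrix, with the columns ordered M, H, C, B, S, D, P as above:</p>

```python
# One-hot information matrix I[j][beta] for a 10x10 board.
COLUMNS = ["M", "H", "C", "B", "S", "D", "P"]

# Start of game: no information about any square -> all-zero rows.
I = [[0] * len(COLUMNS) for _ in range(100)]

def record(square, result):
    """Overwrite the row for a square: 'M' for a miss, 'H' for a hit on
    an unknown ship, or a ship letter once we know which ship was there."""
    row = [0] * len(COLUMNS)
    row[COLUMNS.index(result)] = 1
    I[square] = row

record(34, "M")   # we guessed spot 34 and missed
assert I[34] == [1, 0, 0, 0, 0, 0, 0]

record(34, "S")   # later: spot 34 turns out to have held the sub
assert I[34] == [0, 0, 0, 0, 1, 0, 0]
```
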
<h3>The Theory</h3>
<p>Alright, if we decide to go with my silly way of codifying the
information, at this point we have two pieces of data, <mathjax>$$ B_{i,\alpha}
$$</mathjax> our background probability matrix, and <mathjax>$$ I_{j,\beta} $$</mathjax> our
information matrix, where what we want is <mathjax>$$ P_{i,\alpha} $$</mathjax> the
probability matrix. Here is where the linear part comes in. Why don’t we
adopt the time honored tradition in science of saying that the
relationship between all of these things is just a linear one? In matrix
language that means we will choose our theory to be <mathjax>$$ P_{i,\alpha} =
B_{i,\alpha} + \sum_{j=[0,..,99],\beta=\{M,H,C,B,S,D,P\}}
W_{i,\alpha,j,\beta} I_{j,\beta} $$</mathjax> Whoa! What the heck is that!?
Well, that is my linear theory of battleship. What the equation is
trying to say is that I will try to predict the probability of a
particular ship being in a particular square by (1) noting the
background probability of that being true, and (2) adding up all of the
information I have, weighting it by the appropriate factor. So here, P
is our probability matrix, B is our background info matrix, I is our
information matrix, and W is our weight matrix, which is supposed to
apply the appropriate weights. That W guy seems like quite the monster.
It has four indexes! It does, so let’s try to walk through what they all
mean. Here: <mathjax>$$ W_{i,\alpha,j,\beta} $$</mathjax> is supposed to tell us: “the
extra probability of there being ship alpha at location i, given the
fact that we have the situation beta going on at location j” Read that
sentence a few times. I’m sorry it’s confusing, but it is the best way I
could come up with to explain W in English. Perhaps a visual would help.
Behold the following: (click to embiggen)</p>
<p><a href="http://4.bp.blogspot.com/-3yG2fZ0Shbw/Tokz-Sj3NHI/AAAAAAAAAYM/dAwvv-d7Fy4/s1600/W1.png"><img alt="image" src="http://4.bp.blogspot.com/-3yG2fZ0Shbw/Tokz-Sj3NHI/AAAAAAAAAYM/dAwvv-d7Fy4/s400/W1.png" /></a></p>
<p>That is a picture of <mathjax>$$ W_{i,C,33,M} $$</mathjax> that is, a picture of
the extra probabilities for each square (i is all of them), of there
being a carrier, (alpha=C) given that we got a miss (beta=M) on square
33, (j=33). You’ll notice that the fact that we saw a miss affects some
of the squares nearby. In fact, knowing that there was a miss on square
33 means that the probability that the carrier will be found on the
adjacent squares is a little lower (notice on the scale that the nearby
values are negative), because there are now fewer ways the carrier could
be on those squares without it overlapping over into square 33. Let’s
try another:</p>
<p><a href="http://2.bp.blogspot.com/-QyHjW6mNlQY/Tokp1FryPpI/AAAAAAAAAYE/8hXRtZZblE0/s1600/W2.png"><img alt="image" src="http://2.bp.blogspot.com/-QyHjW6mNlQY/Tokp1FryPpI/AAAAAAAAAYE/8hXRtZZblE0/s400/W2.png" /></a></p>
<p>That is a picture of <mathjax>$$ W_{i,S,65,H} $$</mathjax>; that is, it’s showing the extra
probability of there being a submarine (alpha=S) at each square (i runs
over all 100 squares), given that we
registered a hit (beta=H) on square 65 (j=65). Here you’ll notice that
since we marked a hit on square 65, it is very likely that we will also
get hits on the squares just next to it, as we might have
suspected. In the end, the benefit of assuming our theory has this linear
form is that by running the same sort of simulations I used to
generate the background information, I can back out the proper
values for the W matrix. By doing billions and billions of
simulations, I can measure, for any particular set of information I,
what the probabilities P are, and then solve for W. Given that the problem is
linear, this solving step is particularly easy for me to do.</p>
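<p>To make the bookkeeping concrete, here is a minimal sketch of the linear model in Python with NumPy. The shapes are for a 10x10 board, and the random placeholder values for B and W are my own; in the actual project those come from the simulated boards described above.</p>

```python
import numpy as np

# A sketch of the linear battleship model P = B + W.I described above.
# B and W below are random placeholders; in the post they are fit from
# billions of simulated boards.
n_squares, n_ships, n_obs = 100, 5, 2   # 10x10 board; 5 ship types; miss/hit

rng = np.random.default_rng(0)
B = rng.uniform(0, 0.2, (n_squares, n_ships))                    # background probabilities
W = rng.normal(0, 0.01, (n_squares, n_ships, n_squares, n_obs))  # weights W[i, a, j, b]

def predict(I):
    """P[i,a] = B[i,a] + sum over j,b of W[i,a,j,b] * I[j,b]."""
    return B + np.einsum('iajb,jb->ia', W, I)

I = np.zeros((n_squares, n_obs))
assert np.allclose(predict(I), B)   # no observations yet: background only

I[33, 0] = 1.0                      # record a miss (b=0) on square 33
P = predict(I)
print(P.shape)                      # (100, 5)
```

The four-index W is just a big lookup table here; `einsum` does the weighted sum over every (square, observation) pair at once.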
<h3>The Results</h3>
<p>In the end, this is exactly what I did. I had my computer create
billions of different battleship boards, and figure out what the proper
values of B and W should be for every entry of the matrices. I put all of
those results together in a way that I hope is easy to explore up at the
<a href="http://pages.physics.cornell.edu/~aalemi/battleship/">Fancy Battleship Results
Page</a>, where you
are free to explore all of the results yourself. In fact, the way it’s
set up, you can even use the <a href="http://pages.physics.cornell.edu/~aalemi/battleship/">Superduper Results
Page</a> as a sort of
Battleship Cheat Sheet. Have it open while you play a game of
battleship, and it will show you the probabilities associated with all
of the squares, helping you make your next guess. I’ve used the page
while playing a few games of battleship online, and have had some
success, winning 9 of the 10 games I played against the computer player.
Of course, this linear theory isn’t everything…</p>
<h3>Why Linear isn’t everything</h3>
<p>But at the end of the day, we’ve made a pretty glaring assumption about
the game of battleship, namely that all of the information on the board
adds in a linear way. Another way to say that is that in our theory of
battleship, we have a principle of superposition. Another way to say
that is that in this theory, what you think is happening in a particular
square is just the sum of the results from all of the squares,
independent of one another. Another way to say that is to show it with
another picture. Consider the following:</p>
<p><a href="http://2.bp.blogspot.com/-ZS2W4c9TfFc/Tok1UPt6OzI/AAAAAAAAAYk/Eia8LvwdAIU/s1600/nonlin.png"><img alt="image" src="http://2.bp.blogspot.com/-ZS2W4c9TfFc/Tok1UPt6OzI/AAAAAAAAAYk/Eia8LvwdAIU/s400/nonlin.png" /></a></p>
<p>Here, I’ve specified a bunch of misses, and am asking for the
probability of there being a Carrier on all of the positions of the
board. If you look in the center of that cluster of misses, especially
in the inner left of the bunch, you’ll see that the linear theory tells
me that there is a small but finite chance that the Carrier is located
on those squares. But if you stop to look at the board a little bit,
you’ll notice that I’ve arranged the misses such that there is a large
swath of squares in the center of the cluster where the Carrier is
strictly forbidden. There is no way it can fit such that it touches a
lot of those central squares. This is an example of the failure of the
linear model. All the linear model knows is that in the spots nearby
misses there is a lower probability of the ship being there, but what it
doesn’t know to do is look at the arrangement of misses and check to see
whether there is any possible way the ship can fit. This is a nonlinear
effect, involving information at more than one square at a time. It is
these kinds of effects that this theory will miss, but as you’ll notice,
it still does pretty well. Even though it reports a finite positive
probability of the Carrier being inside the cluster, the value it
reports is a very small one, about 1 percent at most. So the linear
theory will have corrections at the 1 percent level or so, but that’s
pretty good if you ask me.</p>
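<p>For comparison, the exact calculation that the linear theory only approximates can be done by brute force for a single ship: enumerate every legal Carrier placement and throw out the ones that conflict with the misses. A small sketch (the board numbering and helper names are my own):</p>

```python
# The exact, nonlinear calculation: enumerate every legal placement of the
# length-5 Carrier on a 10x10 board (squares numbered 0-99, row-major) and
# keep only those avoiding the recorded misses.

def carrier_placements():
    for r in range(10):
        for c in range(10):
            if c <= 5:   # horizontal placement fits on the board
                yield [r * 10 + c + k for k in range(5)]
            if r <= 5:   # vertical placement fits on the board
                yield [(r + k) * 10 + c for k in range(5)]

def carrier_probabilities(misses):
    """Exact P(Carrier covers square) given a set of missed squares."""
    valid = [p for p in carrier_placements() if not misses & set(p)]
    return [sum(sq in p for p in valid) / len(valid) for sq in range(100)]

probs = carrier_probabilities({33})
assert probs[33] == 0.0                  # the Carrier can never sit on a known miss
print(len(list(carrier_placements())))   # 120 placements on an empty board
```

Feeding this a cluster of misses like the one in the figure gives exactly zero inside the forbidden region, where the linear model still reports its small ~1 percent residue.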
<h3>Summary</h3>
<p>And so it is. I’ve tried to develop a linear theory for the game
Battleship, and display the results in a <a href="http://pages.physics.cornell.edu/~aalemi/battleship/">Handy Dandy Data
Explorer</a>. I
encourage you to play around with the website, use it to win games of
Battleship, and in the comments, point out interesting effects, things
you think I’ve missed, or ideas for how to come up with linear theories
of other things.</p>Physics Challenge II: Marty McPhysics2011-09-05T14:16:00-04:00Corkytag:thephysicsvirtuosi.com,2011-09-05:posts/physics-challenge-ii-marty-mcphysics.html<hr />
<p><a href="http://1.bp.blogspot.com/-cuT_VPl_Yd8/TmQbG_7B5DI/AAAAAAAAAOs/DA9sTNI-nGk/s1600/greatscott.jpg"><img alt="image" src="http://1.bp.blogspot.com/-cuT_VPl_Yd8/TmQbG_7B5DI/AAAAAAAAAOs/DA9sTNI-nGk/s1600/greatscott.jpg" /></a>
Doc Brown didn’t have a time-travel backup plan.</p>
<hr />
<p>In light of the incredible success of our last Physics Challenge Problem
(we received <em>several</em> responses), we here at the Virtuosi have decided
to reinstate what was nominally a “monthly” contest. In addition to a
brand-spankin’-new problem (with “prizes”, see [1]), we have also tried
to make a nice collaborative environment for discussing interesting
physics problems and posting your own solutions. So I will discuss our
<a href="http://pages.physics.cornell.edu/~aalemi/challenge/timemachine.php">new
problem</a>
and then I’ll throw it over to Alemi to discuss the goal of our new
Physics Challenge webpage. Our last challenge problem
(<a href="http://pages.physics.cornell.edu/~aalemi/challenge/archive.php">here</a>,
with winning solutions) was deemed a bit “unrealistic” by many because
it assumes that you were shipwrecked on a desert island <em>with a <span class="caps">CRC</span>
handbook!</em> What a ridiculous situation! Fear not, dear readers, we have
heard your complaints and have adjusted accordingly. Our new problem
relies on a far less ridiculous assumption. Assume you have created a
time machine out of a DeLorean [2]. You are sure it will send you back
in time and, causality-be-damned, you are ready to give it a test drive.
But you are less confident in the machine’s ability to take you exactly
<em>when</em> you want. Additionally, the positive tachyonic chrono-coupler is
a bit finicky - meaning, of course, that there is a non-negligible
chance that once you go back in time, you may never be able to come
<em>Back to the Present</em>. Thus, there is a chance that your time machine
will have worked but no one will ever know! As an unabashed narcissist,
you find this totally unacceptable! But first and foremost, you are a
scientist. As a scientist, your challenge is to design a strategy that
would allow you to convince your present-day scientific peers that you
did travel back in time even if you are stuck in the past. What
could you build/make/leave in the past that would be most likely to
convince modern scientists that it <em>had</em> to be from a time-traveler and
not just some hoaxer? How long would your strategy be effective? Would
it work if you were stuck ten thousand years in the past? What about a
million years in the past? What about 100 million? Though this is an
inherently silly problem, we feel that the idea behind it is a very
interesting one. What could one motivated person do/build/make that
would survive a long time into the future? Also, the winning solution
will be submitted as a policy White Paper for all future time-travelers.
So that’s the new problem. You can find the “official” posting
<a href="http://pages.physics.cornell.edu/~aalemi/challenge/timemachine.php">here</a>.
I now cede the floor and my remaining time to the gentleman from Wisconsin.</p>
<h4>A sense of community…</h4>
<p>So, we’ve had this blog going for a while now, and we get a decent
number of comments on our posts, but felt as though something has been
lacking. A certain sense of community. Over the past year, we’ve done
some neat things here, and now it’s your turn. Step right up, because
we’ve just launched a Google Group:
<a href="https://groups.google.com/forum/#!forum/thevirtuosi">TheVirtuosi</a> where
we hope people from all corners of the globe will gather to discuss
their kooky physics projects. Have you recently explored the physics of
cats? Launch a discussion. Do you have an idea but need some help
investigating it? Create a discussion and query us and all of our
fantastic readers. Have an idea for a blog post you want to see? Let us
know. Want to take a stab at solving one of the newly minted <a href="http://bit.ly/physicschallenge">Physics
Challenge</a> problems in public? Go for
it! I know the site is sparse now, but we’d really like to build a
little bit of a community here, so please, post away. You all have
already bookmarked it I’m sure, but a permanent link has been added to
the top of the sidebar as well. That is all. [1] These “prizes” will no
longer be as exciting as the <span class="caps">CRC</span> from last time, for several reasons.
First of all, we are all poor grad students and have no money. Secondly,
the winners from last time lived as far away from us as physically
possible and it’s expensive to ship heavy things. So what can you expect
to win? Well, you could win a “certificate of accomplishment” which
would just be drawn in crayon by one of us. But it will be made with
love! Or you could win a floppy disk with a recording of my very own
version of the Jurassic Park theme. The surprise is part of the fun (I’m
told). [2] Quoth the most famous chrononaut of our time, Dr. Emmett
Brown: “The way I see it, if you’re gonna build a time machine into a
car, why not do it with some style?”</p>A Tweet is Worth (at least) 140 Words2011-08-30T23:49:00-04:00Alemitag:thephysicsvirtuosi.com,2011-08-30:posts/a-tweet-is-worth-at-least-140-words.html<p><a href="http://2.bp.blogspot.com/-VJ3MBvt13Z4/Tl2Q7Z4J5WI/AAAAAAAAAWw/GG50fsyHvoo/s1600/twittercompression.png"><img alt="image" src="http://2.bp.blogspot.com/-VJ3MBvt13Z4/Tl2Q7Z4J5WI/AAAAAAAAAWw/GG50fsyHvoo/s400/twittercompression.png" /></a></p>
<p>So, I recently read <a href="http://books.google.com/books?id=fXxde44_0zsC&printsec=frontcover&dq=An+Introduction+to+Information+Theory&hl=en&ei=7opdTrjhMMXrOarHmdIC&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC0Q6AEwAA#v=onepage&q&f=false">An Introduction to Information Theory: Symbols,
Signals and
Noise</a>.
It is a very nice popular introduction to <a href="http://en.wikipedia.org/wiki/Information_Theory">Information
Theory</a>, a modern
scientific pursuit to quantify information started by <a href="http://en.wikipedia.org/wiki/Claude_Shannon">Claude
Shannon</a> in 1948. This got
me thinking. Increasingly, people try to hold conversations on
<a href="http://twitter.com/">Twitter</a>, where posts are limited to 140
characters. Just how much information could you convey in 140
characters? After some coding and investigation, I created
<a href="http://pages.physics.cornell.edu/~aalemi/twitter/">this</a>, an
experimental twitter English compression algorithm capable of
compressing around 140 words into 140 characters. So, what’s the story?
Warning: It’s a bit of a story; the juicy bits are at the end. <span class="caps">UPDATE</span>:
Tomo in the comments below made <a href="http://www.saigonist.com/b/twitter-decoder-ring">a Chrome
extension</a> for the algorithm.</p>
<h3>Entropy</h3>
<p>Ultimately, we need some way to assess how much information is contained
in a signal. What does it mean for a signal to contain information
anyway? Is ‘this is a test of twitter compression.’ more meaningful than
‘歒堙丁顜善咮旮呂’? The first is understandable by any english speaker,
and requires 38 characters. You might think the second is meaningful to
a speaker of chinese, but I’m fairly certain it is gibberish, and takes
8 characters. But, the thing is if you put those 8 characters into <a href="http://pages.physics.cornell.edu/~aalemi/twitter/">the
bottom form here</a>,
you’ll recover the first. So, in some sense to the messages are
equivalent. They contain the same amount of information. Shannon tried
to quantify how we could estimate just how much information any message
contains. Of course it would be very hard to try to track down every
intelligent being in the universe and ask them if any particular message
had any meaning to them. Instead, Shannon restricted himself to trying to
quantify how much information was contained in a message produced by a
random source. In this regard, the question of how much information a
message contains becomes a more tractable question: How unlike is a
particular message from all other messages produced by the same random
source? This question might sound a little familiar. It is similar to a
question that comes up a lot in <a href="http://en.wikipedia.org/wiki/Statistical_physics">Statistical
Physics</a>, where we are
interested in just how unlike a particular configuration of a system is
from all possible configurations of a system. In Statistical physics,
the quantity that helps us answer questions like this is the
<a href="http://en.wikipedia.org/wiki/Entropy">Entropy</a>, where the entropy is
defined as <mathjax>$$ S = -\sum_i p_i \log p_i $$</mathjax> where p_i stands for the
probability of a particular configuration, and we are supposed to sum
over all possible configurations of the system. Similarly, for our
random message source, we can define the entropy in exactly the same
way, but for convenience, let’s replace the logarithm with the logarithm
base 2. <mathjax>$$ S = -\sum_i p_i \log_2 p_i $$</mathjax> At this point, the
<a href="http://en.wikipedia.org/wiki/Shannon_entropy">Shannon Entropy, or Information
Entropy</a> takes on a real
quantitative meaning. It reflects how many bits of information the
message source produces per character. The result of all of this aligns
quite well with intuition. If we have a source that outputs two symbols
0 or 1 randomly, each with probability 1/2. The shannon entropy comes
out to be 1, meaning each of the symbols of our source is worth one bit,
which we already new. If instead of two symbols, our source can output
16 symbols, 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F say, the shannon entropy
comes out to be 4 bits per symbol, which again we should have suspected
since with four bits we can count up to the number 16 in <a href="http://en.wikipedia.org/wiki/Binary_numeral_system">base
2</a> (e.g. 0000 - 0,
0001 - 1, 0010 - 2 , etc ). Where it begins to get interesting is when
all of our symbols don’t occur with equal probability. To get a sense of
this situation, I’ll show 5 example outputs:</p>
<pre><code>'000001000100000000010000010000'
'000000000010000000000001000000'
'010100000000000000000000111000'
'010100000000000000000000111000'
'000000000100000000110000000010'
</code></pre>
<p>looking at these examples, it begins to become clear that since we have
a lot more zeros than ones, each of these messages contain less
information than the case when 0 and 1 occur with equal probability. In
fact, in this case, if 0 occurs 90% of the time, and 1 occurs 10% of the
time, the Shannon entropy comes out to be 0.47, meaning each symbol is
worth just less than half a bit. We should expect our messages in this
case to have to be about twice as long to encode the same amount of
information. In an extreme example, imagine you were trying to transmit
a message to someone in binary, but for some reason, your device had a
sticky 0 key so that every time you pushed 0, it transmitted 0 10 times
in a row. It should be clear in this case that as far as the receiver is
concerned, this is not a very efficient transmission scheme.</p>
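<p>The entropy values quoted above are easy to check directly from the definition. A quick sketch:</p>

```python
import math

def shannon_entropy(probs):
    """Bits per symbol: S = -sum_i p_i * log2(p_i)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))             # 1.0 (fair binary source)
print(shannon_entropy([1/16] * 16))            # 4.0 (16 equally likely symbols)
print(round(shannon_entropy([0.9, 0.1]), 2))   # 0.47 (the skewed source above)
```

The `if p > 0` guard just skips symbols that never occur, since the limit of p log p is zero there.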
<h3>English</h3>
<p>What does this have to do with anything? Well, all of that and I really
only wanted to build up a fact you already know. The fact is, the
English language is not very efficient on a per symbol basis. For
example, I’m sure everyone knows exactly what word will come at the end
of this <strong><em>_</em></strong>. There you go, I was able to express exactly the
same thought with at least 8 fewer characters. n fct, w cn d _ lt bttr
[in fact, we can do a lot better], using 22 characters to express a
thought that normally takes 31 characters. In fact, Shannon has a <a href="http://languagelog.ldc.upenn.edu/myl/Shannon1950.pdf">nice
paper</a> where he
attempted to measure the entropy of the english language itself. Using
more sophisticated methods, he concludes that english has an information
entropy of between 0.6 and 1.3 bits per character, let’s call it 1 bit
per character. Whereas, if each of the 27 symbols (26 letters + space)
we commonly use showed up equally frequently, we would have 4.75
bits per character possible. Of course, from a practical communication
standpoint, having redundancies in human language can be a useful thing,
as it allows us to still understand one another even over noisy phone
lines and with very bad handwriting. But, with modern computers and
faithful transmission of information, we really ought to be able to do better.</p>
<h3>Twitter</h3>
<p>This brings me back to <a href="http://twitter.com/">twitter</a>. If you are
unaware, twitter allows users to post short, 140 character messages for
the rest of the world to enjoy. 140 characters is not a lot to go on.
Assuming 4.5 characters per word, this means that in traditionally
written english you’re lucky to fit 25 words in a standard tweet. But,
we know now that we can do better. In fact, if we could come up with
some kind of crazy scheme to compress english in such a way as to use
each of the 27 usual characters so that each of those characters
appeared with roughly equal probability, we’ve seen that we could get
4.75 bits per character. With 140 characters and 5.5 symbols per word,
this would allow us to fit not 25 words in a tweet but 120 words, a
factor of 4.8 improvement. Of course, we would have to discover this
miraculous encryption transformation. Which to my knowledge remains
undiscovered. But, we can do better. It turns out that twitter allows
you to use <a href="http://en.wikipedia.org/wiki/Unicode">Unicode</a> characters in
your tweets. Beyond enabling you to talk about Lagrangians (ℒ) and play
cards (♣), it enables international communication, by including foreign
alphabets. So, in fact we don’t need to limit ourselves to the 27
commonly used English symbols. We could use a much larger alphabet,
Chinese say. I chose Chinese because there are over 20,900 Chinese
characters in Unicode. Using all of these characters, we could
theoretically encode 14.3 bits of information per character. With 140
characters, 1 bit per English character, and 5.5 symbols per English
word, we could theoretically fit over 365 English words in a single
tweet. But alas, we would have to discover some magical encoding
algorithm that could map typed English to the Chinese alphabet such that
each of the Chinese symbols occurred with equal probability. I wasn’t
able to do that well, but I did make an attempt.</p>
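<p>The capacity estimates above follow from a few lines of arithmetic. A sketch, using the post’s assumed 1 bit of information per English character and 5.5 symbols per word:</p>

```python
import math

# Back-of-envelope tweet capacities from the numbers in the post.
bits_per_eng_char = 1.0   # Shannon's estimate for English
chars_per_word = 5.5      # average word length including the space
tweet_len = 140

def words_per_tweet(bits_per_symbol):
    """English words expressible in one tweet at a given channel density."""
    total_bits = tweet_len * bits_per_symbol
    return total_bits / (bits_per_eng_char * chars_per_word)

plain = words_per_tweet(1.0)             # ordinary English: ~25 words
latin = words_per_tweet(math.log2(27))   # 27 symbols used optimally: ~120 words
cjk = words_per_tweet(math.log2(20900))  # 20,900 CJK symbols: ~365 words

print(round(plain), round(latin), round(cjk))
```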
<h3>My Attempt</h3>
<p>So, I tried to compress the English language, and design an effective
mapping from written English to the Chinese character set of Unicode. We
know that we aim to have each of these Chinese characters occur with
equal probability, so my algorithm was quite simple. Let’s just look at
a bunch of English and see which pair of characters occurs with the
highest probability, and map it to the first Chinese character in the
Unicode set. Replace its occurrences in the text, rinse, and repeat.
This technique is guaranteed to reduce the probability at which the most
common character occurs at every step, by taking some of its occurrences
and replacing them, so it at least aims to achieve our ultimate goal.
That’s it. Of course, I tried to bootstrap the algorithm a little bit by
first mapping the most common 1500 words to their own symbols. For
example, consider the first stanza of <a href="http://en.wikipedia.org/wiki/The_raven">The Raven by Edgar Allan Poe</a>:</p>
<pre><code>Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore--
While I nodded, nearly napping, suddenly there came a tapping,
As of some one gently rapping, rapping at my chamber door.
"'Tis some visiter," I muttered, "tapping at my chamber door--
Only this and nothing more."
</code></pre>
<p>The most common character is ‘ ‘ (the space). The most common pair is ‘e
‘ (e followed by space), so let’s replace ‘e ‘ with the first Chinese
Unicode character ‘一’ we obtain:</p>
<pre><code>Onc一upon a midnight dreary, whil一I pondered, weak and weary,
Over many a quaint and curious volum一of forgotten lore--
Whil一I nodded, nearly napping, suddenly ther一cam一a tapping,
As of som一on一gently rapping, rapping at my chamber door.
"'Tis som一visiter," I muttered, "tapping at my chamber door--
Only this and nothing more.'
</code></pre>
<p>So we’ve reduced the number of spaces a bit. Doing one more step, now
the most common pair of characters is ‘in’, which we replace by ‘丁’ obtaining:</p>
<pre><code>Onc一upon a midnight dreary, whil一I pondered, weak and weary,
Over many a qua丁t and curious volum一of forgotten lore--
Whil一I nodded, nearly napp丁g, suddenly ther一cam一a tapp丁g,
As of som一on一gently rapp丁g, rapp丁g at my chamber door.
"'Tis som一visiter," I muttered, "tapp丁g at my chamber door--
Only this and noth丁g more.'
</code></pre>
<p>etc. The end results of the effort are <a href="http://pages.physics.cornell.edu/~aalemi/twitter/">demo-ed
here</a>. Feel free to
play around with it. For the most part, typing some standard English, I
seem to be able to get compression ratios around 5 or so. Let me know
how it does for you. I’ll leave you with this final message:</p>
<pre><code>儌咹乺悃巄格丌凣亥乄叜
</code></pre>
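<p>The pair-replacement procedure walked through above is essentially what is now called byte-pair encoding. Here is a minimal sketch of the idea (not the actual implementation, which is linked below; the round count and symbol range are arbitrary choices of mine):</p>

```python
from collections import Counter

# Minimal byte-pair-encoding sketch: repeatedly replace the most common
# adjacent character pair with a fresh single symbol drawn from the CJK
# Unicode block starting at U+4E00 (the post's first symbol, 一).
# Assumes the input text contains no CJK characters already.

def compress(text, n_rounds=500, first_symbol=0x4E00):
    table = {}
    for k in range(n_rounds):
        pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:          # no pair repeats; replacing gains nothing
            break
        symbol = chr(first_symbol + k)
        table[symbol] = pair
        text = text.replace(pair, symbol)
    return text, table

def decompress(text, table):
    # Undo the replacements in reverse order, so nested symbols expand correctly.
    for symbol, pair in reversed(list(table.items())):
        text = text.replace(symbol, pair)
    return text

original = "once upon a midnight dreary, while i pondered"
short, table = compress(original)
assert decompress(short, table) == original
assert len(short) <= len(original)
```

A real compressor would train the table on a large English corpus once and share it between sender and receiver, as the demo page does, rather than rebuilding it per message.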
<p>Python code that I used to do the heavy lifting is available as <a href="https://gist.github.com/1182747">a
gist</a>.</p>Futurama Physics2011-08-22T23:44:00-04:00Corkytag:thephysicsvirtuosi.com,2011-08-22:posts/futurama-physics.html<hr />
<p><a href="http://1.bp.blogspot.com/-6STZpSqRmJk/Tk1oSvdCgQI/AAAAAAAAAOc/M9wr0wJUlE8/s1600/global_warming.png"><img alt="image" src="http://1.bp.blogspot.com/-6STZpSqRmJk/Tk1oSvdCgQI/AAAAAAAAAOc/M9wr0wJUlE8/s320/global_warming.png" /></a>
The rotting corpses of sunbeams cause global warming.</p>
<hr />
<p>Good news, everyone! While rummaging through all my old stuff at home, I
found my long-lost copy of <em>Toto <span class="caps">IV</span></em>. Huzzah for me! This is entirely
unrelated to what I wanted to talk about, but I have it on good
authority that Toto’s
<em><a href="http://www.youtube.com/watch?v=azVqekQBK8g">Africa</a></em> syncs up <em>really</em>
well with this post [1]. I’ll tell you when to press play. Anyway, what
I really wanted to talk about was a fairly well-posed problem in
<em>Futurama.</em> In the episode “Crimes of the Hot,” all of the Earth’s
robots vent their various “exhausts” into the sky at the same time,
using the thrust to push the Earth into an orbit slightly further away
from the sun. As a result of this new orbit, the year is made longer by
“exactly one week.” Anything that quantitative is pretty much asking to
be analyzed. Let’s explore this problem a bit more then, why not? [
<em>Those wishing to get the full aural experience of this post should
press play on their cassette players … now</em> ] First, a little
background. In this episode, it is learned that all the robots
(especially Hedonism Bot) emit the greenhouse gases responsible for
Global Warming. The previous solution (detailed
<a href="http://www.youtube.com/watch?v=2taViFH_6_Y">here</a>) is no longer viable,
so it is decided that all robots must be destroyed (especially Hedonism
Bot). The disembodied head of Richard Nixon rounds up all the world’s
robots on the Galapagos [2] to have a “party” so that they may be
destroyed by a giant space-based electromagnetic pulse cannon. In a
last-ditch effort to save the robots, Professor Farnsworth has all the
robots blast their exhausts into the sky, using the thrust to push the
Earth into an orbit further away from the sun, thus solving the problem
of global warming once and for all. As a result of changing the Earth’s
orbit, the year is “exactly one week longer.” First Pass Through Ok, so
what can we say about the new orbit if all we know is that its orbital
period is exactly one week longer? Well, we know from our good buddy
Kepler that the square of the period of a bound orbit is proportional to
the cube of its semi-major axis [3], so <mathjax>$$\tau^2 \propto a^3.$$</mathjax> We
already know the Earth’s period (1 year) and semi-major axis (1
<a href="http://en.wikipedia.org/wiki/Astronomical_unit"><span class="caps">AU</span></a>) before the
robo-boost, so we can get rid of the proportionality by writing things
in terms of the initial values. In other words, <mathjax>$$
\left(\frac{\tau}{1\~\mbox{yr}}\right)^2=\left(\frac{a}{1\~\mbox{AU}}\right)^3.$$</mathjax>
Alright, so we know that our new orbital period is 1 year + 1 week, or
since there are 52 weeks in a year, 53/52 years. So our new semi-major
axis is <mathjax>$$ a = \left(\frac{1 +
1/52\~\mbox{yr}}{1\~\mbox{yr}}\right)^{2/3}\mbox{AU}\approx1.013\~\mbox{AU},$$</mathjax>
or a little over 1% larger than it is currently. Fair enough. So would
this fix Global Warming for ever and ever? Let’s see. The solar flux at
some distance d is given by <mathjax>$$ S = \frac{L_{\odot}}{4\pi d^2}, $$</mathjax>
where L is the luminosity of the sun. So the ratio of the flux at the
new semi-major axis [4] to that before the orbit was changed is <mathjax>$$
\frac{S}{S_0}=\left(\frac{a_0}{a}\right)^2=\left(\frac{a}{1\~\mbox{AU}}\right)^{-2}.$$</mathjax>
<span class="caps">OK</span>, so we have the flux, but how do we relate this to temperature? Well,
we know that the power radiated by a blackbody of temperature T is given
by <mathjax>$$ P = \sigma A T^4, $$</mathjax> where sigma is the Stefan-Boltzmann
<a href="http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law">constant</a>, A
is the area of the emitting region and T is the temperature. For a
blackbody in equilibrium, the power coming in is going to be equal to
the power going out. The power coming in is just the solar flux times
the cross-section area of Earth, given by <mathjax>$$ P_{in} = S\times\pi
R^2_{\oplus} $$</mathjax> and the power going out is just that radiated by the
Earth as a blackbody <mathjax>$$ P_{out} =\sigma \times 4\pi
R^2_{\oplus}\times T^4. $$</mathjax> Equating the power in to the power out
gives <mathjax>$$ T = \left( \frac{S}{4\sigma}\right)^{1/4}. $$</mathjax> Now we can
find the ratio of the new average Earth temperature to the temperature
before the orbital move. We have <mathjax>$$ \frac{T}{T_0} =\left(
\frac{S}{S_0}\right)^{1/4}=\left(
\frac{a_0}{a}\right)^{1/2}\approx0.994$$</mathjax> If we take the initial
average Earth temperature to be something like T = 300 K, then we find a
new temperature of T = 298 K. Huzzah, a whole 2 degrees cooler! That may
not sound like a lot, but remember, that’s the mean global temperature.
Apparently, it only takes a few degrees increase in global average
temperatures to make things a bit uncomfortable for people. The <span class="caps">IPCC</span>
<a href="http://en.wikipedia.org/wiki/Global_warming">indicates</a> that the
average global surface temperature on Earth in the next century is
likely to rise by about 1 to 2 degrees under optimistic scenarios or
about 3 to 6 degrees under pessimistic scenarios. So the robo-boost
option is at least in the right ballpark here. Neat!</p>
<h3>How Big Is The Push?</h3>
<p>So how big of a push did the robots need to give the Earth to
boost it out to this new orbit? Is this possible? Let’s find out! First
we need to find out how the Earth’s velocity has changed. We can do this
by finding the change in energy. The total energy of a bound orbit is
given by <mathjax>$$ E = -\frac{k}{2a}$$</mathjax> where <em>a</em> is the semi-major axis of the
orbit and <em>k</em> = <em>GMm</em>. So the difference in Earth’s energy before and
after the robo-boost is <mathjax>$$ E_f - E_0
=-\frac{k}{2a_f}-\left(-\frac{k}{2a_0}\right)=\frac{k}{2a_0}\left(1-\frac{a_0}{a_f}\right).$$</mathjax>
But we also know that Earth’s energy in the orbit is given by <mathjax>$$ E =
-\frac{k}{r} + \frac{1}{2}mv^2, $$</mathjax> so the difference in energy before
and after is <mathjax>$$ E_f - E_0
=\frac{1}{2}m\left(v^2_f-v^2_0\right), $$</mathjax> where we have found the
energies immediately before and immediately after the boost so Earth is
pretty much at the same distance from the sun, so the potential energy
terms cancel. Combining our expressions for the change in energy and
solving for the final velocity, we find <mathjax>$$ v_f =
\left[\frac{GM_{\odot}}{a_0}\left(1-\frac{a_0}{a_f}\right)+v^2_0\right]^{1/2}.$$</mathjax>
Taking the initial orbital velocity of the Earth to be 30 km/s, we find
that the final velocity of the Earth immediately after the robo-boost is
<mathjax>$$ v_f = 30.2\~\mbox{km/s}$$</mathjax> So the robots just need to give a
“little” 200 m/s boost to the Earth, right? Well, we are adding velocity
vectors here, so it depends on which direction the robots are pushing.
The magnitude of the final velocity is given by <mathjax>$$v_f =
\sqrt{\left({\bf v_0}+{\bf \Delta v}
\right)^2}=\sqrt{v^2_0+\Delta v^2-2v_0\Delta v
\cos{\beta}},$$</mathjax> where delta v is the boost in velocity caused by the
robots and beta is the angle between the initial orbital motion of the
Earth and the velocity boost from the robots. If they wanted to make it
slightly easier on themselves, the robots would have boosted the Earth
in the direction it was already moving. That would make the cosine beta
term equal to one and thus minimize the necessary boost. However, in the
show the robots appear to point their exhaust right at the sun (see
figure below). This is essentially at a 90 degree angle to the Earth’s
orbital motion, so the cosine beta term goes to zero in our expression
above. Plugging it all in we find that the magnitude of the robo-boost
is <mathjax>$$\Delta v \approx 3.5\~\mbox{km/s}.$$</mathjax> We see that this is a fair
bit larger than the 0.2 km/s needed for a boost parallel to Earth’s
initial velocity.</p>
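<p>The orbital numbers above are easy to reproduce. A quick numerical check (the constants and the round v<sub>0</sub> = 30 km/s are assumptions matching the post):</p>

```python
import math

# Reproducing the Futurama orbit numbers. Assumed SI constants:
GM_sun = 1.327e20          # m^3 s^-2, gravitational parameter of the Sun
AU = 1.496e11              # m
v0 = 30e3                  # m/s, Earth's orbital speed before the boost

a0 = AU
af = (53 / 52) ** (2 / 3) * AU   # Kepler III: tau^2 ~ a^3, new period = 53/52 yr

# Speed just after the boost, from the energy difference with E = -k/2a
vf = math.sqrt(GM_sun / a0 * (1 - a0 / af) + v0 ** 2)

dv_parallel = vf - v0                     # boost along the orbital motion
dv_radial = math.sqrt(vf ** 2 - v0 ** 2)  # boost at 90 degrees, as in the show

temp_ratio = (a0 / af) ** 0.5             # T/T0 = (a0/a)^(1/2)

print(af / AU)           # ~1.013 AU
print(dv_parallel)       # ~0.2 km/s
print(dv_radial)         # ~3.3 km/s (the post quotes 3.5 with rounder inputs)
print(300 * temp_ratio)  # ~298 K
```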
<hr />
<p><a href="http://4.bp.blogspot.com/-w6j6RXF1QEo/TlFUG6hU8PI/AAAAAAAAAOg/lViVcBBpFSA/s1600/delta_v.png"><img alt="image" src="http://4.bp.blogspot.com/-w6j6RXF1QEo/TlFUG6hU8PI/AAAAAAAAAOg/lViVcBBpFSA/s400/delta_v.png" /></a>
Robots blasting from the Galapagos (which now appear to be in China…)</p>
<hr />
<p>Alright, so how much effort would it take to give that kind of boost to
the Earth? We can quantify this effort in terms of a force or in terms
of the energy difference. Let’s do both. For the force, we have <mathjax>$$ F =
\frac{\Delta p}{\Delta t}=\frac{M_{\oplus}\Delta v}{\Delta t}.$$</mathjax>
Here we are a little stuck unless we can figure out the duration of the
robo-boost. Watching the episode again, the robots are blasting up
exhaust for about a minute but then the show cuts to commercial. So we
don’t really know how long they were pushing. Let’s just say an hour,
but we’ll leave the time in if we want to fiddle with that. Plugging in
numbers, the total force is <mathjax>$$ F =
6\times10^{24}\~\mbox{N}\left(\frac{\Delta
t}{1\~\mbox{hr}}\right)^{-1}.$$</mathjax> If this force is spread evenly over
the billion robots present [5], then each robot would be applying a
force of <mathjax>$$ F = 6\times10^{15}\~\mbox{N},$$</mathjax> which is roughly
equivalent to the force it would take to lift up Mount Everest [6]. That
wasn’t terribly helpful. Let’s look at the work then instead. The total
work done by the robots to move the Earth is <mathjax>$$ W =
\frac{1}{2}M_{\oplus}\left(v^2_f - v^2_0\right) \approx
4\times10^{31}\~\mbox{J}.$$</mathjax> Well, that’s a large number. Could a
billion robots feasibly do that much work? If the robots are each 100
kg, then if the mass of all billion robots were directly converted to
energy, we would get <mathjax>$$ E = mc^2 =
10^9\times10^2\~\mbox{kg}\times\left(3\times10^8\~\mbox{m/s}\right)^2
\approx 10^{28}\~\mbox{J},$$</mathjax> or less than a thousandth the total
energy needed. So it looks unlikely that the robots would be able to
push the Earth, but that was to be expected.</p>
<p><strong>Changes to the Orbit</strong></p>
<p>Let’s take a look at how the robo-boost affects the entirety of the
Earth’s new orbit. In our first pass through the problem, we ignored the
fact that the shape of the orbit changed and only focused on the new
semi-major axis. To see why we must also consider the changes to the
“shape” of the orbit, take a look at the figure below.
<a href="http://2.bp.blogspot.com/-3tRZVuztuF0/TlGMMLhegtI/AAAAAAAAAOo/3jzGbWvvM90/s1600/orbits.png"><img alt="image" src="http://2.bp.blogspot.com/-3tRZVuztuF0/TlGMMLhegtI/AAAAAAAAAOo/3jzGbWvvM90/s400/orbits.png" /></a>
In the figure, the initial orbit is plotted (black dashed line) as well
as two new orbits that each have the appropriate semi-major axis so that
the period of revolution is one year and one week. The difference
between the two final orbits comes from the robots pushing in different
directions. In the blue orbit, the boost was made in a direction
radially outward from the Sun (that is, perpendicular to the orbital
velocity of the Earth). This is the case shown in the <em>Futurama</em> episode.
In the red orbit, the boost was made parallel to the orbital velocity of
the Earth. In each case, the boost was applied at the point labeled with
an “X.” One thing that jumps out from this figure is that the Earth is
always farther away from the Sun on the red orbit than it was on the
initial (dashed black) orbit. But on the blue orbit, the Earth is
further away from the Sun than it was initially for only half the orbit.
On the other half, the blue orbit would actually make the Earth’s
temperature <em>higher</em> than it was on the old orbit! The temperature
calculation we made earlier should hold pretty well for the red orbit,
since it is essentially a circle. It would be a little more tricky for
the blue orbit, as one would need to get a time-averaged value of the
flux over the course of the whole orbit. A hundred Quatloos to anyone
that does the calculation.</p>
<p><strong>Wrap-Up</strong></p>
<p>So what have we found out here? Well,
it seems that there are certain scenarios in which boosting the Earth
out to a new orbit with period of 1 year + 1 week could cool the Earth
by a few degrees. Granted, we have made some simplifications (the Earth
is not a blackbody), but the general idea of the thing should still
hold. I had some fun playing around with this problem and I thought it
was neat that there was a good deal of information to get started with
from the episode. The <em>Futurama</em> people gave an exact period and at
least a visual representation of the direction the robots apply their
push. So 600 Quatloos for the writers!</p>
<p><strong>Not Quite As Useless as Usual Footnotes</strong></p>
<p>[1] At least I think it says so if you play this
<a href="http://www.youtube.com/watch?v=uUjIA3Rt7gk">song</a> backwards.</p>
<p>[2]
According to the Wikipedia page for the
<a href="http://en.wikipedia.org/wiki/Crimes_of_the_Hot#Production">episode</a>,
the location of the Galapagos for the party was chosen because the
writers felt that it would be most convenient to push the Earth near the
equator.</p>
<p>[3] The semi-major axis of an ellipse is half of the longest
line cutting through the center of the ellipse. Likewise, the semi-minor
axis is half of the shortest line drawn through the center of the
ellipse. Check <a href="http://mathworld.wolfram.com/Ellipse.html">this</a> out for
some more fun stuff on ellipses.</p>
<p>[4] This is technically incorrect,
since the semi-major axis is measured from the center of the ellipse,
but the sun is located on one of the foci. However, this requires
information on the eccentricity of the orbit, which we are
glossing over here. Our method is then approximate, but becomes
exact in the case where both orbits are circles. The effect, however, is
minor. At worst, it is semi-minor. Zing!</p>
<p>[5] My source for this billion
robots number comes from Professor Farnsworth himself. When it looks
like the robots will all be destroyed the Professor says “A billion
robot lives are about to be extinguished. Oh, the Jedis are going to
feel this one!”</p>
<p>[6] Well, sort of. The height of Everest is ~10^4 m,
so this gives a volume of ~10^12 m^3. The density of most metals is
around ~10 g/cm^3, which is ~10^4 kg/m^3. This gives a mass of
~10^16 kg. The weight is then ~10^17 N. Each robot exerts a force of
~10^16 N. So not quite, but hey it was the first thing I thought of
and it almost worked out so I’m sticking with it!</p>Fun with an iPhone Accelerometer2011-08-03T03:24:00-04:00Bohntag:thephysicsvirtuosi.com,2011-08-03:posts/fun-with-an-iphone-accelerometer.html<p><a href="http://2.bp.blogspot.com/-f7mSv8QGLGs/Tj9_a2qp1SI/AAAAAAAAAEY/Ax5EEw8Cjmg/s1600/accelerometer.jpg"><img alt="image" src="http://2.bp.blogspot.com/-f7mSv8QGLGs/Tj9_a2qp1SI/AAAAAAAAAEY/Ax5EEw8Cjmg/s320/accelerometer.jpg" /></a>
The iPhone <span class="caps">3GS</span> has a built-in <a href="http://pdf1.alldatasheet.com/datasheet-pdf/view/236640/STMICROELECTRONICS/LIS302DL.html">accelerometer, the
<span class="caps">LIS302DL</span></a>,
which is primarily used for detecting device orientation. I wanted to
come up with something interesting to do with it, but first I had to see
how it did on some basic tests. It turns out that the tests gave really
interesting results themselves! A drop test gave clean results and a
spring test gave fantastic data; however, a pendulum test gave some
problems. You might guess the accelerometer would give a reading of 0 in
all axes when the device is sitting on a desk. However, this
accelerometer measures “proper acceleration,” which essentially is a
measure of acceleration relative to free-fall. So the device will read
-1 in the z direction (in units where 1 corresponds to 9.8 m/s^2, the
acceleration due to gravity at the surface of Earth). Armed with this
knowledge, let’s take a look at the drop test: To perform this test, I
stood on the couch which was in my office (before it was taken away from
us!), and dropped my phone hopefully into the hands of my officemate. I
suspected that the device would read magnitude 1 before dropping, 0
during the drop, and a large spike for the large deceleration when the
phone was caught.
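That expectation can be sketched numerically. The 0.4 s fall time and 0.02 s catch duration are the values read off the data discussed below, and the piecewise signal is only a cartoon of what the accelerometer should report:

```python
# A minimal sketch of the expected proper-acceleration signal during the drop,
# in units of g (fall and catch durations are taken from the data in the post)
g = 9.8          # m/s^2
t_fall = 0.4     # s, approximate fall time
t_catch = 0.02   # s, approximate duration of the catch

# Distance fallen from rest in t_fall
h = 0.5 * g * t_fall**2           # ~0.78 m, matching the post's 2.57 feet

# Average deceleration needed to stop in t_catch, in g's:
# the phone hits the hands moving at v = g * t_fall
v_impact = g * t_fall
a_catch = v_impact / t_catch / g  # ~20 g, hence the large spike

def expected_magnitude(t):
    """Proper acceleration magnitude vs time (t=0 at release):
    1 g while held, 0 g in free fall, a spike during the catch."""
    if t < 0:
        return 1.0
    if t < t_fall:
        return 0.0
    if t < t_fall + t_catch:
        return a_catch
    return 1.0

print(h, a_catch)
```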
<a href="http://2.bp.blogspot.com/-g8pt_dT0gR8/Tj9D2GTkc6I/AAAAAAAAADw/1iwHCsPixtY/s1600/DropTest.png"><img alt="image" src="http://2.bp.blogspot.com/-g8pt_dT0gR8/Tj9D2GTkc6I/AAAAAAAAADw/1iwHCsPixtY/s400/DropTest.png" /></a>
As you can see, the results were basically as expected. The purple line
shows the magnitude of the acceleration relative to free-fall. Before
the drop, the magnitude bounces around 1, which is due to my inability
to hold something steadily. The drop occurred near time 12.6, but I
wasn’t able to move my hand arbitrarily quickly so there’s not a sharp
drop to 0 magnitude. The phone fell for around 0.4 seconds corresponding
to <mathjax>$$y = \frac{1}{2} g t^2 = \frac{1}{2} (9.8 \frac{m}{s^2})*(0.4
s)^2 = 0.784 m = 2.57 feet $$</mathjax> As for the spike at 13 seconds, the raw
data shows that the catch occurs in <mathjax>$$ t = 0.02 \pm 0.01 s $$</mathjax>. In order
for the device to come to rest in such a short amount of time, there
needs to be a large deceleration provided by my officemate’s hands. Now
the pendulum test consisted of taping my phone to the bottom of a 20
foot pendulum.
<a href="http://2.bp.blogspot.com/-7uO_824O6t4/Tj9Q1crvlUI/AAAAAAAAAD4/uO3YsrXStak/s1600/pendulum.png"><img alt="image" src="http://2.bp.blogspot.com/-7uO_824O6t4/Tj9Q1crvlUI/AAAAAAAAAD4/uO3YsrXStak/s400/pendulum.png" /></a>
I didn’t think enough about this, but the period of a pendulum, assuming
we have a small amplitude, is given by: <mathjax>$$T = 2 \pi
\sqrt{\frac{L}{g}}$$</mathjax> which is about 5 seconds. With a relatively small
amplitude, the acceleration in the x direction will be small. Basically
I’m reaching the limit of the resolution of the accelerometer. It
appears that the smallest increment the device can measure is 0.0178 g.
This happens to match the specifications from the spec sheet I linked at
the top of the page, where they specify a minimum of 0.0162 g, and a
typical sensitivity of 0.018 g! Now we come to the most exciting test,
the spring test! Setup: I taped my phone to the end of a spring and let
it go. Ok. Here is the actual acceleration data:
<a href="http://3.bp.blogspot.com/-3qbU9p2I3sE/Tj9a7-eOk-I/AAAAAAAAAEA/0ByUQ039dio/s1600/springdata.png"><img alt="image" src="http://3.bp.blogspot.com/-3qbU9p2I3sE/Tj9a7-eOk-I/AAAAAAAAAEA/0ByUQ039dio/s400/springdata.png" /></a>
The first thing I see is that the oscillation frequency looks constant,
as it should be for a simple harmonic oscillator. There is also a decay
which looks exponential! Let’s see how well the data fits if we have a
frictional term proportional to the velocity of the phone. This gives us
a differential equation which looks like this: <mathjax>$$ m \ddot{x} +
F\dot{x} + k x = 0 $$</mathjax> Now we can plug in an ansatz (educated guess) to
solve this equation: <mathjax>$$ x(t) = A*e^{i b t} $$</mathjax> <mathjax>$$-b^2 mx(t) + i b
Fx(t) + kx(t) = 0$$</mathjax> <mathjax>$$-m b^2+iFb+k = 0$$</mathjax> We can solve this equation for
b with the quadratic equation: <mathjax>$$ b = \frac{\sqrt{4km - F^2}}{2m} +
i\frac{F}{2m} \equiv \omega + i \gamma $$</mathjax> where I defined two new
constants here. So we see that our ansatz does solve the differential
equation. Now we want the acceleration, which is the second derivative
of position with respect to time: <mathjax>$$a(t) \equiv \ddot{x} = -b^2 A
e^{ibt} $$</mathjax> We are only interested in the real part of this solution,
which gives us (adding in a couple of constants to make the solution
more general): <mathjax>$$a(t) = -(\omega^2 - \gamma^2) A e^{-\gamma t}
cos(\omega t + \phi) + C $$</mathjax> Let’s redefine the coefficient of this
acceleration to make things a little cleaner! <mathjax>$$a(t) = B e^{-\gamma t}
cos(\omega t + \phi) + C $$</mathjax> Ok, with that math out of the way (for
now), we can try to fit this data. I actually used Excel to fit this
data using a not-so-well-known tool called Solver. This allows you to
maximize or minimize one cell while Excel varies other cells. In this
case, I defined a cell which is the <a href="http://en.wikipedia.org/wiki/Residual_sum_of_squares">Residual Sum of
Squares</a> of my fit
versus the actual data, and I tell Excel to vary the 5 constants which
make the fit! The values jump around for a little while then it gives up
when it thinks it converged to a solution. Using this you can fit
arbitrary functions, neato! With this, I come up with the following
plot:
<a href="http://1.bp.blogspot.com/-23lXSPVOwuY/Tj9a8Curm3I/AAAAAAAAAEI/F5Fb9dxdvDs/s1600/springlineardamp.png"><img alt="image" src="http://1.bp.blogspot.com/-23lXSPVOwuY/Tj9a8Curm3I/AAAAAAAAAEI/F5Fb9dxdvDs/s400/springlineardamp.png" /></a>
<mathjax>$$B = 0.633740943$$</mathjax> <mathjax>$$\gamma = 0.012097581 $$</mathjax> <mathjax>$$\omega = 8.599670376
$$</mathjax> <mathjax>$$\phi = 0.693075811 $$</mathjax> <mathjax>$$C =-1.004454967 $$</mathjax> with an R^2 value of
0.968! At this point it should be noted that if I discretize my smooth
fit to have the same resolution (0.0178 g) as the accelerometer, then
see what the error is comparing the smooth fit to its own
discretization, I get an R^2 of 0.967! This means that there is a
decent amount of built-in error to these fits due to discretization on
the order of the error we’re seeing for our actual fits. Immediately we
can recognize that C should be -1, since this is just a factor relating
“free-fall” acceleration to actual iPhone acceleration. If we wanted, we
could solve for the ratio of the spring constant to the mass, but I’ll
leave that as <a href="http://www.amazon.com/Classical-Electrodynamics-Third-David-Jackson/dp/047130932X">an exercise for the
reader.</a>
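The same fit can be sketched in Python, with `scipy.optimize.curve_fit` playing the role of Excel's Solver. Since the raw iPhone data isn't available here, this runs on synthetic data generated from the post's own fitted values and quantized to the 0.0178 g resolution; the sample rate is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

# Damped cosine plus offset: the model fitted in the post
def model(t, B, gamma, omega, phi, C):
    return B * np.exp(-gamma * t) * np.cos(omega * t + phi) + C

# Synthetic "data" from the post's fitted values (assumed ~30 Hz, one minute)
true = (0.6337, 0.0121, 8.5997, 0.6931, -1.0045)
t = np.linspace(0, 60, 1800)
a = model(t, *true)
a = np.round(a / 0.0178) * 0.0178   # quantize to the accelerometer resolution

# Least-squares fit from a rough initial guess, as Solver does with RSS
p0 = (0.6, 0.01, 8.6, 0.7, -1.0)
popt, _ = curve_fit(model, t, a, p0=p0)
B, gamma, omega, phi, C = popt
print(gamma, omega, C)   # close to 0.0121, 8.5997, -1.0045
```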
If you look closely, you can see that the frequency appears to match
very well. The two lines don’t go out of phase. One problem with the fit
is the decay. The beginning and the end of the data are too high
compared to the fit, which is a problem. This implies that there is some
other kind of friction at work. Some larger objects or faster moving
objects tend to experience a frictional force proportional to the square
of the velocity. I don’t think my iPhone is large or fast (compared to a
plane for example), but I’ll try it anyway. The differential equation
is: <mathjax>$$ m \ddot{x} + F\dot{x}^2 + k x = 0 $$</mathjax> yikes. This is a tough
one because of the velocity squared term. <a href="http://www.jstor.org/pss/3620747">One trick I found
here</a> attempts a general solution for
a similar equation. They make an approximation in order to solve it, but
the approximation is pretty good in our case. Take a look at the paper
if you’re interested. The basic idea is to note that the friction term
is the only one that affects the energy. So, assuming that the energy
losses are small in a cycle, we can look at a small change in energy
with respect to a small change in time due to this force term. This
gives us an equation which can let us solve for the amplitude as a
function of time approximately! Really interesting idea. So I plugged
the following equation into the Excel Solver: <mathjax>$$a(t) = \frac{A
cos(\omega t + \phi)}{\gamma t + 1} + B$$</mathjax> Here’s the fit:
<a href="http://2.bp.blogspot.com/-fCXr2CkPGE4/Tj9a8pH2KFI/AAAAAAAAAEQ/XaulleF3xK4/s1600/springsqdamp.png"><img alt="image" src="http://2.bp.blogspot.com/-fCXr2CkPGE4/Tj9a8pH2KFI/AAAAAAAAAEQ/XaulleF3xK4/s400/springsqdamp.png" /></a>
Which uses these values: <mathjax>$$A = 0.772773705 $$</mathjax> <mathjax>$$\gamma = 0.029745368 $$</mathjax>
<mathjax>$$\omega = 8.600177692 $$</mathjax> <mathjax>$$\phi = 0.688610161 $$</mathjax> <mathjax>$$B = -1.004530009
$$</mathjax> with an R^2 value of 0.964! This fit seems to have the opposite
effect. The middle of the data is too high compared to the fit, while
the beginning and end of the data seems too low. This makes me think
that the actual friction terms involved in this problem are possibly a
sum of a linear term and a squared term. I don’t know how to make
progress on that differential equation, so I wasn’t able to fit
anything. If you try the same trick I mentioned earlier, you unfortunately
run into a problem where you can’t separate some variables that need to be
separated in the derivation. So there you have it: I wanted
to find something neat to do, and I got really cool data from just
testing the accelerometer. Stay tuned for an interesting challenge
involving some physical data from my accelerometer!</p>Physics in Sports: The Fosbury Flop2011-08-01T01:30:00-04:00Bohntag:thephysicsvirtuosi.com,2011-08-01:posts/physics-in-sports-the-fosbury-flop.html<p><a href="http://1.bp.blogspot.com/-slmXXaMCcMI/TjXnJ3qe-kI/AAAAAAAAADI/PdIuocXmC5w/s1600/Fosbury.jpg"><img alt="image" src="http://1.bp.blogspot.com/-slmXXaMCcMI/TjXnJ3qe-kI/AAAAAAAAADI/PdIuocXmC5w/s320/Fosbury.jpg" /></a>Physics
has greatly influenced the progress of most sports. There have been
continual improvements in equipment for safety or performance as well as
improvements in technique. I’d like to talk about some physics in sports
over a series of posts. Here I’ll talk about a technique improvement in
High Jumping, the Fosbury Flop. The Fosbury Flop came into the High
Jumping scene in the 1968 Olympics, where <a href="http://en.wikipedia.org/wiki/Dick_Fosbury">Dick
Fosbury</a> used the technique
to win the gold medal. The biggest difference between the Flop and
previous methods is that the jumper goes over the bar upside down
(facing the sky). This allows the jumper to bend their back so that
their arms and legs drape below the bar which lowers the center of mass
(See the picture above). Here is a video of the <a href="http://www.youtube.com/watch?v=_bgVgFwoQVE">Fosbury
Flop</a> executed very well.
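Before working through the geometry, here is a quick numeric check of the two headline numbers derived below (the ~22.6 cm center-of-mass dip and the 8.2% energy saving). Treating the jumper as a constant-density semicircle is, of course, the same cartoon the post uses:

```python
import math

# Semicircle model: arc length equals the jumper's height, so h = pi * R
h = 1.95                                # m, Dick Fosbury's height
R = h / math.pi                         # radius of the semicircle
com_below_bar = R * (1 - 2 / math.pi)   # ~0.226 m below the bar

# Energy saving for the current 2.45 m world record, if the flop puts the
# center of mass at 2.25 m instead of at bar height
h0, hf = 2.45, 2.25
saving = 100 * (h0 - hf) / h0           # ~8.2 percent

print(com_below_bar, saving)
```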
<a href="http://1.bp.blogspot.com/-UYbUVO1G8JM/TjYpGgCGOxI/AAAAAAAAADQ/wKjdCsuwLB0/s1600/flopdiagram.jpg"><img alt="image" src="http://1.bp.blogspot.com/-UYbUVO1G8JM/TjYpGgCGOxI/AAAAAAAAADQ/wKjdCsuwLB0/s320/flopdiagram.jpg" /></a>Let’s
assume Dick Fosbury is shaped like a semi-circle as he moves over the
bar. The bar is indicated as a red circle, as this is a side view. From
this diagram, we can guess his center of mass is probably near the
marked ‘x’, since most of his mass is below the bar. It is important to
recall the definition of center of mass, which is the average location
of all of the mass in an object. <mathjax>$$ \vec{R} = \frac{1}{M} \int
\vec{r} dm $$</mathjax> Note that this is a vector equation, and the integral
should be over all of the mass elements. This integral gets easier
because I’m going to assume that Dick Fosbury is a constant density
semi-circle. This means that <mathjax>$$ M = C*h $$</mathjax> where C is a constant equal
to the ratio of the mass to the height, and <mathjax>$$dm = C * dh $$</mathjax>. This is a
vector equation, so in principle we need to solve the x integral and the
y integral; however, due to the symmetry about the y-axis, the x
integral is zero. Finally we’ll convert to polar coordinates, leaving us
with: <mathjax>$$ y = \frac{1}{C \pi R} \int_0^\pi R\sin{\theta} C R
d\theta = \frac{1}{C \pi R} R (-\cos{\theta}) C R \bigg|_0^\pi
= \frac{2R}{\pi} $$</mathjax> Ok, so this is the y-coordinate of the center of
mass of our jumper relative to the bottom of the semi-circle. Now we
need to calculate relative to the top of the bar, which is roughly the
location of the top of the circle. We just need to subtract from R: <mathjax>$$ R
- \frac{2}{\pi} R = R * (1 - \frac{2}{\pi}) = \frac{h}{\pi} * (1
- \frac{2}{\pi}) $$</mathjax> Now Dick Fosbury was 1.95m tall, which gives us a
distance of 22.6 cm <span class="caps">BELOW</span> the bar! Of course he’s not a semi-circle, but
this isn’t a terrible approximation, as you can see from the video
linked above. Further, wikipedia mentions that some proficient jumpers
can get their center of mass 20 cm below the bar, which matches pretty
well with our guess. A nifty technique in physics is looking at the
point-particle system, which allows us to see the underlying motion of a
system. If you’re not familiar with this method, you collect any given
number of objects and replace them with a single point at the center of
mass of the object. We can use energy conservation now for our
point-mass instead of the entire body of the jumper.<a href="#note1"><sup>note</sup></a> In
this case, we can simply deal with the center of mass motion of the
jumper. All of my kinetic energy will be converted to gravitational
potential energy. Again this is an approximation because some energy is
spent on forward motion, as well as the slight twisting motion which
I’ll ignore. <mathjax>$$E = \frac{1}{2} mv^2 = mgh$$</mathjax> Now let’s look at some
data. Here is a plot of each world record in the high jump.
<a href="http://3.bp.blogspot.com/-DVfHxJG5b-U/TjY-0WQ1YkI/AAAAAAAAADg/GDFfNr0KiBo/s1600/worldrecords.png"><img alt="image" src="http://3.bp.blogspot.com/-DVfHxJG5b-U/TjY-0WQ1YkI/AAAAAAAAADg/GDFfNr0KiBo/s400/worldrecords.png" /></a>The
blue data show jumps before the Flop, and the red data show records
after the Flop. <strong>Note: In 1978, the straddle technique broke the
world record, being the only non-flop technique to do so since 1968.
Thanks Janne!</strong> The Flop was revealed in 1968, so I’ll assume that all
jumps before this year used a method where the center of mass of the
jumper was roughly even with the bar, while all jumps after this year
used the flop (see the previous note). Clearly something happened just
before the Flop came out, and this is something called <a href="http://en.wikipedia.org/wiki/Straddle_technique">the Straddle
technique</a>. I want to
know the percent difference in the initial energies required, so I will
calculate <mathjax>$$ 100\% * \frac{E_0-E_f}{E_0} = 100\% *
\frac{mgh_0-mgh_f}{mgh_0} = 100\% * \frac{h_0-h_f}{h_0} $$</mathjax>
where <mathjax>$$E_0$$</mathjax> is the initial energy without the force, err the flop,
and <mathjax>$$ E_f $$</mathjax> is the initial energy using the flop. Since we are using
the point-particle system, the gravitational potential energy only cares
about the center of mass of the flopper, and we need to know the height
of the center of mass for a 2.45m flop, which is the current world
record. This corresponds to a flop center of mass height of 2.25m, which
gives us an 8.2% decrease in energy using the flop (versus a method
where the center of mass is even with the bar)! The current world record
is roughly 20 cm higher than it was when the flop came out. This could
be due to athletes getting stronger, but this physics tells us that some
of the height increase could have been from the technique change. To sum
up, the high jump competition, along with many other sports, is being
exploited by physics!</p>
<p id="note1">[note] Here we’re relying on the center of mass
being equal to something called the center of gravity of the jumper. The
center of mass is as defined above. The center of gravity is the average
location of the gravitational force on the body. This happens to be the
same as the center of mass if you assume we are in a uniform
gravitational field, which is essentially true on the surface of the Earth.</p>Grains of Sand2011-07-18T11:33:00-04:00Jessetag:thephysicsvirtuosi.com,2011-07-18:posts/grains-of-sand.html<p><a href="http://3.bp.blogspot.com/-87-vnzGa9Po/TiRR2qFWprI/AAAAAAAAAF0/KsfRQhoL5Ds/s1600/SandUDunesUSoft.jpg"><img alt="image" src="http://3.bp.blogspot.com/-87-vnzGa9Po/TiRR2qFWprI/AAAAAAAAAF0/KsfRQhoL5Ds/s320/SandUDunesUSoft.jpg" /></a></p>
<p>Have you ever sat on a beach and wondered how many grains of sand there
were? I have, but I may be a special case. Today we’re going to take
that a step further, and figure out how many grains of sand there are on
the entire earth. (Caveat: I’m only going to consider sand above the
water level, since I don’t have any idea what the composition of the
ocean floor is). I’m going to start by figuring out how much beach there
is in the world. If you look at a map of the world, there are four main
coasts that run, essentially, a half circumference of the world. We’ll
say the total length of coast the world has is roughly two
circumferences. As an order of magnitude, I would say that the average
beach width is 100 m, and the average depth is 10 m. This gives a total
beach volume of <mathjax>$$ (100 m)(10 m)(4 \pi (6500 km) )= 82 km^3$$</mathjax> That’s
not a whole lot of volume. Let’s think about deserts. The Sahara desert
is by far the largest sandy desert in the world. Just as a guess, we’ll
assume that the rest of the sandy deserts amount to 20% (arbitrary
number picked staring at a map) as much area as the Sahara. According to
wikipedia the area of the Sahara is 9.4 million km^2. We’ll take, to an
order of magnitude, that the sand is 100 m deep. 10 m seems to little,
and 1 km too much. That amounts to ~1 million km^3 of sand. We’re
going to assume that a grain of sand is about 1 mm in radius. The volume
occupied by a grain of sand is then 1 mm^3. Putting that together with
our previous number for the occupied volume gives <mathjax>$$ \frac{1\cdot
10^6 km^3}{1 mm^3}=\frac{1 \cdot 10^{15}}{1\cdot
10^{-9}}=1\cdot 10^{24}$$</mathjax> That’s a lot of grains of sand. Addendum:
Carl Sagan is quoted as saying</p>
<blockquote>
<p><span class="dquo">“</span>The total number of stars in the Universe is larger than all the
grains of sand on all the beaches of the planet Earth”</p>
</blockquote>
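The post's two estimates (the desert count above and the beach-only count below) can be reproduced in a short script; every input is one of the post's own order-of-magnitude guesses:

```python
import math

# Inputs: all order-of-magnitude guesses from the post
r_earth = 6.5e6                 # m
grain_volume = 1e-9             # m^3, a ~1 mm grain

# Beaches: two circumferences of coast, 100 m wide, 10 m deep
beach_volume = 100 * 10 * (4 * math.pi * r_earth)   # ~8e10 m^3, i.e. ~82 km^3
beach_grains = beach_volume / grain_volume          # ~1e20

# Deserts: the Sahara (9.4e6 km^2 = 9.4e12 m^2) plus 20%, 100 m of sand
desert_volume = 9.4e12 * 1.2 * 100                  # m^3, ~1e15 (~1e6 km^3)
desert_grains = desert_volume / grain_volume        # ~1e24

print(beach_grains, desert_grains)
```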
<p>If we just use our beach volume, that gives a total number of grains of
sand as ~1×10^20, which is large, but not as large as what we found
above. Is that less than the number of stars in the universe? Well,
that’s a question for another day (or google), but the answer is, to our
best estimate/count, yes.</p>Lifetime of Liquid Water2011-07-10T22:07:00-04:00Jessetag:thephysicsvirtuosi.com,2011-07-10:posts/lifetime-of-liquid-water.html<p><a href="http://3.bp.blogspot.com/-fyjvPBm_INs/ThpaZFszL5I/AAAAAAAAAFw/6sJBTUj905c/s1600/water_drop.jpg"><img alt="image" src="http://3.bp.blogspot.com/-fyjvPBm_INs/ThpaZFszL5I/AAAAAAAAAFw/6sJBTUj905c/s320/water_drop.jpg" /></a>
Apologies for the hiatus recently, it’s been a busy time (when isn’t
it). I hope to get back to talking about experiments soon, but for now I
wanted to write up a quick problem I thought up a while back. The
question is this: how long does a molecule of <span class="caps">H2O</span> on earth remain in the
liquid state, on average? I’m going to treat this purely as an order of
magnitude problem. I’m also going to have to start with one assumption
that is almost certainly inaccurate, but makes things a lot easier. I’m
going to assume perfect mixing of all of the water on earth. Given that
assumption, I really only need to figure out two things. The first is
how much liquid water there is on earth. The second is how much liquid
water leaves the liquid phase each year. Let’s start with the total
amount of liquid water on earth. This is relatively easy to estimate. I
happen to know that about 70% of the earth’s surface is covered in
water. Most all of that is ocean. To an order of magnitude, the average
depth of the ocean must be 1 km, as it is certainly not 100 m or 10 km
[1]. For a thin spherically shell, the volume of the shell is roughly <mathjax>$$
4 \pi r_e^2 \Delta r $$</mathjax> where r_e is the radius of the earth. Thus,
the total volume of water on the earth is <mathjax>$$.7*4 \pi r_e^2 (1 km)$$</mathjax>
Now, we need to figure out how much H20 leaves the liquid phase every
year. To an order of magnitude, it rains 1 m everywhere on earth each
year, it’s not .1 m or 10 m [2]. I’m going to ignore any
freezing/melting in the ice caps, assuming that is small fraction of the
water that leaves the liquid phase each year. Since we have a closed
system, all the water that rains must have left the liquid phase. So, on
average, the total volume of water that leaves the liquid phase is <mathjax>$$4
\pi r_e^2 (1 m) $$</mathjax> Thus, the fraction of liquid water that changes
phase per year is <mathjax>$$ \frac{4 \pi r_e^2 (1m)}{.7*4\pi r_e^2 (1
km)} = .0014 $$</mathjax> This means that, given my assumption of perfect mixing,
in somewhere around 1/.0014 = 700 yr all of the water on earth will have
cycled through the vapor phase. Since we’re only operating to an order
of magnitude, I’ll call this 1000 years. This is the answer to our
question: if every molecule has been in the vapor phase once in 1000
years, then we expect a molecule to stay in the liquid phase for 1000
years, on average.</p>
<p>[1] According to wikipedia, this is really about 4 km, so we’re
underestimating a bit. [2] According to wikipedia, this is spot on (.99
m on average).</p>Coriolis Effect on a Home Run2011-07-03T11:52:00-04:00Corkytag:thephysicsvirtuosi.com,2011-07-03:posts/coriolis-effect-on-a-home-run.html<hr />
<p><a href="http://3.bp.blogspot.com/-LxzlQ5iNaaI/Ta458E_AwvI/AAAAAAAAAMc/D0Z4vZS7IzA/s1600/phillies_stadium.jpg"><img alt="image" src="http://3.bp.blogspot.com/-LxzlQ5iNaaI/Ta458E_AwvI/AAAAAAAAAMc/D0Z4vZS7IzA/s320/phillies_stadium.jpg" /></a>
Citizen’s Bank Park</p>
<hr />
<p>I like baseball. Well, technically, I like <del>laying</del>[3] lying on the
couch for three hours half-awake eating potato chips and mumbling
obscenities at the television. But let’s not split hairs here.
Anyway, out of curiosity and in partial atonement for the sins of my
past [1] I would now like to do a quick calculation to see how much
effect the Coriolis force has on a home-run ball.
The Coriolis force is one of the artificial forces we have to put in if
we are going to pretend the Earth is not rotating. For a nice intuitive
explanation of the Coriolis force see <a href="http://www.wired.com/wiredscience/2011/04/coriolis-force-in-a-wipeout-rotating-slide/">this
post</a>
over at Dot Physics.
Let’s now consider the following problem. Citizen’s Bank Park (home to
the Philadelphia Phillies) is oriented such that the line from home
plate to the foul pole in left field runs essentially South-North.
Imagine now that Ryan Howard hits a hard shot down the third base line
(that is, he hits the ball due North). Assuming it is long enough to be
a home run, how will the Coriolis force affect the ball’s trajectory?
This is a well-posed problem and we could solve it as exactly as we
wanted. But please don’t <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0Bwd5hrDOxWsrOGZmMWZkYzQtM2M2Ny00NjlmLTgyYmMtNTQwZjI1ODU1NWI4&hl=en_US">make
me</a>.
It’s icky and messy and I don’t feel like it. So let’s do some
dimensional analysis! Hooray for that!
So what are the relevant physical quantities in this problem? Well,
we’ll certainly need the angular velocity of the Earth and the speed of
the baseball. We’ll also need the acceleration due to gravity. Alright,
so what do we want to get out of this? Well, ideally we’d like to find
the distance the ball is displaced from its current trajectory. So is
there any way we can combine an angular velocity, linear velocity and
acceleration to get a displacement?
Let’s see. We can write out the dimensions of each in terms of some
length, L, and some time, T. So:
<mathjax>$$ \left[ \Omega \right] = \frac{1}{T} $$</mathjax>
<mathjax>$$ \left[ v \right] = \frac{L}{T} $$</mathjax>
<mathjax>$$ \left[ g \right] = \frac{L}{T^2} $$</mathjax>
where we have used the notation that [some quantity] = units of that
quantity. Combining these in a general way gives: <mathjax>$$ L = \left[
v^{\alpha} \Omega^{\beta} g^{\gamma} \right] = \left(
\frac{L}{T}\right)^{\alpha}\left(
\frac{1}{T}\right)^{\beta}\left( \frac{L}{T^2}\right)^{\gamma}
= L^{\alpha+\gamma} T^{-(\alpha+\beta+2\gamma)}$$</mathjax> Since we
just want a length scale here, we need: <mathjax>$$\alpha+\gamma =
1\~\~\~\mbox{and}\~\~\~\alpha+\beta+2\gamma = 0. $$</mathjax>
We can fiddle around with the above two equations to get two new
equations that are both functions of alpha. This gives: <mathjax>$$\beta =
\alpha - 2\~\~\~\mbox{and}\~\~\~\gamma = 1 - \alpha. $$</mathjax>
Unfortunately, we have two equations and three unknowns, so we have an
infinite number of solutions. I’ve listed a few of these in the Table below.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-_wBdDaNzEP4/Tg_of5pylyI/AAAAAAAAANE/CU2Oe225_UI/s1600/table.png"><img alt="image" src="http://4.bp.blogspot.com/-_wBdDaNzEP4/Tg_of5pylyI/AAAAAAAAANE/CU2Oe225_UI/s320/table.png" /></a>
Ways of getting a length</p>
<hr />
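The table's exponent families, and the alpha = 3 deflection the post eventually settles on, can be reproduced with a few lines (a sketch; the loop range shown is arbitrary):

```python
# Each alpha gives exponents beta = alpha - 2 and gamma = 1 - alpha,
# so that v^alpha * Omega^beta * g^gamma has units of length
for alpha in range(0, 6):
    beta, gamma = alpha - 2, 1 - alpha
    print(f"alpha={alpha}: v^{alpha} Omega^{beta} g^{gamma}")

# The alpha = 3 case chosen later in the post, with the post's numbers
v = 50.0        # m/s, ball speed
Omega = 7e-5    # rad/s, Earth's rotation rate
g = 9.8         # m/s^2
dx = v**3 * Omega / g**2
print(dx)       # ~0.09 m, i.e. ~10 cm
```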
<p>At this point, we have taken Math as far as we can. We’ll now have to
use some physical intuition to narrow down our infinite number of
solutions to one. Hot dog! One way we can choose from these expressions
is to see which ones have the correct dependencies on each variable. So
let’s consider what we would expect to happen to the deflection of our
baseball by the Coriolis force if we changed each variable. What happens
if we were to “turn up” the gravity and make g larger? If we make g much
larger, then a baseball hit at a given velocity will not be in the air
as long. If the ball isn’t in the air as long, then it won’t have as
much time to be deflected. So we would expect the deflection to decrease
if we were to increase g. This suggests that g should be in the
denominator of our final expression. What happens if we turn up the
velocity of the baseball? If we hit the ball harder, then it will be in
the air longer and thus we would expect it to have more time to be
deflected. Since increasing the velocity would increase the deflection,
we would expect v to be in the numerator. What happens if we turn up the
rotation of the Earth? Well, if the Earth is spinning faster, it’s able
to rotate more while the ball is in the air. This would result in a
greater deflection in the baseball’s path. Thus, we would expect this
term to be in the numerator. So, using the above criteria, we have
eliminated everything on that table with alpha less than 3 based on
physical intuition. Unfortunately, we still have an infinite number of
solutions to choose from (i.e. all those with alpha greater than or
equal to 3). But, we <span class="caps">DO</span> have a candidate for the “simplest” solution
available, namely the case where alpha = 3. Since we have exhausted our
means of winnowing down our solutions, let’s just go with the alpha = 3
case. Our dimensional analysis expression for the deflection of a
baseball is then <mathjax>$$ \Delta x \sim \frac{v^3 \Omega}{g^2} $$</mathjax>
Plugging in typical values of <mathjax>$$ v =
50\~\mbox{m/s}\~\~\~(110\~\mbox{mi/hr}) $$</mathjax> <mathjax>$$ \Omega = 7 \times
10^{-5}\~\mbox{rad/s} $$</mathjax> <mathjax>$$ g = 9.8\~\mbox{m/s}^2 $$</mathjax> we get <mathjax>$$
\Delta x \approx 0.1\~\mbox{m} = 10\~\mbox{cm}. $$</mathjax> That’s all fine
and good, but which way does the ball get deflected? Is it fair or foul?
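Before sorting out the direction, the arithmetic above is easy to check in a few lines of Python; the latitude and launch angle below are assumptions for illustration (roughly Philadelphia and a 45 degree launch), matching the full result quoted later in the post:

```python
from math import cos, sin, tan, radians

v = 50.0      # launch speed, m/s (110 mi/hr)
Omega = 7e-5  # Earth's rotation rate, rad/s
g = 9.8       # m/s^2

# dimensional-analysis estimate, the alpha = 3 case
dx_scaling = v**3 * Omega / g**2               # ~0.09 m

# full result quoted later in the post, with assumed angles
phi = radians(40)    # latitude (assumption: roughly Philadelphia)
alpha = radians(45)  # launch angle (assumption)
dx_full = -(4 / 3) * (Omega * v**3 / g**2) * cos(phi) * sin(alpha)**3 \
          * (1 - 3 * tan(phi) / tan(alpha))    # ~0.05 m, deflected East
```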
Well, remembering that the Coriolis force is given by: <mathjax>$$ {\bf F} =
-2m{\bf \Omega} \times {\bf v} $$</mathjax> and utilizing Ye Olde Right Hand
Rule, we see that a ball hit due north will be deflected to the East. In
the case of Citizen’s Bank Park, that is fair territory. But how good is
our estimate? Well, I did the full calculation (which you can find
<a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0Bwd5hrDOxWsrOGZmMWZkYzQtM2M2Ny00NjlmLTgyYmMtNTQwZjI1ODU1NWI4&hl=en_US">here</a>)
and found that the deflection due to the Coriolis force is given by <mathjax>$$
\Delta x =-\frac{4}{3}\frac{\Omega v^3_0}{g^2} \cos \phi
\sin^3 \alpha \left[1 -3 \tan \phi \cot \alpha \right] $$</mathjax>
where phi is the latitude and alpha is the launch angle of the ball. We
see that this is essentially what we found by dimensional analysis up to
that factor of 4/3 and some geometrical terms. Not bad! Plugging in the
same numbers we used before, along with the appropriate latitude and a
45 degree launch angle we find that the ball is deflected by: <mathjax>$$ \Delta
x = 5\~\mbox{cm}. $$</mathjax> For comparison, we note that the diameter of a
baseball is 7.5 cm. So in the grand scheme of things, this effect is
essentially negligible. [2] That wraps up the calculation, but I’m
certain that many of you are still a little wary of this voodoo
calculating style. And you should be! Although dimensional analysis will
give you a result with the proper units and will <em>often</em> give you
approximately the right scale, it is not perfect. But, it can be
formalized and made rigorous. The rigorous demonstration for dimensional
analysis is due to Buckingham and his famous pi-theorem. The original
paper can be found behind a pay-wall
<a href="http://prola.aps.org/abstract/PR/v4/i4/p345_1">here</a> and a really nice
set of notes can be found
<a href="http://www.math.ntnu.no/~hanche/notes/buckingham/buckingham-a4.pdf">here</a>.
It’s a pretty neat idea and I highly recommend you check it out!
Unnecessary Footnotes:
[1] Once in college I argued with a meteorologist named Dr. Thunder over
the direction of the Coriolis force on a golf ball for the better half
of the front nine at Penn State’s golf course. I was wrong. Moral of the
story: don’t play golf with meteorologists.
[2] For a counterargument, see Fisk et al. (1975) [3] Text has been
corrected to illustrate our enlightenment by a former English major as
to the difference between ‘lay’ and ‘lie’ through the following story:
‘Once in a college psych class, a young student said “It’s too hot.
Let’s lay down.” A mature student, a journalist, asked, “Who’s Down?” ’</p>Counting Critters2011-06-28T00:19:00-04:00Corkytag:thephysicsvirtuosi.com,2011-06-28:posts/counting-critters.html<hr />
<p><a href="http://4.bp.blogspot.com/-H4DbCndvDRw/TgfuQZt7aJI/AAAAAAAAAM8/WLfVRd4USK8/s1600/marx_horse.jpg"><img alt="image" src="http://4.bp.blogspot.com/-H4DbCndvDRw/TgfuQZt7aJI/AAAAAAAAAM8/WLfVRd4USK8/s320/marx_horse.jpg" /></a>
This picture allows us to set a lower bound on the number of <a href="http://www.youtube.com/watch?v=9IrCgCKrv8U">creatures</a> that ever lived of \~4.</p>
<hr />
<p>We recently had a big book sale [1] here in town where books were being
sold for about a quarter. Needless to say, I bought far more than I’ll
probably ever need or read. One of the books I bought was called
<em>General Paleontology</em> by A. Brouwer [2]. Anyways, I didn’t really make
it too far in the book. In fact, I only made it to the first sentence of
the second paragraph of the first chapter, when I encountered this line:
<em>“The number of individuals which has populated the Earth since life
began is beyond estimation.”</em> Horse feathers, I say! Horse Feathers!
The number of things that ever lived may very well be <em>unknowable</em>, but
it’s certainly not beyond estimation. So below, Alemi and I each provide
an estimate for the total number of creatures that have ever lived on
Earth. We flipped a coin and I lost, so I guess I’ll go first. My
estimation will be a genuine guess-y kind of estimate that doesn’t draw
too heavily on too many physical considerations. Instead, I will
formulate a series of assumptions and base my final answer on that. So
assuming my assumptions are valid, the answer should give a reasonable
estimate to the total number of creatures that have ever lived. My
assumptions are as follows: (1) The number of individuals that have ever
lived will be almost entirely dominated by the number of bacteria that
have ever lived on Earth. So to leading order, all the life that has
ever lived on Earth is bacteria (or something similar). (2) Life began
at some time, T, in the past and immediately spread to all places on
Earth. Hey, man, it’s the power of geometric progression! (3) The
majority of life is found within h = 1 m from the surface of water. I
picked this number since it’s roughly (order of magnitude) how far down
I can see in really clear water. Most of the life will be photosynthetic
and thus need a fair amount of sunlight. (4) The number density of
organisms in water is n \~ 10^5 per cubic centimeter. I have no real justification for this.
(5) The average lifetime of an organism is t \~ 1 hr. Alright, so if
these assumptions are valid (a big if [3]), then the following
prediction should be fairly accurate. So the total volume in which these
creatures may live is just the shell of the Earth down to about a meter:
<mathjax>$$ V = 4 \pi R_{\oplus}^2 h $$</mathjax> where R = 6 * 10^6 m is the radius
of the Earth. Alright, so the number of creatures at any given moment
will be the volume times the number density which I will take to be n \~
10^5. That will give us the total number of creatures at any given
moment. But we want it for <em>all</em> the moments. So I will take the total
number of “generations” to be the time life has been around divided by
the average lifetime of a given organism. Putting this all together
gives <mathjax>$$N = 4 \pi R_{\oplus}^2 h \, n \times \frac{T}{t} $$</mathjax> Plugging in
some numbers I get: <mathjax>$$ N \sim 10^{39}
\left(\frac{h}{1\~\mbox{m}}\right)\left(\frac{n}{10^5\~\mbox{cm}^{-3}}\right)\left(\frac{T}{3\times10^9\~\mbox{yrs}}
\right)\left(\frac{t}{1\~\mbox{hr}} \right)^{-1}$$</mathjax> So for the
nominal values I’ve plugged in, I’ll get that about 10^39 creatures
have ever lived on Earth [4]. I’ve left my equation in a dimensionless
form above, so if you think my individual estimates are garbage, you can
easily plug in your own estimates to see how things change. Except for
the completely arbitrary factor for the average number density of
organisms per cubic centimeter of water, I feel alright about this
estimate. And I’m fairly confident that the number density will not be
off more than about 3 orders of magnitude either high or low. So my
final estimate is: <mathjax>$$ N \sim 10^{39 \pm 3} $$</mathjax> I promised that there
would be two estimates, so I present below in picture form, Alemi’s
back-of-the-wrapper estimate.</p>
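<p>As a sanity check on Corky’s arithmetic, here is a short Python sketch using the nominal values assumed above:</p>

```python
from math import pi, log10

R_earth = 6e6     # m, radius of the Earth
h = 1.0           # m, depth of the sunlit layer (assumption 3)
n = 1e5           # organisms per cm^3 (assumption 4)
T = 3e9 * 3.15e7  # s, ~3 billion years of life (assumption 2)
t = 3600.0        # s, ~1 hr average lifetime (assumption 5)

# shell volume of the Earth down to depth h, converted m^3 -> cm^3
volume_cm3 = 4 * pi * R_earth**2 * h * 1e6
# critters alive at once, times number of "generations"
N = volume_cm3 * n * (T / t)
print(f"N ~ 10^{log10(N):.0f}")  # ~10^39
```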
<hr />
<p><a href="http://2.bp.blogspot.com/-Eq37GVo3d8A/TggHx19J1pI/AAAAAAAAANA/Y0H3DTVb2so/s1600/jj_phys.jpg"><img alt="image" src="http://2.bp.blogspot.com/-Eq37GVo3d8A/TggHx19J1pI/AAAAAAAAANA/Y0H3DTVb2so/s400/jj_phys.jpg" /></a>
Click for the full Jimmy Johns experience</p>
<hr />
<p>To explain these scribbles, I now cede the floor (and the mic) to Alemi.
[ <span class="caps">SEAMLESS</span> <span class="caps">TRANSITION</span> ] So, when Corky posed this question to me while
we ate some tasty sandwiches, another approach came to mind. Namely, I
wanted to try to estimate the number of critters that have ever lived by
putting some kind of energy bound on the number. Ultimately, all
critters come from the sun. That is, all life on Earth is only able to
exist in so much as it consumes energy, and for almost all life
[ignoring the under ocean heat vent guys], the energy they consume, one
way or another, comes from the sun. So, let’s estimate the number of
critters in four parts: (1) We need the energy the Earth receives from
the sun. (2) We need to estimate the energy density of life. (3) These
two things, combined with a characteristic length or volume scale for a
critter, would enable us to figure out the rate at which the Earth could
produce critters. (4) Assuming this rate and a time scale for how long
life has been around on the Earth would give us a total number of
critters. Let’s begin. (1) Energy from the sun. Corky and I happen to
know that the solar flux at the Earth is roughly 1000 W/m^2. Multiplying
this by half the surface area of the earth gives us a rough total solar
flux <mathjax>$$ (1000 \text{ W/m}^2) ( 2\pi R_{\oplus}^2) \sim 2 \times
10^{17} \text{ W} $$</mathjax> (2) Energy density of critters. For this we used
the bag of potato chips we had on hand, assuming that all life matter
has roughly the same energy content. The bag of chips was 150 calories
in a serving size of 28 grams. This and assuming that life forms are the
density of water gives us a life energy density <mathjax>$$ \left( \frac{ 150
\text{ kcal} }{ 28 \text{ g}} \right)\left( \frac{ 1 \text{ g}}{
\text{ cm}^3} \right) \sim 2 \times 10^4 \text{ J/cm}^3 $$</mathjax> (3)
Length scale of critters. We assumed that bacteria are the most abundant
life form, so we chose a length scale of 100 microns. Putting these
pieces together gives <mathjax>$$ \frac{ 2 \times 10^{17} \text{ W} }{
\left( 2 \times 10^4 \text{ J/cm}^3 \right) \left( 100\
\mu\text{m} \right)^3 } \sim 10^{19} \text{ 1/s} $$</mathjax> which is our
estimated critter creation rate. (4) Time scale for life generation.
Finally, we estimate that life creation has been chugging along on Earth
for about 3 billion years. This gives us our final estimate for the
number of critters that have ever lived: <mathjax>$$ \left( 10^{19} \text{
1/s} \right) \left( 3 \times 10^9 \text{ years} \right) \sim
10^{36} $$</mathjax> So, there we have it. If the Earth was 100% efficient at
converting solar energy into life, and that life is characteristically
the energy density of a potato chip and the size of a bacteria, we
should have had 1 billion billion billion billion critters ever. To make
us feel a little better we would like to tack on a 10% efficiency, since
we don’t actually expect the Earth to be 100% efficient, and because 10%
seems to be the rule of thumb efficiency estimation used when it comes
to food chains and the like, so our final estimate, motivated purely by
physics is <mathjax>$$ \boxed{ \text{ Total number of critters ever } =
10^{35\pm 3} } $$</mathjax> This number seems pretty good, and is in general
agreement with Corky’s earlier method. Notice that the only parameter we
are a little worried about is the length scale, especially because
our final answer depends on the inverse cube of this number, so, our
error is probably something around 3 orders of magnitude, as before,
since an order of magnitude goof in the size would cause 3 orders of
magnitude error in the final estimate. So there you have it. Two
not-egregiously-horrible estimates for the total number of critters that
have ever lived. All in all, I think that book was a quarter well spent!
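For completeness, Alemi’s back-of-the-wrapper arithmetic can be sketched numerically too, using the same assumed inputs (potato-chip energy density, 100 micron critters, 10% efficiency):

```python
from math import pi

solar_flux = 1000.0  # W/m^2 reaching the surface
R_earth = 6.4e6      # m
# half the Earth's surface area catches sunlight at once
power_in = solar_flux * 2 * pi * R_earth**2      # W, ~2e17

# potato-chip energy density: 150 kcal per 28 g, at the density of water
energy_density = 150 * 4184 / 28                 # J/cm^3, ~2e4

critter_energy = energy_density * (1e-2)**3      # J per (100 micron)^3 critter
rate = power_in / critter_energy                 # critters per second, ~1e19

T = 3e9 * 3.15e7                                 # 3 billion years, in seconds
N = 0.1 * rate * T                               # with 10% efficiency, ~1e35
```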
Unnecessary footnotes: [1] This is a bit misleading. They actually sold
books of all sizes. [2] I like to read about paleontology and such just
in case I’m ever sent back in time. This way, I’ll know what dinosaurs
are safe to eat. [3] Here’s a bigger if: if [4] <span class="caps">FUN</span> <span class="caps">FACT</span>: The total
number of atoms in all the people on Earth is roughly 10^39. A proof of
this is left as an exercise for the reader.</p>Day in the life of Clicky2011-06-01T16:34:00-04:00Matttag:thephysicsvirtuosi.com,2011-06-01:posts/day-in-the-life-of-clicky.html<p><a href="http://4.bp.blogspot.com/-og-KOzfbR1s/TeepA_j_e3I/AAAAAAAAB64/jboORBAZGsQ/s1600/soslow.png"><img alt="image" src="http://4.bp.blogspot.com/-og-KOzfbR1s/TeepA_j_e3I/AAAAAAAAB64/jboORBAZGsQ/s400/soslow.png" /></a>Remember
when you first learned about other planets and their many fun facts? You
were probably bombarded by such truisms as: “Jupiter is approximately
the mass of 318 Earths, has an orbital period that is 4,300 Earth days,
is made out of pure love, and is mostly transparent.” Well, I was
curious about what Clicky’s day was like in terms of Earth days. When
did Clicky get to sleep, when did he eat dinner, what is his orbital
radius and eccentricity? To do this, I looked at only the 3rd column of
the data that you may or may not have downloaded yesterday from
<a href="http://thevirtuosi.blogspot.com/2011/05/clickin-night-away.html">here</a>.
Starting out, it should be noted that this is not an ordinary time
series where you have some value measured at given intervals. It is
actually a series of time points at which a click was entered. I thought
it would be most natural to go ahead and bin these to give a sense of
the access to Clicky over the past two months. It looks something like
this:
<a href="http://4.bp.blogspot.com/-CGv4NMXIA_Y/TeenyMR2SEI/AAAAAAAAB6g/_GymFecaR4c/s1600/rawtimehist.png"><img alt="image" src="http://4.bp.blogspot.com/-CGv4NMXIA_Y/TeenyMR2SEI/AAAAAAAAB6g/_GymFecaR4c/s400/rawtimehist.png" /></a>
Well, that wasn’t as informative as I had hoped. We see some spikes here
and there with a general trend toward neglect and abandonment towards
the end of the second month. What if we take these days and bin them
into one single day worth of traffic? We get this:
<a href="http://1.bp.blogspot.com/-bmZb_JtN4yg/TefICYE6zMI/AAAAAAAAB7I/xqsCiUjBtQE/s1600/oneday-hist.png"><img alt="image" src="http://1.bp.blogspot.com/-bmZb_JtN4yg/TefICYE6zMI/AAAAAAAAB7I/xqsCiUjBtQE/s400/oneday-hist.png" /></a>
Again, not so informative yet, but we are definitely starting to see
some structure in the day. In particular, there is a lot of activity in
the afternoon and evening with a definite lull around 5pm. There is also
a distinct minimum around 5am. It appears that Clicky is on the average
most active between noon and 9pm, getting a break through most of the
night.
Next, let’s look at the autocorrelation of the times. For standard time
series, the autocorrelation is defined to be
<mathjax>$$ C_{ss}(\tau) = \int_{-\infty}^{\infty}s(t)\, \bar{s}(t-\tau)\,
dt $$</mathjax>
which measures the amount of similarity in a signal as a function of the
time separation between two points. Again, we don’t have a continuous
signal, so our autocorrelation function is instead a histogram of the
all-pairs differences in the data points that we do have. That is, start
at the first time point and subtract it from all of the other time
points in the series. Then, move to the next data point and subtract it
from all the subsequent times while keeping track of all of these
differences. This is our autocorrelation in real space.
<a href="http://2.bp.blogspot.com/-TcTlLQL4ccw/TefSgRN3KrI/AAAAAAAAB7Q/aLHgDXT-ZOU/s1600/time-time2-raw.png"><img alt="image" src="http://2.bp.blogspot.com/-TcTlLQL4ccw/TefSgRN3KrI/AAAAAAAAB7Q/aLHgDXT-ZOU/s400/time-time2-raw.png" /></a>
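The all-pairs procedure just described can be sketched in a few lines of NumPy; the timestamps below are synthetic stand-ins, not the actual Clicky data (which lives in the linked dump):

```python
import numpy as np

# stand-in click timestamps in minutes over a week
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 60 * 24 * 7, size=500))

# all-pairs positive differences t_j - t_i, exactly as described above
diffs = times[None, :] - times[:, None]
diffs = diffs[diffs > 0]

# a histogram of those differences is the "real space" autocorrelation
counts, edges = np.histogram(diffs, bins=200)
```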
We can also zoom in and smooth the data a bit
<a href="http://2.bp.blogspot.com/-VRWxWGORJt0/TefSrrD-DHI/AAAAAAAAB7Y/FncLYViZ3Jw/s1600/time-time2.png"><img alt="image" src="http://2.bp.blogspot.com/-VRWxWGORJt0/TefSrrD-DHI/AAAAAAAAB7Y/FncLYViZ3Jw/s400/time-time2.png" /></a>Ah,
now there we go. We see a distinct over-abundance of time differences
around 1 day, 2 days, etc. What is the primary oscillation we see in the
correlation? To see that, let’s look at the frequency space
autocorrelation and plot its power, or square.
<a href="http://2.bp.blogspot.com/-CKFrfzY37Uc/TefTfvhS5pI/AAAAAAAAB7g/njevnB4JPac/s1600/time-time-power.png"><img alt="image" src="http://2.bp.blogspot.com/-CKFrfzY37Uc/TefTfvhS5pI/AAAAAAAAB7g/njevnB4JPac/s400/time-time-power.png" /></a>Finally,
we get the primary component of the variation of Clicky’s visits
throughout the 2 month period that he was in operation. It occurs, to
within error, at 1 day. Not very surprising at all. I’m sure we could
look more closely at the peak and its width, but I am satisfied to say
that Clicky’s day is defined by the Earth day to within a few percent.
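To see how that 1-day peak falls out of a power spectrum, here is a sketch on a synthetic hourly series with a built-in daily cycle (the real analysis, of course, used the actual click times):

```python
import numpy as np

# hypothetical hourly click counts over ~60 days with a daily cycle
rng = np.random.default_rng(1)
hours = np.arange(60 * 24)
signal = rng.poisson(5 + 4 * np.sin(2 * np.pi * hours / 24))

# power spectrum of the mean-subtracted series
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1.0)  # cycles per hour

peak_period = 1.0 / freqs[np.argmax(power)]  # hours; expect ~24
```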
*In response to more messages in Clicky, we agree that it is “So slow.”
Rest assured, management is looking into the problem.</p>Clicky Data v1.02011-05-31T11:36:00-04:00Matttag:thephysicsvirtuosi.com,2011-05-31:posts/clicky-data-v1-0.html<p><a href="http://2.bp.blogspot.com/-2c6Bjhd3XmI/TeUNGvesUiI/AAAAAAAAB6I/5ndWL1d8AXQ/s1600/snap.png"><img alt="image" src="http://2.bp.blogspot.com/-2c6Bjhd3XmI/TeUNGvesUiI/AAAAAAAAB6I/5ndWL1d8AXQ/s400/snap.png" /></a>
As we sift through the Clicky data Corky presented yesterday, let us not
forget the Clicky that came before.
Let us not forget the Great Server Move of 2011 and the great pains we
felt while waiting for Clicky to come back to us: Corky sat in his room
crying into his pillow, Alex was pressing his arrow keys longingly, and
I just couldn’t seem to get up in the morning. Let us not forget the
many hours in which untold numbers of anonymous internet users tried in
vain to spell simple words using only discrete steps on a finite
lattice. Let us not forget the Great Server Crash that caused the
physics department to be charged extra for data usage. Let us not forget
Clicky v1.0.
So today we remember him, our beloved Clicky v1.0, by releasing his data
as well. It can be found close to the other Clicky data dump from
yesterday at this location:
<a href="http://www.mattbierbaum.com/clicky/clickydat2.tar.bz2">Download Clicky
v1.0</a>
Be warned, this was actually available to the anonymous internet and
there are a few things that may surprise you. That being said, the last
half of this data set is actually comprised of multiple users
interacting with Clicky at once so it should have different statistics
than yesterday’s data, though we have yet to sort through that too. More
of our analysis is to follow.
Also, props to whoever made the dinosaur.</p>Clickin’ the Night Away2011-05-31T00:07:00-04:00Corkytag:thephysicsvirtuosi.com,2011-05-31:posts/clickin-the-night-away.html<p>Hey, everybody! Do you remember
<a href="http://thevirtuosi.blogspot.com/2011/04/collective-wanderings.html">Clicky</a>
[1]? Well, we finally got around to analyzing data, so here goes. But
first, a brief summary.
Matt, Alemi and I came up with the idea for Clicky in the beginning of
April. Perhaps “idea” is a bit too generous… it was really just a
passing thought: “Hey wouldn’t it be cool if we had an internet Ouija
board?” It was just a stupid lunch-time discussion that wouldn’t have
gone anywhere had Alemi and Matt not taken it as some sort of challenge.
So after a few hours that night we had Clicky.
To say we had some goal with Clicky would be an overstatement. But, if
anything, we were kind of hoping to see some sort of Brownian motion. We
figured if we had lots of people pulling on the same dot, some kind of
Brownian walk would show up. This was grossly overestimating how many
people actually view this blog and it turned out that most of the time
Clicky was moved by one person at a time. Anyway, what we did end up
finding was more interesting than just a Brownian random walk…
Behold, in its full 133,000 point glory, Clicky!</p>
<hr />
<p><a href="http://1.bp.blogspot.com/-Ugbn0uGZnOU/TeRUMSPqMLI/AAAAAAAAAMk/KTgArv-WShU/s1600/clicky_far_eq.png"><img alt="image" src="http://1.bp.blogspot.com/-Ugbn0uGZnOU/TeRUMSPqMLI/AAAAAAAAAMk/KTgArv-WShU/s400/clicky_far_eq.png" /></a>
Far View of Clicky. Click to super-size.</p>
<hr />
<p>Well, I guess that isn’t that impressive. But you can click on it for a
larger view or download the data and plot it yourself from
<a href="http://www.mattbierbaum.com/clicky/clickydat.tar.bz2">here</a>.
Alternatively, you can view a super-duper large version of the above
picture that will almost certainly make your browser sad (seriously,
it’s big) at his website
<a href="http://www.mattbierbaum.com/clicky/clickyfull.png">here</a>.
We note that each step taken by Clicky is 1 unit long, and the above
image goes about 2500 on the y-axis and about 5000 in the x-axis. Though
we make no explicit comparison between our humble traveler and the great
men of lore, we do note that Clicky’s long and tortuous path both begins
and ends in Ithaca [2].
Now the big picture is all well and good for some folks, but let’s zoom
in a bit. We’ll now zoom into a portion that is about 1000 by 1000 and
is located about in the middle of the Clicky map.</p>
<hr />
<p><a href="http://2.bp.blogspot.com/-xGLDJGxVTDo/TeRYoMvFAiI/AAAAAAAAAMo/ejKB6Xd3D-I/s1600/clicky_mid.png"><img alt="image" src="http://2.bp.blogspot.com/-xGLDJGxVTDo/TeRYoMvFAiI/AAAAAAAAAMo/ejKB6Xd3D-I/s400/clicky_mid.png" /></a>
Mid-level view of Clicky. Click for a more cromulent view.</p>
<hr />
<p>So this view is pretty neat. Whereas the previous view appeared largely
random, we start to see some structure here. We can see that some brave
soul has made a spiral that, at its biggest, goes for about a hundred
squares (remember, you could only see ten squares at a time!). We can
also see that most of the steps are small and tend to cluster, but every
now and again there is a large jump to uncharted territory.
Neat! Let’s zoom in a bit more. Now we will zoom down to about a 100 by
100 square.</p>
<hr />
<p><a href="http://1.bp.blogspot.com/-2nvaMlGiAM4/TeRaNsOYlpI/AAAAAAAAAMs/PMHfhKDroF4/s1600/clicky_near.png"><img alt="image" src="http://1.bp.blogspot.com/-2nvaMlGiAM4/TeRaNsOYlpI/AAAAAAAAAMs/PMHfhKDroF4/s400/clicky_near.png" /></a>
Near view. Note the primitive form of communication.</p>
<hr />
<p>So this is neat. We see some very non-random structures. We see spelled
out the phrase “<span class="caps">IM</span> <span class="caps">IN</span> <span class="caps">FIVE</span>-<span class="caps">TEN</span>” (Phys 510 is the required graduate
physics lab here). This was actually not uncommon. There are lots of
little communications that go on throughout the Clicky map. Most are
just people marking their territory, but there are some fun ones. If you
find anything neat, let us know! (<span class="caps">MILD</span> <span class="caps">WARNING</span>: As this was open to The
Internet, we make no claims that everything written is appropriate, but
the worst thing I’ve seen so far is “butts lol.” So I think you’re
safe).
Now dedication to write something is fine, but how about some real
dedication. I found this little Italian gem here, although its means of
creation are suspect, to say the least…</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-R6-E1Gf25S4/TeRdkV2Cu8I/AAAAAAAAAMw/rYY3YpIaYgc/s1600/clicky_nonrandom.png"><img alt="image" src="http://3.bp.blogspot.com/-R6-E1Gf25S4/TeRdkV2Cu8I/AAAAAAAAAMw/rYY3YpIaYgc/s400/clicky_nonrandom.png" /></a>
It’s a Mario!</p>
<hr />
<p>Hot dog. So is there anything quantitative we can say about the path of
Clicky? Sure. Let’s take a look at the distribution of step sizes. By
step sizes I mean the lengths of continuously straight paths. So if you
go right for 5 clicks in a row, the length will be five. Unfortunately,
this will not include the lengths of diagonal paths. Anyway, here’s what
I get:</p>
<hr />
<p><a href="http://3.bp.blogspot.com/-fLEC65jbG3I/TeRfaN41EKI/AAAAAAAAAM0/KeHoRtCGtyc/s1600/MLE_FIT.png"><img alt="image" src="http://3.bp.blogspot.com/-fLEC65jbG3I/TeRfaN41EKI/AAAAAAAAAM0/KeHoRtCGtyc/s400/MLE_FIT.png" /></a>
Power-law fit to Click step-size distribution</p>
<hr />
<p>The red line here is a maximum likelihood fit to a power law
distribution of the form:
<mathjax>$$ p(x) \propto x^{-\mu}. $$</mathjax>
(For an outstanding reference guide to fitting power law distributions
see this <a href="http://arxiv.org/pdf/0706.1062v2">preprint</a>.)
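A minimal sketch of the continuous maximum-likelihood estimator from that preprint, run on synthetic power-law data rather than the real step sizes:

```python
import numpy as np

def powerlaw_mle(x, x_min=1.0):
    """Continuous MLE exponent for p(x) ~ x^{-mu}, x >= x_min
    (the Clauset-Shalizi-Newman estimator from the linked preprint)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

# synthetic step lengths drawn from p(x) ~ x^{-2.5} via inverse-CDF sampling
rng = np.random.default_rng(0)
samples = (1.0 - rng.random(10_000)) ** (-1.0 / 1.5)

mu_hat = powerlaw_mle(samples)  # should recover ~2.5
```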
So it appears as though we have a power law distribution here (but see
the paper above!). Well what does that mean? Well it seems we have a
roughly random walk path where the step sizes are pulled from a power
law distribution. This type of random walk is called a Levy flight (a
nice tutorial
<a href="http://classes.yale.edu/fractals/randfrac/Levy/Levy.html">here</a>) and
shows up (or at least appears to) in all kinds of foraging patterns in
animals (for example,
<a href="http://physicsworld.com/cws/article/news/42899">sharks</a>).
To test this, we can simulate a Levy flight on a grid like Clicky. Doing
this with the power law found in the above fit gives:</p>
<hr />
<p><a href="http://4.bp.blogspot.com/-e3vHjTxPqs0/TeRirLWJd8I/AAAAAAAAAM4/HZe7gFheYwI/s1600/fake_clicky.png"><img alt="image" src="http://4.bp.blogspot.com/-e3vHjTxPqs0/TeRirLWJd8I/AAAAAAAAAM4/HZe7gFheYwI/s400/fake_clicky.png" /></a>
Impostor Clicky!</p>
<hr />
<p>Not exactly the same, but still looks pretty close!
So that’s all for this installment of <em>Virtuosi Theatre</em>, but there’s
still a whole lot to be analyzed with Clicky. With that much data,
you’re bound to find <em>something</em> (whether it’s there or not!). So if you
find something neat, let us know. (Remember the data can be downloaded
as a txt file
<a href="http://www.mattbierbaum.com/clicky/clickydat.tar.bz2">here</a>).</p>
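<p>For anyone who wants to grow their own impostor, here is a minimal sketch of a Levy flight on a grid; the exponent 2.5 is an assumption for illustration, not the fitted value:</p>

```python
import numpy as np

def levy_flight_on_grid(n_steps=10_000, mu=2.5, seed=0):
    """Lattice walk with power-law step lengths p(l) ~ l^{-mu}."""
    rng = np.random.default_rng(seed)
    # integer step lengths >= 1 from the inverse CDF of a power law
    lengths = np.floor((1.0 - rng.random(n_steps)) ** (-1.0 / (mu - 1.0))).astype(int)
    dirs = rng.integers(0, 4, n_steps)  # 0:+x, 1:-x, 2:+y, 3:-y
    dx = np.where(dirs == 0, lengths, 0) - np.where(dirs == 1, lengths, 0)
    dy = np.where(dirs == 2, lengths, 0) - np.where(dirs == 3, lengths, 0)
    return np.cumsum(dx), np.cumsum(dy)  # the walker's path

x, y = levy_flight_on_grid()
```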
<hr />
<p>Superfluous Footnotes:
[1] Yes, I know you loved him as Mr. Dottington. I did too! But
apparently “the man” thought that was a “lame” name and made it all
“commercial” with the buzzname Clicky. So it goes.
[2] Although if Clicky is Odysseus, then I guess that makes you Homer.
D’oh! [3]
[3] All my knowledge of “culture” comes from The Simpsons.</p>Anatomy of an Experiment I - The Question2011-05-05T21:06:00-04:00Jessetag:thephysicsvirtuosi.com,2011-05-05:posts/anatomy-of-an-experiment-i-the-question.html<hr />
<p><a href="http://4.bp.blogspot.com/-V-68a5ev1W0/TcNG1UXmaBI/AAAAAAAAAEY/0f-Uq2EjMJY/s1600/installation_of_world_largest_silicon_tracking_detector.jpg"><img alt="image" src="http://4.bp.blogspot.com/-V-68a5ev1W0/TcNG1UXmaBI/AAAAAAAAAEY/0f-Uq2EjMJY/s320/installation_of_world_largest_silicon_tracking_detector.jpg" /></a>
Warning: picture has little or no relation to this post.</p>
<hr />
<p>I realized the other day that I’ve seen a lot of people talk about
research results, but it is much more rare that I see someone talk about
how we do research. I think that may be because, to us as scientists,
the process is second nature. We’ve been doing it for years. Other folks
may be less familiar with the process though. With this in mind, I’m
going to do a short series of posts focused on how we do an experiment.
Not the results, not so much the physics, but the process that we go
through to create, setup, and carry out an experiment. As my example
I’ll use a short little experiment that I built from the ground up in
the last few weeks, that I’m currently in the process of (hopefully)
wrapping up. Today I’ll talk about the driving force behind almost any
experiment: The Question. It might be argued that there are two types of
experiments. There are those that set out to answer a specific question,
and those that set out to explore what happens under certain conditions
(explore some part of phase space). An example of the first type that
comes immediately to mind is the recently announced results from
<a href="http://science.nasa.gov/science-news/science-at-nasa/2011/04may_epic/">Gravity Probe
B</a>
(<span class="caps">GP</span>-B). This was an experiment designed with one goal in mind, to test
the validity of Einstein’s theory of general relativity, specifically
geodesic precession and frame dragging. They asked the question, built
the apparatus, and then got results. Here’s a spoiler from the article:
Einstein was right, to remarkable precision. I’m going to mostly ignore
the second type of experiment. While very important, I argue that those
exploratory experiments are (almost?) always done on experimental
apparatus that was built for another experiment. You don’t spend the
time, money, and energy to build an experimental apparatus without
having good evidence that it’s worth doing, that is, without expecting
to see something. This brings me to The Question. The name might be
misleading, the motivation for an experiment might not be a question
(though it can usually be phrased as one). One common motivation is to
test theoretical predictions, as was the case with <span class="caps">GP</span>-B. Theory without
experimental verification is empty. It may sound nice, but we can’t
trust it unless we’ve tested it against what nature actually does.
Sometimes theory develops because of experimental results, for example
the knowledge of the quantization of light came out of anomalous
experimental effects of the photoelectric effect and blackbody radiation
(among others). Other times, experiment develops to test theory, the
<span class="caps">GP</span>-B and the Large Hadron Collider for example. Another common
motivation is a question based on a physical observation, for example:
<a href="http://physicsbuzz.physicscentral.com/2011/04/small-insects-paddle-through-air.html">how does a fly
fly</a>?
That question is, as these things go, very simply stated. For an idea of
how complicated they can get, just take a look at any recent collection
of articles from any physics journal, wherein we find things like the
form and source of ‘itinerant magnetism in FeAs’ (grabbed from a recent
Phys. Rev. B article). I classify a third type of question, one that is
more process based: “How can we do X?”. This third category is where the
question that motivates (at least in the broad sense) the experiment I’m
going to describe comes from. I can phrase it as: “How can we
successfully cryopreserve biological samples?” For those unfamiliar with
biological cryopreservation, this is something I discussed <a href="http://thevirtuosi.blogspot.com/2010/05/cryopreservation.html">almost a
year
ago</a>.
From there, we get into smaller questions, most of those are type two,
based on physical observations. This particular small experiment has
grown out of my work on cryopreservation, and has more to do with the
structure of water on freezing. Over the past year, one of the projects
I’ve been working on has been to measure the so called critical warming
rate of aqueous solutions. This is the rate at which you have to warm
vitreous aqueous solutions (see my earlier cryopreservation post for
more details) to prevent ice formation on warming. The question that has
grown out of this work is: how does the cooling history of my vitreous
sample affect the critical warming rate? Having arrived at the question,
we’ll next discuss the apparatus.</p>End of the Earth VI: Nanobot destruction2011-04-22T13:11:00-04:00Alemitag:thephysicsvirtuosi.com,2011-04-22:posts/end-of-the-earth-vi-nanobot-destruction.html<p><a href="http://1.bp.blogspot.com/-hGkMD-tB1RY/TbGfhRvcA6I/AAAAAAAAAQ4/eCaG-z1Zarc/s1600/612px-C60a.png"><img alt="image" src="http://1.bp.blogspot.com/-hGkMD-tB1RY/TbGfhRvcA6I/AAAAAAAAAQ4/eCaG-z1Zarc/s320/612px-C60a.png" /></a></p>
<p>Let’s destroy the earth with technology. A while ago, I read the novel
<em>Postsingular</em> by Rudy Rucker. In the first chapter the Earth gets
destroyed, and then undestroyed; then the novel unfolds, the Earth is
threatened again, and it looks like the Earth will be destroyed, but it
isn’t. How does all of this craziness happen, you might ask? Nanobots!
The story revolves around little self-replicating
robots. The story explores what it would be like to live in a world
where every surface on Earth was coated in little computers, all of
which were networked together. It’s certainly a neat idea, but whenever
you have self-replicating things, you need to worry a bit about what
might happen if they get out of control. So, let’s assume we, evil
scientists that we are, have managed to create a little self-replicating
nanobot. This little guy can scurry around, running off something
ubiquitous, probably some combination of solar power and some kind of
infrared photovoltaics. This little guy, call him Bob, has only one
mission in life: to create a friend. He scurries around collecting the
various ingredients necessary, and using his little robot arms, he
slices and dices up the pieces and welds them together to create
another copy of himself, Rob. Not satisfied with his work (Bob found
Rob quite the bore, and honestly Rob didn’t much like Bob either), the
two part ways and each tries to fashion a new friend. How long until Bob and Rob and their
cohorts manage to chew through all of the material on Earth? What we
have here is the setup to a problem in <a href="http://en.wikipedia.org/wiki/Exponential_growth">Exponential
Growth</a>.</p>
<h3>Exponential Growth</h3>
<p>Let’s simplify things a bit and assume that the nanobots always take a
fixed amount of time to make a new copy of themselves; call that time T.
We’ll start with one guy, so we know that at t = 0 we have 1 bot <mathjax>$$ N(t
=0 ) = 1 $$</mathjax> And we know that after T seconds we should have 2 <mathjax>$$ N(T) =
2 $$</mathjax> and after 2T seconds, we’ve managed to double twice and get 4 <mathjax>$$
N(2T) = 4 $$</mathjax> after 3T seconds we’ll double again to 8, etc. In fact,
after nT seconds, that is, n repetitions, we should have doubled n times <mathjax>$$
N(nT) = 2^n $$</mathjax> So if we want to describe all times, we need only ask
how many doublings can fit into t seconds <mathjax>$$ t = n T $$</mathjax> which gives us
<mathjax>$$ N(t) = 2^{t/T} $$</mathjax> At this point you might object, as this formula
doesn’t always give an integer, so we could ask things like how many
bots are there after 0.5T seconds? We know the true answer is still 1,
Bob hasn’t finished Rob yet, but our formula tells us the answer is
1.414… What we’ve done is made a continuous approximation to a
discrete function. Certainly, we’ve paid a price, in that our new
formula doesn’t get answers right at fractions of T, but it’s a small
price to pay for the mathematical simplicity afforded by the nice
continuous function, and as long as we don’t really care about time
scales smaller than T in the long run, we haven’t done any real harm.
These kinds of approximations show up all over the place in physics, and
going both ways too. Sometimes it is advantageous to treat some discrete
quantity as continuous, and sometimes it might be beneficial to treat
some continuous quantity as discrete. These kinds of approximations are
more than adequate, provided you don’t really take the answers they give
you in the cases where your approximation starts to break too seriously.
In this case, as long as we don’t try to seriously predict the number of
nanobots to an exact count in time scales less than a fraction of their
doubling time, we will have a nice prediction of the number of bots
running around.</p>
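<p>The continuous approximation is easy to see in a minimal sketch (Python is my choice here, not the post’s; the function names are mine):</p>

```python
import math

def bots_discrete(t, T=1.0):
    # Exact count: the population doubles only once each full
    # replication period T has elapsed.
    return 2 ** math.floor(t / T)

def bots_continuous(t, T=1.0):
    # Continuous approximation: N(t) = 2^(t/T).
    return 2 ** (t / T)

# The two agree at whole multiples of T:
assert bots_discrete(3.0) == bots_continuous(3.0) == 8

# In between they disagree: Bob hasn't finished Rob yet at t = 0.5 T,
# so the true count is 1, while the approximation says about 1.414.
print(bots_discrete(0.5), bots_continuous(0.5))
```

<p>As in the text, the mismatch only matters on time scales shorter than one doubling time T.</p>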
<h3>Earth Destruction</h3>
<p>As promised, we want to calculate the time it would take the nanobots
to devour the earth. For this we need to add a little more to our model.
How will the nanobots eat the earth? I reckon it will be by using up
its mass. Assuming the bots are made out of elements that are abundant
enough, something like iron, they ought to have a field day on Earth,
seeing as it’s composed of about 5% iron on the surface, with an
interior that is probably about 32% iron overall
<a href="http://en.wikipedia.org/wiki/Abundance_of_the_chemical_elements#Abundance_of_elements_in_the_Earth">[ref]</a>.
So, we need to estimate the mass of a single nanobot. Let’s say the
nanobot is roughly a 1 micron sized cube, made out of iron. This gives
us a nanobot mass of <mathjax>$$ m = (\text{ density of iron }) * (\text{ 1
micron} )^3 = \rho_{\text{Fe}} L^3 \sim 8 \text{ picograms} $$</mathjax>
From here we can estimate the time it would take to chew through the
earth, as the time for the nanobots to be as massive as the earth. <mathjax>$$
\frac{M_{\oplus}}{\rho_{\text{Fe}} L^3 } = N(t) = 2^{t/T} $$</mathjax>
Solving for t we obtain <mathjax>$$ t = T \log_2 \frac{ M_{\oplus}}{
\rho_{\text{Fe}} L^3 } $$</mathjax></p>
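<p>As a quick numerical check of these two formulas, here is a short sketch (again in Python, my choice; the iron density and Earth mass are standard reference values I’ve assumed, not numbers from the post):</p>

```python
import math

RHO_FE = 7.9e3     # density of iron, kg/m^3 (assumed standard value)
M_EARTH = 5.97e24  # mass of the Earth, kg (assumed standard value)

# Mass of one cubic nanobot of side L = 1 micron:
L = 1e-6                  # meters
m_bot = RHO_FE * L ** 3   # ~7.9e-15 kg, i.e. about 8 picograms

# Time to match the Earth's mass: t = T * log2(M_earth / (rho_Fe L^3)).
T_months = 1.0                          # one doubling per month
doublings = math.log2(M_EARTH / m_bot)  # about 129 doublings
t_years = doublings * T_months / 12.0

print(round(m_bot * 1e15, 1), "picograms per bot")
print(round(t_years, 1), "years to devour the Earth")
```

<p>Because t depends only logarithmically on the mass ratio, the answer is remarkably insensitive to the exact bot size or target mass, consistent with the roughly ten-year figure below.</p>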
<h3>Solution</h3>
<p>Let’s say it takes Bob one month to make Rob, which I don’t think is a
completely unrealistic time for nanobot replication. Assuming Bob and
Rob and all of their cohorts are 1 micron in size, I calculate that in
about 10 years they would chew through the Earth. The power of
exponential growth! Even with a 1 month gestation, if left unchecked,
the self-replicating robots would eat the entire earth in 10 years.
They could eat through Mars in about 2. In fact, in <em>Postsingular</em> this
is what the humans planned. They wanted a Dyson sphere, so they sent
some self-replicating robots to Mars, let them chew through it for a
couple of years, and then they had 10^37 little robots to do their bidding.
That is of course until the nants set their sights on Earth as their next target…
In order to let you play around with the doubling time and bot size,
I’ve created a Wolfram Alpha widget that solves the above equation, feel
free to play around with the parameters and see how long Earth would survive.</p>
<p>The widget should be right above this text. If it isn’t working for some
reason, here’s a
<a href="http://developer.wolframalpha.com/widgets/gallery/view.jsp?id=6a645314f9be6be7b902d4cc1f776d00">link</a></p>Earth Day Special: Post-Apocalyptic Literature2011-04-22T10:30:00-04:00Alextag:thephysicsvirtuosi.com,2011-04-22:posts/earth-day-special-post-apocalyptic-literature.html<p><a href="http://upload.wikimedia.org/wikipedia/en/2/23/Emergence_cover_first_edition.jpg"><img alt="image" src="http://upload.wikimedia.org/wikipedia/en/2/23/Emergence_cover_first_edition.jpg" /></a>
At some point in elementary school I got into the habit of reading Franz
Kafka’s <a href="http://en.wikipedia.org/wiki/The_Metamorphosis">The
Metamorphosis</a> every
time that I got sick. I found it strangely comforting to be reminded
that while I might have <a href="http://en.wikipedia.org/wiki/Scarlet_fever">scarlet
fever</a> and be intermittently
hallucinating about Mickey Mouse, at least I had not been (spoiler
alert!) turned into a giant cockroach and disowned by my family. Today
is <a href="http://www.google.com/webhp?hl=en#q=Earth+Day&bav=on.2,or.r_gc.r_pw.&fp=38378e84586d88e6">Earth
Day</a>!
The earth has seen better days, and I got too depressed googling various
environmental problems to even come up with a suitable list of examples.
However, look on the <a href="http://www.youtube.com/watch?v=WlBiLNN1NhQ">bright
side</a>: things could be much,
much worse. To explore how much worse it could be, here are a few of my
favorite works of post-apocalyptic fiction - perfect reading for Earth
Day. Skip past the cut to check them out, in no particular order. Many
of these are aimed more toward young adults, and since this is a
science blog, I’ve also tried to score them arbitrarily on their
scientific plausibility (0-10). Check out the associated Amazon pages
for better descriptions and reviews.</p>
<ul>
<li><a href="http://www.amazon.com/Z-Zachariah-Robert-C-OBrien/dp/0020446500">Z for
Zachariah</a><ul>
<li>Robert C. O’Brien - Where’s the best place to be when a nuclear
war goes down? In an isolated valley in upstate New York, apparently!
Z for Zachariah follows a 16 year old girl who is left to fend for
herself after the bombs go off, until a Cornell chemistry postdoc
shows up in a radiation suit. 7/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Postman-Bantam-Classics-David-Brin/dp/0553278746/ref=pd_sim_b_3">The
Postman</a><ul>
<li>David Brin - Another in the post-nuclear war sub-genre. The story
gets bogged down in weird survivalist themes in the second half, but
paints a rather believable portrait of the aftermath of a nuclear
winter. 5/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Canticle-Leibowitz-Walter-Miller-Jr/dp/0060892994/ref=pd_sim_b_5">A Canticle for
Liebowitz</a><ul>
<li>Walter Miller - Have you ever been stuck in a waiting room at the
dentist’s and the only thing to read is a Reader’s Digest from 1983?
In the future, it’s like that, only way worse. 6/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Childhoods-End-Del-Rey-Impact/dp/0345444051/ref=pd_sim_b_5">Childhood’s
End</a><ul>
<li>Arthur C. Clarke - Sometimes the end of the world is surprisingly
zen. 3/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Gift-Upon-Shore-M-Wren/dp/0595143415/ref=sr_1_1?ie=UTF8&s=books&qid=1303485614&sr=1-1">A Gift Upon the
Shore</a><ul>
<li><span class="caps">M. K.</span> Wren - Rural Oregon also turns out to be a decent place to
ride out a nuclear winter. Everything is great, unless your only
surviving neighbors are fundamentalists. 7/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Emergence-David-R-Palmer/dp/B002U4W1QA/ref=sr_1_8?s=books&ie=UTF8&qid=1303485800&sr=1-8">Emergence</a><ul>
<li>David R. Palmer - This is probably my favorite work in this genre.
Emergence is the diary of a very plucky Candidia Smith-Foster, who,
along with a pet parrot, has survived a communist bio attack. Things
get a bit nutty in the end, but overall a very enjoyable read.
Despite great reviews it’s currently out of print, although a movie
may be in the works. 8/10</li>
</ul>
</li>
<li><a href="http://listverse.com/2009/02/12/10-great-post-apocalyptic-science-fiction-novels/">I am
Legend</a><ul>
<li>Richard Matheson - It turns out that Will Smith was actually playing
an older white dude. Who knew? The book shares strange religious
overtones with the movie, but it’s much better written and has a
totally different ending. 4/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Pesthouse-Vintage-Jim-Crace/dp/0307278956/ref=sr_1_1?s=books&ie=UTF8&qid=1303487441&sr=1-1">The
Pesthouse</a><ul>
<li>Jim Crace - Society has gone a long way backwards, but they hear
everything is better in Europe. 5/10</li>
</ul>
</li>
<li><a href="http://www.amazon.com/Road-Movie-Tie--Vintage-International/dp/0307476316/ref=sr_1_1?s=books&ie=UTF8&qid=1303487550&sr=1-1">The
Road</a><ul>
<li>Cormac McCarthy - I can’t say I’m a huge fan of his writing style,
but the world that Cormac McCarthy creates here is very compelling,
although mind-numbingly depressing. 9/10</li>
</ul>
</li>
</ul>
<p>This is by no means an exhaustive list - there’s a lot of classics that
I haven’t gotten around to reading yet such as <a href="http://www.amazon.com/Beach-Vintage-International-Nevil-Shute/dp/0307473996/ref=sr_1_1?s=books&ie=UTF8&qid=1303487212&sr=1-1">On the
Beach.</a>
I also have high hopes for Kim Stanley Robinson’s <a href="http://www.amazon.com/gp/product/0312890362?ie=UTF8&tag=jamifrat-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0312890362">The Wild
Shore</a>
since I enjoyed his Mars series. Anyone else have anything to add? Happy
Earth Day!</p>End of the Earth V: There Goes the Sun2011-04-22T09:42:00-04:00Corkytag:thephysicsvirtuosi.com,2011-04-22:posts/end-of-the-earth-v-there-goes-the-sun.html<hr />
<p><a href="http://4.bp.blogspot.com/-LtXsyHxxSi0/TbDfL6l4tnI/AAAAAAAAAMg/l_12lafg6LQ/s1600/creepy_sun.jpg"><img alt="image" src="http://4.bp.blogspot.com/-LtXsyHxxSi0/TbDfL6l4tnI/AAAAAAAAAMg/l_12lafg6LQ/s320/creepy_sun.jpg" /></a>
The Sun [photo courtesy of <span class="caps">NASA</span>]</p>
<hr />
<p>People that know me well know that I have a lot in common with Robert
Frost. We both were born in March and we both employ rural New England
settings to explore complex social and philosophical themes in our
poetry. We also like the same rap groups. In honor of my literary
doppelganger, I will now, having already had the world end in
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-physics-i.html">fire</a>,
try my hand at ice. Let’s try to answer the question: “If the sun blinks
out of existence this instant, what is the temperature of the Earth as a
function of time?” The Sun, in addition to being the <a href="http://www.youtube.com/watch?v=haAhdtDmsOw">King of
Planets</a>, is also what keeps
us all warm and toasty and alive. What happens if we turn that off?
Well, the Earth will cool by radiating its heat away into space. To see
how long this would take, let’s make some assumptions. Let’s model the
surface of the Earth as an ocean 1 km deep and let’s pretend that all
the heat is stored in this ocean. Let’s take the ocean to be liquid
water at T = 0 degrees Celsius. How long will it take this ocean to
freeze into ice at 0 degrees Celsius? Well, the amount of energy
released from the oceans as the water freezes is given by <mathjax>$$ Q = L_{w}
M_{ocean} $$</mathjax> where L is the “latent heat of fusion” and M is the mass
of the water. The “latent heat of fusion” is a fancy way of saying “the
amount of energy released per unit mass as water turns to ice at
constant temperature.” For water, we have that <mathjax>$$ L_{w} = 3.3 \times
10^5 \mbox{J/kg} $$</mathjax> And for the mass of the ocean, it will be
convenient later to write it as <mathjax>$$ M_{ocean} = 4\pi {R_{\oplus}}^2
\Delta R \rho $$</mathjax> Alright, so now we’ve got enough to say how much heat
energy we have, so how fast do we lose it? We can take the Earth to be a
blackbody radiator, so the power lost in such a case is: <mathjax>$$ P =4\pi
\sigma R_{\oplus}^2 T^4 $$</mathjax> Since Power is just Energy per unit
Time, we now have all we need to get the time for total freezing of all
the oceans. We have: <mathjax>$$ t = \frac{Q}{P} = \frac{4\pi
{R_{\oplus}}^2 \Delta R \rho L_{w}}{4\pi \sigma R_{\oplus}^2
T^4} $$</mathjax> Simplifying the above expression a bit, we get <mathjax>$$ t
=\frac{\Delta R \rho L_{w}}{\sigma T^4} $$</mathjax> Now we can plug in some
numbers, <mathjax>$$ t =\frac{\left(10^3 \mbox{ m}\right) \times
\left(10^3 \frac{kg}{m^3}\right) \times \left(3.3 \times 10^5
\mbox{J/kg}\right)}{\left( 5.67 \times 10^{-8} J s^{-1} m^{-2}
K^{-4}\right) \times \left( 273 K\right)^4} $$</mathjax> where we have made
sure to put our temperatures in Kelvin. Crunching the numbers with the
calculator we “borrowed” from Nic three months ago gives: <mathjax>$$ t = 10^9
\mbox{ s} $$</mathjax> Remembering that a year is very nearly <mathjax>$$ 1 \mbox{ year}
= \pi \times 10^7 \mbox{ s}, $$</mathjax> we find that the time for the oceans
to freeze after the sun disappears is about 30 years. Hooray! Now this
model was very simple. First of all, I assumed that the ocean
temperatures were already at 0 degrees, but they are a bit warmer. If
the oceans are about 300 K (ie 80 degrees in not-Yariv units), then we
get another 30 years to cool down to freezing temperatures. Secondly, I
have completely neglected the heat stored in the Earth. Will this change
my answer by an embarrassingly large factor? Lastly, I have ignored all
internal heating mechanisms (ie, radioactive decay) that will also heat
up the Earth. But ignoring all that… So is there a way for anyone to
survive this? Well, for the most part it will mean the end of life on
Earth. There could potentially be a few exceptions, like near geothermal
vents and such. But mostly, it’s one quick cold spiral down to eternal
nothingness. But what about a few people? Could they survive for a bit
even if the human race is doomed? I’m glad you asked! You see, I have
this plan involving mine shafts. Hunkering down underground with a
nuclear power plant and all the canned food we can stomach should allow
us to at least ride out the rest of our lives. Details can be found
<a href="http://www.youtube.com/watch?v=iesXUFOlWC0&feature=related">here</a>.</p>End of the Earth IV - Shocking Destruction2011-04-22T07:54:00-04:00Jessetag:thephysicsvirtuosi.com,2011-04-22:posts/end-of-the-earth-iv-shocking-destruction.html<p><a href="http://4.bp.blogspot.com/-aa4EF60W7m0/TbCwQ8Vc3WI/AAAAAAAAAEU/03HiJiGJ6hc/s1600/exploding-earth11.jpg"><img alt="image" src="http://4.bp.blogspot.com/-aa4EF60W7m0/TbCwQ8Vc3WI/AAAAAAAAAEU/03HiJiGJ6hc/s200/exploding-earth11.jpg" /></a>
Earth day is upon us once more. So many other namby-pamby bloggers out
there (don’t hurt me!) are writing about how wonderful the earth is and
how great earth day is. We here at The Virtuosi take a more hardline
approach. Today I’m going to tell you how to destroy the earth.
Completely and totally. Unlike
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-physics-i.html">last</a>
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-ii-blaze-of-glory.html">year’s</a>
<a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-physics-iii-asteroids.html">methods</a>,
this one should work. In fact, this method is so simple that I can tell
you what to do right now. Just tweak the charge on the electron so it is
a bit out of balance with the charge on the proton. Just a little bit.
How little a bit, you might ask? A very little bit. Really, this doesn’t
sound hard. I mean, sure, you have to do it for all of the electrons in
the earth, but we’re talking about a very very small percentage change.
Not convinced? Let me show you just how small a change we’re talking. If
there is a charge imbalance in the electron and the proton, this will
give the earth a net charge throughout its volume. I’ve got to make a
few assumptions about the earth here, so hold on. I’m going to assume
that the earth is a uniform density everywhere, and I’m going to assume
that the earth is made entirely of iron.* Now, the net charge of any
iron atom will be <mathjax>$$ (q_e-q_p)Z=(q_e-q_p)26$$</mathjax> where Z is the atomic
number of iron, the number of protons (and electrons) the atom has. The
net charge of the earth, Q, is the number of iron atoms, N, times this
charge, <mathjax>$$Q=(q_e-q_p)ZN$$</mathjax> I’ve <a href="http://thevirtuosi.blogspot.com/2010/04/end-of-earth-ii-blaze-of-glory.html">previously
estimated</a>
that N is about 3*10^50 atoms. Now, the electric potential energy of a
sphere of radius r with charge q uniformly distributed throughout its
volume is <mathjax>$$U_e=\frac{3kq^2}{5r}$$</mathjax> where k is the Coulomb constant.
Dissolution of the earth will occur when the electrostatic energy of the
earth equals the gravitational potential energy of the earth. The
gravitational bound energy of the earth is given by
<mathjax>$$U_g=\frac{3GM^2}{5R}$$</mathjax> Where M is the mass of the earth, G is
Newton’s gravitational constant, and R is the radius of the earth.
Setting this equal to the electrostatic energy of the earth,
<mathjax>$$\frac{3GM^2}{5R}=\frac{3kQ^2}{5R}$$</mathjax> <mathjax>$$Q^2=\frac{G}{k}M^2$$</mathjax> so
<mathjax>$$(q_e-q_p)ZN=\left(\frac{G}{k}\right)^{1/2}M$$</mathjax> Now, N is given,
in our approximations, by <mathjax>$$N=\frac{M_{earth}}{m_{iron}}$$</mathjax> so
<mathjax>$$q_e-q_p=\left(\frac{G}{k}\right)^{1/2}\frac{m_{iron}}{Z_{iron}}$$</mathjax>
Now we can plug in some numbers: G = 6.7*10^-11 m^3 kg^-1 s^-2, k =
9*10^9 m^3 kg s^-2 C^-2, m_iron = 9*10^-26 kg, Z = 26. Thus,
<mathjax>$$q_e-q_p=3*10^{-37} C$$</mathjax> To put this in perspective, the charge on
the electron is 1.6*10^-19 C, so this is roughly 10^18 times less
than that charge. Put another way, if the charge on the electron was
imbalanced from that of the proton by roughly 1 part in 10^18, the
earth would cease to exist due to electrostatic repulsion.** As I told
you at the beginning, you only have to change the charge by a very small
amount! So get working. There are only about
1000000000000000000000000000000000000000000000000000 electrons you need
to modify! *According to the internet, the density of the earth, on
average, is roughly 5.5 g/cm^3. The density of iron is 7.9 g/cm^3 at
room temperature, and the density of water is 1 g/cm^3 at room
temperature. So, while the earth is not entirely iron (of course), it is
a better approximation to assume the earth is iron than the earth is
water. And those, of course, were really our only two choices. ** It
turns out that this is a good argument for the charge balance of the
electron and the proton.</p>Matt Raises Clicky From the Dead2011-04-08T13:52:00-04:00Corkytag:thephysicsvirtuosi.com,2011-04-08:posts/matt-raises-clicky-from-the-dead.html<hr />
<p><a href="http://mattbierbaum.com/clicky/"><img alt="image" src="http://2.bp.blogspot.com/-4_stoxVGUBw/TZ6JaKU2EQI/AAAAAAAAAMY/nZuXEMKRRE8/s640/clicky.png" /></a></p>
<hr />
<p>After languishing for four days in the digital hereafter, Clicky has
been successfully raised from the dead (thanks to Matt). He can be found
at his new (and bandwidth-unrestricted) home
<a href="http://mattbierbaum.com/clicky/">here</a>.
For the initial post about Clicky, see
<a href="http://thevirtuosi.blogspot.com/2011/04/collective-wanderings.html">this</a>
earlier post. Enjoy!</p>A Sense of Scale (in Dollars and Cents)2011-04-07T21:42:00-04:00Corkytag:thephysicsvirtuosi.com,2011-04-07:posts/a-sense-of-scale-in-dollars-and-cents-.html<p>I hate politics, but for some reason I obsessively read about it. I
don’t know why I do this, but I assume it’s the same reason people slow
down for car wrecks and pay to see the geek [1]. Anyway, the big thing
in political news now is that if Congress can’t pass a budget [2] by the
end of the day Friday, the government will shut down. Shutting down the
government means that 800,000 federal employees will go without pay [3],
lots of services will be put on hold and you won’t be able to go to the
Smithsonian or the Grand Canyon. So it’s kind of a big deal. Since the
ramifications of a government shutdown are so serious, there must be
some really important disagreements holding it up, right? Right? A quick
search (for example,
<a href="http://www.guardian.co.uk/world/2011/apr/07/us-congress-staff-government-shutdown">here</a>),
shows that the big hold-up in passing the budget comes over a
disagreement on how much money should be cut from the budget.
Republicans want to cut $40 billion and Democrats are willing to
cut $34.5 billion. So the hold-up is over $5.5 billion. Let’s
consider how utterly and stupidly insignificant this is. We shall take
as our inspiration Alemi’s
<a href="http://thevirtuosi.blogspot.com/2010/12/law-and-large-numbers.html">post</a>
a while back offering several ways to help visualize the large and scary
sounding numbers thrown around in government budgets. The first thing
we’d like to do is find out what fraction this $5.5 billion discrepancy
is compared to the total budget. Alemi cites the total budget of the
<span class="caps">U.S.</span> government in the 2010 fiscal year to be $3.55 trillion. Hot dog!
So we see that: <mathjax>$$ \frac{\mbox{Disputed Difference}}{\mbox{Total
Budget}} = \frac{\$5.5 \times 10^9}{\$3.55 \times 10^{12}} =
0.0015 $$</mathjax> So the disputed part amounts to 0.15% of the total budget, or
one and a half parts in a thousand. Let’s compare this to some things for
which we have a better sense of scale. Let’s pretend we are looking to
buy a car. We need it to get to work and, you know, get stuff done. If
our car costs $10,000, then 0.15% of our total cost will be <mathjax>$$
\mbox{\$10,000} \times 0.0015 = \$15 $$</mathjax> So the current budget
situation is like arguing for 6 months over a $15 charge on your $10,000
car. Sounds reasonable! Now let’s switch gears and consider a timescale.
Let’s consider a 40 hour work week. What’s 0.15% of 40 hours? This comes
to <mathjax>$$ 40 \mbox{ hr} \times 0.0015 = 0.06 \mbox{ hr} $$</mathjax> or, if you
prefer, <mathjax>$$ 0.06 \mbox{ hr} \times \frac{60 \mbox{ min}}{1 \mbox{
hr}} = 3.6 \mbox{ minutes} $$</mathjax> So the current budget problem is like
arguing for six months over about three and a half extra minutes added
to your work week. Now let’s consider a length scale. Consider the US $1
bill. According to
<a href="http://en.wikipedia.org/wiki/United_States_one-dollar_bill#Small_size_notes">Wikipedia</a>,
the dollar bill is 6.14 inches long, 2.61 inches wide and 0.0043 inches
thick. That means that the ratio of the thickness to the width of a
dollar bill is <mathjax>$$ \frac{\mbox{Thickness}}{\mbox{Width}} =
\frac{0.0043 \mbox{ inches}}{2.61 \mbox{ inches}} = 0.0016 $$</mathjax> or
slightly <em>more</em> than the disputed fraction of the budget. Hooray!
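These comparisons are easy to re-derive; here is a quick sketch (in Python, my choice of language). The small mismatches with the rounded figures above, such as 3.7 versus 3.6 minutes, come from rounding the fraction to 0.0015 before multiplying:

```python
disputed = 5.5e9    # the $5.5 billion under dispute
budget = 3.55e12    # total FY2010 budget, $3.55 trillion

frac = disputed / budget
print(frac)           # about 0.0015 of the total budget

print(10_000 * frac)  # about a $15 charge on a $10,000 car
print(40 * frac * 60) # about 3.7 extra minutes in a 40-hour work week
print(0.0043 / 2.61)  # dollar-bill thickness/width, about 0.0016
```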
“That’s all well and nice,” you say, lowering your voice and leaning
over in a way that makes me uncomfortable, “but who is to <em>blame</em>?”
That’s a great question! My answer will take the form of an experiment.
First, get a coin. Got it? Great. Now if you’re Republican, let heads be
“The Republicans” and tails be “The Democrats.” If you’re a Democrat,
let heads be “The Democrats” and tails be “The Republicans.” If you are
neither, then randomly assign a party to heads and allow the other to be
tails. Ready? The coin is to blame. [1] I use this in its original
meaning. That is, the guy whose job it was to do horribly gross things
at a carnival for money, not the guy whose main form of social
interaction is debating scenes in Star Wars on internet forums. A more
up-to-date comparison would have been to say “for the same reason that
people watch <em>Jersey Shore</em>.” [2] That is, the budget for the 2011
fiscal year, which started on October 1st 2010. [3] Don’t worry, all
members of Congress would be exempt from this and would still pick up paychecks!</p>Collective Wanderings2011-04-04T22:39:00-04:00Corkytag:thephysicsvirtuosi.com,2011-04-04:posts/collective-wanderings.html<p><em>Update [04/04/11]: It seems Clicky got too popular for our bandwidth
limits on the physics servers. Hopefully we’ll be able to fix this
sometime soon…</em> <em>Update [04/02/11]: We made the code faster, so
check out the new and improved Clicky!</em> Hey, kids! Would you like to be
a bit player in a grand experiment with poorly thought out objectives?
If so, then check
<a href="http://pages.physics.cornell.edu/~aalemi/clicky/">this</a> out. It’s a
little interactive “game” [1] that Alemi and Matt coded up. When you
click on the link, you will be redirected to a page showing something
like this:
<a href="http://4.bp.blogspot.com/-FYqCFtPrbT0/TZaZF4ubmPI/AAAAAAAAAMM/5Hf-_v0wTrY/s1600/clicky1.png"><img alt="image" src="http://4.bp.blogspot.com/-FYqCFtPrbT0/TZaZF4ubmPI/AAAAAAAAAMM/5Hf-_v0wTrY/s400/clicky1.png" /></a>
Using the arrow keys on your keyboard or by clicking on the arrow
buttons on the screen, you can move the dot around. Sound fun yet? Well,
the interesting bit comes in with the fact that anyone who wants to can
move the same dot at the same time. The plot will update automatically,
so if you’re really bored you can sit around and watch someone else move
the dot around. Heck, you could even give the dot a cute name like Mr.
Dottington and tell it how your day was. We won’t judge you. [2] Anyway,
check it out if you’ve got a bit of time to kill. Explore the space and
we’ll get back to you with the “results” of this “experiment” sometime
in the “future.” [1] There are no objectives and “winning” is undefined
(for a contrary argument, however, see Sheen et al. (2011)) [2] But Mr.
Dottington will.</p>Special Virtuosi Book Announcement!2011-04-01T01:41:00-04:00Corkytag:thephysicsvirtuosi.com,2011-04-01:posts/special-virtuosi-book-announcement-.html<hr />
<p><a href="http://4.bp.blogspot.com/-R1-a_FlO6sA/TZVXVvoqvcI/AAAAAAAAALA/nNu7S12Sliw/s1600/kindle.jpeg"><img alt="image" src="http://4.bp.blogspot.com/-R1-a_FlO6sA/TZVXVvoqvcI/AAAAAAAAALA/nNu7S12Sliw/s1600/kindle.jpeg" /></a>
Coming soon to a Kindle near you!</p>
<hr />
<p>We’ve been at this whole blogging thing for about a year now and I think
we’ve amassed a large and dedicated enough fanbase to finally release a
book! The track record so far for physicist-writers has been <a href="http://www.amazon.com/Spiral-Novel-Paul-McEuen/dp/038534211X/ref=sr_1_1?ie=UTF8&qid=1301633241&sr=8-1">quite
good</a>
of late, so we figured why not us? Well, lots of reasons actually. For
one thing, it’s <em>really hard</em>. Books are, like, <em>hundreds</em> of pages
long. I barely stay coherent and on-topic in a one page blog post. For
another thing, it takes lots of time. I hardly have enough time to do my
laundry in time scales deemed “socially acceptable.” How could I ever
find the time to write a book? Despite these potential setbacks, the
millions and millions of dollars that writers make still seems really
appealing. Who wouldn’t want to be rich and popular forever? I mean,
just look at Oscar Wilde, Edgar Allan Poe and Herman Melville! Luckily,
a solution presented itself. I don’t have time to write a book now, but
I found an old copy of my novel <em>Blue Dragon</em> lying around the house
that I was able to sell using the immense popularity of the Virtuosi
brand. The book will be published this summer by Clark Hall Publishing.
Here are a few advance reviews: <em>“Audacious, bodacious, entropic,
synoptic, electric, eclectic, entertaining, hyperbraining, high-roller,
tripolar… Buy <del>Against the Day</del> Blue Dragon.”</em> — <span class="caps">THE</span> <span class="caps">PHILADELPHIA</span>
<span class="caps">INQUIRER</span> <em>“Corky isn’t easy to ‘get’. His melodies and chords I would
describe as complicated. But, behind all the disharmony and 11ths and so
forth.. it swings. This is my favorite Virtuosi book. It’s an amazing
book and that’s all you can say.”</em> — <span class="caps">J.D.</span> Jackson “<em>This is a book rich
in allegory and poignant social commentary. Also, you should call home
more”</em> — Corky’s Mom So these unbiased critics think it’s totally rad.
Should you just take their word for it? Absolutely! But if you really
want see the book for yourself, here it is:
<a href="http://4.bp.blogspot.com/-8h9NruKfpcY/TZVfkIGsNdI/AAAAAAAAALE/ksvtyGo97YE/s1600/Blue_Dragon+0.jpg"><img alt="image" src="http://4.bp.blogspot.com/-8h9NruKfpcY/TZVfkIGsNdI/AAAAAAAAALE/ksvtyGo97YE/s400/Blue_Dragon+0.jpg" /></a>
<a href="http://2.bp.blogspot.com/-GdbX2pEIgxI/TZVg5TR7AqI/AAAAAAAAALI/xGaSUfVSEZo/s1600/Blue_Dragon+1.jpg"><img alt="image" src="http://2.bp.blogspot.com/-GdbX2pEIgxI/TZVg5TR7AqI/AAAAAAAAALI/xGaSUfVSEZo/s400/Blue_Dragon+1.jpg" /></a>
<a href="http://3.bp.blogspot.com/-0-DVsda2lx8/TZVg5gkGEeI/AAAAAAAAALM/02ydunms7wU/s1600/Blue_Dragon+2.jpg"><img alt="image" src="http://3.bp.blogspot.com/-0-DVsda2lx8/TZVg5gkGEeI/AAAAAAAAALM/02ydunms7wU/s400/Blue_Dragon+2.jpg" /></a>
<a href="http://4.bp.blogspot.com/-22WcWYnjf6o/TZVg6YfuvgI/AAAAAAAAALQ/7sLTLvmEk7E/s1600/Blue_Dragon+3.jpg"><img alt="image" src="http://4.bp.blogspot.com/-22WcWYnjf6o/TZVg6YfuvgI/AAAAAAAAALQ/7sLTLvmEk7E/s400/Blue_Dragon+3.jpg" /></a>
<a href="http://1.bp.blogspot.com/-AeCvk8c6cZ8/TZVg7dQrLYI/AAAAAAAAALU/zvyTSQVfiFM/s1600/Blue_Dragon+4.jpg"><img alt="image" src="http://1.bp.blogspot.com/-AeCvk8c6cZ8/TZVg7dQrLYI/AAAAAAAAALU/zvyTSQVfiFM/s400/Blue_Dragon+4.jpg" /></a>
<a href="http://3.bp.blogspot.com/-RgIlU2GhMwM/TZVg8OJZQRI/AAAAAAAAALY/Xyf-bzmvEVQ/s1600/Blue_Dragon+5.jpg"><img alt="image" src="http://3.bp.blogspot.com/-RgIlU2GhMwM/TZVg8OJZQRI/AAAAAAAAALY/Xyf-bzmvEVQ/s400/Blue_Dragon+5.jpg" /></a>
<a href="http://1.bp.blogspot.com/-665JCYacjvs/TZVg816ylbI/AAAAAAAAALc/bRiAV25GLrs/s1600/Blue_Dragon+6.jpg"><img alt="image" src="http://1.bp.blogspot.com/-665JCYacjvs/TZVg816ylbI/AAAAAAAAALc/bRiAV25GLrs/s400/Blue_Dragon+6.jpg" /></a>
<a href="http://3.bp.blogspot.com/-yRDUoYQICC8/TZVg9smkxTI/AAAAAAAAALg/4WQ_icWzhTo/s1600/Blue_Dragon+7.jpg"><img alt="image" src="http://3.bp.blogspot.com/-yRDUoYQICC8/TZVg9smkxTI/AAAAAAAAALg/4WQ_icWzhTo/s400/Blue_Dragon+7.jpg" /></a>
<a href="http://4.bp.blogspot.com/-JWO3R4e-CPo/TZVg-SITf_I/AAAAAAAAALk/5EEBXi0Pfqs/s1600/Blue_Dragon+8.jpg"><img alt="image" src="http://4.bp.blogspot.com/-JWO3R4e-CPo/TZVg-SITf_I/AAAAAAAAALk/5EEBXi0Pfqs/s400/Blue_Dragon+8.jpg" /></a>
<a href="http://3.bp.blogspot.com/-hhApIwOKwC8/TZVg-8tBZ-I/AAAAAAAAALo/1p739Qh5M0Q/s1600/Blue_Dragon+9.jpg"><img alt="image" src="http://3.bp.blogspot.com/-hhApIwOKwC8/TZVg-8tBZ-I/AAAAAAAAALo/1p739Qh5M0Q/s400/Blue_Dragon+9.jpg" /></a>
<a href="http://4.bp.blogspot.com/-QTmCNMcqOtw/TZVg_pZdr6I/AAAAAAAAALs/wvJs5V5_KH4/s1600/Blue_Dragon+10.jpg"><img alt="image" src="http://4.bp.blogspot.com/-QTmCNMcqOtw/TZVg_pZdr6I/AAAAAAAAALs/wvJs5V5_KH4/s400/Blue_Dragon+10.jpg" /></a>
<a href="http://4.bp.blogspot.com/-sUgqqPFsz64/TZVhAAQls9I/AAAAAAAAALw/eZKCELD6RAw/s1600/Blue_Dragon+11.jpg"><img alt="image" src="http://4.bp.blogspot.com/-sUgqqPFsz64/TZVhAAQls9I/AAAAAAAAALw/eZKCELD6RAw/s400/Blue_Dragon+11.jpg" /></a>
<a href="http://1.bp.blogspot.com/-ZngjLnMZ1sE/TZVhAvZ_9cI/AAAAAAAAAL0/nOGYrVxjudc/s1600/Blue_Dragon+12.jpg"><img alt="image" src="http://1.bp.blogspot.com/-ZngjLnMZ1sE/TZVhAvZ_9cI/AAAAAAAAAL0/nOGYrVxjudc/s400/Blue_Dragon+12.jpg" /></a>
<a href="http://1.bp.blogspot.com/-6K0BqeU9h90/TZVhBYv-dSI/AAAAAAAAAL4/W3WScRdkCvQ/s1600/Blue_Dragon+13.jpg"><img alt="image" src="http://1.bp.blogspot.com/-6K0BqeU9h90/TZVhBYv-dSI/AAAAAAAAAL4/W3WScRdkCvQ/s400/Blue_Dragon+13.jpg" /></a>
<a href="http://4.bp.blogspot.com/-m6rVltXBHDU/TZVhB8dgN0I/AAAAAAAAAL8/0SJeEqunX4Y/s1600/Blue_Dragon+14.jpg"><img alt="image" src="http://4.bp.blogspot.com/-m6rVltXBHDU/TZVhB8dgN0I/AAAAAAAAAL8/0SJeEqunX4Y/s400/Blue_Dragon+14.jpg" /></a>
<a href="http://4.bp.blogspot.com/-WrewtPzlKfI/TZVhCZby49I/AAAAAAAAAMA/iRzSOigU43o/s1600/Blue_Dragon+15.jpg"><img alt="image" src="http://4.bp.blogspot.com/-WrewtPzlKfI/TZVhCZby49I/AAAAAAAAAMA/iRzSOigU43o/s400/Blue_Dragon+15.jpg" /></a></p>Nickel Gnomes2011-03-31T02:17:00-04:00Corkytag:thephysicsvirtuosi.com,2011-03-31:posts/nickel-gnomes.html<hr />
<p><a href="http://4.bp.blogspot.com/-i8ltaoDuLpI/TZQPlf1wbaI/AAAAAAAAAK4/svtKmQjxBsE/s1600/underpants-gnomes.jpg"><img alt="image" src="http://4.bp.blogspot.com/-i8ltaoDuLpI/TZQPlf1wbaI/AAAAAAAAAK4/svtKmQjxBsE/s320/underpants-gnomes.jpg" /></a>
Perhaps Step 2 was to steal copper?</p>
<hr />
<p>While flipping through a <span class="caps">CRC</span> Handbook whose days in the United States
are dwindling, I came across a section that described the naming
conventions of each chemical element. Most of the names made sense to
me. For example, Nobelium is named after Alfred Nobel (surprise!).
However, the Nickel entry was the following: Nickel: Named after
<em>Satan</em> or <em>Old Nick</em>. This confused me greatly. What the heck is Santa
Claus doing hanging out with Satan [1]? After a bit of poking around on
the internet, I found an article from 1931 by a guy named William
Baldwin called <em>The Story of Nickel, How “Old Nick’s” Gnomes were
Outwitted</em>. Needless to say, this did not allay my confusion. Here is
the relevant passage from the article: <em>In the early part of the
eighteenth century fresh lodes of ore were laid open in Saxony where
from times immemorial silver and copper mines had been worked. This new
ore was so glittering and full of promise as to cause the greatest
excitement, but after innumerable trials and endless labor all that
could be obtained from the ore was a worthless metal. In disgust the
superstitious miners named the ore kupfer-nickel after “Old Nick” and
his mischievous gnomes who were charged with plaguing the miners and
bewitching the ore.</em> This “worthless” metal is nickel. Kupfernickel is
the German name for the nickel ore (see image below), which comes from
the words <em>kupfer</em>, meaning “copper” and <em>nickel</em>, meaning “demon.”</p>
<hr />
<p><a href="http://1.bp.blogspot.com/-9psfFrm6qec/TZQZVPtLD0I/AAAAAAAAAK8/wyMrVWYT7Zo/s1600/niccolite.jpg"><img alt="image" src="http://1.bp.blogspot.com/-9psfFrm6qec/TZQZVPtLD0I/AAAAAAAAAK8/wyMrVWYT7Zo/s400/niccolite.jpg" /></a>
Ore that, if not for gnomes, would contain that sweet sweet copper</p>
<hr />
<p>And that’s the story of Nickel! Tune in next time [2] for our ongoing
118-part series, <em>Better Know an Element!</em></p>
<p>[1] Turns out “Old Nick” is
an English name for the Devil. Go figure. But those of you named
Nicholas should not be upset, as your name comes from the Greek
Nikholaos, which literally means “victory-people.” So you’ve got that
going for you. [2] I have no plans of ever doing this again.</p>Blown Away2011-03-24T17:46:00-04:00Jessetag:thephysicsvirtuosi.com,2011-03-24:posts/blown-away.html<p><a href="http://2.bp.blogspot.com/-NFVShpYzscc/TYutYtPSIVI/AAAAAAAAAEM/vRA8KbFV6tc/s1600/wind-turbine-2.jpg"><img alt="image" src="http://2.bp.blogspot.com/-NFVShpYzscc/TYutYtPSIVI/AAAAAAAAAEM/vRA8KbFV6tc/s200/wind-turbine-2.jpg" /></a></p>
<p>I was reading a discussion on green energy recently, in particular wind
power, where the following claim was made</p>
<blockquote>
<p><em>enough wind turbines to power the world would cover the surface of
the world.</em></p>
</blockquote>
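<p>As a preview, here is the whole back-of-the-envelope estimate worked out in the rest of this post, condensed into a minimal Python sketch. Every number in it is one of the post’s own assumptions (100 m turbine, 8 MW peak, 1% capacity factor, 15 TW average demand), not measured data:</p>

```python
# Back-of-the-envelope wind-power estimate. All numbers are the
# assumptions made in the post: ~100 m turbine, 8 MW peak output,
# a pessimistic 1% capacity factor, and 15 TW of average world demand.

turbine_diameter = 100.0   # m, to an order of magnitude
peak_power = 8e6           # W, largest turbine per wikipedia
capacity_factor = 0.01     # assumed for haphazard placement
world_power = 15e12        # W, 2008 average (474 EJ over the year)

turbines_needed = world_power / (peak_power * capacity_factor)
area_m2 = turbines_needed * turbine_diameter**2  # footprint ~ diameter squared

print(f"{turbines_needed:.2e} turbines covering {area_m2 / 1e6:.2e} km^2")
```

<p>Changing any one assumption just rescales the answer linearly, which is the charm of an order-of-magnitude estimate.</p>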
<p>Now, this was quickly decried by supporters of wind power, but the claim
has stuck with me. The question on my mind today is: How much of the
earth’s surface would have to be covered to power the earth with wind
turbines? We can’t hope to put an exact number on this; the best we’ll
be able to do is an order of magnitude. I also don’t know much about
wind turbines, so I’ll be making liberal use of
<a href="http://en.wikipedia.org/wiki/Wind_turbine">wikipedia</a> as I go.</p>
<p>Let’s
start with the size of the wind turbine. According to wikipedia the
largest wind turbine has a rotor sweep diameter of 128 m. To an order of
magnitude, we’ll say that our average wind turbine has a diameter of 100
m. Next we need to know how much power this puts out. The maximum power
of this turbine is ~8 <span class="caps">MW</span>. However, it certainly wouldn’t be producing
that at all times. Current wind farms produce at around 20-30% of maximum
capacity. However, those turbines are carefully placed in areas of high
wind. We’re not going to get that lucky when we place
our turbines haphazardly, so we’ll assume they produce at 1% of maximum
capacity.</p>
<p>According to
<a href="http://en.wikipedia.org/wiki/World_energy_resources_and_consumption">wikipedia</a>,
the world energy consumption in 2008 was 474 <span class="caps">EJ</span> (exajoules), or an
average power use in 2008 of 15 <span class="caps">TW</span>. To an order of magnitude then, the
area we’d have to occupy with wind turbines to power the world would be:
<mathjax>$$\left( \frac{(100\text{ m})^2}{1\text{
turbine}}\right)\left(\frac{1\text{ turbine}}{.01*8 \text{
MW}}\right)15 \text{TW} = 2\cdot 10^{12}\text{ m}^2$$</mathjax> That’s
2*10^6 km^2, or, in English, 2 million square kilometers. For
comparison, the land area of the United States is roughly 10 million
square kilometers. So we’d only have to cover about 1/5th of the United States
with wind turbines to power the entire world (in 2008; no doubt power
use has risen since then)! While that is a lot of space taken up, it is
nowhere near the entire surface of the world. There are, of course,
<a href="http://xkcd.com/556/">other concerns</a> about wind power. <em>Note: maybe
the wind turbines are less efficient overall. Also, I assumed that the
footprint was just the square area of the turbine diameter. I know this
is the size of the face of the turbine, but to an order of magnitude I
imagine it is correct for the space occupied on the ground.</em></p>Physics Challenge Award Show2011-03-17T23:30:00-04:00Corkytag:thephysicsvirtuosi.com,2011-03-17:posts/physics-challenge-award-show.html<hr />
<p><a href="https://lh5.googleusercontent.com/-pyHug0rZ0X0/TYGM0O4fznI/AAAAAAAAAK0/ZTrk9NSTYuw/s1600/macgyver.jpg"><img alt="image" src="https://lh5.googleusercontent.com/-pyHug0rZ0X0/TYGM0O4fznI/AAAAAAAAAK0/ZTrk9NSTYuw/s320/macgyver.jpg" /></a>
In an emergency, Richard Dean Anderson’s mullet can be used as a flotation device and/or standard kilogram.</p>
<hr />
<p>Welcome to the First Physics Challenge Problem Award Show! We received
an integer number of solutions to our challenge problem and at long last
and after much deliberation, we have chosen our winner. We had before
indicated vaguely that there may be some sort of prizes involved in this
competition. After consultation with our financial advisors and breaking
Alemi’s piggy bank, we have decided on the following prizes: First
Place: A brand new <span class="caps">CRC</span> Handbook! Second Place: An autographed [1]
picture of Scott Bakula! Honorable Mention: Nothing! [2]</p>
<p>So before we
officially announce our winner, let’s backtrack and build up some
suspense. The challenge was to come up with a bunch of MacGyveresque
experiments to determine as closely as possible the standard second,
meter and kilogram using only the materials handy to you on a desert [3]
island. Just about every response we got successfully answered the
question, so we had to base our final result on robustness and
uncertainties. We also tended to favor those that did not rest on
precise knowledge of one’s own height, weight, etc (though there is
nothing wrong with these approaches). So without further ado, the winner
is…. [pause] [unnecessarily long secondary pause] George from
Australia!</p>
<p>George provided a list of no fewer than 11 different methods
for determining the second, meter and kilogram. The second was found by
timing (by means of a pendulum) the time it takes for the sun to move
through a given angle. This measurement can be made each day
for a year to get the best results (remember, you’ve got all the time in
the world!). Now we have the time it takes the sun to go through a given
angle measured in “swings of a pendulum.” We can convert the transit
time of the sun into seconds and then we can find the number of seconds
per swing of the pendulum. So now we have a second. But since the period
of a pendulum is given by: <mathjax>$$ T = 2\pi \sqrt{l/g}, $$</mathjax> we also have the
length of the pendulum in meters. From this we can make a standard
meter. Now all that is left is to find the kilogram. This can be done by
making a water-tight enclosure of volume 1000 cm^3 and filling it with
water, giving a kilogram that can be compared to other objects to get a
longer lasting standard kilogram. Any other knowledge you may have (i.e.
known height and weight) may be used to check the above measurements.</p>
<p>A
very close second place goes to Alireza, who provided a very strong
submission and perhaps one of the best uses of snail mucus in the entire
competition! Alireza’s solution was to use one’s own known height to
construct the standard meter. From this you can make a pendulum to get
the second and a box o’ water (sealed with tar, glue and snail mucus) to
get the kilogram. Both George and Alireza provided very detailed
responses with special attention paid toward reducing uncertainty and
checking their answers through several independent measurements. They
also both carried out parts of their experiments. Congratulations to you
both!</p>
<p>In addition to the winning responses, we also received several
other submissions that merit honorable mention. They are…. Oliver, who
provided several acoustical experiments using his <span class="caps">SUPERPOWER</span> of perfect
pitch. Nicole, who noted that we were not correct in giving the Titanic
the designation
“<a href="http://en.wikipedia.org/wiki/Her_Majesty%27s_Ship"><span class="caps">HMS</span></a>” as this is
reserved for ships in the British Royal Navy. In fact, the Titanic was
designated “<a href="http://en.wikipedia.org/wiki/Royal_Mail_Ship"><span class="caps">RMS</span></a>” since
it carried mail. Thanks, Nicole! Gary, who completely ignored the
premise, used arbitrary units and left a very amusing note on his
deathbed giving all his arbitrary units to be found and converted by the
first enterprising explorer who lands on his island. I know you had
asked for your prize in cash and while I cannot directly accommodate that
wish, do know that Richard Dean Anderson smiles can be cashed in most
banks nationwide (see [2]).</p>
<p>So thanks to everyone who submitted a
response to this Challenge. We enjoyed reading all of the solutions and
we hope you had fun thinking about it. And next time you go on a boat,
don’t forget that <span class="caps">CRC</span>!</p>
<p>[1] Autograph will be signed by me on an 8.5” x
11” picture of Scott Bakula printed by a laser printer on regular copy
paper. [2] Well, you receive nothing of monetary value but, know deep
down in your hearts that in the picture at the top of this post, Richard
Dean Anderson is smiling at you. So you’ve got that going for you, which
is nice. [3] Sadly, we did not choose a dessert island. That would have
been much more fun.</p>Japan Nuclear Crisis2011-03-16T10:36:00-04:00Jessetag:thephysicsvirtuosi.com,2011-03-16:posts/japan-nuclear-crisis.html<p>Though I know that two posts in one day is recently unprecedented, I’ve
been meaning to post about the Japan nuclear crisis for a few days. The
various major news outlets are doing a good job, or so it seems, of
keeping us informed of the events going on over there. However, I found
myself rather puzzled over the physics of what was happening. From the
news articles I was unable to figure out what was actually causing the
meltdown, beyond some problem with the cooling. As a postdoc in my lab
asked, “Isn’t all they have to do drop the control rods and the reaction
ends?” So I decided to do a little digging. I’ve found a couple of
places that do a nice job of explaining some of the physics of what is
actually happening, <a href="http://blogs.nature.com/news/thegreatbeyond/2011/03/fukushima_crisis_anatomy_of_a.html">nature
news</a>
(not sure if the nature blogs are behind a paywall), and <a href="http://www.scientificamerican.com/article.cfm?id=fukushima-core">scientific
american</a>
(not up on current events, but a nice summary of what can/might go
wrong). I’m sure there are many other places doing a good job of
explaining things, but these are the ones I found, and hopefully
they help clarify what is actually happening.</p>Apologies + Saturn!2011-03-16T00:35:00-04:00Corkytag:thephysicsvirtuosi.com,2011-03-16:posts/apologies-saturn-.html<p>Hello, again! Remember us? I don’t. Anyway, apologies for the lack of
activity here. There are plenty of people* to blame for this lack of
activity, but I don’t want to name names. The real purpose of this
pseudo-update is to <span class="caps">SUPER</span> <span class="caps">DUPER</span> promise that the winners of our first
Monthly Physics Challenge problem will be announced tomorrow. Thanks for
your patience!
In the meantime, there’s a totally rad video-ification of photos of
Saturn (and moons) taken by the Cassini spacecraft that was Astronomy
Picture of the Day yesterday and will (presumably) be part of an <span class="caps">IMAX</span>
<a href="http://www.outsideinthemovie.com/">movie</a> in the future. You can check
it out <a href="http://apod.nasa.gov/apod/ap110315.html">here</a>.</p>
<hr />
<p><a href="https://lh5.googleusercontent.com/-ehzfH813QMI/TYA9RsjqZgI/AAAAAAAAAKw/ZUCp64M29Dc/s1600/saturn.jpg"><img alt="image" src="https://lh5.googleusercontent.com/-ehzfH813QMI/TYA9RsjqZgI/AAAAAAAAAKw/ZUCp64M29Dc/s400/saturn.jpg" /></a>
Saturn. It’s a planet!</p>
<hr />
<p>See you tomorrow! * Everyone. Especially Matt.</p>Fun Fact: Lebron James Plays Basketball2011-02-07T12:37:00-05:00Bohntag:thephysicsvirtuosi.com,2011-02-07:posts/fun-fact-lebron-james-plays-basketball.html<p><a href="http://turbo.inquisitr.com/wp-content/2010/07/lebron-james.jpg"><img alt="image" src="http://turbo.inquisitr.com/wp-content/2010/07/lebron-james.jpg" /></a>
Between building airplanes and playfully destroying everyone else in my
apartment at Super Smash Brothers, my roommate Nathan brought up an
interesting recent fact about LeBron James. He told me that LeBron
scored 11 consecutive field goals (not in football… you know who you
are) in one game. Apparently this was a pretty special event, but how
rare is it for a player of LeBron’s caliber? <span class="caps">TO</span> <span class="caps">THE</span> <span class="caps">SCIENCE</span>-<span class="caps">MOBILE</span>!</p>
<p>The Problem! <a href="http://www.youtube.com/watch?v=kHltCzuwlOs&feature=related"><span class="caps">ESPN</span> 8, The
Ocho</a> tells
me that LeBron’s career field goal percentage is 47.5%. Considering the
number of shots he takes, this is a pretty good number. To compare, the
highest field goal percentage for a single season was Wilt Chamberlain
with 72.7%, but eyewitness testimony says he was around 10 feet tall
and would wait in the offensive paint all game. Let’s see how improbable
this 11 in a row streak is. The generic question we are going to need to
answer is as follows: If a basketball player takes N shots in one game,
with a shooting probability of q, what is the probability that the
player will make <span class="caps">AT</span> <span class="caps">LEAST</span> k shots in a row? We’ll call this probability
P(N) This turns out to be a tricky problem, but let’s take a shot (awful
pun… I sincerely apologize). We can take care of simple cases: If N &lt;
k, then P(N) = 0. This tells us you can’t have a streak of k if you
don’t take k shots! If N = k, P(N) = q^k. This is the probability of
getting k in a row if you take k shots, not too surprising yet. When N
&gt; k, things get more interesting.</p>
<p>Finding the Recurrence Relation: Our
goal is to write a relationship that has this form: P(N) = P(N-1) +
blank What’s blank? “You don’t worry about blank… let me worry about
blank!” We’ll need to look at the <a href="http://en.wikipedia.org/wiki/Inclusion_exclusion_principle">inclusion-exclusion
principle</a>.
This principle basically says that when we want to take all <span class="caps">DISTINCT</span>
items in two sets, we need to take all of the elements in one set, and
add all elements in the second set which are <span class="caps">DISTINCT</span> from the first.
For example, if A = {0, 1, 2, 3, 4} and B = {3, 4, 5, 6, 7, 8}, then the
union of A and B is {0, 1, 2, 3, 4, 5, 6, 7, 8}. Note that I did not
include 3 and 4 twice. Let’s take a look at the expertly designed (5
minutes before class) google docs drawing below:
<a href="https://docs.google.com/drawings/pub?id=1Ef34hZJ9mtF-GDUSpmJA4Ke2mS3BAHLsDwAk1GX19Dc&w=1122&h=485"><img alt="image" src="https://docs.google.com/drawings/pub?id=1Ef34hZJ9mtF-GDUSpmJA4Ke2mS3BAHLsDwAk1GX19Dc&w=1122&h=485" /></a>
The entire line represents N shots being taken. Each shot gets its own
little column (not all columns shown). Using the inclusion-exclusion
principle with the following sets will give us the answer. Choose A to
be the first N-1 shots, and B to be all N shots. The principle tells us
first to take everything from A, which is the probability P(N-1) shown
in red. B will be the entire line, but the principle tells us to only
add <span class="caps">DISTINCT</span> chances from B. Since the only difference in B is one more
shot than A, the only distinct chance for a streak of k shots will be in
the last k shots, shown in yellow as P(k). This is only distinct if the
(k+1)th to last shot shown in green is missed! Otherwise a streak of k
would have been included in A already. There is one more place for a
streak to be already included in A. If there was a streak in the blue
section, we must not include the B streak so we don’t double count.</p>
<p>Phew… Let’s put this all together by multiplying the probabilities of
each of those events: P(N) = P(N-1) + (probability of yellow
streak)<em>(probability we miss green)</em>(probability of no streak in blue)
<mathjax>$$P(N) = P(N-1) + q^k \times (1-q) \times (1 - P(N-k-1))$$</mathjax> This gives
us a recurrence relation for the probabilities! This is a general
statement about the probability of at least one streak of length k out
of N chances, given each has a probability q. Since I’m just going to
plug this into Python anyway to handle the data, this equation is good
enough. The expectation value of an event is the probability multiplied
by the number of chances. For example, the expectation value of getting
heads with 2 tosses is just (1/2)*2 = 1.</p>
<p>The plan is to compile a list
of his field goal attempts in every game LeBron has played in the <span class="caps">NBA</span>,
and sum the expectation values for each N. <mathjax>$$ \mbox{Expectation} =
\sum_i P(i) \times \mbox{(number of games with i shots)} $$</mathjax> Using
LeBron’s actual field goal attempt data for each game (up to February 4,
2011), we find that LeBron’s expected number of games with at least a
streak of 11 in a row is 1.128. That’s a higher expectation value than
the number of heads we would see in 2 coin flips! So the streak isn’t very surprising
given the number of shots he has taken and his shooting percentage.</p>
<p>Data Tables</p>
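<p>Since the plan is to plug the recurrence into Python anyway, here is a minimal sketch of it (assuming, as above, independent shots each made with probability q; the function name <em>p_streak</em> is mine, not from any published code):</p>

```python
def p_streak(n, k, q):
    """Probability of at least one run of k consecutive makes in n shots,
    each shot made independently with probability q."""
    p = [0.0] * (n + 1)          # p[i]: chance of a streak >= k in first i shots
    for i in range(k, n + 1):
        if i == k:
            p[i] = q ** k        # all k shots must go in
        else:
            # either a streak already occurred in the first i-1 shots, OR the
            # last k shots all go in, preceded by a miss, with no earlier streak
            p[i] = p[i - 1] + q ** k * (1 - q) * (1 - p[i - k - 1])
    return p[n]

# e.g. chance of 11 straight makes somewhere in a 20-shot game at 47.5%
print(p_streak(20, 11, 0.475))
```

<p>Running the function on each game’s shot count and summing gives the expectation values in the table below.</p>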
<table>
<tr><th>Consecutive Shots</th><th>Expected out of 667</th><th>Percent of Games</th></tr>
<tr><td>1</td><td>666.9</td><td>99.99</td></tr>
<tr><td>2</td><td>643.6</td><td>96.49</td></tr>
<tr><td>3</td><td>489.0</td><td>73.32</td></tr>
<tr><td>4</td><td>281.8</td><td>42.25</td></tr>
<tr><td>5</td><td>140.0</td><td>21.00</td></tr>
<tr><td>6</td><td>65.23</td><td>9.780</td></tr>
<tr><td>7</td><td>29.55</td><td>4.431</td></tr>
<tr><td>8</td><td>13.20</td><td>1.980</td></tr>
<tr><td>9</td><td>5.854</td><td>0.8778</td></tr>
<tr><td>10</td><td>2.578</td><td>0.3866</td></tr>
<tr style="color: red"><td>11</td><td>1.128</td><td>0.1691</td></tr>
<tr><td>12</td><td>0.4898</td><td>0.07344</td></tr>
<tr><td>13</td><td>0.2110</td><td>0.03164</td></tr>
<tr><td>14</td><td>0.09011</td><td>0.01351</td></tr>
<tr><td>15</td><td>0.03807</td><td>0.005709</td></tr>
<tr><td>16</td><td>0.01592</td><td>0.002387</td></tr>
<tr><td>17</td><td>0.006560</td><td>0.0009839</td></tr>
<tr><td>18</td><td>0.002660</td><td>0.0003995</td></tr>
<tr><td>19</td><td>0.001067</td><td>0.0001600</td></tr>
<tr><td>20</td><td>0.0004180</td><td>6.267E-05</td></tr>
<tr><td>21</td><td>0.0001599</td><td>2.397E-05</td></tr>
<tr><td>22</td><td>6.03E-05</td><td>9.033E-06</td></tr>
<tr><td>23</td><td>2.22E-05</td><td>3.328E-06</td></tr>
<tr><td>24</td><td>8.06E-06</td><td>1.208E-06</td></tr>
<tr><td>25</td><td>2.84E-06</td><td>4.259E-07</td></tr>
<tr><td>26</td><td>9.98E-07</td><td>1.495E-07</td></tr>
<tr><td>27</td><td>3.51E-07</td><td>5.255E-08</td></tr>
<tr><td>28</td><td>1.20E-07</td><td>1.797E-08</td></tr>
<tr><td>29</td><td>3.92E-08</td><td>5.870E-09</td></tr>
<tr><td>30</td><td>1.15E-08</td><td>1.717E-09</td></tr>
<tr><td>31</td><td>3.45E-09</td><td>5.171E-10</td></tr>
<tr><td>32</td><td>1.06E-09</td><td>1.589E-10</td></tr>
<tr><td>33</td><td>3.37E-10</td><td>5.050E-11</td></tr>
<tr><td>34</td><td>7.23E-11</td><td>1.083E-11</td></tr>
<tr><td>35</td><td>1.70E-11</td><td>2.554E-12</td></tr>
<tr><td>36</td><td>2.30E-12</td><td>3.442E-13</td></tr>
</table>
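<p>The expectation sum defined above is one more line of Python. A sketch, using the recurrence from earlier in the post; note the shot log here is a made-up placeholder, not LeBron’s actual game-by-game attempts, which I don’t have handy:</p>

```python
def p_streak(n, k, q):
    # recurrence from the post: P(N) = P(N-1) + q^k (1-q) (1 - P(N-k-1))
    p = [0.0] * (n + 1)
    for i in range(k, n + 1):
        p[i] = q ** k if i == k else p[i - 1] + q ** k * (1 - q) * (1 - p[i - k - 1])
    return p[n]

def expected_streak_games(shots_per_game, k, q):
    # summing each game's streak probability gives the expected
    # number of games containing at least one k-streak
    return sum(p_streak(n, k, q) for n in shots_per_game)

# Hypothetical shot log -- NOT LeBron's real attempts, just an illustration
fake_log = [20, 25, 18, 30, 22]
print(expected_streak_games(fake_log, 11, 0.475))
```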
<p>The streak in question is highlighted in red, so it appears we expect it
to happen in 0.169% of his games.</p>
<p>The Realization: Of course I did all of
this before looking up the <a href="http://espn.go.com/nba/truehoop/miamiheat/notebook/_/page/heatreaction-110203/miami-heat-orlando-magic">actual
article</a>.
I’ll quote the blurb here:</p>
<blockquote>
<p>LeBron James set a personal record by making his first 11 field goals
to start the game. His previous career-high was 10 straight field
goals after tip-off, recorded against Chicago in 2008. After hitting
his first 11 field goal attempts on Thursday night, James shot
6-for-14 thereafter.</p>
</blockquote>
<p>Well… this calculation just got a bit easier. He has played 667 games,
and the probability of getting 11 straight off the bat is q^k =
0.475^11 = 0.0002777. Multiply this by 667 games to get the expected
value of 0.185. Sure this is 6 times smaller than our previous
calculation; however it’s still not statistically that impressive. How
would LeBron’s expected number change if he shot the same percentage
(72.7%) as Wilt for his record breaking season? The expected number of
games with 11 in a row during the game would be 72.38 games!! So this is
incredibly dependent on the shooting percentage. We have a factor of
q^k everywhere! Certainly it’s dependent on the number of shots taken
in a game too. The probability P(N) is a monotonically increasing
function!</p>
<p>Moral: Given LeBron’s shooting percentage and high number of
shots per game, we expect that he would have had at least 1 of these
streak-of-11 games so far in his career. This is certainly not to diminish this
feat though. You still need to take 20 some shots a game in the <span class="caps">NBA</span> with
nearly 50% shooting accuracy! We also have a nice formula to apply to
more sports streaks! More to come…</p>Problem of the Month: Gilligan Physics2011-02-06T12:55:00-05:00Alemitag:thephysicsvirtuosi.com,2011-02-06:posts/problem-of-the-month-gilligan-physics.html<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TU7ghJWo1oI/AAAAAAAAAQg/PCnPJ4aM9Ig/s1600/coconut.jpg"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TU7ghJWo1oI/AAAAAAAAAQg/PCnPJ4aM9Ig/s320/coconut.jpg" /></a>
So, some of us over here at Virtuosi Central have organized a challenge
problem for the physics community here at Cornell. Well, we thought we
would open up the challenge to the great wide world. The more
submissions the merrier. The deadline is March 1, and submissions can be
sent to our email. Details can be found at
<a href="http://bit.ly/physicschallenge">bit.ly/physicschallenge</a>. Good luck and
happy hunting.</p>Life in the Infrared2011-02-06T00:47:00-05:00Jaredtag:thephysicsvirtuosi.com,2011-02-06:posts/life-in-the-infrared.html<hr />
<p><a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4egR75ENI/AAAAAAAAAJU/eEFirBEL2xY/s1600/authors.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4egR75ENI/AAAAAAAAAJU/eEFirBEL2xY/s320/authors.jpg" /></a>
Corky, Matt, and Jared, with the experimental apparatus.</p>
<hr />
<p>There’s a place where <span class="caps">TV</span> remotes are flashlights, Wii’s are torches, and
Snuggies are translucent. It’s our kitchen. We modified a 3 dollar
webcam to view in the infrared portion of the electromagnetic spectrum.
We’ll show you how, and what you can do with it. First, a little bit of
background. If you read the previous post by Alex, you’d know that
visible light waves are the same kind of waves as radio waves,
microwaves, and x-rays; they just wave at different frequencies. Light
that has a frequency just below what we can see is called “infrared”,
which apparently derives from the Latin <em>infra</em>, meaning “below” (thanks
wikipedia!). So we know that if we get an object hot enough, it will
glow visibly. However, warm objects (say, humans, cars, tanks), while not
emitting enough visible light to glow, will emit easily detectable
infrared light. This makes infrared imaging a handy technology for
finding warm things in the dark. And, since many opaque things in the
visible are transparent in the infrared (or vice versa), you can dream
up a lot of fun to be had if you could only see in the infrared. Well,
it turns out that the CCDs in many common webcams are sensitive to the
infrared (<span class="caps">IR</span>). However, since an infrared signal would make for a weird
looking visible light picture, it is simply filtered out. Well, we
(mostly Matt) got our hands on such a
<a href="http://www.amazon.com/gp/product/B0019WF4FE/ref=oss_product">webcam</a>,
for a whopping 3 dollars, and removed the filter. Then we turned the
tables, and inserted a visible light filter. For this we used the
darkest part of a developed film roll, and while not getting rid of all
the visible, it did a pretty good job with the lights down low. You can
get a junk roll from your local Rite Aid photo center for free, just ask
for Sue. She is very helpful. So we took some pictures of whatever we
could find that was a strong <span class="caps">IR</span> source. It turns out that our particular
camera wasn’t sensitive enough to detect the signal of a warm human (or a cold
one for that matter). We had to get something just a little hotter. So
we lit a stick on fire, and then blew it out. Here’s what it looks like
in the now very pedestrian visible spectrum, with the lights out:
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4e6vuACII/AAAAAAAAAJY/gD53zNX1nDI/s1600/stick+visible.jpg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4e6vuACII/AAAAAAAAAJY/gD53zNX1nDI/s320/stick+visible.jpg" /></a>
It’s just barely glowing. Now, let’s look at it in what I’ll call
“enhanced” visible with the lights on, meaning that it’s sensitive to
both <span class="caps">IR</span> and visible at the same time.
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4ff9P0ljI/AAAAAAAAAJc/X5a75LQKr-Q/s1600/stick+enhanced.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4ff9P0ljI/AAAAAAAAAJc/X5a75LQKr-Q/s320/stick+enhanced.jpg" /></a>
The infrared obviously dominates the output. It looks like an <span class="caps">IR</span>
sparkler. Now, let’s shut the lights off, and see only <span class="caps">IR</span>:
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4g8q5tiZI/AAAAAAAAAJk/qLct2HN-ZuQ/s1600/stick+infra.jpg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4g8q5tiZI/AAAAAAAAAJk/qLct2HN-ZuQ/s320/stick+infra.jpg" /></a>
It’s bright enough to “light up” my hand, which it certainly couldn’t do
in the visible. Okay, so we had to light this thing on fire, and it was
already glowing. Well, my response to that is…this spoon:
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4iF8qPjjI/AAAAAAAAAJo/kywO7JDYG3Q/s1600/hot+spoon+vis.jpg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4iF8qPjjI/AAAAAAAAAJo/kywO7JDYG3Q/s400/hot+spoon+vis.jpg" /></a>
Normal spoon. But actually we heated it up a bit on the stove. Here it
is in enhanced visible:
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TU4iR1HraFI/AAAAAAAAAJs/qjk-xDgyFkM/s1600/hot+spoon+enhanced.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TU4iR1HraFI/AAAAAAAAAJs/qjk-xDgyFkM/s320/hot+spoon+enhanced.jpg" /></a>
And now, lights off, infrared only:
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4ikGGMEfI/AAAAAAAAAJw/gYsusNx8RNw/s1600/hot+spoon+dark.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4ikGGMEfI/AAAAAAAAAJw/gYsusNx8RNw/s320/hot+spoon+dark.jpg" /></a>
Apparently we only heated up the tip significantly.
Now, we have a super high tech flashlight that has three modes. It can
produce green <span class="caps">LED</span> light, white <span class="caps">LED</span> light, or incandescent white light.
Corky pointed each one at me. White <span class="caps">LED</span> light, visible:
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4jmRe24EI/AAAAAAAAAJ0/c4h7wMdW_7c/s1600/jared+white.jpg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4jmRe24EI/AAAAAAAAAJ0/c4h7wMdW_7c/s320/jared+white.jpg" /></a>
Creepy, eh? It’s the shaved head.
Now, white <span class="caps">LED</span> light in the <span class="caps">IR</span>:
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4kEKAtnEI/AAAAAAAAAJ4/RKKCmhRxu5Q/s1600/dark.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4kEKAtnEI/AAAAAAAAAJ4/RKKCmhRxu5Q/s320/dark.jpg" /></a>
Nuthin’ doin’. Similarly for green <span class="caps">LED</span> light. This is included only for
its creepiness. First, in the visible:
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4kTWeT2-I/AAAAAAAAAJ8/ZAJ3aUCC2qY/s1600/jared+green.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4kTWeT2-I/AAAAAAAAAJ8/ZAJ3aUCC2qY/s320/jared+green.jpg" /></a>
Corky calls this “the most palatable [pronounced incorrectly] picture of
Jared we could find.” In the <span class="caps">IR</span>, nada:
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TU4kkgkjufI/AAAAAAAAAKA/I8kV2SZKpiE/s1600/dark+2.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TU4kkgkjufI/AAAAAAAAAKA/I8kV2SZKpiE/s320/dark+2.jpg" /></a>
But for incandescent white, in visible:
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4kvV7Z5fI/AAAAAAAAAKE/0jHh-_5huRg/s1600/jared+incan.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4kvV7Z5fI/AAAAAAAAAKE/0jHh-_5huRg/s320/jared+incan.jpg" /></a>
And <span class="caps">IR</span>:
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TU4k7_LZAbI/AAAAAAAAAKI/eTZdYYArUdc/s1600/jared+infra.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TU4k7_LZAbI/AAAAAAAAAKI/eTZdYYArUdc/s320/jared+infra.jpg" /></a>
I am aglow! Matt is clearly shocked. This happens because <span class="caps">LED</span>s
generate light via electrons hopping between fairly well-defined energy
states of a material, producing a narrow band of frequencies.
Incandescent light, however, comes from a glowing-hot filament, which
emits radiation over a wide range of frequencies, and we know that it’s
got a strong contribution in the <span class="caps">IR</span>.
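A quick numerical illustration of that last point: treating the filament as a blackbody and integrating Planck’s law shows how little of the power lands in the visible. The ~2800 K temperature is my assumption for a typical incandescent bulb, not a measured value.

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def band_power(lo, hi, T, n=2000):
    """Midpoint-rule integral of the Planck curve from lo to hi (m)."""
    dlam = (hi - lo) / n
    return sum(planck(lo + (i + 0.5) * dlam, T) * dlam for i in range(n))

T = 2800.0                                # assumed filament temperature (K)
total = band_power(100e-9, 50e-6, T)      # essentially all the emitted power
visible = band_power(400e-9, 700e-9, T)
infrared = band_power(700e-9, 50e-6, T)

print(f"visible fraction:  {visible / total:.2f}")
print(f"infrared fraction: {infrared / total:.2f}")
```

Most of the bulb’s output is infrared, which is exactly why it lights up the IR camera while the LEDs don’t.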
So much for hot things. You have a bunch of dedicated <span class="caps">IR</span> sources around
your house that maybe you didn’t know about. Many <span class="caps">TV</span> remote controls
are one such source. Here we are, in the visible, with the lights low,
pointing remotes at our faces.
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4mbYxxVWI/AAAAAAAAAKM/bR-WdOFTQ0o/s1600/remotes+vis.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4mbYxxVWI/AAAAAAAAAKM/bR-WdOFTQ0o/s320/remotes+vis.jpg" /></a>
This is actually a promo pic for our 80’s new wave band. Here’s what
this looks like, with buttons depressed, in the <span class="caps">IR</span>:
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4nENHKUoI/AAAAAAAAAKQ/waQc1sh3AS4/s1600/remotes+infra.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4nENHKUoI/AAAAAAAAAKQ/waQc1sh3AS4/s320/remotes+infra.jpg" /></a>
A personal favorite.
These remotes weren’t the brightest source we could find. It turns out
that the Wii sensor bar (which actually transmits, rather than receives,
data) is a freakin’ beacon (say that out loud). Here it is, on top of
our <span class="caps">TV</span>, which Corky is apparently flying towards.
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4oAdkkSQI/AAAAAAAAAKU/HVn-di40lSA/s1600/corky+flying.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4oAdkkSQI/AAAAAAAAAKU/HVn-di40lSA/s320/corky+flying.jpg" /></a>
Unassuming little guy in the visible. But with the Wii on and in <span class="caps">IR</span>
mode:
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4ocjPYcBI/AAAAAAAAAKY/eJkV5XE4GAs/s1600/wii+dark.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TU4ocjPYcBI/AAAAAAAAAKY/eJkV5XE4GAs/s320/wii+dark.jpg" /></a>
There’s a person in there too, <span class="caps">IR</span> bathing. But I promised you
translucent Snuggies. (Yes, we own a Snuggie, and highly recommend
them.) Here’s Matt hiding behind one, whilst I angrily supervise:
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4o-dCYcdI/AAAAAAAAAKc/ppkhs3o9nKc/s1600/snuggie+visible.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TU4o-dCYcdI/AAAAAAAAAKc/ppkhs3o9nKc/s320/snuggie+visible.jpg" /></a>
Now, let’s look behind the curtain with <span class="caps">IR</span>:
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4pM05CabI/AAAAAAAAAKg/lNEwZMLrL2g/s1600/snuggie+infra.jpg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TU4pM05CabI/AAAAAAAAAKg/lNEwZMLrL2g/s320/snuggie+infra.jpg" /></a>
Look, I told you it was a Snuggie, the sleeve is visible! Now, if we had
more resolution, we could even pick out a person in a dark room, and
look at the temperature variations of objects. However, with
approximately an hour of fiddling and 3 dollars, I think we did pretty
well. We’re not hard to entertain, but this definitely did the trick.</p>Free data set of the month: Imaging Spectroscopy2011-02-04T16:44:00-05:00Alextag:thephysicsvirtuosi.com,2011-02-04:posts/free-data-set-of-the-month-imaging-spectroscopy.html<p>There are a lot of free data sets floating around the internet, and while
things like funny cat videos and the <a href="http://blog.xkcd.com/2010/05/03/color-survey-results/">results of color-naming
surveys</a> get a
lot of play, many others don’t get used for much. Recently I’ve been
playing around with one such data set: images from the <a href="http://aviris.jpl.nasa.gov/">Airborne
Visible/Infrared Imaging Spectrometer
(<span class="caps">AVIRIS</span>)</a>.</p>
<p>I’ve always found it interesting that the way we perceive color is very
different from how light actually works. Most of us have three different
types of cones in our eyes and we perceive different colors as different
combinations of stimuli to these three types of cones. In a very rough
sense, when we look at a color, our brain gets three different numbers
to figure out what it is. Light, on the other hand, is a bunch of
photons with some distribution of wavelengths. To fully describe the
light coming from an object you need a function that shows how many
photons are at any given wavelength, which is way more complicated than
just the three numbers we get.</p>
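The collapse from a full spectrum down to three numbers is easy to sketch in code. The Gaussian cone sensitivities below are made-up placeholders (real cone response curves are broader and asymmetric), and the mixture’s weights were tuned by hand so its cone triple nearly matches the sodium line’s — two physically different spectra the eye can’t tell apart.

```python
import math

# Hypothetical Gaussian cone sensitivities (peak nm, width nm) --
# placeholders for illustration, not real physiological data.
CONES = {"S": (445, 30), "M": (540, 40), "L": (565, 45)}

def cone_response(spectrum, peak, width):
    """Collapse a spectrum {wavelength_nm: photons} to one cone's stimulus."""
    return sum(n * math.exp(-((lam - peak) / width) ** 2)
               for lam, n in spectrum.items())

def tristimulus(spectrum):
    """The three numbers the brain actually gets."""
    return {name: cone_response(spectrum, *params)
            for name, params in CONES.items()}

sodium = {589: 1000}               # monochromatic low-pressure sodium light
mixture = {560: 231, 620: 2341}    # weights tuned to mimic sodium's triple

print(tristimulus(sodium))
print(tristimulus(mixture))        # nearly the same three numbers
```

Pairs of spectra like this are called metamers, and they are the reason your monitor can fake a sodium lamp at all.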
<p>So what about all that information that gets thrown away on the way to
our brain? Are we missing out on a magical world of super-duper colors
and wonder? Not really, but skip past the break anyways to find out more.</p>
<p>There are a few things our eyes have a hard time distinguishing. For
example, take a look at this picture of a <a href="http://en.wikipedia.org/wiki/Sodium-vapor_lamp#Low_pressure_sodium">low-pressure sodium
lamp</a>:</p>
<p><a href="http://commons.wikimedia.org/wiki/File:LPS_Lamp_35W_running.jpg"><img alt="image" src="http://2.bp.blogspot.com/_f2jEPVoC9C4/TUx02kW16bI/AAAAAAAABI0/vonaPzNiAag/s320/LPS_Lamp_35W_running.jpg" /></a></p>
<p>If you’ve ever been in a sketchy parking lot at night or an intro
physics class you’ve probably seen one of these in real life. It looks
just like the picture right? It does to us, but the light coming from
the picture and the light from the real lamp are totally different. All
the photons from the real lamp have wavelengths very close to 589
nanometers, while the ones coming from the picture on your screen have a
bunch of different wavelengths ranging from 500 to 700 nanometers,
depending on what type of monitor you have. (It’s easy to <a href="http://www.cs.cmu.edu/%7Ezhuxj/astro/html/spectrometer.html">build your
own
spectrometer</a>
and see this for yourself.)</p>
<p><a href="http://maps.google.com/maps?gl=us&ie=UTF8&ll=37.432068,-122.017937&spn=0.196834,0.308647&t=h&z=12"><img alt="image" src="http://4.bp.blogspot.com/_f2jEPVoC9C4/TUx1kWr0ZpI/AAAAAAAABI8/K7RQ-Qzxa50/s400/Moffett%2BField%2BCrop.jpg" /></a>This
is an extreme example since there are few objects that emit purely
monochromatic light. What do normal objects look like spectroscopically?
I wanted to find out, but unfortunately there aren’t too many
freely-available spectroscopic images of everyday objects floating
around the internet, and my attempts to make my own were stymied by the
fact that no one wanted to let me borrow $20,000 worth of optical
bandpass filters. However, there are several satellites and
planes which take spectroscopic images of the earth, and one of them,
<span class="caps">AVIRIS</span>, has a few sample data sets for people to play around with. I
recently wrote some code to help me look at one data set in particular –
a spectroscopic image of <a href="http://en.wikipedia.org/wiki/Moffett_Federal_Airfield">Moffett Federal
Airfield</a> in
California, shown here in a normal picture from Google Earth.</p>
<p><span class="caps">AVIRIS</span> acts much like a normal camera, but instead of three wide-band
filters for distinguishing color it uses 225 narrow bands ranging from
366 nanometers (near <span class="caps">UV</span>) to 2500 nanometers (near <span class="caps">IR</span>). Any of these
bands can be thought of as a single monochromatic image. Here I have
plotted two bands side by side for comparison. The right image is at 463
nanometers (a nice blue color) and the other is at 2268 nanometers, well
into the infrared. (Both images are false-colored to enhance the contrast).</p>
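Extracting a single band from a scene like this is mostly a matter of knowing the file layout. The toy sketch below fakes a tiny cube in memory and pulls out one band; the band-interleaved-by-pixel ordering, big-endian 16-bit samples, and uniform band spacing are all my assumptions — the header file that ships with the real data specifies the actual layout.

```python
import struct

LINES, SAMPLES, BANDS = 4, 5, 225   # toy dimensions; real scenes are huge

# Stand-in for a raw scene file: big-endian 16-bit ints, band-interleaved
# by pixel (all 225 values for pixel 0, then all 225 for pixel 1, ...).
raw = struct.pack(f">{LINES * SAMPLES * BANDS}h",
                  *range(LINES * SAMPLES * BANDS))

def read_band(raw, band):
    """Pull one band out of the cube as a LINES x SAMPLES image."""
    flat = struct.unpack(f">{LINES * SAMPLES * BANDS}h", raw)
    return [[flat[(line * SAMPLES + samp) * BANDS + band]
             for samp in range(SAMPLES)] for line in range(LINES)]

def wavelength_nm(band):
    """Approximate band center, assuming uniform spacing from 366 to 2500 nm."""
    return 366 + band * (2500 - 366) / (BANDS - 1)

img = read_band(raw, 10)
print(f"band 10 is near {wavelength_nm(10):.0f} nm; first row: {img[0]}")
```

Each such slice is one monochromatic image, exactly like the two plotted above.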
<p><a href="http://3.bp.blogspot.com/_f2jEPVoC9C4/TUx2M29FDwI/AAAAAAAABJE/lSUTxxI4ZFw/s1600/Img10and200withInsets.png"><img alt="image" src="http://3.bp.blogspot.com/_f2jEPVoC9C4/TUx2M29FDwI/AAAAAAAABJE/lSUTxxI4ZFw/s400/Img10and200withInsets.png" /></a></p>
<p>The scene looks a lot different at the two different wavelengths. In the
first image there’s one area that’s particularly bright – it is one of
several settling ponds belonging to a Morton Salt factory, although I
have no idea what is special about this one that makes it reflect blue
light so strongly. The second image highlights different features and
clearly shows the difference between the moving water in the creeks and
the standing water in the bay.</p>
<p>These are just two of the bands, but you can see all of them fly by in
the movie below. (Note that the color scale is set for each band
individually, which is why there sometimes seems to be a large change
between adjacent bands).</p>
<p>If you agree with me that the embedded video looks super-crappy you can
<a href="http://gotfork.net/virtuosi/VariableGain.avi">download the original
here</a> (6.4 <span class="caps">MB</span>).</p>
<p>One of the things that struck me about this video is that it seems like
much of the image doesn’t change too much through the whole visual
range. Not surprisingly, it’s a lot easier for us to distinguish between
the slight differences in color in a normal image than it is for us to
distinguish between the slight changes of intensity here.</p>
<p>So what do the spectra of individual points in the image look like?
Let’s focus on a few easily-identifiable objects. (I’ll talk about how I
made the roughly-true color image below in a later post).</p>
<p><a href="http://1.bp.blogspot.com/_f2jEPVoC9C4/TUx7dCumIAI/AAAAAAAABJk/9b8gXeMkqg4/s1600/FiveLocMapCombo.png"><img alt="image" src="http://1.bp.blogspot.com/_f2jEPVoC9C4/TUx7dCumIAI/AAAAAAAABJk/9b8gXeMkqg4/s400/FiveLocMapCombo.png" /></a></p>
<p>The first thing that stands out is that there’s a lot of features that
are common to all of the locations, such as the big gap near 1750
nanometers. While the sun sends photons with a distribution of
wavelengths roughly described by <a href="http://en.wikipedia.org/wiki/Planck%27s_law">Planck’s
Law</a>, certain wavelengths
are strongly absorbed by gasses in the atmosphere as shown below:</p>
<p><a href="http://commons.wikimedia.org/wiki/File:Solar_Spectrum.png"><img alt="image" src="http://upload.wikimedia.org/wikipedia/commons/4/4c/Solar_Spectrum.png" /></a></p>
<p>Looking at the different spectra we can see that <a href="http://en.wikipedia.org/wiki/Electromagnetic_absorption_by_water">liquid water absorbs
strongly in the
infrared</a>,
and the green grass on the golf course reflects strongly in the
near-infrared (no clue why). There is also a clear difference in
intensities for each of the locations in the visual range, but the five
locations otherwise look quite similar, even though they are very
different colors in real life.</p>
<p>I’ll look at a few more interesting things in this data set and talk a
bit about about how our eyes and brains process color in a later post.</p>Exploration of Cameras I2011-02-03T19:50:00-05:00Matttag:thephysicsvirtuosi.com,2011-02-03:posts/exploration-of-cameras-i.html<p>In the next posts, I’d like to attempt to make a camera from ‘scratch.’
And by that, I mean explore the creation of cameras from their
components and then create a very primitive one from readily available materials.</p>
<p>In terms of history and simplicity, we should start with the pinhole
camera. I’ve heard stories that Newton used a pinhole camera to look at
the sun though I don’t know if this was before or after he stared
directly at it for 8 minutes. The pinhole is neat because it is so
simple. With a pinhole, light is focused simply by restricting the paths
which an incident ray may take to hit our film. Typically, diffuse and
specular scattering sends light bouncing every which way off an object.
The pinhole just restricts which directions hit the film. I think a
picture is a better guide to this concept.</p>
<p><img alt="image" src="http://4.bp.blogspot.com/_qY9DSyjj8Ro/TUtOYzRfZAI/AAAAAAAAB3Y/MoKfumAjRTs/s400/pinholes2.png" /></p>
<p>For a small hole (aperture) there is approximately one area of the
object that will send rays to a particular part of the image. Of course
there is a very small angle of error for pinhole cameras made with
millimeter sized pins. As the hole increases in size, more rays are
incident to the same section of film. And finally, when the hole is big
enough for the whole object to be seen through it (think window), no
cohesive image is formed.</p>
<p>So if we make the hole small enough, then we can have all the clarity we
want, right? Not quite. It would have to be a very circular hole, it
would only let in a very tiny amount of light (making exposure times
long), and eventually diffraction through such a small hole would blur
the image anyway. How to fix this?</p>
<p>Yea, you guessed it. A lens is the answer. It is able to focus light on
its own. Now we can collect more light and still make clear images. But
the catch is that it only works for a range of distances. So again, lets
consider a lens and the images of two objects at different distances.</p>
<p><a href="http://3.bp.blogspot.com/_qY9DSyjj8Ro/TUtO6Js1a6I/AAAAAAAAB3g/6s0Nh2XYoFY/s1600/lenses2.png"><img alt="image" src="http://3.bp.blogspot.com/_qY9DSyjj8Ro/TUtO6Js1a6I/AAAAAAAAB3g/6s0Nh2XYoFY/s400/lenses2.png" /></a></p>
<p>Ray drawing can be done with 3 simple rules (though only two are needed in practice).</p>
<ul>
<li>rays that go in parallel to the axis go out through the focal point</li>
<li>rays that go in through the focal point go out parallel (time
reversal symmetry)</li>
<li>rays that go through the center are not altered</li>
</ul>
<p>Using these rules, the first object which is at the proper distance for
the film position and focal length of the lens is in focus. However, the
rays from the object further away do not converge at the film and so are
out of focus.</p>
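The same picture falls out of the thin-lens equation, 1/o + 1/i = 1/f. A quick sketch (the focal length and object distances below are made-up numbers for illustration):

```python
def image_distance(f, o):
    """Thin-lens equation 1/o + 1/i = 1/f, solved for the image distance i."""
    return 1.0 / (1.0 / f - 1.0 / o)

f = 0.10                          # a 10 cm focal length (made-up value)
film = image_distance(f, 1.0)     # film placed to focus an object 1 m away

for o in (1.0, 5.0):
    miss_mm = 1000 * abs(image_distance(f, o) - film)
    print(f"object at {o:.0f} m -> image misses the film by {miss_mm:.1f} mm")
```

With the film fixed, only the nearer object converges on it; the farther object’s rays focus several millimeters short, which is what “out of focus” means here.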
<p>Here its time for two experiments.</p>
<p>\1) <strong>The Window Camera</strong> Go to a room with a single window and cover it
with thick paper that has a single hole in it. Given enough light, you
should see an image on the far wall. If not, hold a piece of white paper
up close to the hole. [Edit: I just learned this has a name: <a href="http://en.wikipedia.org/wiki/Camera_obscura">camera
obscura</a>]</p>
<p>\2) <strong>The Doorway Camera</strong> Now, find a lens and a doorway. In one room,
leave the light on and go to the far wall in the dark room. Bring the
lens to the wall until you can see an image. The doorway is the
aperture, lens the lens, and wall the film. This demonstration is very
simple and not too surprising. <span class="caps">BUT</span> <span class="caps">SO</span> <span class="caps">COOL</span>. I encourage it <em>vigorously</em>.
The following pictures were taken of my images in case you can’t find a
lens. [Edit: I guess this falls under camera obscura too]</p>
<p><a href="http://4.bp.blogspot.com/_qY9DSyjj8Ro/TUtR0FcieuI/AAAAAAAAB3o/AETwbMrWO_A/s1600/bailey-real.JPG"><img alt="image" src="http://4.bp.blogspot.com/_qY9DSyjj8Ro/TUtR0FcieuI/AAAAAAAAB3o/AETwbMrWO_A/s400/bailey-real.JPG" /></a></p>
<p>Bailey Hall through a window in the physics building.</p>
<p><a href="http://1.bp.blogspot.com/_qY9DSyjj8Ro/TUtSCaBSQZI/AAAAAAAAB3w/YePJ9Jy-Qj0/s1600/bailey-lens.JPG"><img alt="image" src="http://1.bp.blogspot.com/_qY9DSyjj8Ro/TUtSCaBSQZI/AAAAAAAAB3w/YePJ9Jy-Qj0/s400/bailey-lens.JPG" /></a></p>
<p>Same scene as imaged with a lens using the window as an aperture.</p>
<p><a href="http://1.bp.blogspot.com/_qY9DSyjj8Ro/TUtS-MYX9RI/AAAAAAAAB34/VSLnyE4Gbs8/s1600/light%2Bfixture.jpg"><img alt="image" src="http://1.bp.blogspot.com/_qY9DSyjj8Ro/TUtS-MYX9RI/AAAAAAAAB34/VSLnyE4Gbs8/s320/light%2Bfixture.jpg" /></a></p>
<p>Image of a ceiling light with a smaller lens</p>
<p><a href="http://4.bp.blogspot.com/_qY9DSyjj8Ro/TUtTSBzC1LI/AAAAAAAAB4A/W6xD0OEykkk/s1600/psb.JPG"><img alt="image" src="http://4.bp.blogspot.com/_qY9DSyjj8Ro/TUtTSBzC1LI/AAAAAAAAB4A/W6xD0OEykkk/s320/psb.JPG" /></a>Physical
Sciences Building imaged on wood.</p>
<p><a href="http://3.bp.blogspot.com/_qY9DSyjj8Ro/TUtTqmUzTPI/AAAAAAAAB4I/rDMMFj5AsJE/s1600/small.jpg"><img alt="image" src="http://3.bp.blogspot.com/_qY9DSyjj8Ro/TUtTqmUzTPI/AAAAAAAAB4I/rDMMFj5AsJE/s320/small.jpg" /></a>An
extra small image (~1 cm on a side) from the lens that will be in next
post’s camera.</p>
<p>Together, these two elements – aperture and lens – make a very good
camera. The lens is able to collect a lot of light and focus it on the
film. The aperture can enhance clarity by reducing the number of paths
that light rays can take to your lens. It also provides higher order
corrections that come from the fact that the lens is probably not
perfect. That is, lenses are notorious for misbehaving around the edges,
introducing distortions in the whole image as well as shifts between the
colors. The aperture helps keep light from traveling through these edges.</p>
<p>Next time, a very simple camera.</p>The Magnetar Credit Card Swipe2011-01-30T17:22:00-05:00Corkytag:thephysicsvirtuosi.com,2011-01-30:posts/the-magnetar-credit-card-swipe.html<hr />
<p><a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TTumQ-3TLgI/AAAAAAAAAJE/BI366RfOTR0/s1600/nedflanderscredit.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TTumQ-3TLgI/AAAAAAAAAJE/BI366RfOTR0/s320/nedflanderscredit.jpg" /></a>
Ned Flanders’ credit card doesn’t satisfy the <a href="http://en.wikipedia.org/wiki/Luhn_algorithm">Luhn</a> checksum, but could probably still be erased by a magnetar.</p>
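The Luhn checksum mentioned in the caption is easy to check in code. Here is a generic sketch of the algorithm (the sample number is a standard test value, not Ned’s):

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, subtract 9
    from any result over 9, and require the total to be divisible by 10."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # a textbook-valid example number
```

The checksum catches any single-digit typo, which is why real card numbers all satisfy it.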
<hr />
<p>Hello, Internet! Today I’d like to talk about the Magnetar Credit Card
Swipe. Sounds like some sinister short on a derivatives deal, doesn’t
it? Well, no need to worry, we don’t deal with scary things like that
here. Instead, we are going to talk about a super-magnetized neutron
star speeding past Earth. A while ago I heard that a magnetar can erase
all the world’s credit cards from half the distance to the moon. I did a
little research and it seems like this is the go-to “fun fact” about
magnetars. Almost every time they are brought up in a popular science
article, their credit card-erasing prowess is sure to get a mention. So
let’s check it out! First things first, though. What the heck is a
magnetar [1]? Well, “magnetar” is just a spiffy name for a particular
flavor of neutron star. Now, neutron stars are already pretty extreme
objects. They’ve got a little more than the mass of the sun squished
down to the size of a big city with a central density over 10 trillion
times greater than lead.</p>
<hr />
<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TUXMdXcrmoI/AAAAAAAAAJI/K2MVmanh5g4/s1600/earthmag.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TUXMdXcrmoI/AAAAAAAAAJI/K2MVmanh5g4/s200/earthmag.png" /></a>
Dipolar magnetic field</p>
<hr />
<p>What makes magnetars truly name-worthy (and more extreme than Doritos
and Mountain Dew at the X-Games) is the fact that they have strong
magnetic fields. And when I say strong, I mean <em>really</em> strong. Taking
the magnetic field to be dipolar (like the Earth’s, see figure), the
field at the poles of a magnetar can be as high as 10^15 gauss [2]. For
comparison, the magnetic field of the Earth is about half a gauss and
the big magnets used in MRIs are about 10^4 gauss. So we’re talking
about some big fields! Looking up the specs for a typical credit card,
it looks like most take about 1000 gauss to erase. And now we can get
started. I always forget the exact form of the magnetic field of a
dipole, but Jackson doesn’t. He tells me that it is <mathjax>$$
\vec{B}\left(\vec{r}\right) =
\frac{3\vec{n}\left(\vec{n}\cdot\vec{m}\right) -
\vec{m}}{|\vec{r} |^3}$$</mathjax> where m is the magnetic dipole moment, r is
the displacement vector to where we are measuring the field, and n is a
unit vector that points to where we are measuring. To find the magnitude
(which is what we really care about), we just take the dot product of
the vector with itself and take the square root. Working this out we get
<mathjax>$$ B\left(\vec{r}\right) = \frac{|\vec{m}|}{r^3}{\left[
3\cos^2{\theta}+1\right]}^{1/2} $$</mathjax> where theta is the angle between
the magnetic moment vector and the direction vector n. But we are just
looking for a rough estimate here so let’s just set the cosine term to
one. Now we have
<mathjax>$$ B\left(r\right) = \frac{2|\vec{m}|}{r^3}$$</mathjax>
We can now use the above to our advantage. Since we are assuming we know
the polar magnetic field strength, we can rearrange and solve for the
magnetic moment. At the pole of a star the distance is just going to be
the radius, so for radius R and field strength Bp, we get
<mathjax>$$ |\vec{m}| = \frac{1}{2}B_{p}R^3 $$</mathjax>
Plugging this back in, we get a nice little formula for the strength of
the magnetic field in terms of the stellar radius and the polar field
strength
<mathjax>$$B(r) = B_{p}{\left(\frac{R}{r}\right)}^3 $$</mathjax>
And we are almost there! Rearranging now for r, we get <mathjax>$$r =
R{\left[\frac{B_p}{B(r)}\right]}^{1/3} $$</mathjax> Huzzah! Now we just need
to plug in some values. Well, we’ll use Bp = 10^15 gauss, B(r) = 1000
gauss (strong enough field to erase credit cards) and R = 10 km, which
gives… <mathjax>$$ r \approx 10km \times
{\left(\frac{10^{15}gauss}{10^{3}gauss}\right)}^{1/3} $$</mathjax> so <mathjax>$$ r
\approx \mbox{100,000 km} $$</mathjax> The moon is about 380,000 km away. So we
find that the magnetar will erase all credit cards up to a little over a
quarter the distance to the moon. Not bad! However, all this talk of
“half” and “quarter” is a bit misleading given that our best guesses
here will be order of magnitude. But, overall, we see that a magnetar at
roughly the Earth-Moon distance would have a good shot at erasing your
credit cards. Fun fact confirmed!
[1] For a really nice Scientific American article about magnetars, see
<a href="http://solomon.as.utexas.edu/~duncan/sciam.pdf">here</a>. For more
information than you could probably ever want, see
<a href="http://solomon.as.utexas.edu/~duncan/magnetar.html">here</a>.
[2] For those of you who prefer your magnetic field strengths given in
units named after someone played by David Bowie in a
<a href="http://en.wikipedia.org/wiki/The_Prestige_(film)#Cast">movie</a>, you may
note the conversion: 1 gauss = 10^-4 tesla.</p>Falling Ice2011-01-27T22:41:00-05:00Jessetag:thephysicsvirtuosi.com,2011-01-27:posts/falling-ice.html<p><a href="http://4.bp.blogspot.com/_SYZpxZOlcb0/TUINK4AQYVI/AAAAAAAAAEA/XWqewl-lS_o/s1600/shattered_windshield.jpg"><img alt="image" src="http://4.bp.blogspot.com/_SYZpxZOlcb0/TUINK4AQYVI/AAAAAAAAAEA/XWqewl-lS_o/s320/shattered_windshield.jpg" /></a>
It’s been a while since I posted anything, much to my shame. Hopefully
this post marks a change in that streak. Today I’m going to consider a
very practical application of all this physics stuff. One of my
housemates parks his car on the side of the house, with the front of the
car facing the house. Living in Ithaca, <span class="caps">NY</span>, the weather has been the
usual cold and snowy, like the rest of the northeast <span class="caps">USA</span> this winter.
Yet, early last week, we had some unusually warm weather, in the 30s
(fahrenheit). A few days later, my housemate went out to his car, and
discovered that falling chunks of ice had broken his windshield! Now, to
be clear here, I’m not talking about icicles, I’m talking about large,
block-like, chunks. My best guess is that during the warm days, snow on
the roof turned into chunks of ice, and slid off the roof. The question
I’m going to try to answer today is: How far from the house could these
chunks possibly land? Put another way, what I want to know is, how far
from the house would we have to park our cars to not risk broken
windshields from falling ice? <strong>The First Attempt</strong> We’ll start with the
simplest assumptions we can think of. First, we’ll assume that there is
no friction on the ice block as it slides down the roof. We’ll also
assume there’s no air resistance slowing down the ice in the air. The
maximum range will be given by a block of ice sliding from the top of
the roof. Taking the height of the peak of the roof as h, relative to
the edge of the roof, we can write down the magnitude of the velocity of
the ice chunk when it reaches the edge of the roof. We start by setting
the change in gravitational potential energy equal to the change in
kinetic energy. Recalling the form for both of these, <mathjax>$$PE=mgh$$</mathjax>
<mathjax>$$KE=\tfrac{1}{2}mv^2$$</mathjax> we can set these equal and solve for v, <mathjax>$$mgh
= \tfrac{1}{2}mv^2$$</mathjax> so <mathjax>$$|v|=\sqrt{2gh}$$</mathjax> This should be a familiar
expression to anyone who went through introductory mechanics. Now, given
that the roof is at an angle theta, we can write down the x (horizontal)
and y (vertical) components of velocity, <mathjax>$$v_x=|v|\cos\theta$$</mathjax>
<mathjax>$$v_y=-|v|\sin \theta$$</mathjax> where I’ve introduced a minus sign in the y
component of velocity to indicate that the ice chunk is falling. Now
that we have the velocity, we have to call upon some more kinematics. To
figure out how far the ice flies, we have to know how long it is in the
air. So we start by considering the vertical motion. The distance
traveled by an object with an initial velocity, v_0, and a constant
acceleration, a, is given by <mathjax>$$\Delta y=\tfrac{1}{2}at^2+v_0t$$</mathjax> In
our case, the distance traveled is the height of the first two floors of
my house. The acceleration is that of gravity, g, and the initial
velocity is the y component of velocity we found above. We’d like to
find the time it takes to travel this distance. We have to be a little
careful with our minus signs, by our convention the acceleration is in
the negative direction, and the change in position is negative. Working
all of that out, and plugging in our known values, we get
<mathjax>$$\tfrac{1}{2}gt^2+|v|\sin \theta t - l =0$$</mathjax> where l is the height
of the house. We can solve this for t, finding <mathjax>$$t=\frac{-|v|\sin
\theta + \sqrt{(|v|\sin \theta)^2+2gl}}{g}$$</mathjax> The horizontal
distance traveled is simply the horizontal velocity times the time,
<mathjax>$$x=\frac{|v|\cos\theta}{g}(-|v|\sin \theta + \sqrt{(|v|\sin
\theta)^2+2gl})$$</mathjax> a result that you may recognize as the ‘projectile
range formula’ (particularly if I brought the minus on the v sine theta
term into the sine, indicating that I’m firing at a negative angle, that
is, downwards). Having found that result, lets plug in our velocity, and
then some numbers. First,
<mathjax>$$x=\frac{\sqrt{2gh}\cos\theta}{g}\left(-\sqrt{2gh}\sin \theta +
\sqrt{2gh\sin^2 \theta+2gl}\right)$$</mathjax> Now, for some estimation. I’d say
that the height of the roof peak is 10 ft, the height of the first two
floors of the house is 20 ft, and the angle of the roof is 30 degrees.
Having made those estimates, now I just have to plug in all the numbers,
yielding <mathjax>$$x=5.2 m=17 ft$$</mathjax> That’s a very long range! Now, I didn’t see
any chunks of ice that were more than about 7 ft from the house. So we
have to question what went wrong with the above derivation. Well, maybe
nothing went wrong. I did calculate the <em>maximum</em> range. It’s quite
possible none of these ice chunks were from the very top of the roof.
Still, I’m inclined to think we may have overestimated. I’d say that our
initial velocity was too high. The ice, as it comes down the roof, will
have to push a bunch of snow out of the way. Even though it may not have
much friction with the roof, all that snow will slow it down, and reduce
the velocity with which it comes off. I’m just going to guess that about
half of the potential energy it had is lost to the snow and roof, as a
rough estimation. That would give a velocity <mathjax>$$|v| = \sqrt{gh}$$</mathjax> and a
maximum distance of <mathjax>$$x= 4m = 13ft$$</mathjax> which is closer to what I observed.
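As a sanity check on the arithmetic so far, the range formula with the estimates above can be evaluated directly. This is a sketch of my own; the `energy_kept` factor of 0.5 reproduces the “half the potential energy lost” guess.

```python
import math

g = 9.8  # m/s^2

def landing_distance(h, l, theta_deg, energy_kept=1.0):
    """Horizontal range of ice leaving a roof with drop height h (m) at
    angle theta below horizontal, falling a further wall height l (m).
    energy_kept scales the kinetic energy at launch."""
    v = math.sqrt(2 * g * h * energy_kept)
    vx = v * math.cos(math.radians(theta_deg))
    vy = v * math.sin(math.radians(theta_deg))          # downward component
    t = (-vy + math.sqrt(vy**2 + 2 * g * l)) / g        # time to fall l
    return vx * t

h, l, theta = 3.0, 6.1, 30.0   # 10 ft peak, 20 ft wall, 30-degree roof
print(f"no losses:       {landing_distance(h, l, theta):.1f} m")
print(f"half energy lost: {landing_distance(h, l, theta, 0.5):.1f} m")
```

This reproduces the ~5 m and ~4 m figures from the estimates above.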
<strong>The Second Attempt</strong> I’m still not completely satisfied with the
previous work, the answer doesn’t match my observation. As a wise man
(Einstein) once said, “make things as simple as possible, but no
simpler.” I may be guilty of making the problem too simple here. So I’m
going to add back in air resistance. In general, we physicists like to
avoid this because it usually means we can’t get nice, analytic
expressions as answers (like the one I have above). Instead, we usually
just have to calculate the result numerically. This isn’t the end of the
world, and often times it is actually a bit easier, but it’s not as
pretty looking. Still, to satisfy myself, and you, gentle reader, I will
step into that realm. We start by writing down the force on our block of
ice once it is falling. We’ve got gravity, and air resistance. Thus
<mathjax>$$\vec{F}=-mg\hat{y}-bv^2\hat{v}$$</mathjax> I’ve input a drag force that goes
as v^2, and is in the opposite direction of v. The ‘v direction’ is a
cop out, because I didn’t want to do the explicit direction, so lets fix
that. We’ll have x and y components, and we note that the magnitude of v
times the direction of v is the velocity vector. So,
<mathjax>$$\vec{F}=-mg\hat{y}-bvv_x\hat{x}-bvv_y\hat{y}$$</mathjax> Breaking this up
into components we get <mathjax>$$a_x=-\frac{bv}{m}v_x$$</mathjax>
<mathjax>$$a_y=-g-\frac{bv}{m}v_y$$</mathjax> This is as far as we can take this work
analytically. I’ll say a little more about the coefficient b. This
depends on the exact size and shape of the object, as well as the medium
it is moving through. I’m going to use <mathjax>$$b=.4\rho A$$</mathjax> because that’s
what we used for hay bales in my classical mechanics class years ago.
Here, rho is the density of air, and A is the surface area of the
object. I would estimate that the large face of the ice chunk is roughly
one square foot, or .1 m^2. I’d estimate the mass of the ice was around
2 kg. Now, for some magic. I’ve put all of this into mathematica, and
asked it to solve this numerically. First we have the plot for the full
initial velocity, <mathjax>$$v=\sqrt{2gh}$$</mathjax></p>
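The Mathematica code isn’t shown, but for the curious, here is a rough Python equivalent using a simple Euler step. The step size is my choice; the drag parameters just repeat the text’s estimates (b = 0.4ρA with ρ ≈ 1.2 kg/m³, A ≈ 0.1 m², m ≈ 2 kg).

```python
import math

g = 9.8                     # m/s^2
b = 0.4 * 1.2 * 0.1         # drag coefficient, b = .4*rho*A
m = 2.0                     # estimated mass of the ice chunk, kg

def fall_range(h, l, theta_deg, k, dt=1e-4):
    """Euler-integrate a_x = -k*speed*v_x, a_y = -g - k*speed*v_y
    (with k = b/m) from launch off the roof until the chunk has
    fallen a height l; returns the horizontal distance traveled."""
    v0 = math.sqrt(2 * g * h)
    vx = v0 * math.cos(math.radians(theta_deg))
    vy = -v0 * math.sin(math.radians(theta_deg))   # launched downward
    x = y = 0.0
    while y > -l:
        speed = math.hypot(vx, vy)
        x, y = x + vx * dt, y + vy * dt
        vx += -k * speed * vx * dt
        vy += (-g - k * speed * vy) * dt
    return x

print(f"no drag:   {fall_range(3.0, 6.1, 30.0, 0.0):.2f} m")
print(f"with drag: {fall_range(3.0, 6.1, 30.0, b / m):.2f} m")
```

Setting k = 0 recovers the analytic drag-free answer, which is a handy check on the integration.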
<hr />
<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TUILYLcLr3I/AAAAAAAAAD4/q_j8djBIPP0/s1600/Falling+Ice.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TUILYLcLr3I/AAAAAAAAAD4/q_j8djBIPP0/s320/Falling+Ice.jpg" /></a>
The solid line is with air resistance, the dashed line without air resistance. The plot shows vertical vs. horizontal distance, and the units are meters. (click to enlarge)</p>
<hr />
<p>Next we have the plot for the half initial velocity, <mathjax>$$v=\sqrt{gh}$$</mathjax></p>
<hr />
<p><a href="http://3.bp.blogspot.com/_SYZpxZOlcb0/TUIMIxdlzLI/AAAAAAAAAD8/G3P9KWUB3mw/s1600/Falling+Ice2.jpg"><img alt="image" src="http://3.bp.blogspot.com/_SYZpxZOlcb0/TUIMIxdlzLI/AAAAAAAAAD8/G3P9KWUB3mw/s320/Falling+Ice2.jpg" /></a>
The solid line is with air resistance, the dashed line without air resistance. The plot shows vertical vs. horizontal distance, and the units are meters. (click to enlarge)</p>
<hr />
<p>As you can see from the plots, in neither case does air resistance make a large
difference: about 0.2 m. <strong>The Third Round</strong> The final thought that
occurs to me is that perhaps I got the angle of the roof wrong. That
would be an easy mistake to make; humans are notoriously bad at estimating angles.
I’ll plot the results (with air resistance) for 15, 30, and 45 degree
angles and the lower velocity.</p>
<hr />
<p><a href="http://3.bp.blogspot.com/_SYZpxZOlcb0/TUI5Q4qB5DI/AAAAAAAAAEE/9ydcM8EXzxA/s1600/Falling+Ice3.jpg"><img alt="image" src="http://3.bp.blogspot.com/_SYZpxZOlcb0/TUI5Q4qB5DI/AAAAAAAAAEE/9ydcM8EXzxA/s320/Falling+Ice3.jpg" /></a>
The plot shows vertical vs. horizontal distance, and the units are meters. The red line is 15 degrees, the blue line is 30 degrees, and the black line is 45 degrees. (click to enlarge)</p>
<hr />
<p>In summary, the answer is unclear. What I really need to do is measure
the angle of my roof better, because there’s a significant angle
dependence. It’s also quite possible that we didn’t see a maximum
distance hit (thankfully!). In addition, air resistance doesn’t seem to
matter much in this particular problem, probably because the distance
the thing falls is short enough that terminal velocity is not reached.
Hopefully this gave you a bit of a taste of a more practical physics
problem, and how to approach air resistance (if you want to see the
mathematica code, let me know). The lesson here seems to be either don’t
park too close to roofs, or have insurance for your windshield!</p>Darts2011-01-13T16:29:00-05:00Alemitag:thephysicsvirtuosi.com,2011-01-13:posts/darts.html<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/TS9kYgpoSMI/AAAAAAAAAPc/2xDWZVjOGC8/s1600/dart_target.jpg"><img alt="image" src="http://4.bp.blogspot.com/_YOjDhtygcuA/TS9kYgpoSMI/AAAAAAAAAPc/2xDWZVjOGC8/s320/dart_target.jpg" /></a></p>
<p>Over break I went out with a buddy of mine and played some darts. This
got me to thinking: where exactly should someone aim in order to get the
largest expected number of points? When you are playing a
game like <a href="http://en.wikipedia.org/wiki/Cricket_(darts)">Cricket</a>, the
answer is clear, since you are trying to hit particular
numbers on the board. But in the most popular darts game,
<a href="http://en.wikipedia.org/wiki/Darts#Playing_darts">501</a>, for most of
the game you are just trying to accumulate points. So, where should you
shoot on the board to score the most points? Something that I didn’t
quite realize before I started this adventure is that while the double
bullseye in the center is worth 50 points, the triple 20 is worth more:
60 points. For the uninitiated, in games like 501 you score points based
on where the dart lands. The center is the bullseye, where the innermost
circle is worth 50 and the ring around it is worth 25. Outside of that,
you score according to which pie slice the dart lands in, with the
points given by the number on the slice. The thin ring around the outside
doubles the points, and the thin ring at about half the board
radius triples them. So perhaps the triple 20 is where you
should be aiming all the time. But you’ll notice that to the left and
right of the 20 section sit the low numbers 1 and 5, so you might suspect
that if you can’t throw all that accurately, you’ll pay a price
for shooting at the triple 20.</p>
<h3>The Model</h3>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9mjSvWVMI/AAAAAAAAAPk/Q3dKlgTH47M/s1600/dartsdistsig1p0.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9mjSvWVMI/AAAAAAAAAPk/Q3dKlgTH47M/s320/dartsdistsig1p0.png" /></a></p>
<p>In order to answer a question like that, we need to develop a model for
dart throwing. In this case, I thought it was safe to assume that dart
throws are <a href="http://en.wikipedia.org/wiki/Normal_distribution">normally
distributed</a> about the
place you aim, with some sigma determined by your skill level. To the
left is an example of what normally-distributed dart throws look like
when the aim is at the center of the board, with a 1 inch standard deviation in
the throws. The dashed line marks the one inch ring, giving a sense of how
scattered the darts are at one standard deviation.</p>
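<p>To make the model concrete, here is a minimal sketch of it in Python: a scoring function for a standard board (dimensions in mm; the ring radii are regulation values I am assuming, not numbers from the post) plus a Monte Carlo estimate of the expected score when throws scatter normally about an aim point.</p>

```python
import numpy as np

# Sector numbers clockwise from the top of the board.
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def score(x, y):
    """Score of a dart landing at (x, y), measured in mm from the board
    center, with the 20 straight up (standard ring radii assumed)."""
    r = np.hypot(x, y)
    if r <= 6.35:
        return 50                          # double bullseye
    if r <= 15.9:
        return 25                          # single bullseye
    if r > 170.0:
        return 0                           # off the board
    angle = np.degrees(np.arctan2(x, y))   # clockwise angle from straight up
    sector = SECTORS[int(((angle + 9.0) % 360.0) // 18.0)]
    if 99.0 <= r <= 107.0:
        return 3 * sector                  # triple ring
    if 162.0 <= r <= 170.0:
        return 2 * sector                  # double ring
    return sector

def expected_score(aim_x, aim_y, sigma_mm, n=20_000, seed=0):
    """Monte Carlo: throws are normally distributed about the aim point."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(aim_x, sigma_mm, n)
    ys = rng.normal(aim_y, sigma_mm, n)
    return sum(score(x, y) for x, y in zip(xs, ys)) / n
```

For example, <code>expected_score(0, 103, 25.4)</code> estimates the average score when aiming at the triple 20 with a one inch (25.4 mm) sigma.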
<h3>Results</h3>
<p>So off I went: I drew a dart board (to regulation) in Gimp, colored
each section in gray scale according to its point value, and
used Python to perform all of the necessary computations (relying
primarily on the ndimage package in scipy). The result can be seen below.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9j9ivNItI/AAAAAAAAAPU/QtuSM7MZr48/s1600/dartscircleplusdot.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9j9ivNItI/AAAAAAAAAPU/QtuSM7MZr48/s400/dartscircleplusdot.png" /></a></p>
<p>This image shows the optimal position on the board to aim for as a
function of how good of a player you are. The rings denote the sigmas,
and the dots the center point to aim for. The colorscale gives a
quantitative measure of the sigma, in inches. As you can see, the best
players should (and, judging from YouTube, do) aim for the triple 20,
since they are good enough to hit it most of the time. But once your
throws have about a 1 inch sigma, you should be aiming for the triple 19
in the bottom left. As you can see on the numbered board at the top, the
triple 19 is buffered on either side by the 3 and the 7, which are both
2 points above the 20 section’s neighbors (1 and 5). So as you might
expect if you have a reasonable chance of hitting the sections to either
side, the triple 19 offers a higher expected score in the long run. The
other limit we can understand is the limit of really bad throws. If you
have a nontrivial chance of missing the board altogether, then obviously
you should just aim for the center of the board, in the hopes that you
at least hit the thing. In between, the track that the optimal aiming
point takes is interesting: it tends toward the center (as we should
expect), but follows a curvy sort of route along the bottom left
quadrant of the board. Neat.</p>
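<p>The computation behind this figure has a compact form: the expected score at every aim point is just the score image convolved with the Gaussian throw distribution. Here is a sketch of that step with scipy.ndimage; for brevity it uses a toy, radially symmetric score map rather than the regulation board image drawn in Gimp.</p>

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for the grayscale board image: 1 px = 1 mm,
# board center at pixel (200, 200).
size = 401
yy, xx = np.mgrid[:size, :size]
r = np.hypot(xx - 200, yy - 200)
scores = np.where(r <= 170, 20.0, 0.0)   # a flat 20 points on the board...
scores[(99 <= r) & (r <= 107)] *= 3      # ...tripled in the triple ring

def expected_score_map(scores, sigma_px):
    """Expected score as a function of aim point: the score map blurred
    by the (Gaussian) distribution of throws."""
    return ndimage.gaussian_filter(scores, sigma=sigma_px)

# Best aim point for a one inch (25.4 mm) sigma thrower:
heat = expected_score_map(scores, 25.4)
best = np.unravel_index(np.argmax(heat), heat.shape)
```

On the real board image, the argmax of this heat map as sigma varies traces exactly the track of optimal aim points shown above.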
<h3>Heat Maps</h3>
<p>In order to get a little better of a feel for why the track takes the
path it does, I decided to look at the heat maps for the expected score
at every location on the board for a set of given sigmas. So, in the
images below, the colors above the board indicate the relative score
expected if you aimed at that point.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9piZleQaI/AAAAAAAAAPs/xKR1XVK4oM0/s1600/darts-sig0p25flair.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9piZleQaI/AAAAAAAAAPs/xKR1XVK4oM0/s200/darts-sig0p25flair.png" /></a></p>
<p>Above is for a quarter inch sigma throw [Click to zoom]. Notice that the
triple 20 is the place to hit, as expected.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9pwXSiutI/AAAAAAAAAP0/UW-U3zETxkU/s1600/darts-sig0p50flair.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9pwXSiutI/AAAAAAAAAP0/UW-U3zETxkU/s200/darts-sig0p50flair.png" /></a></p>
<p>Above is a half inch sigma throw. The triple 20 is still in the lead,
but not by a whole lot. With aim this good, the triple spots
still stand out as distinct features.</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TS9qDpBCjOI/AAAAAAAAAP8/nnP8us-V3yU/s1600/darts-sig1p00flair.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TS9qDpBCjOI/AAAAAAAAAP8/nnP8us-V3yU/s200/darts-sig1p00flair.png" /></a></p>
<p>Above is a 1 inch sigma throw. Now the lower left hand quadrant has
taken over as the optimal place to throw. Notice that both the triple 16
and triple 19 make decent targets. The triple 14 also makes a showing,
due probably to its large neighbors.</p>
<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/TS9qf0GXLiI/AAAAAAAAAQE/64Jua-PBMtE/s1600/darts-sig1p50flair.png"><img alt="image" src="http://4.bp.blogspot.com/_YOjDhtygcuA/TS9qf0GXLiI/AAAAAAAAAQE/64Jua-PBMtE/s200/darts-sig1p50flair.png" /></a></p>
<p>Above is a 1.5” sigma. The triple 20 is nearly gone as a place of
interest on the board, since we are no longer good enough to really
capitalize on it. The lower left hand portion of the board is the place
to be. We’ve lost any distinct features of the triple
spots, and are now just looking at quadrants of the board as a whole.
Our aim also tends a bit toward the center, as we are now in some danger
of missing the board entirely.</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TS9rAU2TGsI/AAAAAAAAAQM/uqIvzG_jqoE/s1600/darts-sig2p00flair.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TS9rAU2TGsI/AAAAAAAAAQM/uqIvzG_jqoE/s200/darts-sig2p00flair.png" /></a></p>
<p>At 2” sigma, we can really only hope to aim left-of-center.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9rLG9LRKI/AAAAAAAAAQU/1cz6YZer9Bs/s1600/darts-sig2p50flair.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TS9rLG9LRKI/AAAAAAAAAQU/1cz6YZer9Bs/s200/darts-sig2p50flair.png" /></a></p>
<p>At 2.5” sigma, we really just want to hit the board.</p>
<h3>Lesson</h3>
<p>So, now I know, personally, I really just ought to aim just left of center.</p>Holiday Hidden Message Revealed2011-01-03T23:52:00-05:00Corkytag:thephysicsvirtuosi.com,2011-01-03:posts/holiday-hidden-message-revealed.html<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TSDGkyEUkPI/AAAAAAAAAIc/4rTQYB9jyNE/s1600/longfellow.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TSDGkyEUkPI/AAAAAAAAAIc/4rTQYB9jyNE/s320/longfellow.jpg" /></a>
Here we present the solution to the Holiday Code (original post
<a href="http://thevirtuosi.blogspot.com/2010/12/holiday-hidden-message.html">here</a>).
The content of the message is from the creepy looking gentleman to the
left, Henry Wadsworth Longfellow. He is perhaps most famous for writing
the poem, “Paul Revere’s Ride.” I have taken another one of his popular
poems, “<a href="http://www.potw.org/archive/potw118.html">Christmas Bells</a>,”
and hidden its first verse in a huge mess of random letters. The
message: “I heard the bells on Christmas Day Their old, familiar carols
play, And wild and sweet The words repeat Of peace on earth, good-will
to men!” Sounds pretty pleasant at the start. But it was written at the
height of the Civil War and it gets pretty heavy towards the end.
Longfellow was an ardent abolitionist and most of his poems contain
allusions to the plight of slaves. He was also close friends with
Senator Charles Sumner, whose own fiery oratory and opposition to
slavery famously put him on the <a href="http://en.wikipedia.org/wiki/Charles_Sumner#Antebellum_career_and_attack_by_Preston_Brooks">wrong
side</a>
of a Southern cane. So how do we go about divining the message above
from the mess of the original code? Well, the message is explicitly in
there, it’s just hidden by a bunch of randomly generated letters. To get
the message out, one needs to know how the junk is distributed. To do
that we use the first hint. The first hint was a list of pie fillings.
So we will need to use pi to find out how the junk is filling the
message:
ybeinhhhzcezavdqfnrkutxyvqlzdwctagqdzbhderikeazrbcgjhwentgyqjnylvonrzobvclzeskypvscejbpftuzoladngzckwuhwcvdreyxrsmlwivrauuxssotmhakglmtawuahzdslwudvouxcasjaqzeynatsvzizxlhlxzbcrsziersohkirguobghmobedlwjwunozwdgptofdatcmgspjmrmprxepckiulxwiewniqgegzlzbpauntrzqvcsuscacpndngxjxyanvrrfqthhisomgnqxlsspnrufgljlhcwcywavxyaibvndjyonnfuxstkydsqpawrhpbjbwpeixkgblwcvddcrcofaipfdkkkgdnjkdrbaswfhqdypoevwrbezwtegtwnobuhtqnsyhethvoxhwcookyhahvaqrzquyoiduusrupmeqdefeypsyneoecpvvlatexnweorsufzhsaphcenptwpoywhuxqlrfprnaeusrqaqxdqrlqzcsnejaozjohxpnfccsemuavrltvafxoujhgjebvyyofehogomooljtoshbrdpeknoxdwwvrislevhplxyrzcfiotokrvjqlvwmvkgfdfedhqdin
There are 3 junk letters before the first message letter, 1 before the
next, then 4, 1, 5, … So the number of junk letters before a message
letter is given to you by pi. This
<a href="http://www.eveandersson.com/pi/digits/1000000">site</a> may prove useful….</p>Benford’s Law2010-12-31T14:42:00-05:00Corkytag:thephysicsvirtuosi.com,2010-12-31:posts/benford-s-law.html<p>Given a large set of data (bank accounts, river lengths, populations,
etc) what is the probability that the first non-zero digit is a one? My
first thought was that it would be 1/9. There are nine non-zero numbers
to choose from and they should be uniformly distributed, right? Turns
out that for almost all data sets naturally collected, this is not the
case. In most cases, one occurs as the first digit most frequently, then
two, then three, etc. That this seemingly paradoxical result should be
the case is the essence of Benford’s Law. Benford’s Law [1] states that
for most real-life lists of data, the first significant digit in the
data is distributed in a specific way, namely: <mathjax>$$ P(d) =
\mbox{log}_{10}\left(1 + \frac{1}{d}\right) $$</mathjax> The probabilities
for leading digits are roughly P(1) = 0.30, P(2) = 0.18, P(3) = 0.12,
P(4) = 0.10, P(5) = 0.08, P(6) = 0.07, P(7) = 0.06, P(8) = 0.05, P(9) =
0.04. So we would expect the first significant digit to be a one almost
30% of the time! But where would such a distribution come from? Well, it
turns out that it comes from a distribution that is logarithmically
uniform. We can map the interval [1,10) to the interval [0,1) by just
taking a logarithm (base ten). These logarithms are then distributed
uniformly on the interval [0,1). We can now get some grasp for why one
should occur as the first digit more often in a uniform log
distribution. In the figure below, I have plotted 1-10 on a logarithm
scale. In a uniform log distribution, a given point is equally likely to
be found anywhere on the line. So the probability of getting any
particular first digit is just its length along that line. Clearly, the
intervals get smaller as the numbers get bigger.
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TR4s91iaKKI/AAAAAAAAAIU/uxYE4eqknCY/s1600/logscale.png"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TR4s91iaKKI/AAAAAAAAAIU/uxYE4eqknCY/s400/logscale.png" /></a>
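This picture is easy to verify numerically. Below is a small sketch (my own illustration, not code from the post): draw numbers whose base-ten logarithms are uniform on [0, 1), tally first digits, and compare against the segment lengths log10(d+1) &minus; log10(d):

```python
import math
import random

def benford_p(d):
    """Benford's Law: P(d) = log10(1 + 1/d), the length of the segment
    [log10(d), log10(d+1)) on the unit line above."""
    return math.log10(1 + 1 / d)

def first_digit(x):
    """First significant (non-zero) digit of a positive number."""
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

# Numbers whose base-10 logs are uniform on [0, 1), i.e. log-uniform on [1, 10)
random.seed(0)
sample = [10 ** random.random() for _ in range(100_000)]
fractions = {d: 0.0 for d in range(1, 10)}
for value in sample:
    fractions[first_digit(value)] += 1 / len(sample)
```

With 100,000 draws the observed fraction of leading ones lands within a percent or so of log10(2) &asymp; 0.301.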
But we can quantify this, too. For a first digit on the interval [1,10),
the probability that the first digit is <em>d</em> is given by:
<mathjax>$$ P(d) = \frac{\mbox{log}_{10}(d+1)
-\mbox{log}_{10}(d)}{\mbox{log}_{10}(10) -\mbox{log}_{10}(1)} $$</mathjax>
which is just <mathjax>$$ P(d) =\mbox{log}_{10}(d+1) -\mbox{log}_{10}(d) $$</mathjax>
or <mathjax>$$ P(d) = \mbox{log}_{10}\left( 1 + \frac{1}{d} \right) $$</mathjax> which
is the distribution of Benford’s Law. So how well do different data sets
follow Benford’s Law? I decided to test it out on a couple easily
available data sets: pulsar periods, <span class="caps">U.S.</span> city populations, <span class="caps">U.S.</span> county
sizes and masses of plant genomes. Let’s start first with pulsar
periods. I took 1875 pulsar periods from the <span class="caps">ATNF</span> Pulsar Database (found
<a href="http://www.atnf.csiro.au/research/pulsar/psrcat/">here</a>). The results
are plotted below. The bars represent the fraction of numbers that start
with a given digit and the red dots are the fractions predicted by
Benford’s Law.
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TRpObM6LxrI/AAAAAAAAAH8/tz9WQ98H258/s1600/benford_pulsar.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TRpObM6LxrI/AAAAAAAAAH8/tz9WQ98H258/s400/benford_pulsar.png" /></a>
From this plot, we see that the pulsar period data shows the general
trend of Benford’s Law, but not exactly. Now let’s try <span class="caps">U.S.</span> city
populations. This data was taken from the <span class="caps">U.S.</span> census bureau from the
2009 census and contains population data for over 81,000 <span class="caps">U.S.</span> cities. We
see from the chart below that there is a near exact correspondence
between the observed first-digit distribution and Benford’s Law.
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TRpQhzrSFmI/AAAAAAAAAIA/ZP3YTbWiiM4/s1600/benford_uscities09.png"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TRpQhzrSFmI/AAAAAAAAAIA/ZP3YTbWiiM4/s400/benford_uscities09.png" /></a>
Also from the <span class="caps">U.S.</span> census bureau, I got the data for the land area of
over 3000 <span class="caps">U.S.</span> counties. These data also conform fairly well to
Benford’s Law.
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TRpSE9pHC1I/AAAAAAAAAII/dqG382dqoCw/s1600/benford_land.png"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TRpSE9pHC1I/AAAAAAAAAII/dqG382dqoCw/s400/benford_land.png" /></a>
Finally, I found
<a href="http://data.kew.org/cvalues/CvalServlet?querytype=1">this</a> neat website
that catalogs the genome masses of over 2000 different species of
plants. I’m not totally sure <em>why</em> they do this, but it provided a ton
of easy-to-access data, so why not?
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TRpR82I2X1I/AAAAAAAAAIE/XdFQozbC7eY/s1600/benford_plant.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TRpR82I2X1I/AAAAAAAAAIE/XdFQozbC7eY/s400/benford_plant.png" /></a>
Neat, so we see that wide variety of natural data follow Benford’s Law
(some more examples
<a href="http://mathworld.wolfram.com/BenfordsLaw.html">here</a>). But why should
they? Well, as far as I have gathered, there are a few reasons for this.
The first two come from a paper published by Jeff Boyle [2]. Boyle makes
(and proves) two claims about this distribution. First, he claims that
“the log distribution [Benford’s Law] is the limiting distribution when
random variables are repeatedly multiplied, divided, or raised to
integer powers.” Second, he claims that once such a distribution is
achieved, it “persists under all further multiplications, divisions and
raising to integer powers.” Since most data we accumulate (scientific,
financial, gambling,…) is the result of many mathematical operations,
we would expect that they would tend towards the logarithmic
distribution as described by Boyle. Another reason for why natural data
should fit Benford’s Law is given by Roger Pinkham (in <a href="http://www.williams.edu/go/math/sjmiller/public_html/BrownClasses/197/benford/Pinkham_FirstDigit.pdf">this
paper</a>).
Pinkham proves that “the only distribution for the first significant
digits which is invariant under scale change of the underlying
distribution” is Benford’s Law. This means that if we have some data,
say the lengths of rivers in feet, it will have some distribution in the
first digit. If we require that this distribution remain the same under
unit conversion (to meters, yards, cubits, … ), the only distribution
that satisfies this requirement is the uniform logarithmic
distribution of Benford’s Law. This “scale-invariant” rationale for this
first digit law is probably the most important when it comes to data
that we actually measure. If we find some distribution for the first
digit, we would like it to be the same no matter what units we have
used. But this should also be really easy to test. The county size data
used above was given in square miles, so let’s try some new units.
First, we can try square kilometers.
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TR09Vq1jCAI/AAAAAAAAAIM/1Yz5gp0-7CY/s1600/benford_landkm.png"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TR09Vq1jCAI/AAAAAAAAAIM/1Yz5gp0-7CY/s400/benford_landkm.png" /></a>
Slightly different than square miles, but still a very good fit. Now how
about square furlongs?
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TR093IwIt8I/AAAAAAAAAIQ/ern61I_MJQ0/s1600/benford_landfurlong.png"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TR093IwIt8I/AAAAAAAAAIQ/ern61I_MJQ0/s400/benford_landfurlong.png" /></a>
Neat! Seems like the distribution holds true regardless of the units we
have used. So it seems like a wide range of data satisfy Benford’s Law.
But is this useful in any way or is it just a statistical curiosity?
Well, it’s mainly just a curiosity. But people have found some pretty
neat applications. One field in which it has found use is <a href="http://en.wikipedia.org/wiki/Forensic_accounting">Forensic
Accounting</a>, which I
can only assume is a totally rad bunch of accountants that dramatically
remove sunglasses as they go over tax returns. Since certain types of
financial data (for example, see
<a href="http://www.uic.edu/classes/actg/actg593/Readings/Auditing/The-Effective-Use-Of-Benford's-Law-To-Assist-In-Detecting-Fraud-In-Accounting-Data.pdf">here</a>)
should follow Benford’s Law, inconsistencies in financial returns can be
found if the data is faked or manipulated in any way. Moral of the
story: If you’re going to cook the books, remember Benford! [1]
Benford’s Law, in the great tradition of <a href="http://en.wikipedia.org/wiki/Stigler's_law_of_eponymy">Stigler’s
Law,</a> was
discovered by Simon Newcomb. [2] Paper can be found
<a href="http://www.jstor.org/pss/2975136">here</a>. Unfortunately, this is only a
preview as the full version isn’t publicly available without a library
license. The two points that I use from this paper are at least stated
in this preview.</p>Holiday Hidden Message2010-12-24T13:05:00-05:00Corkytag:thephysicsvirtuosi.com,2010-12-24:posts/holiday-hidden-message.html<hr />
<p><a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TRTgRvmyQYI/AAAAAAAAAHw/Hg5dDp4N22I/s1600/robot-santa.jpg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TRTgRvmyQYI/AAAAAAAAAHw/Hg5dDp4N22I/s200/robot-santa.jpg" /></a>
Evil gun-wielding code-breaking robo-santa from Futurama</p>
<hr />
<p>Greetings and happy holidays! Everyone has gone home for the break, so
we will be taking a break from the grossly misnamed “Problem of the
Week” for a while. Instead, here’s a “Christmas Code” I made up for a
friend. Figure it out and win the respect of strangers on the Internet!
Largely unhelpful hints after the break.
ybeinhhhzcezavdqfnrkutxyvqlzdwctagqdzbhderikeazrbcgjhwentgyqjnylvonrzobvclzeskypvscejbpftuzoladngzckwuhwcvdreyxrsmlwivrauuxssotmhakglmtawuahzdslwudvouxcasjaqzeynatsvzizxlhlxzbcrsziersohkirguobghmobedlwjwunozwdgptofdatcmgspjmrmprxepckiulxwiewniqgegzlzbpauntrzqvcsuscacpndngxjxyanvrrfqthhisomgnqxlsspnrufgljlhcwcywavxyaibvndjyonnfuxstkydsqpawrhpbjbwpeixkgblwcvddcrcofaipfdkkkgdnjkdrbaswfhqdypoevwrbezwtegtwnobuhtqnsyhethvoxhwcookyhahvaqrzquyoiduusrupmeqdefeypsyneoecpvvlatexnweorsufzhsaphcenptwpoywhuxqlrfprnaeusrqaqxdqrlqzcsnejaozjohxpnfccsemuavrltvafxoujhgjebvyyofehogomooljtoshbrdpeknoxdwwvrislevhplxyrzcfiotokrvjqlvwmvkgfdfedhqdin
Hint 1: Apple, pumpkin, pecan, …
Hint 2: Paul Revere</p>A Buffoon’s Toothpicks2010-12-18T17:33:00-05:00Corkytag:thephysicsvirtuosi.com,2010-12-18:posts/a-buffoon-s-toothpicks.html<hr />
<p><a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TQ0mf2YO5II/AAAAAAAAAHo/WMor9trnTnM/s1600/P1010471.png"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TQ0mf2YO5II/AAAAAAAAAHo/WMor9trnTnM/s320/P1010471.png" /></a>
Figure 1: Two of the thousands of toothpicks on my floor</p>
<hr />
<p>You’re sitting at a bar, bored out of your mind. You’ve got an unlimited
supply of pretzel rods and a lot of time to kill. The floor is made of
thin wooden planks. How can you calculate pi? This is how the problem of
Buffon’s needle was first presented to me. Stated more formally the
problem is this: given a needle of length <em>l</em> and a floor of parallel
lines separated by a distance <em>d</em>, what is the probability of a randomly
dropped needle crossing a line? Working this all out (see a derivation
<a href="http://en.wikipedia.org/wiki/Buffon's_needle">here</a>, for example), we
find that the probability a needle crosses a line is <mathjax>$$ P =
\frac{2l}{d\pi} $$</mathjax> So now we have a nice way of experimentally coming
up with a value for pi. Simply by tossing a bunch of needles of length
<em>l</em> on a striped surface with lines separated by a distance <em>d</em> and
counting the total number of times a needle crosses a line and the total
number of throws, we can approximate the probability (and thus, pi). I
say “approximate” because it will only be true in the limit of an
infinite number of throws. Anyway, we have that <mathjax>$$ \frac{\mbox{Number
of crosses}}{\mbox{Number of throws}} \approx P = \frac{2l}{\pi d}
$$</mathjax> so, rearranging a bit, we have that <mathjax>$$ \pi \approx \left(
\frac{2l}{d} \right) \left(\frac{\mbox{Number of
throws}}{\mbox{Number of crosses}} \right) $$</mathjax> Now we have something
that we can go about measuring. I am going to define the following value
to be the experimental quantity we aim to measure: <mathjax>$$ \tilde{\pi} =
\left( \frac{2l}{d} \right) \left(\frac{\mbox{Number of
throws}}{\mbox{Number of crosses}} \right) $$</mathjax> So now that we know what
we are measuring, let’s get to it! Since I’m not allowed to use needles
in my home experiments anymore, I decided to use toothpicks. For my
striped surface, I just used the wooden floor in our house (see Figure
1). The toothpick length was almost exactly the same as the distance
between lines on the floor, so we see that the <em>l</em> and <em>d</em> terms
cancel in our expression above. To make the measurements, I threw ten
toothpicks at a time onto the floor and counted how many crossed the
lines. I chose ten because it seemed like a nice number. It was small
enough that I shouldn’t expect too much clumping of the toothpicks (and
unwanted correlations in the data), but large enough that I didn’t have
to drop and pick up a single toothpick a thousand times. I threw the
groups of ten toothpicks 100 times and tallied the results. Thus, I have
1000 throws of a single needle. It took the entirety the movie
<a href="http://en.wikipedia.org/wiki/Undercover_Brother">Undercover Brother</a> to
throw and pick up all those toothpicks, but when all was said and done I
found that out of 1000 thrown toothpicks, 618 crossed the line. Plugging
this back into our equation above (and remembering <em>l</em> = <em>d</em>), we get <mathjax>$$
\tilde{\pi}=\left(\frac{2l}{d}\right) \left(\frac{\mbox{Number
of throws}}{\mbox{Number of crosses}} \right)=2
\left(\frac{1000}{618}\right)=3.23$$</mathjax> Well that’s not too far off I
guess. But it’s certainly not the pi that I know and love. What went
wrong? As I mentioned before, since I am only doing a finite number of
runs here I am not finding the exact probability. So is there any way to
gauge our uncertainty? Sure. Since we are doing a counting experiment
with a lot of events, we can approximate our error using Poisson
statistics. For a Poisson distribution, the standard deviation is just
the square root of the number of events (in this case, crosses). So we
have that our total number of crosses is <mathjax>$$ \mbox{Number of crosses} =
618 \pm \sqrt{618} = 618 \pm 24.9 $$</mathjax> So now if we want to find the
uncertainty in our final measurement, we’ll have to propagate the error
through. This gives us a final value of <mathjax>$$ \tilde{\pi} = 3.23 \pm
0.13 $$</mathjax> and we see that the exact value of pi falls within there. We can
see that this value gets better and better if we plot our value of pi as
a function of the number of throws. Figure 2 shows the measured value of
pi (with error bars) over a wide range of throw numbers. The actual
value of pi is plotted as a green line.</p>
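<p>The whole experiment is also easy to simulate, no pretzel rods required. A minimal sketch (my own, matching the l = d setup above): drop random needles, count crossings, and attach the same Poisson error bar used in the text.</p>

```python
import math
import random

def buffon(n_throws, l=1.0, d=1.0, seed=1):
    """Throw n needles of length l on lines spaced d apart (l <= d);
    return (pi estimate, number of crossings, Poisson uncertainty)."""
    rng = random.Random(seed)
    crosses = 0
    for _ in range(n_throws):
        y = rng.uniform(0.0, d / 2.0)            # needle center to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)  # needle angle to the lines
        if y <= (l / 2.0) * math.sin(theta):
            crosses += 1
    pi_est = 2.0 * l * n_throws / (d * crosses)
    return pi_est, crosses, pi_est / math.sqrt(crosses)

pi_est, crosses, sigma = buffon(100_000)
```

With 1000 throws the error bar comes out around 0.13, just as found with the toothpicks; with 100,000 throws it shrinks by a factor of ten.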
<hr />
<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TQ0rGuJSDQI/AAAAAAAAAHs/Sllf1A-h1tw/s1600/buffonpi.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TQ0rGuJSDQI/AAAAAAAAAHs/Sllf1A-h1tw/s400/buffonpi.png" /></a>
Figure 2: Measured pi value in blue, actual in green, click for bigger version</p>
<hr />
<p>So we see that the more toothpicks we drop, the closer and closer we get
to pi. Hot dog! Certainly an evening well spent.</p>Problem of the Week #3: Solution2010-12-15T02:01:00-05:00Corkytag:thephysicsvirtuosi.com,2010-12-15:posts/problem-of-the-week-3-solution.html<hr />
<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TPnT0ucqSUI/AAAAAAAAAHk/kOFxbYCSx1g/s1600/koala.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TPnT0ucqSUI/AAAAAAAAAHk/kOFxbYCSx1g/s200/koala.jpg" /></a>
Cold-Blooded Killer</p>
<hr />
<p>Hello all and welcome to another roundup of Problem of the Week. If
the time intervals don’t seem to be adding up, just remember that “week”
is an illusion here at Virtuosi headquarters. “Problem of the Week,”
doubly so. But enough with the excuses, let’s see if Mr. Bond lives to
Die Another Day. The situation presented in the problem was one of a
glass of water filled all the way to the top with a single ice cube in
it. The goal is to see if any water falls to the floor as the ice cube
melts. The simplest analysis would be to just consider the displacement
of water by the ice cube. By Archimedes’ principle we know that the
buoyant force on the ice cube is equal to the weight of the water
displaced by the ice cube. But since the ice cube is floating, we know
that the buoyant force is equal to the weight of the ice cube.
Therefore, the ice cube just displaces its own weight in water. So as
the ice cube melts, the water level stays the same and no water is
spilled over. Stated a bit more explicitly, we have that the buoyant
force is the weight of the displaced water: <mathjax>$$ F_b = m_{water}g =
{\rho}_{water}V_{water}g $$</mathjax> where we have just expressed the mass of
the water as the density of water times the volume of the water
displaced. Likewise, the weight of the ice cube is <mathjax>$$ W_{ice} =
m_{ice}g = {\rho}_{ice}V_{ice}g $$</mathjax> But since the ice cube is
floating, the buoyant force is equal to the weight. Setting the above
equations equal to each other and rearranging a bit, we can solve for
the volume of the water displaced by the ice cube: <mathjax>$$ V_{water} =
\frac{{\rho}_{ice}V_{ice}}{{\rho}_{water}} $$</mathjax> Great, now we have
the volume of water that the ice cube displaced, so now we just need to
see what volume of water the ice cube will add once it melts. Since we
know that the mass of the ice cube must be the same as the mass of the
water left after it melts, we can write <mathjax>$$ {\rho}_{ice}V_{ice} =
{\rho}_{water}V_{water} $$</mathjax> which gives that the total volume of the
water is just
<mathjax>$$ V_{water} = \frac{{\rho}_{ice}V_{ice}}{{\rho}_{water}} $$</mathjax>
which is exactly the same volume we found before. Therefore, when the
ice cube melts it fills up exactly the same amount of volume as the
water displaced to hold up the ice cube, so the water level remains the
same and James Bond lives!
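The bookkeeping above is short enough to check numerically. In this sketch the densities and cube volume are typical textbook values I have assumed, not numbers from the problem:

```python
RHO_WATER = 1000.0   # kg/m^3 (typical value, assumed)
RHO_ICE = 917.0      # kg/m^3 (typical value, assumed)
V_ICE = 3.0e-5       # m^3, roughly a 30 mL ice cube (assumed)

# Archimedes: the floating cube displaces its own weight in water,
#   rho_water * V_displaced * g = rho_ice * V_ice * g
V_displaced = RHO_ICE * V_ICE / RHO_WATER

# Melting: mass is conserved, rho_ice * V_ice = rho_water * V_melted
V_melted = RHO_ICE * V_ICE / RHO_WATER

# The two volumes are identical, so the water level does not change.
```

The cancellation holds for any densities, which is exactly why no water spills.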
Or does he? Well, he sure does if the system is entirely described by
the analysis above. But we need to be exact here since a single drop
would set off the Koala death trap. We have considered the biggest
contributions to the change of volume of the water (and easiest to
calculate!), but we have ignored a large number of much much smaller
effects on the volume. Typically these effects would be swept aside and
callously labeled “negligible,” but since even the smallest change of
volume could mean life or death for our hero, they must be accounted for
to give a final “exact” answer.
Unfortunately, “exact” isn’t something science can do very well. We can
only do as good as the measurements we make allow. And since I gave you
practically no information, this was very tough. However, one could
consider the magnitude of the effects given typical values and see which
side wins out. But if two competing effects give similar values under
reasonable conditions, it becomes very difficult to give an answer.
The last two “Problems of the Week” were well posed math problems, each
with a clear-cut “correct” answer. This one was a physics problem and
physics is a lot messier than math. That may initially seem less
satisfying, but it means that there is a whole lot of interesting “mess”
to sort out! I really enjoyed reading the
<a href="http://thevirtuosi.blogspot.com/2010/11/problem-of-week-3.html#more">comments</a>
detailing all the neat small effects to the change of volume that I
ignored in my solution. Density variations as a function of temperature,
surface tension effects, evaporation and thermal expansion all play some
role in determining the final answer. There’s a whole lot of interesting
physics going on!
Without knowing the exact conditions a bit more and doing a whole lot
more work, it seems as though I will not be able to provide a final
physics solution. But, having seen just about every Bond movie, I can
give an answer with a certainty not afforded to me by science: Mr. Bond
lives. How do I know this? Well, it is common knowledge that James Bond
has tiny smoke grenades in the soles of his shoes that release a potent
koala knock-out potion specifically created by Q. So Bond releases these
into the koala death pit, rendering the vicious beasts harmless. Then he
uses his watch laser to cut any bindings Dr. Cherrycoke may have placed
him in, falling into the death pit. The fall leaves Mr. Bond uninjured,
as a cuddly mass of sleeping koalas breaks his fall. By stacking the
koalas into a rudimentary series of steps, Mr. Bond ascends out of the
death pit, free to save the world’s ice supply from the evil Dr.
Cherrycoke. Obviously.</p>The Law and Large Numbers2010-12-09T22:34:00-05:00Alemitag:thephysicsvirtuosi.com,2010-12-09:posts/the-law-and-large-numbers.html<p>Human beings are not equipped for dealing with large numbers. Honestly,
7 thousand, 7 million, 7 billion and 7 trillion all register about the
same in my mind, namely “7 big.” Unfortunately, there is a world of
difference between each of these: three whole orders of magnitude, a
thousand, the difference between lifting me and a <span class="caps">US</span> quarter. This lack
of respect for orders of magnitude has really been rearing its head
recently with most of the political discussions surrounding the <span class="caps">US</span>
budget. Turns out the <span class="caps">US</span> Budget is really large. In 2010 it weighed in
at $3.55 trillion. That’s big. Really big. So big that I can’t fathom it.
Without getting too political, there has been a site going around
recently, the <a href="http://republicanwhip.house.gov/YouCut/week13.htm">You Cut
program</a>, which
invites public suggestions for cuts to be made to the budget to try and
fix the deficit. Now, personally, I believe we ought to do something
about the deficit. To this end, I think it is useful to point out the
scales involved. In particular, the link I gave above is to one of the
suggested cuts: federal funding of <span class="caps">NPR</span> (disclaimer alert: I love <span class="caps">NPR</span>),
which weighs in at 7 million dollars. Seven million dollars is a lot of
money. A lot of money, more than I can imagine having personally. But to
suggest that a 7 million dollar cut is any sort of progress towards
solving a $1.2 trillion deficit is a little amusing. As a
fraction, this comes out to <mathjax>$$ \frac{ 7 \text{ million} }{ 3.55
\text{ trillion} } = 2 \times 10^{-6} $$</mathjax> Two parts in a million. To
give a sense of scale to this, the gravitational influence of the moon
on my weight is: <mathjax>$$ \frac{ \frac{ G M_{\text{moon}} }{
R_{\text{earth-moon}}^2 } }{ 10 \text{ m/s}^2 } = 3 \times
10^{-6} $$</mathjax> Three parts in a million. So, suggesting that you have made
real gains in reducing the <span class="caps">US</span> budget by cutting federal funding for <span class="caps">NPR</span>
is as silly as suggesting that if I want to lose weight, my first
concern should be the current tides. [I want to point out that I don’t
really mean to get too political, and that I’ve noticed both parties
pulling these kinds of numbers tricks.] So, wanting to get a little
better understanding of the numbers at stake, I collected some data (all
from the 2010 budget). My goal is to attempt to represent how the <span class="caps">US</span>
government spends its money. Before I begin I need to plug two great
tools towards this end:
<a href="http://www.nytimes.com/interactive/2010/02/01/us/budget.html">Here</a> the
NYTimes graphically represents government spending, helping to give a
sense of scale to different categories.
<a href="http://www.nytimes.com/interactive/2010/11/13/weekinreview/deficits-graphic.html">Here</a>
the NYTimes lets you try and balance the budget, not only for next year
but down the line, letting you choose from a wide array of proposed
changes.</p>
<h3>The Data</h3>
<p>Below is a list of some of the relevant numbers I
could think of, and some reference numbers (trillion, billion, million),
as well as the <span class="caps">US</span> debt and total deficit. In addition to just reporting
the numbers (which you can find anywhere), in the second column I give
the fraction of total spending in scientific notation.</p>
<hr />
<pre>
Name              $               Fraction
US Debt           13.8 trillion   3.9
Total Spending    3.55 trillion   1.0
Budget Deficit    1.17 trillion   3.3E-1
1 trillion        1 trillion      2.8E-1
SS / Def / MM     730 billion     2.1E-1
Education         93 billion      2.6E-2
Taxes (250G+)     54 billion      1.5E-2
Science           31 billion      8.7E-3
1 billion         1 billion       2.8E-4
NPR               7 million       2.0E-6
1 million         1 million       2.8E-7
</pre>
<hr />
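If you want to reproduce the fraction column yourself, here is a minimal Python sketch; the dollar figures are the same 2010 numbers as in the table, and nothing else is assumed:

```python
# Reproduce the "fraction of total spending" column from the 2010 figures.
budget = {
    "US Debt":        13.8e12,
    "Total Spending":  3.55e12,
    "Budget Deficit":  1.17e12,
    "SS / Def / MM":   730e9,
    "Education":       93e9,
    "Taxes (250G+)":   54e9,
    "Science":         31e9,
    "NPR":             7e6,
}

total = budget["Total Spending"]
for name, dollars in budget.items():
    # Scientific notation makes the six-orders-of-magnitude spread visible.
    print(f"{name:15s} {dollars / total:8.1E}")
```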
<p>So here, <span class="caps">SS</span> / Def / <span class="caps">MM</span> means each of Social Security, Defense spending,
and Medicare and Medicaid, which each come in at about the same per
program: we spend roughly 730 billion on each. At first glance, these
three programs are how the <span class="caps">US</span> spends money, each coming in at
21%. Notice also just how large the deficit is, coming in at 33% of
total spending. The rather cryptic “Taxes (250G+)” is how much money
would be saved by letting the Bush tax cuts expire for those making more
than 250,000 a year, a currently hotly debated topic. Notice that it
would only ease the burden by 1.5% of total spending - less than 5% of
the budget deficit. The above table shows the power of scientific
notation (even though I had to use the ugly “E” notation). A number
like “2.8E-4” is really 2.8 x 10^(-4). But honestly, even a list like
this doesn’t really make an impact for me. So, I thought of a couple
other ways to represent the same numbers, scaling them to some ‘big’
things I can conceive of:</p>
<h3>Barry’s Budget</h3>
<p>Now, this might not be a fair comparison, but let’s scale down the <span class="caps">US</span>
budget to sizes that a person can understand. By scaling down by a factor
of 100 million, we end up with the story of my friend Barry.</p>
<hr />
<pre>
Name              $               Fraction   Barry
US Debt           13.8 trillion   3.9        $138,495.50
Total Spending    3.55 trillion   1.0        $35,500.00
Budget Deficit    1.17 trillion   3.3E-1     $11,690.00
1 trillion        1 trillion      2.8E-1     $10,000.00
SS / Def / MM     730 billion     2.1E-1     $7,296.67
Education         93 billion      2.6E-2     $930.00
Taxes (250G+)     54 billion      1.5E-2     $540.00
Science           31 billion      8.7E-3     $310.00
1 billion         1 billion       2.8E-4     $10.00
NPR               7 million       2.0E-6     $0.07
1 million         1 million       2.8E-7     $0.01
</pre>
<hr />
<p>As you’ll notice, Barry makes $23,810.00 a year in total receipts (he’s a
grad student), but spends $35,500 a year. This has created his current
debt problem. Barry has $138,495 in credit card debt, and still
overspends by $11,690 a year. How does Barry spend his money? Well,
every year Barry buys some $7,300 in guns, and spends another $14,600 mostly
taking care of his grandma. He pays tuition of $930 a year at school,
and spends $310 on science books. Every year he donates 7 cents to <span class="caps">NPR</span>. Every
million <span class="caps">US</span> dollars is 1 penny in Barry dollars.</p>
<h3>Work Week</h3>
<p>That might not have been fair. This time, let’s scale the <span class="caps">US</span> government
spending to a work week of 40 hours. I think a work week is a large
amount of time that I still have a real grasp for.</p>
<hr />
<pre>
Name              $               Work Time      Guess at time
US Debt           13.8 trillion   156 hours      work month
Total Spending    3.55 trillion   40 hours       work week
Budget Deficit    1.17 trillion   13 hours       all Monday and Tuesday morning
1 trillion        1 trillion      11.3 hours     two days of good work
SS / Def / MM     730 billion     8.2 hours      true work in a day
Education         93 billion      63 minutes     lunch break
Taxes (250G+)     54 billion      36 minutes     water-cooler chat
Science           31 billion      21 minutes     show on Hulu
1 billion         1 billion       40 seconds     stretching at desk
NPR               7 million       0.3 seconds    mouse click
1 million         1 million       0.04 seconds   blink
</pre>
<hr />
<h3>Cross Country</h3>
<p>Now let’s scale to a distance. The longest drive I’ve ever done is from
Los Angeles, <span class="caps">CA</span> to Orlando, <span class="caps">FL</span>, which was 2511 miles. On this
scale, the budget breaks down as:</p>
<hr />
<pre>
Name              $               Dist           Comparison
US Debt           13.8 trillion   9796 mi        NY - Sydney
Total Spending    3.55 trillion   2511 mi        LA - Orlando
Budget Deficit    1.17 trillion   827 mi         across Texas
1 trillion        1 trillion      707 mi
SS / Def / MM     730 billion     516 mi         Orlando - Mobile
Education         93 billion      66 mi
Taxes (250G+)     54 billion      38 mi
Science           31 billion      22 mi          cross town
1 billion         1 billion       0.7 mi         14 blocks
NPR               7 million       9 yards
1 million         1 million       4 feet
</pre>
<hr />
<p>(So I just want to point out again, claiming that cutting <span class="caps">NPR</span> funding
makes a dent in the <span class="caps">US</span> budget is similar to claiming you’ve moved closer
to Orlando (while in <span class="caps">LA</span>) by crossing the room.)</p>
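All three comparisons (Barry dollars, work time, driving distance) come from the same one-line rescaling: divide by total spending, then multiply by something big that I can actually grasp. A minimal sketch, using the NPR line as the example:

```python
# Every table above is one linear rescaling of the budget:
# (item / total spending) * (some big thing I can grasp).
TOTAL = 3.55e12  # 2010 US spending, in dollars

def rescale(dollars, big_thing):
    """Map a budget item onto a human-scale quantity."""
    return dollars / TOTAL * big_thing

npr = 7e6  # federal NPR funding, in dollars
print(rescale(npr, 35_500))        # Barry dollars: about $0.07
print(rescale(npr, 40 * 3600))     # seconds in a 40-hour work week: about 0.3 s
print(rescale(npr, 2511) * 1760)   # LA-Orlando converted to yards: about 9 yards
```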
<h2>The Lesson</h2>
<p>So, without getting too political… I’d just like to point out that if
our politicians are serious about solving the budget crisis, they need
to stop talking about million dollar programs, and start talking about
100 billion dollar ones. The problem is that it’s hard to slash
funding for large programs like defense or social security, and it’s
even harder to raise taxes (really, at all). But if we never consider those
options, we’re never going to get out of the rut. In my undergraduate
physics lab, the instructor had a mantra: “A number without context is
meaningless”. Now, he originally meant the statement to be a lesson on
how important it is to quote errors on your measurements, but I think I
can adapt it to apply to giving out numbers like 7 billion without a
sense of scale.</p>Your Week in Seminars: Short Thanksgiving edition2010-11-29T18:51:00-05:00Yarivtag:thephysicsvirtuosi.com,2010-11-29:posts/your-week-in-seminars-short-thanksgiving-edition.html<p>Good Monday evening all, and welcome to another edition of Your Week in
Seminars. Last week was a half week here at Cornell, but we still
managed two talks: the general colloquium and the
Wednesday-talk-on-Tuesday. Monday we had Charles Marcus of Harvard talk
about Building Schrödinger’s Chip. This was a quantum computing talk. We
don’t actually have a quantum computing group at Cornell, but I’ve taken
a couple of courses on it back home, and it’s an interesting subject,
although I’ve been a bit disillusioned by the notable lack of problems
solvable by quantum computers. Marcus started by talking about the
wonders and insanities of quantum mechanics - the usual spiel about the
two slit experiment, electrons passing through walls, and Schrödinger’s
cat. He said he dubbed the talk Schrödinger’s Chip because unlike cats,
which appear to break when we put them in the kind of low-temperature
vacuum conditions we like to do quantum experiments in, chips keep
working pretty well. Next he introduced the concept of entanglement,
which is at the basis of the whole concept of quantum computing.
Entangled particles are two or more particles that do not have a
definite quantum state, but are definitely in the same quantum state. If
I take them apart and measure their state - say, spin up or down - then
I do not know the answer, but as soon as one turns up, the other is up as
well, and if one is down the other is immediately down as well. The
experimental side of quantum computing is all about making little
quantum boxes that contain a small number of states (like up or down)
and then making it possible to entangle two of those boxes together. Add
up enough boxes, under some criteria that were posited a decade ago but
still not achieved, and you have a quantum computer. The majority of the
talk was a survey of the various state of the art boxes and the methods
used to make them. For those keeping track, 15 is still the largest
number factored by a quantum computer. On Tuesday, I came in just in
time for the second half of John Terning’s talk on Monopoles, Anomalies,
and Electroweak Symmetry Breaking. It’s not really ideal to go into the
second half of a talk in Newman 311, and this one actually sounded like
I missed some interesting stuff. The part that I heard was about adding
magnetic monopoles to the standard model. Those lovely magnetic
equivalents of the electric point charge that we all heard about in our
undergraduate E&M course turn out to be surprisingly hard to integrate
into basic particle theories, which is perhaps for the best as we have
not detected them so far. The gist of what I got from the second half of
the talk was that it is not enough to add a magnetic counterpart to the
electric part of the standard model, but in fact one needs a magnetic
<span class="caps">QCD</span> and Weak force as well. Under some conditions, this kind of
configuration can work, and predict magnetic monopoles with TeV-scale
masses - the kind we might see in the <span class="caps">LHC</span>. Terning also talked about how
these could be detected in the <span class="caps">LHC</span>. It turns out this isn’t simple,
because at TeV we would be producing monopole-anti-monopole pairs just
barely, and so without a lot of kinetic energy they would tend to
collapse back on themselves and annihilate, creating what is essentially
an omnidirectional shower of photons. He mentioned that one of the big
detectors at the <span class="caps">LHC</span> - the <span class="caps">CMS</span> - was equipped to detect these kinds of
photon bursts, and so this is another prediction or possibility that we can
look forward to seeing or not seeing soon. That’s it for last week, kind
of on the short side due to the holiday and so on. This week is our last
normal one, as the semester ends, but I’ll have a couple more particle
talks the week after that, as well.</p>Problem of the Week #2: Solution2010-11-23T17:46:00-05:00Holmestag:thephysicsvirtuosi.com,2010-11-23:posts/problem-of-the-week-2-solution.html<p>Thanks to all who sent in solutions! We are very happy with the vast
number of responses, and we will put up a leader board shortly!</p>
<h3>Solution</h3>
<p>Let A be the fraction of the rubber band that the ant has traversed:
<mathjax>$$A\left(t\right)=\frac{x\left(t\right)}{L+vt}.$$</mathjax> The ant’s
velocity relative to any point along the rubber band is equal to the
length of the rubber band times the first time derivative of A:
<mathjax>$$u=\left(L+vt\right)\frac{dA}{dt}=\left(L+vt\right)\left[\frac{\dot{x}}{L+vt}-\frac{vx}{\left(L+vt\right)^{2}}\right]=\dot{x}-\frac{vx}{L+vt}.$$</mathjax>
This gives us an inseparable, first-order differential equation for
x(t). The general differential equation of this type,
<mathjax>$$\dot{x}+p\left(t\right)x=q\left(t\right),$$</mathjax> has the general
solution <mathjax>$$x\left(t\right)=e^{-\int_{0}^{t}
p\left(t\right)dt}\int_{0}^{t} q\left(t\right)e^{\int_{0}^{t}
p\left(t\right)dt}dt.$$</mathjax> In our case,
<mathjax>$$p\left(t\right)=-\frac{v}{L+vt},\quad q\left(t\right)=u.$$</mathjax>
Therefore, <mathjax>$$x\left(t\right)=e^{\int\frac{v}{L+vt}dt}\int
ue^{-\int\frac{v}{L+vt}dt}dt=\frac{u}{v}\left(L+vt\right)\ln\left(1+\frac{vt}{L}\right).$$</mathjax>
When the ant has reached the other side, x(t) = L + vt, so
<mathjax>$$x\left(t\right)=L+vt=\frac{u}{v}\left(L+vt\right)\ln\left(1+\frac{vt}{L}\right).$$</mathjax>
Solving for t, we get <mathjax>$$t=\frac{L}{v}\left(e^{v/u}-1\right).$$</mathjax> So
the ant always makes it to the other side (unless u = 0)! In the limit
as v -> 0, we get <mathjax>$$t\approx\frac{L}{u},$$</mathjax> which is exactly what we
would expect. We can show that the ant will reach the other side without
doing any calculations. As the ant moves across the rubber band, the
velocity of the point on the rubber band that the ant is currently
walking on increases from 0 to v. Therefore, the ant is accelerating.
Since the other end of the rubber band is moving at a constant velocity,
the accelerating ant will always eventually catch up to the far end.
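For the skeptical, the crossing-time formula can also be checked numerically. Here is a minimal sketch that integrates the ant's equation of motion with simple Euler steps; the values of L, u, and v are arbitrary choices:

```python
import math

# Integrate dx/dt = u + v*x/(L + v*t) and find when the ant reaches
# the far end at x = L + v*t; compare with t = (L/v)*(exp(v/u) - 1).
L, u, v = 1.0, 1.0, 2.0  # arbitrary test values (any u > 0 works)

t_exact = (L / v) * (math.exp(v / u) - 1.0)

x, t, dt = 0.0, 0.0, 1e-5
while x < L + v * t:                      # far end keeps receding at speed v
    x += (u + v * x / (L + v * t)) * dt   # ant speed: own u plus band stretch
    t += dt

print(t, t_exact)  # the numerical crossing time matches the formula
```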
Another way to see this is that since the ant is accelerating, if it
makes it halfway, it will make it all the way to the other side; if it
makes it one-quarter of the way, it will make it halfway, and so on. We
know that since the ant is traveling at some nonzero velocity, it must
make it some nonzero fraction of the way to the other side. Therefore,
it must make it twice that far, and so on, all the way to the other
side. Thanks to Frank for pointing this out!</p>Your Week in Seminars: One for Two Edition2010-11-22T11:12:00-05:00Yarivtag:thephysicsvirtuosi.com,2010-11-22:posts/your-week-in-seminars-one-for-two-edition.html<p>Hello everyone and welcome to another week of talks here at the physics
department. I was out of Ithaca for a bit this past week, so in this
very special edition I’m going to present a full week’s worth of
seminars (one from last week and two from the previous week) in one post
covering two weeks. The Colloquium two weeks ago was given by our own
Csaba Csaki, who talked about Electroweak Symmetry Breaking and the
Physics of the TeV Scale. This was essentially an overview of
beyond-the-standard-model physics and the kind of things we expect out
of the <span class="caps">LHC</span>. Csaba started off by reminding us of the Standard Model, our
very successful model of particle physics that’s withstood nearly every
test over the last thirty or forty years. The Standard Model is a set of
three gauge theories - three forces that are relayed by massless particles
- along with the theory of electroweak symmetry breaking that explains
why one of these forces, the weak one, is relayed by massive particles.
This theory is well-backed by experiment, with the exception of the
crucial Higgs boson, the one that gives mass to those previously
massless W and Z, which we hope to see soon in the <span class="caps">LHC</span>. There are a few
problems with the Standard Model, and the big one is the Hierarchy
problem. Given what we know of symmetry breaking and how the W and Z get
their masses, we expect elementary particles to have masses that
correlate with the energy scale of the interaction that gives them this
mass. Since that interaction is not one we see at low energies, we
expect the elementary particles to be very massive. Since they are not,
we conclude that there must be some symmetry that keeps them massless or
nearly so. Some solutions to this were mentioned, beginning with the
current favorite, supersymmetry. This extra symmetry, relating bosons to
fermions and vice versa, works well to solve the original problem, but
creates a few of its own, like the Little Hierarchy Problem - if there’s
all this new physics at energies just a little higher than we’ve been
exploring, why don’t we see its effects on the low energy physics? In
other words, why does the non-supersymmetric Standard Model work so well?
Csaba went on to mention some ways of solving these problems, such as
burying the Higgs by allowing it to decay only in very specific ways. He
also talked about a few more, like no-Higgs theories that accomplish
electroweak symmetry breaking by different means, and extra-dimensional
theories that allow us to give different energy scales to different
forces. And the exciting thing about all of this is that we are likely
to know a great deal of the answers soon, within the next few years,
once the <span class="caps">LHC</span> starts giving data. On the Wednesday after that we had Sven
Krippendorf from Cambridge talk about Particle Physics from local
D-brane models at toric singularities. This was a heavy string theory
talk and I couldn’t follow much of it. The question at hand was how to
get the Standard Model, or parts of it, out of string theory models, and
the gist of the talk revolved around toric singularities in the
spacetime that the string theory lives in. No, I’m not entirely sure
what makes a singularity toric. There were a lot of colorful graphs and
some explanations. At the end there seemed to be some analogy made
between different types of singularities on the manifold in string
theory language and different gauge theories in the quantum field theory
language, with a way to map them to each other. Possibly exciting, but
you’d have to ask a proper string theorist about it. Then just this last
Friday, we had Rachel Rosen from Stockholm University talk about Phase
Transitions of Charged Scalars and White Dwarf Stars. This was a
blackboard talk, which is always exciting and is usually more
illustrative than PowerPoint ones. The subject was the thermodynamics
of white dwarf stars - stars that are very dense and old, where fusion
has mostly stopped and the only thing preventing the collapse of the
star upon itself is the fermionic pressure of the electrons, which cannot
all fall into the same quantum states. The physical description of these
stars is one of relatively free positive ions, specifically helium ions
in this case, floating through a background of fermions. They are
described by a quantum condensate, which has a good theory explaining
it, but with the addition of Coulombic interaction between the ions.
This state applies, specifically, to a subgroup of these stars that are
mostly made of helium. This kind of ion condensate tends to crystallize
depending on the ratio between the kinetic and potential Coulombic
energy. Quantum effects, on the other hand, depend on some ratios of
mass and charge between the ions. The only material where this applies
turns out to be helium, but luckily there are plenty of these helium-made
white dwarfs. The derivation, as Rosen showed it, starts with a neutral
Bose-Einstein condensate, which has a simple phase diagram - uncondensed
above some critical temperature, and increasing condensation as the
temperature is lowered to zero. The charged condensate introduces
photons as it is usually done in field theory and follows the
consequences. The result is a more complicated phase diagram. Under the
old Tc, the ions still condense, but things change above it. There is
now some higher temperature above which there is no condensation, but in
between there are two solutions to the equations of motion, a
condensate and a non-condensed state, and both are local energy minima.
This means that the transition into a condensate is not continuous, and
this is a first order phase transition. The nice thing here is that we
have such white dwarf stars to observe and we can compare this theory to
observations. That’s it for these last two weeks. All you Americans out
there have a good Thanksgiving, and I’ll see you next week with two new seminars.</p>Problem of the Week #32010-11-22T03:53:00-05:00Corkytag:thephysicsvirtuosi.com,2010-11-22:posts/problem-of-the-week-3.html<p><em>We welcome you to send in solutions, or even any ideas you have about
how to solve the problems
to</em><a href="mailto:the.physics.virtuosi@gmail.com"><em>the.physics.virtuosi@gmail.com</em></a><em>with
“problem of the week” in the subject line. We will keep track of the top
Virtuosi problem solvers.</em> Welcome to the third installment of <em>Problem
of the Week</em>! We are very pleased with the number of responses we have
gotten so far and we super duper promise to put up some kind of leader
board soon. In fact, I super duper promise to relegate that assignment
to Alemi. We intend to keep this up as long as we can and give out
prizes for high scores maybe…? They will be lame internet prizes…?
The solution to the last problem of the week will be up shortly. Adam is
the only one who knows the answer and he was busy all weekend taking a
magic ring to Mordor. One more housekeeping note. Since we want everyone
to have a clean shot at answering the question, we would prefer the
solutions to be sent by email instead of posted in the comments.
However, we certainly don’t want to stop discussion on the problem, so
if you don’t want any hints you might want to avoid the comments! <strong><em>Now
for the problem…</em></strong>
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TOohHfT9CTI/AAAAAAAAAHg/kpLDQteBptA/s1600/james+bond007.jpg"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TOohHfT9CTI/AAAAAAAAAHg/kpLDQteBptA/s320/james+bond007.jpg" /></a>
James Bond has been captured by the evil mastermind Dr. Vontavious
Cherrycoke, who is trying to take over the world’s <a href="http://en.wikipedia.org/wiki/Ice_Cube">ice
cube</a> supply. The minions of Dr.
Cherrycoke have placed Mr. Bond in an elaborate and unnecessary death
contraption that, if triggered, would drop our hero into a pit of
ravenous killer koalas. In other words, certain death! Dr. Cherrycoke
has constructed a fitting trigger for his death machine / koala feeder.
A single ice cube is placed in a glass of water so that the water is
completely filling the glass up to the brim. The glass of water is then
placed on a very sensitive detector. If even a single drop of water
spills out of the glass as the ice cube melts, James Bond will find
himself on the wrong end of a murderous marsupial mauling.
Vontavious exits the room with his minions, confident that his death
trap will be triggered and Bond will be vanquished once and for all.
Does Mr. Bond survive? If not, roughly how long does it take until he
checks in to the big <span class="caps">MI6</span> in the sky? Be sure to back up your answer!</p>Problem of the Week #22010-11-14T16:30:00-05:00Holmestag:thephysicsvirtuosi.com,2010-11-14:posts/problem-of-the-week-2.html<p><em>As always, we welcome you to send in solutions, or even any ideas you
have about how to solve the problems
to</em><a href="mailto:the.physics.virtuosi@gmail.com"><em>the.physics.virtuosi@gmail.com</em></a><em>with
“problem of the week” in the subject line. We will keep track of the top
Virtuosi problem solvers.</em> <strong>Why did the ant cross the rubber band?</strong>
A rubber band is held fixed at one end. The other end is pulled at a
velocity v. At time t = 0, the rubber band has a length of L, and an ant
starts crawling from one end to the other at velocity u. Does the ant
reach the other side? If so, how long does it take to get there? Assume
that the rubber band is able to be stretched indefinitely without breaking.</p>Pet Projects2010-11-14T12:38:00-05:00Corkytag:thephysicsvirtuosi.com,2010-11-14:posts/pet-projects.html<hr />
<p><a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TOAM4LfDLkI/AAAAAAAAAHU/-bbyWAodpc0/s1600/cat+drink+straw.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TOAM4LfDLkI/AAAAAAAAAHU/-bbyWAodpc0/s320/cat+drink+straw.jpg" /></a>
You’re doing it wrong!</p>
<hr />
<p>Here at the Virtuosi, we have a very specific way of asking a very
specific type of question that sounds anything but specific. These are
the “How come [blank]?” questions [1]. These are very simple questions
that just about every four year old asks, but likely never get
sufficiently answered. To get a feel for what I mean by these questions
I provide the following translations of problems we have either
considered or will consider:
Q: How come trees?
Translation: How tall can trees be?
Q: How come plants?
Translation: Why are plants green?
These are my very favorite types of questions because they are
completely understandable by everyone and promise to have very
interesting physics working behind the scenes. So I’ve been thrilled to
see two such questions considered by scientists lately that have also
had a good run in popular media. They are:
Q: How come cats?
Translation: How do cats drink? and
Q: How come dogs?
Translation: How do dogs shake?
The question of how cats drink was
<a href="http://web.mit.edu/preis/www/mypapers/cats_Science_Express_Reis_Aristoff_Stocker.pdf">answered</a>
recently by a few dudes from <span class="caps">MIT</span>, Virginia Tech and Princeton. One
morning, one of the guys was just watching his cat drink water and
realized he couldn’t immediately figure out how it worked, so he decided
to do a bit more research and <span class="caps">BAM</span>! science happens.
It turns out that the cat isn’t just scooping up the water with its
tongue as one would probably have expected. Instead the cat uses its
tongue to effectively punch the water, drawing up a column of fluid.
They then bite this column and get a very itty bitty kitty mouthful of
water. Here it is in a slow motion video I stole from the Washington
Post who stole it from Reuters: (You’ll probably have to sit through an
ad, apologies)
And here’s a series of stills that illustrates the same thing:
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TOAVxjml42I/AAAAAAAAAHc/06dcbYUsowM/s1600/12cats_graphic-popup-v2.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TOAVxjml42I/AAAAAAAAAHc/06dcbYUsowM/s400/12cats_graphic-popup-v2.jpg" /></a>
Not a cat person? Well there was also a study done a month or so ago
about dog shaking. If you have a dog, you have almost certainly
experienced the elegant way in which a wet dog un-wettafies itself.
Well, a few students at Georgia Tech were also familiar with this
shaking and just happened to have a super high speed camera that could
track water droplets. I assume it only took a short period of time to
put two and two together and <span class="caps">BAM</span>! science happens. In addition to
providing the logical extension of the “spherical cow” joke to “dog of
radius R”, the study also found some fairly surprising results. If we
are spinning a wet cylinder (i.e. dog), we would assume that the water
is held on by some “sticking force.” Then to shake off these droplets,
we’d assume that the dog would have to shake fast enough that the
centrifugal force would overcome the sticking force. In other equations,
<mathjax>$$ F_{centrifugal} = m{\omega}^2 R $$</mathjax> and <mathjax>$$ F_{sticking} = C $$</mathjax> So
we would then expect the shaking frequency to be the frequency that gets
those two guys to equal each other. We would thus predict
<mathjax>$${\omega} \sim R^{-0.5} $$</mathjax>
where the little squiggle just means that the frequency scales as the
radius to the -0.5 power, with some constant multipliers out front that
we don’t know so we just ignore.
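To see where the R^(-0.5) prediction comes from, here is a toy calculation balancing the two forces; the sticking force C and droplet mass m are made-up illustrative constants, not measured dog data:

```python
import math

# Toy version of the scaling argument: set m*omega^2*R = C and solve
# for the shake frequency. C and m are invented numbers for illustration.
C = 1e-4  # N, hypothetical "sticking force" on a droplet
m = 1e-6  # kg, hypothetical droplet mass (held fixed)

def omega(R):
    """Shaking frequency needed to fling a droplet off a dog of radius R."""
    return math.sqrt(C / (m * R))

# Doubling the radius should lower the required frequency by sqrt(2):
print(omega(0.2) / omega(0.4))  # ratio is sqrt(2), about 1.41
```

This is exactly the R^(-0.5) scaling quoted above, which is what the Georgia Tech measurements ended up disagreeing with.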
But this is not what the Ga Tech guys observed! Check out the video
below:
So there’s still something else going on that wasn’t in our simple
model. But this is what makes science so exciting! Even something dogs
and cats figured out long ago can have some really rich and interesting
physics. So keep on asking those questions! Notes: [1] Interestingly
enough, the first “How come [blank]?” questions we asked did not have
the now canonical form. Instead, it was stated as “Why are cows?”. After
much deliberation, we found that the solution is “Because milkshakes.”
Fair enough.</p>Your Week in Seminars Dark Edition2010-11-08T15:46:00-05:00Yarivtag:thephysicsvirtuosi.com,2010-11-08:posts/your-week-in-seminars-dark-edition.html<p>Good afternoon everyone, and welcome to another week of seminars here in
the physics department. Our theme of the week is dark matter - where
does it come from, how do we see it, and why is there so much of it.
Along with that we have a little more AdS/<span class="caps">CFT</span>, seemingly continuing last
week’s theme. All in all, it looks like seminars on similar
subjects tend to condense here in the department. We start with the
Monday Colloquium, where Richard Schnee from Syracuse University told us
about What’s the Matter in the Universe? Direct Searches for <span class="caps">WIMP</span> Dark
Matter. Dark matter, we’ll recall, is the astrophysics name for any kind
of matter that doesn’t emit light - one that is not inside stars. Our
knowledge of our own solar system, which has its mass concentrated
almost entirely inside the sun, led us to expect that mass in the
universe in general would behave similarly. It turns out that the motion
of observed galaxies is not consistent with the mass we measure them to
have, and so we hypothesize the existence of non-luminous matter around
us. These days dark matter is an object of interest not only for
astrophysicists but for particle theorists as well. With our variety of
beyond-the-Standard-Model theories of the universe we try to account for dark
matter, guess at its properties and explain why it is dark. That last
quality is rather easy to explain, in fact. Our expectation of dark
matter is simply that it does not interact electromagnetically, and so
does not emit photons. If we also posit that it does not interact
strongly, we are left with a particle that can only decay weakly, and so
we might expect a lot of it to stick around. Of course, the particle
must also have a mass, which is the original property we postulated for
it, and so we are looking for the <span class="caps">WIMP</span>, the Weakly Interacting Massive
Particle. Schnee talked about the various ways we hope to see WIMPs in
the coming decade, focusing on two avenues, the <span class="caps">LHC</span> and passive
detectors trying to pick up cosmic particles passing through the Earth.
WIMPs are by definition hard to detect, because they interact only
Weakly and thus only weakly. This means that we can’t actually see them
directly in our detectors, and we have to look for either missing energy
in accelerator results, which we deduce has gone to them, or their
effects on detectable particles in a large particle reservoir, much as we
would detect neutrinos. Of course, when we have such weak signals the
art is in reducing the background noise, by putting them underground,
using the least radioactive materials we can find, and so on. The last
part of the talk revolved around two events in his own detector that
seemed to be far enough above background level to be WIMPs. Schnee then
explained how statistical analysis showed that, in fact, such statistical
outliers were, well, statistically expected, which I found very
interesting. There was also a mention at the very end, of another
experiment called <span class="caps">DAMA</span>, which is looking for a “<span class="caps">WIMP</span> wind”, checking for
signals as the Earth moves through space in opposite directions, in a
kind of modern parallel of the Michelson-Morley experiment. This one has
actually shown a positive signal, though this is of course still
controversial. On Wednesday we had a local postdoc, Enrico Pajer, talk
about Striped holographic superconductor. I mentioned AdS/<span class="caps">CFT</span> last week,
and how it’s induced some crossover between the condensed matter study
of high-temperature superconductors and particle physics. This was one
of those crossover seminars, with a few <span class="caps">CM</span> people in the audience.
Enrico spent about half of the talk introducing the audience to the
basics of superconductors - there was a lot of discontent as people
asked questions or raised objections to statements which I imagine would
have gone over more smoothly with a condensed matter crowd. The bottom
line of the first half was three important attributes of
high-temperature superconductors, being a strong coupling between the
electrons, the existence of a quantum critical point and an
inhomogeneity of the material. There was then another general
introduction of the AdS/<span class="caps">CFT</span> duality, and I’ll send you to last week’s
summary (or the rest of the internet) if you want to hear more about
that. Enrico was working on a field theory in AdS space and trying to
apply the results to superconductors through the duality. In particular,
strong coupling and quantum criticalities are known features of the AdS
theory, and the addition here was of striped inhomogeneity, where things
change along one axis only. This is incorporated into the AdS space by
applying boundary conditions, in particular to the gauge field in the
relevant field theory, and reading the results by applying Einstein’s,
Maxwell’s and the Klein-Gordon equations in the bulk of the theory. One
interesting feature that was reproduced from other, non-AdS theories,
was the dependence of the critical temperature Tc on the inhomogeneity.
This is the temperature where superconductors turn into normal
conductors, and previous work had shown that it would drop as the
scale of the inhomogeneity grows either very large or very small, and
have some maximum point for a finite scale of inhomogeneity. Enrico’s
work showed a dropoff in Tc for inhomogeneity on very small scales,
giving the same qualitative behavior albeit with an exponential rather
than logarithmic dropoff. There were more details and math then, as they
studied these inhomogeneities in the AdS model, trying to determine the
conductivity along the stripes and perpendicular to them and so on,
producing some promising results and promising to produce some more.
Finally on Friday we had Kuver Sinha of Texas A&M talk about The
Cosmological Moduli Problem and Non-thermal Histories of the Universe.
As mentioned above, this talk revolved around dark matter as well,
though from a particle theory perspective. The trick in particle physics
is always making sure that your solution to one problem, in this case
dark matter, does not interfere with our solution to another problem.
The other problem here was the baryon asymmetry, or the overabundance of
matter compared with antimatter in the universe. In particle physics the
two are generally sides of the same coin and we have no reason to prefer
one or the other, and so we must have some reason that we only observe
regular matter in the universe. There is also a well-established model
of the early universe, Big Bang nucleosynthesis, whose predictions we
must not disturb when we explain the amount of dark matter in the
universe. And
while we’re at it, we might solve a third problem - why is the density
of dark matter and regular matter in the universe on the same order of
magnitude? Sinha went through this, presenting two models of baryon
genesis, occurring at different times in the history of the universe,
each with its own features and problems. Finally, he suggested one
solution to the coincidence problem that is non-thermal - that is,
rather than configuring the equilibrium point of matter and dark matter
to be similar separately, we have them coming from the same source with
similar decay rate. And that was that for last week, as dark matter
obscures the better-lit one. This week we have a sweeping overview of
particle physics, toric singularities on branes, and possibly some news
from the <span class="caps">LHC</span>. See you in seven.</p>Problem of the Week #12010-11-05T19:39:00-04:00Holmestag:thephysicsvirtuosi.com,2010-11-05:posts/problem-of-the-week-1.html<p><em>I thought we could spice things up a bit with a more interactive post
on The Virtuosi. Starting this week, a new problem of the week will be
posted each week. Solutions will be posted the following week. These
problems will be a collection of physics and math problems and riddles,
and although hopefully challenging enough to be fun and interesting,
they should mostly be solvable using concepts from introductory
undergraduate physics and math classes.</em>
<em>We welcome you to ponder these problems, and send in solutions, or even
any ideas you have about how to solve the problems
to </em><a href="mailto:the.physics.virtuosi@gmail.com"><em>the.physics.virtuosi@gmail.com</em></a><em> with
“problem of the week” in the subject line. We will keep track of the top
Virtuosi problem solvers.</em>
<em>Here it goes…</em>
<strong>Maximizing Gravity</strong>
<em>You are given a blob of Play-Doh (with a fixed mass and uniform density)
that you can shape however you choose. How can you shape it to maximize
the gravitational force at a given point P on the surface of the
Play-Doh?</em>
Solution
In cylindrical coordinates, where the point P is taken to be the origin,
the z-component of the gravitational field at the origin due to mass at
a point (s, z) is proportional to
<mathjax>$$\frac{1}{s^{2}+z^{2}}\frac{z}{\sqrt{s^{2}+z^{2}}},$$</mathjax>
where the second factor is necessary to take the z-component. If we have
azimuthal symmetry, the magnitude of the gravitational field is given by
the sum of all the z-components of the forces due to all points in the
planet. For a given contour
<mathjax>$$\frac{z}{\left(s^{2}+z^{2}\right)^{3/2}}=C,$$</mathjax>
<a href="http://3.bp.blogspot.com/_kdZd6FJQtZQ/TN9Jgv8f9gI/AAAAAAAAAAM/qdRknJVbCT0/s1600/maxgravity.jpg"><img alt="image" src="http://3.bp.blogspot.com/_kdZd6FJQtZQ/TN9Jgv8f9gI/AAAAAAAAAAM/qdRknJVbCT0/s320/maxgravity.jpg" /></a>
for some constant C, each point on the interior contributes more to the
total gravitational field than each point on the exterior. Thus, the
gravitational field is maximized if the surface of the planet takes the
shape of one of these contours. Solving for s(z), we
get<mathjax>$$s\left(z\right)=\sqrt{\left(\frac{z}{C}\right)^{2/3}-z^{2}}.$$</mathjax>
If the length of the planet in the z-direction is <mathjax>$$z_0,$$</mathjax> we can
replace C in the above expression to get
<mathjax>$$s\left(z\right)=\sqrt{\left(z_{0}^{4}z^{2}\right)^{1/3}-z^{2}}.$$</mathjax>
As can be seen by the plot above, this shape looks a lot like a sphere,
but slightly smushed toward the point of interest P. We can rigorously
compare the field of the maximal gravity solid to that of a sphere with
the same volume. The volume of the maximal solid is given by
<mathjax>$$V=\pi\int_{0}^{z_{0}}s^{2}\left(z\right)dz=\pi\int_{0}^{z_{0}}\left[\left(z_{0}^{4}z^{2}\right)^{1/3}-z^{2}\right]dz=\frac{4\pi}{15}z_{0}^{3}$$</mathjax>
The volume of a sphere of radius r is of course <mathjax>$$V=\frac{4}{3}\pi
r^{3},$$</mathjax> so in order for the volumes to be the same, the sphere must
have a radius of <mathjax>$$r=z_{0}/5^{1/3}.$$</mathjax>
The acceleration due to the maximal solid is proportional to
<mathjax>$$a_{max}=\int a_{z}\,dV=2\pi
G\rho\int_{0}^{z_{0}}dz\int_{0}^{s\left(z\right)}\frac{z\,s\,ds}{\left(s^{2}+z^{2}\right)^{3/2}}=\frac{4\pi
G\rho z_{0}}{5},$$</mathjax>
while the acceleration due to the sphere is just
<mathjax>$$a_{sphere}=\frac{G\rho\frac{4}{3}\pi r^{3}}{r^{2}}=\frac{4\pi
G\rho z_{0}}{3\cdot5^{1/3}}.$$</mathjax>
Thus, we have
<mathjax>$$\frac{F_{max}}{F_{sphere}}=\frac{4\pi G\rho z_{0}/5}{4\pi
G\rho
z_{0}/\left(3\cdot5^{1/3}\right)}=\frac{3}{5^{2/3}}\approx1.026.$$</mathjax>
So the solid that gives the maximum gravitational field at a point is
only about 3% better than a sphere.
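If you’d rather check the calculus numerically, a few lines of Python reproduce both the volume and the 1.026 ratio. This is a sketch using plain midpoint-rule integration, in units where G = ρ = z₀ = 1 (every constant cancels out of the ratio), and nothing beyond the formulas already derived above:

```python
import math

N = 100_000          # integration steps (midpoint rule)
dz = 1.0 / N         # z0 = 1 in these units

def s_squared(z):
    # Surface contour: s(z)^2 = (z0^4 z^2)^(1/3) - z^2, with z0 = 1.
    return z ** (2.0 / 3.0) - z * z

# Volume of the maximal solid: V = pi * Int s(z)^2 dz  (expect 4*pi/15).
V = math.pi * dz * sum(s_squared((i + 0.5) * dz) for i in range(N))

# On-axis field: a = 2*pi*G*rho * Int [1 - z / sqrt(s(z)^2 + z^2)] dz,
# which is the inner s-integral from the text done analytically.
a_max = 2.0 * math.pi * dz * sum(
    1.0 - z / math.sqrt(s_squared(z) + z * z)
    for z in ((i + 0.5) * dz for i in range(N))
)

# Equal-volume sphere: a = (4/3)*pi*G*rho*r with r = (3V / 4pi)^(1/3).
r = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)
a_sphere = (4.0 / 3.0) * math.pi * r

print(V, 4.0 * math.pi / 15.0)   # both ≈ 0.8378
print(a_max / a_sphere)          # ≈ 1.026
```

The printed ratio matches the analytic answer 3/5^(2/3) to several decimal places.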
For a more detailed discussion, see <a href="http://pages.physics.cornell.edu/%7Eaalemi/random/planet.pdf">Alemi’s
solution</a>.</p>Your Week in Seminars: Conformal Edition2010-11-01T17:58:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-11-01:posts/your-week-in-seminars-conformal-edition.html<p>Another week has gone by here in Cornell. The last leaves are turning
red, a hint of snow passed us on the weekend, and the undergrads have
hit the streets and parties in minimal clothing, then did the same again
the next day wearing a set of cat ears. And in the physics department, we
had the usual three talks. On Monday, the colloquium speaker was Holger
Müller from <span class="caps">UC</span> Berkeley, talking about Gravitational Redshift,
Equivalence Principle, and Matter Waves. The center of the talk was
Müller’s experimental device, an atom interferometer. Many of you will
remember the Michelson-Morley interferometer, the device used to
disprove the existence of the ether. A light-interferometer essentially
takes a beam of light, splits it in two and then merges it back again,
using the result of the interference between the two parts to learn
something about the relative difference between the optical paths taken
by the two. The atom interferometer, then, performs a similar function
with atom wavefunctions. An atom is shot up into the air and a laser is
directed towards it and calibrated to interact with the atom half the
time. The atom’s wavefunction is split into two trajectories, at the end of
which another laser is calibrated to bring the two paths back together.
The detector can then measure the path difference between the two
trajectories, and as we have excellent ways of measuring time and the
mass and energy of the atom, this amounts to a very accurate measurement
of g, the free-fall acceleration. Müller went on to show how his team has
been using the interferometer to perform very accurate measurements of
General Relativity, from its isotropy to the universality of free fall
motion for objects of different masses. There were some neat tricks
described, and they mentioned the ability to measure those <a href="http://thevirtuosi.blogspot.com/2010/09/microseconds-and-miles_7470.html">minute
differences</a>
in gravity experienced by moving the system one meter upwards. It’s
always a little difficult to get excited about tests that confirm an
accepted theory, especially one like General Relativity, but I think
this is important work. To paraphrase the words of fellow Virtuoso Jared,
<span class="caps">GR</span> is always going to be right up until we find where it breaks. On
Wednesday, David Kaplan talked about Conformality Lost. This talk was
about <span class="caps">QCD</span>, but not about <span class="caps">QCD</span>. One of the features of <span class="caps">QCD</span>, or really
field theories in general, is the running of the coupling constants.
Where in classical theories the strength of the interaction between two
particles is constant and depends only on the distance between them,
field theory shows us how the strength of the interaction changes with
the energy of the participating particles. This is crucial, for
instance, for theories of grand unification that posit that the known
forces are all the same at very high energies. In <span class="caps">QCD</span>, in particular,
the running of the constant also has to do with confinement and asymptotic
freedom. Confinement is the notion that quarks can never break free of
each other, and so we never observe them alone in nature, only within
particles such as protons, neutrons and mesons. Asymptotic
freedom is the notion that at high energies, if we collide another
particle with a quark, it behaves as if it was free of other influence.
If we associate long distances with low energy and short distances with
high energy, we can see how the coupling must flow from very small at
one end to very large at the other end. One of the interesting things
about the running of the coupling is that it defines a scale for the
theory. If the coupling is different for particles of energy E<sub>1</sub> and
E<sub>2</sub>, then we can choose some value of the coupling and describe any
energy in relation to this scale. Theories
without running coupling are called conformal and have no natural scale.
<span class="caps">QCD</span>, it seems, behaves this way if you take it all the way to asymptotic
freedom. Kaplan talked about the investigation of this conformal stage
of the theory, its existence and inexistence. As an analog he showed a
quantum-mechanical system of a particle in a Coulombic potential. The
minimum energy of this system is given by the solution of a quadratic
equation, which can have either two solutions, one or none, depending on
the relation between the mass of the particle and the strength of the
potential. A scale exists in this case only if there are two solutions:
a single energy is meaningless, of course, because we can always add a
constant, but if there’s two of them then the difference defines a
scale. This toy model, it turns out, can be analogous to a <span class="caps">QCD</span> with the
equivalent parameter being the relative number of flavors (kind of
particles) and colors (different charges in the theory, red, blue and
green in our regular <span class="caps">QCD</span>). There were a number of interesting results
from this model, the most exciting one, perhaps, being the possible
existence of a “mirror” <span class="caps">QCD</span> theory beyond the conformal point of <span class="caps">QCD</span>, a
sort of theory with a different number of colors and different gauge
groups. Kaplan ended his talk by talking of at least one possible
candidate for this mirror theory that they had recently found. Finally,
on Friday, we had Ami Katz from Boston University talk about <span class="caps">CFT</span>/AdS.
AdS/<span class="caps">CFT</span> has been a big buzzword for the last decade or so. The <span class="caps">CFT</span> here
stands for conformal field theory of the kind mentioned in the previous
summary, and AdS stands for Anti-de Sitter space, a geometry of
spacetime possible in general relativity. The slash in between stands
for a duality that allows results from one theory to be interpreted in
the other and vice versa. This has some exciting implications since it
allows us to use each theory in the regime where we can solve it.
Particle theorists are, in general, trying to use the <span class="caps">CFT</span> to solve for
high-energy theories that behave like AdS. Katz had apparently rewritten
the duality as <span class="caps">CFT</span>/AdS, to signal that he was asking the opposite
question, starting with a <span class="caps">CFT</span> and asking whether it is a good fit for
the duality. A large part of the talk was dedicated to making an analogy
from CFTs into conventional field theories. We know pretty well when a
field theory is a good description of reality and when it tends to break
down. This has to do, usually, with some cutoff energy, a scale at which
new physics comes into play. As long as we stay at energies far below
that cutoff, the effects of the unknown physics will be a small
correction to the calculations we make with our known physics. In CFTs,
we had just said, there is no energy scale, and so the question must be
different. The relevant question, apparently, is the dimensionality of
operators - not what their energy scale is, but how they scale with a
change of energy. For instance, a derivative behaves like inverse
distance, and distance behaves like inverse energy, so a single
derivative scales linearly in energy, while a double derivative scales
quadratically. I didn’t understand much past the halfway point of this
lecture, but the bottom line appeared to be that a well-behaved <span class="caps">CFT</span> has
a gap in its operators’ dimensionalities, allowing us to focus on one
operator and plenty of its derivatives before coming to the scaling of
the next operator. This kind of gap allows our perturbative corrections
to remain perturbative when we go to the AdS side. That’s it for last
week, with its conformal ups and downs. As usual, we’re past the first
seminar of the new week, which was a non-wimpy talk about WIMPs. Still
ahead this week are superconductors (and more AdS/<span class="caps">CFT</span>, presumably) and
some non-thermal histories of the universe. (that is, of course, if I
don’t freeze first - temperatures have dropped below zero already. It’s
so much colder when you work in Celsius)</p>Your Week in Seminars Fermionic Edition2010-10-25T11:56:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-10-25:posts/your-week-in-seminars-fermionic-edition.html<p>Good evening, and welcome to the second edition of YWiS. Last week I
took in the full range of seminars, from colloquium to Friday lunch. I
don’t know if I can say I took in the full content of these talks as
well, but let’s see what I learned. On Monday we had Andrew Millis from
Columbia University talk about Materials with Strong Electronic
Correlations: The (Theoretical) End of the Beginning? (I think the
subtitle wasn’t actually there in the talk itself). This was a condensed
matter theory talk, and like all condensed matter talks it started off
with the phase diagram for cuprates and a mention of the illustrious
pseudogap, the Dark Energy of condensed matter. The pseudogap is a phase
of cuprates - the materials that make high-temperature superconductors -
that occurs at about the same concentration of defects as
superconductivity but at a higher temperature. It is a little-understood
phase that sits between two well-understood phases (antiferromagnetism
and Fermi liquid) and perhaps holds some answers to the nature of
high-temperature superconductivity. Millis started with the pseudogap
picture and a short overview of the current state of condensed matter
theory. He claimed that perhaps some of the phases of matter in question
have local, short-range ordering, but no overarching long-range order in
the system, and that the investigation of these phases should take this
into account. At the end of this introduction he asked why we cannot
easily solve the problems of condensed matter. The basic equations that
govern the interactions in the field are known - the electromagnetic
potential and Schrödinger’s equation - and we should be able to just
plug them into a computer and calculate away. The trouble, as Millis
presented it, comes from the fermionic nature of the problem. What we’re
trying to calculate, in metals, is the behavior of the electrons running
through the bulk of the metal. Electrons are fermions, which means that
no two can have the same quantum numbers, that is no two can be in the
same place with the same momentum and spin. It turns out that the
configurations with lowest energies tend to be symmetric, with many
particles in the same position. Finding low-energy configurations that
put every particle in a different place is much harder. I didn’t get a
lot more from this talk. Millis went on to suggest a method that avoids
tackling the problem directly, but rather solves an analogous one that
we can translate to into a solution. I believe that there was some talk
of a local, rather than global, solution, and of the Hubbard model,
which is a popular approximation used in modeling electrons in a solid.
I phased in and out of this talk, but I’d peg my Understanding at 25
minutes, and my Interest at about 35 minutes. The Wednesday particle
talk was by Jesse Thaler from <span class="caps">MIT</span>. He talked about Aspects of Goldstini.
Goldstini is the Italian plural form of goldstino, which is the
fermionic version - we put “ino” at the end of fermionic particles,
influenced by the neutrino - of the Goldstone boson. A Goldstone boson
is a massless particle that we find in theories of spontaneous symmetry
breaking. Spontaneous symmetry breaking is a popular concept in particle
physics, which springs from the concept of an unstable energy maximum.
Imagine a pencil standing on its tip, a system which is symmetric in
every direction. The pencil is unstable, though, and left by itself it
would fall down in any one of the equivalent directions around it. Once
it has fallen, it’s broken the symmetry and created one preferred
direction. Thus the symmetry of the system is broken when one direction
is chosen spontaneously. This sort of thing is at the bottom of our
understanding the electroweak force, and pops up quite a bit in particle
physics. When it does, we expect a Goldstone boson, a massless particle
that roughly corresponds to spinning the fallen-down pencil around its
tip. The goldstino is the fermionic version of that particle, which
springs from the breaking of supersymmetry - the symmetry that relates
fermions and bosons. The goldstino, then, is well known and accepted in
common theories of supersymmetry. It breaks supersymmetry, and then
interacts with the gravitino - another fermion, the superpartner of the
graviton that mediates the force of gravity - to become massless.
Thaler’s work posits more than one
goldstino, hence, goldstini. How can we have more than one goldstino? By
breaking supersymmetry more than once. We do this by imagining several
“sectors” in our theory, different sets of fields (particles) that break
supersymmetry but don’t interact with each other significantly. When you
work through this model it turns out that you can have several
goldstini. Also, as the original goldstino lost its mass by giving it to
the gravitino, and the gravitino is now satisfied, the new goldstini get
to keep their mass, which turns out to be exactly twice that of the
(satisfied) gravitino. Thaler then discussed three possible scenarios
for this mass, and what we would expect to see at the <span class="caps">LHC</span> in each case.
The important thing, it turns out, is how this mass compares with that
of the lightest ordinary superpartner, the first supersymmetry-related
particle we expect to see in the <span class="caps">LHC</span>. If the mass of the goldstini is
very small, they will not come into play as the <span class="caps">LOSP</span> will decay into
particles we already know. If the mass of the goldstini is too large,
then the <span class="caps">LOSP</span> cannot decay into it. But if the mass is in some
goldstinilocks region in-between, things become interesting and we can
expect to see evidence of the gravitino and the goldstini, and
distinctly see one having double the mass of the other. I followed a
good portion of this talk, with Understanding of 30 minutes all in all,
and perhaps 45 minutes of interest. Finally, the Friday particle theory
lunch had a talk by our own David Curtin, one of Csaba’s grad students.
He talked about Solving the gaugino mass problem in Direct Gauge
Mediation. I came into this one to follow more of it, on account of the
speaker being a student, but ended up following very little as it was
technical and above my level. It revolved, again, around supersymmetry
breaking. David does model building, which means he starts out with some
acceptable results, i.e. the universe as we know it, and tries to tinker
up a combination of particles and interactions that would reproduce it,
one portion at a time. What he was trying to build this time was a
metastable level in the broken supersymmetric potential. If we think
back to our pencil, we had an unstable maximum, the pencil standing on
its tip, and a minimum point, the pencil laying on the table, from which
it cannot fall. But we can also imagine a midpoint - perhaps resting one
side of the pencil on a book. It can’t fall any further right away, but
there is another, preferred position lying flat on the table. That’s
what we call a metastable energy level. As it turns out, the metastable
level has some desirable outcomes within the context of supersymmetry,
and the talk revolved around the ways we have of getting the right
energy structure to our system while avoiding things we don’t want in
our models - arbitrary particle masses, a large number of new particles,
or anything blatantly unphysical. My Understanding here was quite close
to 0, as the technicalities were beyond me. (in fact, the pre-seminar
discussion was about soccer, so one might say my understanding was
negative). I probably kept trying to follow for about half the talk, or
30 minutes. That’s it for last week. This week we can expect gravity,
(heavy!) Conformality Lost (literary!) and <span class="caps">CFT</span>/AdS (buzzwordy!). And
hopefully less headscratching and more nodding in agreement.</p>Paradigm Shifts 3: With a Vengence2010-10-21T14:32:00-04:00Samtag:thephysicsvirtuosi.com,2010-10-21:posts/paradigm-shifts-3-with-a-vengence.html<p>The last shift I wanted to present is best explained at
<a href="http://tauday.com/">http://tauday.com/</a> . There you will find a
manifesto (yes, a manifesto) about why we should change from using <mathjax>$$
\pi = \text{180 degrees} $$</mathjax> as the circle constant to <mathjax>$$ \tau = 2
\pi = \text{360 degrees} $$</mathjax> It’s quite a convincing argument, and it’s
a shift that can easily be made. Check the website for more. <span class="caps">TAU</span> <span class="caps">VS</span> <span class="caps">PI</span>
<a href="http://tauday.com/images/figures/tau-angles.png"><img alt="image" src="http://tauday.com/images/figures/tau-angles.png" /></a></p>Your Week in Seminars Intro Edition2010-10-18T12:50:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-10-18:posts/your-week-in-seminars-intro-edition.html<p>We’ve done a lot of talking over the past few months here on the
Virtuosi, but one important subject has not come up so far. An issue
that is central to the day to day life of the average grad student. The
subject of free food. The average graduate student in an American
university shops for food 0.7 times per semester, paying a total of
$13.22. He eats an average of three vegetables and one fruit, all at
home during Thanksgiving. He turns his oven on once per year while
trying to ascertain if the power is out or the light bulb in the kitchen
needs to be replaced. The rest of his nutrition is made up entirely of
free donuts, bagels and pizza. The place to get all this free food,
naturally, is various department talks and seminars. And while we’re
there, we may as well try to learn some physics. With that noble goal in
mind, I’d like to welcome you to the first edition of Your Week in
Seminars, where I shall endeavor to relay the content of the weekly
seminars I attend in Cornell. On an average week this will be one
general interest colloquium and two particle theory talks. One of my
colleagues may want to take up the <span class="caps">LASSP</span> (Condensed Matter) talk or any
of the other seminars going around in the department. I’ll try to relate
what I got out of each talk, with more words than equations and with no
figures. I’ll aim for a general audience level but I think I’m likely to
end up at a physics undergrad or a popular-science-savvy level, as
technical terms are bound to be thrown about. If there’s one you don’t
know, feel free to ask over in the comments or take this as an
opportunity to delve into Wikipedia. I’ll also provide two handy metrics
to the quality of the talk, my Interest Level, defined as the amount of
time before I start playing with my phone, and my Comprehension level,
defined as the amount of time where I was still following the speaker.
Last week there was no colloquium due to Fall break, so this post will
cover just the Wednesday and Friday <a href="http://lepp.cornell.edu/Events/ParticleTheory/WebHome.html">particle
seminars</a>.
On Wednesday we had David Kagan from Columbia University tell us about
Conifunneling - Stringy Tunneling Between Flux Vacua. As you may know,
string theory demands that our universe have a large number of
dimensions, generally 10 or 11, to avoid various kinds of theoretical nastiness.
To bridge the gap between the theoretical and observed number of
dimensions (four) one has to “compactify” the extra dimensions, that is,
to posit that they have some shape and size and write down an effective
four-dimensional theory that takes their presence into account. This
compactification creates an energy surface, or some effective potential
in space. What we call “vacuum”, the ground state of the universe, rests
in one of the minimum points of that potential, as ground levels are
wont to do. But it need not be the absolute minimum, just a local one,
and where there are local minima in a quantum theory we know that there
is also tunneling. Kagan, then, talks of tunneling between these local
energy minima created by compactification of the extra dimensions of
string theory. This tunneling, from what I gathered, can be described as
an evolution in time of the manifold, the geometric layout of spacetime.
The main conceit of the talk was that this evolution takes the manifold
into the form of a “conifold”, which is a manifold with a conic
singularity. This conifold then nucleates a 5d-brane; branes are
objects in string theory that have some dimensionality less than that of
the entire spacetime. After creating this object, the conifold
transforms back into a non-singular manifold, but one where the vacuum
is in another energy minimum. We can visualize this process by thinking
of spacetime as an elastic sheet of sorts, pinched at a point and
pulled. It is deformed, creating an elongated cone-like area, until
finally it tears, emitting a five-dimensional brane, and reverting back
to its original form. There was some discussion at the end which mostly
went over my head, but at some point Henry Tye, Liam and Maxim were
trying to figure out whether the tunneling is necessarily done via a
conifold or whether Kagan was just describing what happens if it does.
The conclusion, I believe, was that it is the latter case, though Kagan
said they have some good arguments on why the conifold tunneling had to
happen. Interest: 40 minutes. Understanding: 20 minutes. On Friday we
had Zvi Lipkin from the Weizmann Institute tell us about Heavy quark
hadrons and exotics, a challenge for <span class="caps">QCD</span>. This talk revolved around the
constituent quark model for <span class="caps">QCD</span>. Our usual picture of hadrons is one of
two or three valence quarks sitting in a sea of gluons and virtual
quark-antiquark pairs, due to the strong interactions of the Strong
Interaction. Lipkin’s work focuses on trying to abstract this sea away
and focus on the valence quarks as if we were discussing a
hydrogen-atom-like system of two particles and a potential between them.
This kind of treatment allows us to maximize the use of flavor
symmetries. Flavor is <span class="caps">QCD</span>-speak for “type of particle”, that is, up,
down, strange, charm and bottom quarks. Using the constituent quark
model we may be able to say things like “the difference between the
B<sup>0</sup><sub>s</sub> and the B<sup>0</sup> (mesons made up of an anti-b and an s or d quark,
respectively) is the same as the difference between the Ξ<sup>0</sup> and the
Σ<sup>0</sup> (baryons made up of uss and uds quarks, respectively)”. (Don’t take
that last example too seriously - I made it up by looking at lists of
baryons and mesons. But that was the gist of the talk.) Lipkin showed work
done by him and Marek Karliner (who taught me differential equations in
Tel Aviv) including lots of numbers nicely matching between their theory
and experiment as well as a less-convincing attempt to characterize the
two-body potential in this two-body problem. At the end of the talk he
also mentioned the X(3872) seen by the Belle experiment. This is a
particle that does not seem to fit into our regular models as either a
baryon or a meson, and Lipkin suggested that this might be a
“tetraquark,” a combination of two quarks and two antiquarks. This kind
of exotic hadron has been talked about for a long time, and there was
some excitement a few years ago with the discovery and eventual
un-discovery of the Θ<sup>+</sup> pentaquark (made up of four quarks and an
antiquark). Interest: 60 minutes. (I was sitting in the front and could
not politely take out the phone) Understanding: 60 minutes.</p>Four Fantastic Books (3 of which are free)2010-10-16T00:50:00-04:00Alemitag:thephysicsvirtuosi.com,2010-10-16:posts/four-fantastic-books-3-of-which-are-free-.html<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/TLkO7BNOPnI/AAAAAAAAAOo/uIuwbUHkVtU/s1600/9780262514293-f30.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/_YOjDhtygcuA/TLkO7BNOPnI/AAAAAAAAAOo/uIuwbUHkVtU/s320/9780262514293-f30.jpg" width="248" /></a>Well, we just had our fall break, which means I get a bit of a break, coincidentally enough. Somehow I’ve managed to read three books in the last two days, and each of them was excellent enough that I need to tell people about them.
<h3>Street Fighting Mathematics - Sanjoy Mahajan </h3><h4>The art of educated guessing and opportunistic problem solving </h4><div><a href="http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12156">Link to <span class="caps">MIT</span> Press Site</a></p>
<p>You know that feeling you get when it’s the second half of January and you put on new clothes that have just come out of the dryer? This book is like a cross between that and a kick in the face. </p>
<p>The warm fuzzy-clothes-out-of-the-dryer feeling will come from the realization that you can wield insurmountable power. The kick in the face will come when you realize you’re not doing it yet.
<a name='more'></a>
The premise of the book is something along the lines of: We’ve all been taught how to solve math problems exactly. Science isn’t exact. Turns out when you realize this, you can do a heck of a lot. Let Sanjoy show you how.</p>
<p>As an undergrad, I had the supreme fortune of taking some life-changing courses. One of the ones that has struck me the deepest was Ph 101: Order of Magnitude Physics. It did a remarkable job building my confidence. It’s one thing to go through your classes and complete the homework assignments. It’s another thing entirely to feel as if you can take a stab at just about any question anyone can ask.</p>
<p>This book is the handbook that will introduce you to the techniques and ways of thinking you’ll need in order to tackle the most general of questions. The first chapter is Dimensional Analysis, something that every high school student should be exposed to. I love Dimensional Analysis. The rest of the book goes on to estimate Integrals, Sums and Differential Equations, thinking about limiting cases and scaling, and thinking pictorially. </p>
<p>The best part: it’s available in a creative commons version, i.e. for free. Just follow the creative commons pdf link in the left sidebar.</p>
<p>One of the biggest flaws I see in modern physics teaching is that physics courses have a tendency to be reduced to plugging numbers into highlighted and yellow boxed equations. That’s not physics! Physics is a way of thinking about the world. It’s the delight you obtain when you understand something for the first time. It’s the power you can wield by being able to properly predict phenomena that only minutes ago you found baffling. In a word: it’s awesome. In order to be able to see past all of the equations, you need to have an appreciation for how powerful intelligent approximations can be.</p>
<p>The amazing fact is that with a proper introductory physics course, you are capable of understanding a huge deal of the world around you. </p>
<p>If physics classes were taught the way Sanjoy would like them to be taught, if they relied fundamentally on the kinds of techniques he discusses, I think students would like physics a lot more. I think the world would be a better place.<br />
</div></p>
<h3>Why Things Break - Mark E. Eberhart </h3>
<h4>Understanding the world by the way it comes apart </h4>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TLkXVqJBxlI/AAAAAAAAAOs/WEGJaROjFhI/s1600/4115MFY61ML._SS500_.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TLkXVqJBxlI/AAAAAAAAAOs/WEGJaROjFhI/s320/4115MFY61ML._SS500_.jpg" width="320" /></a></div>
<p><a href="http://books.google.com/books?id=wo9wGKk9MVsC&printsec=frontcover&dq=Why+Things+Break&hl=eo&ei=4Be5TJ6jPIL48Aa2uKzPDg&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCUQ6AEwAA#v=onepage&q&f=false">Link to Google Books Page</a></p>
<p>This book was mentioned to me by someone in my group. I decided to check it out, and read the first 70% of it in one sitting. I think that says something about it. </p>
<p>This is a really fun read. It’s a popular science book, but on something you’ve probably never read about before - material science.</p>
<p>Mixing very interesting history, science, and biography, Eberhart takes you on a journey attempting to answer the question: Why do things break? This, he is quick to point out, is probably not the question you think it is. His life goal is not to answer what happens when things break, or which materials break sooner than others (which he manages to mention along the way anyway), but he is primarily interested in answering why things even break in the first place, a rather subtle and non-trivial question when you think about it.</p>
<p>I actually learned quite a lot from this book. It’s full of really interesting accounts and digressions. I can’t recommend it enough. Very fun read.</p>
<h3>Soap Bubbles - C. V. Boys </h3>
<h4>Their colours and the forces which mold them </h4>
<p><a href="http://books.google.com/books?id=EcgCKTPYqCIC&printsec=frontcover&source=gbs_atb#v=onepage&q&f=false">Link to Google Books page</a></p>
<p>I found this book by accident, but boy am I glad I found it. It’s a printing of a series of lectures the author gave to some children near the close of the 19th century about bubbles.</p>
<p>This goes into the drawer of happy little discoveries I’ve made of old science literature for which the copyright has expired. Meaning it’s free on Google Books as a pdf download. </p>
<p>I don’t know what it is, but I find basically anything written before about 1950 at least an order of magnitude easier to understand than anything since. Sure, some of it has to do with the fact that older science literature is necessarily dated, while new physics can tend to be a lot more complicated, and you could point out that there is a clear selection bias in the old texts that I manage to find, but I really believe there is something more to it than that. Old science authors wrote to be understood. You get the distinct impression that most of these guys really loved their craft and really wanted to explain their findings to others. Sometimes I get the impression that modern articles are written less to be understood and more as the modern version of mailing your patent idea to yourself in a closed envelope - as a way to get a stamp on your lab notebook to prove you did something first.</p>
<p>That said, this little gem was not what I thought it was going to be. Going in, I thought it would be a bunch of cool things you could do with bubbles. Oh but it’s so much more. Boys manages in these three little lectures to give one of the clearest introductions to some basic fluid dynamics and electricity I’ve seen. Boys manages to teach, all while using bubbles.</p>
<p>I recommend it. If not for the science and cool bubble tricks, I think it can serve as another find indicating that physics education doesn’t need to be boring in order to get real ideas planted.</p>
<h3>Calculus Made Easy - Silvanus Phillips Thompson </h3>
<h4>Being a very-simplest introduction to those beautiful methods of reckoning which are generally called by the terrifying names of the <span class="caps">DIFFERENTIAL</span> <span class="caps">CALCULUS</span> and <span class="caps">INTEGRAL</span> <span class="caps">CALCULUS</span></h4>
<p><a href="http://books.google.com/books?id=BrhBAAAAYAAJ&printsec=frontcover&dq=Calculus+Made+Easy&hl=eo&ei=WRu5TKDJLsT38Ab4wNiaDw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCoQ6AEwAA#v=onepage&q&f=false">Google Books Link</a> - another freebie</p>
<p>I have to admit, I didn’t just read this one. I read it a while ago, but while writing up the other ones I couldn’t let such a fantastical book as this pass by without mention.</p>
<p>Another book I found by accident for free on Google Books. If I remember correctly, this one was pure serendipity. But it has to be the best introductory calculus book ever written. Seriously. I don’t joke about these things. I fell in love with it as soon as I finished reading the subtitle (and the author’s name).</p>
<p>This. book. rocks. If nothing else, do yourself a favor and read the first couple chapters of this bad boy. It’s free. It won’t hurt.</p>
<p>It’s so good, I read it online. Then I checked it out from the library. Then I bought the <a href="http://www.amazon.com/Calculus-Made-Easy-CALCULUS-MADE/dp/B001TIKS36/ref=sr_1_4?ie=UTF8&qid=1287200177&sr=8-4">shiny new edition</a> because I needed to have it on my shelf. Turns out I’m not the only one in love with the book. Martin Gardner so loved it as to release the shiny edition with recreational problems and his commentary.</p>
<p>This is not Calculus crib notes. This is not spark notes or Calculus for Dummies. This is not just a condensed version of the calculus book you used in high school. This isn’t just a list of formulas. This book <i>explains</i> what calculus is. You do not yet understand what I mean by that sentence. You will not understand until you read Calculus Made Easy.</p>
<p>This is another book that just makes me sad at the current state of education. Calculus is one of those things that’s feared by the general public. It’s feared because it’s misunderstood. Calculus isn’t hard. And I don’t mean to just sound like a jerk when I say that. It isn’t meant to be hard at least. As the opening proverb of Calculus Made Easy says:
<blockquote>What one fool can do, another can.</blockquote>
All it really takes to understand calculus is the ability to imagine a very little bit of something. That and a caring and skilled tutor to lead you on your way. What name can you think of that sounds more caring and skillful than Silvanus Phillips Thompson? </p>
<p>I can think of no legitimate reason this book isn’t used in each and every high school calculus class in America. Seriously.</p>Caught In The Rain II2010-10-10T14:41:00-04:00Jessetag:thephysicsvirtuosi.com,2010-10-10:posts/caught-in-the-rain-ii.html<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLIIqVYyp-I/AAAAAAAAADo/KOuEncc3zFo/s1600/Rain.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLIIqVYyp-I/AAAAAAAAADo/KOuEncc3zFo/s200/Rain.jpg" /></a>
I was rather proud of my last post about being <a href="http://thevirtuosi.blogspot.com/2010/09/caught-in-rain.html">caught in the
rain</a>. In
that post, I concluded that you were better off running in the rain, but
that the net effect wasn’t incredibly great. However, when I told people
about it, the question I inevitably got asked was: What if the rain
isn’t vertical? That’s what I’d like to look at today, and it turns out
to be a much more challenging question. I’m still going to assume that
the rain is falling at a constant rate. Furthermore, I’m going to assume
that the angle of the rain doesn’t change. With those two assumptions
stated, let me remind you of the definitions we used last time. <mathjax>$$\rho
- \text{the density of water in the air in liters per cubic meter}$$</mathjax>
<mathjax>$$A_t - \text{top area of a person}$$</mathjax> <mathjax>$$\Delta t - \text{time
elapsed}$$</mathjax> <mathjax>$$d - \text{distance we have to travel in the rain}$$</mathjax> <mathjax>$$v_r
- \text{raindrop velocity}$$</mathjax> <mathjax>$$A_f - \text{front area of a person}$$</mathjax>
<mathjax>$$W_{tot} - \text{total amount of water in liters we get hit with}$$</mathjax>
As a reminder, our result from last time was: <mathjax>$$W_{tot}= \rho d (A_t
\frac{v_r}{v} + A_f)$$</mathjax> Now, let’s look at the new analysis. As
before, let us consider the stationary state first. Our velocity now has
two components, horizontal and vertical. Analogous to the purely
vertical situation, we can write down the stationary state, but now we
have rain hitting both our top and front (or back). I’m going to define
the angle, theta, as the angle the rain makes with the vertical (check
out figure 1 below). This gives <mathjax>$$W = \rho A_t v_r \cos(\theta)
\Delta t+\rho A_f v_r \sin(\theta) \Delta t$$</mathjax> Let’s check our
limits. As theta goes to zero (vertical rain), we only get rain on top
of us, and as theta goes to 90 (horizontal rain), we only get rain on
the front of us. Makes sense! Alright, so let’s add in the effect of
motion now. This is going to be more challenging than in the vertical
rain situation. We’re going to examine two separate cases.</p>
<hr />
<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLIARlf0FBI/AAAAAAAAADY/uyx083CikOU/s1600/CITR+II+-+against.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLIARlf0FBI/AAAAAAAAADY/uyx083CikOU/s320/CITR+II+-+against.jpg" /></a>
Fig. 1 - The rain, and our angle.</p>
<hr />
<p><strong>Case 1: Running Against The Rain</strong> This is the easier of the two
cases. After thinking about it for a while, I believe that it is the
same as when the rain is vertical. Let me explain why. If you are moving
with some velocity v, in a time t you will cover a distance x=vt. Now,
suppose we paused the rain, so it is no longer moving, then moved you a
distance x, turned the rain back on, and had you wait for a time t. And
repeated this over and over until you got to where you were going. This
would result in an <em>average</em> velocity equal to v, even though it is not
a smooth motion. However, my claim is that in the limit that t and x go
to zero, this is a productive way of considering our situation. We note
that v=x/t, and in the limit that both x and t go to zero, that is the
<em>definition</em> of instantaneous velocity. The recap is that my ‘pausing
the rain’ scheme of thinking about things is fine, as long as we
consider moving ourselves only very small distances over very short
times. Using this construction, we have an additional amount of rain
absorbed by moving the distance delta x of:
<mathjax>$$ \Delta W = \rho A_f \Delta x $$</mathjax>
<mathjax>$$ \Delta W = \rho A_f v \Delta t $$</mathjax>
This gives a net expression of
<mathjax>$$\Delta W = \rho A_t v_r \cos(\theta) \Delta t+\rho A_f v_r
\sin(\theta) \Delta t+\rho A_f v \Delta t $$</mathjax>
<mathjax>$$\Delta W = \rho A_f v \Delta t \left( \left(
\frac{A_t}{A_f}\right)
\left(\frac{v_r}{v}\right)\cos(\theta)+\left(\frac{v_r}{v}\right)\sin(\theta)
+ 1 \right)$$</mathjax>
As before, turning the deltas into differentials and integrating yields
<mathjax>$$W = \rho A_f v t \left( \left( \frac{A_t}{A_f} \right)
\left(\frac{v_r}{v}\right)\cos(\theta)+\left(\frac{v_r}{v}\right)\sin(\theta)
+ 1\right)$$</mathjax>
<mathjax>$$W=\rho A_f d \left( \left( \frac{A_t}{A_f}\right)
\left(\frac{v_r}{v}\right)\cos(\theta)+
\left(\frac{v_r}{v}\right)\sin(\theta) + 1 \right)$$</mathjax>
Note that when theta is zero, our vertical rain result gives the same
thing as we found in the last post (the first term lives, the second
term goes to zero, the third term lives). I’m going to use the
reasonable numbers I came up with in the last post. However, since we
have wind, we’ll have to modify our rain velocity. More specifically,
we’ll assume the rain has the same vertical component of velocity in all
cases. Then the wind speed, v_w, will be what controls the angle. More
exactly, the magnitude of the raindrop velocity will be
<mathjax>$$v_r=\sqrt{(6 m/s)^2+v_w^2}$$</mathjax>
While the angle will be
<mathjax>$$\theta=\tan^{-1}(v_w/ (6 m/s))$$</mathjax>
Next we note that
<mathjax>$$v_r\cos\theta = 6 m/s$$</mathjax>
which is just the vertical component of our rain. Similarly, the other
term is just the horizontal component of our rain. So we can write our
result as a function of our velocity and the wind speed (the angle and wind
speed are interchangeable):
<mathjax>$$W = \rho A_f d\left( \left( \frac{A_t}{A_f}\right)
\left(\frac{6 m/s}{v}\right) +\left(\frac{v_w}{v}\right) +
1\right)$$</mathjax>
Using the reasonable numbers I came up with in my last post yields (with
a distance of 100m)
<mathjax>$$W = .2 liters \left( \left(\frac{.72
m/s}{v}\right)+\left(\frac{v_w}{v}\right) + 1\right)$$</mathjax>
Once again, we have a least wet asymptote, which is the same as before.
I’ve plotted this function for various values of theta, and, more
intuitively, for various wind speeds (measured in mph, as we’re used to
here in the <span class="caps">US</span>), and the plots are shown below (click to enlarge).
Unsurprisingly, you get the most wet when the rain is near horizontal,
but interestingly enough you can get the most percentage change from a
walk to a run when the rain is near horizontal. All angles are in degrees.</p>
<hr />
<p><a href="http://3.bp.blogspot.com/_SYZpxZOlcb0/TLCFzqCBV2I/AAAAAAAAADI/iUpSnEgsrkM/s1600/Caught+in+Rain+II+-+running+against+theta.jpg"><img alt="image" src="http://3.bp.blogspot.com/_SYZpxZOlcb0/TLCFzqCBV2I/AAAAAAAAADI/iUpSnEgsrkM/s400/Caught+in+Rain+II+-+running+against+theta.jpg" /></a>
Fig. 2 - How wet you get vs. how fast you run for various wind angles.</p>
<hr />
<hr />
<p><a href="http://2.bp.blogspot.com/_SYZpxZOlcb0/TK-CNTq4DTI/AAAAAAAAADE/wx39XVNB-f0/s1600/Caught+in+Rain+II+-+running+against+vw.jpg"><img alt="image" src="http://2.bp.blogspot.com/_SYZpxZOlcb0/TK-CNTq4DTI/AAAAAAAAADE/wx39XVNB-f0/s400/Caught+in+Rain+II+-+running+against+vw.jpg" /></a>
Fig. 3 - How wet you get vs. how fast you run for various wind speeds in mph.</p>
<hr />
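<p>If you want to play with the Case 1 result yourself, here is a minimal Python sketch of the formula above, plugging in the reasonable numbers from the last post (ρA<sub>f</sub>d = 0.2 liters over 100 m, and (A<sub>t</sub>/A<sub>f</sub>)·6 m/s = 0.72 m/s). The function name is mine, just for illustration:</p>

```python
def water_against(v, v_w):
    """Liters absorbed over 100 m running against the rain.

    Implements W = 0.2 L * (0.72/v + v_w/v + 1), i.e. the formula in
    the text with rho*A_f*d = 0.2 liters and (A_t/A_f)*(6 m/s) = 0.72 m/s.
    v is running speed, v_w is wind speed, both in m/s (v > 0).
    """
    return 0.2 * (0.72 / v + v_w / v + 1.0)

# Running faster always helps, but you can never beat the 0.2 liter
# "least wet" asymptote, no matter the wind.
for v in (1.5, 3.0, 6.0):               # walk, jog, sprint
    print(v, round(water_against(v, 4.5), 3))  # 4.5 m/s is about 10 mph
```

<p>Even this three-line version makes the plots below easy to reproduce.</p>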
<p><strong>Case 2: Running With The Rain</strong>
This is the potentially harder case. We’ve got two obvious limiting
cases. If you run with the exact velocity of the rain and the rain is
horizontal, you shouldn’t get wet. If the rain is vertical, it should
reduce to the result from my first post. We’ll start with the stationary
case. This should be identical to case 1; if you’re stationary, it
doesn’t matter if the rain is blowing on your front or back. That means
that for v=0, we should have <mathjax>$$\Delta W = \rho A_t v_r
\cos(\theta) \Delta t + \rho A_f v_r \sin(\theta) \Delta t$$</mathjax>
Now, let’s use the same method as before, pausing the rain, advancing in
x, then letting time run. First we’ll deal with our front side. Consider
figure 4.</p>
<hr />
<p><a href="http://3.bp.blogspot.com/_SYZpxZOlcb0/TLIDC2OwATI/AAAAAAAAADc/TfIMFDmOl7Y/s1600/CITR+II+-+with1.jpg"><img alt="image" src="http://3.bp.blogspot.com/_SYZpxZOlcb0/TLIDC2OwATI/AAAAAAAAADc/TfIMFDmOl7Y/s320/CITR+II+-+with1.jpg" /></a>
Fig. 4 - Geometry for small delta x.</p>
<hr />
<p>Note that in front of us there is a rainless area, which we’ll be
advancing into. Consider a delta x less than the length of the base of
that triangle. If we advance that delta x, we’ll carve out a triangle of
rain as indicated, which, by some simple geometry, contains an amount of
rain <mathjax>$$\rho w \frac{(\Delta x)^2}{2 \tan(\theta)} = \rho w
\frac{v^2 (\Delta t)^2}{2 \tan(\theta)}$$</mathjax> where w is the width of
our front. Now, consider if delta x is longer than the base of the
rainless triangle, as shown in figure 5.</p>
<hr />
<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLID3V-ITNI/AAAAAAAAADg/V2oek4ZMsC4/s1600/CITR+II+-+with2.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLID3V-ITNI/AAAAAAAAADg/V2oek4ZMsC4/s320/CITR+II+-+with2.jpg" /></a>
Fig. 5 - Geometry for large delta x.</p>
<hr />
<p>We’ll carve out an amount of rain equal to the indicated triangle plus
the rectangle. From the diagram we see this gives an amount of water
<mathjax>$$A_f \rho (\Delta x - h \tan(\theta)) + A_f \rho h
\tan(\theta)/2 = A_f \rho (\Delta x - \frac{h
\tan(\theta)}{2})$$</mathjax> We could write two separate equations for these
two cases, but that’s rather inefficient notation. I’m going to use the
<a href="http://en.wikipedia.org/wiki/Heaviside_step_function">Heaviside step
function</a>, H(x).
This is a function that is zero whenever the argument is negative, and 1
whenever the argument is positive. That means that for our front side,
<mathjax>$$\Delta W_f=\rho w \frac{v^2 (\Delta t)^2}{2 \tan(\theta)} H(
h\tan(\theta) - \Delta x) $$</mathjax> <mathjax>$$+A_f \rho \left(\Delta x -
\frac{h \tan(\theta)}{2}\right)H(\Delta x - h \tan(\theta))$$</mathjax>
Note that I’ve written my step function in terms of the relative length
of delta x and the base of the rainless triangle. We get the first term
when delta x is less than the base length, and the second term when
delta x is more than the base length. Now, let us consider the rain
hitting our back. There are two cases here as well. First consider the
case where we’re running with a velocity less than that of the rain. See
figure 6.</p>
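<p>Before moving on, the combined front-side expression above is easy to sanity-check numerically: the small-Δx and large-Δx branches should agree right at the crossover Δx = h·tan θ. A Python sketch (assuming A<sub>f</sub> = w·h, with names of my own choosing):</p>

```python
import math

def H(x):
    """Heaviside step function (using the H(0) = 0.5 convention)."""
    return 0.5 if x == 0 else float(x > 0)

def front_water(dx, h, w, theta, rho=1.0):
    """Water carved out by our front when advancing a distance dx.

    h, w: height and width of our front, so A_f = w*h.
    theta: rain angle from vertical (radians, 0 < theta < pi/2).
    rho: density of water in the air (liters per cubic meter).
    """
    A_f = w * h
    base = h * math.tan(theta)  # base of the rainless triangle
    small = rho * w * dx**2 / (2 * math.tan(theta))  # dx < base
    large = A_f * rho * (dx - base / 2)              # dx > base
    return small * H(base - dx) + large * H(dx - base)
```

<p>The two branches matching at the crossover is a good sign the geometry was done consistently.</p>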
<hr />
<p><a href="http://4.bp.blogspot.com/_SYZpxZOlcb0/TLIFgzj3yNI/AAAAAAAAADk/C5ImwwcoYME/s1600/CITR+II+-+with3.jpg"><img alt="image" src="http://4.bp.blogspot.com/_SYZpxZOlcb0/TLIFgzj3yNI/AAAAAAAAADk/C5ImwwcoYME/s320/CITR+II+-+with3.jpg" /></a>
Fig. 6 - The back.</p>
<hr />
<p>We get two terms. There’s the triangle of rain that moves down and hits
our back, shown above. Hopefully it is apparent that this is the same as
the triangle of rain we carved out with our front, and so will
contribute a volume of water <mathjax>$$\rho w \frac{v^2 (\Delta t)^2}{2
\tan(\theta)}$$</mathjax> There’s also the rain that manages to catch up with
us, <mathjax>$$A_f \rho (v_r \sin(\theta) \Delta t - \Delta x) =A_f \rho
\Delta t (v_r \sin(\theta) - v)$$</mathjax> In the case where we outrun the
rain, we don’t want this term, and our triangle gains a maximal length
of the horizontal and vertical components of the rain velocity times
delta t. We can write this backside term using a step function as
<mathjax>$$\Delta W_b =A_f \rho \Delta t \left(v_r \sin(\theta) - v +\frac{w}{A_f}
\frac{v^2 \Delta t}{2 \tan(\theta)}\right)H( v_r
\sin(\theta) - v)$$</mathjax> <mathjax>$$+\rho w v_r^2 \Delta t^2
\frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta))$$</mathjax> We can
combine these terms, with our usual top term, to get <mathjax>$$\Delta W =A_f
\rho \Delta t \big[ \left(v_r \sin(\theta) - v +\frac{w}{A_f}
\frac{v^2 (\Delta t)}{2 \tan(\theta)}\right)H(v_r \sin(\theta)
- v)$$</mathjax> <mathjax>$$+ \frac{w}{A_f} v_r^2 \Delta t
\frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta)) $$</mathjax> <mathjax>$$+
\frac{w}{A_f} \frac{v^2 (\Delta t)}{2 \tan(\theta)} H(
h\tan(\theta) - \Delta x)$$</mathjax> <mathjax>$$+\left(\frac{\Delta x}{\Delta t} -
\frac{h \tan(\theta)}{2 \Delta t}\right)H(\Delta x - h
\tan(\theta))+\frac{A_t}{A_f} v_r \cos(\theta) \big] $$</mathjax> I’m sure
this four line equation looks intimidating (I’m also sure that it is the
longest equation we’ve written here on the virtuosi!). But it’ll
simplify when we take our limit as delta t goes to zero. Let’s do this a
little more carefully than usual. <mathjax>$$\lim_{\Delta t \to
0}\frac{\Delta W}{\Delta t} =\lim_{\Delta t \to 0}A_f \rho
\big[ \left(v_r \sin(\theta) - v +\frac{w}{A_f} \frac{v^2
(\Delta t)}{2 \tan(\theta)}\right)$$</mathjax> <mathjax>$$\times H(v_r \sin(\theta) - v)+
\frac{w}{A_f} v_r^2 \Delta t
\frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta)) $$</mathjax> <mathjax>$$+
\frac{w}{A_f} \frac{v^2 (\Delta t)}{2 \tan(\theta)} H(
h\tan(\theta) - v \Delta t)$$</mathjax> <mathjax>$$+\left(v - \frac{h
\tan(\theta)}{2 \Delta t}\right)H(v \Delta t - h
\tan(\theta))+\frac{A_t}{A_f} v_r \cos(\theta) \big] $$</mathjax> We’ll take
this term by term. On the left side of our equality, we recognize the
definition of a differential of W with respect to t. Any term on the
right without a delta t we can ignore. The first term with a delta t is
<mathjax>$$\frac{w}{A_f} \frac{v^2 (\Delta t)}{2 \tan(\theta)}H(v_r
\sin(\theta) - v)$$</mathjax> In all cases except when theta = 0, this term goes
to zero. Now, when theta = 0, tan(theta) = 0, so our limit gives zero
over zero, which is a number (note, I’m not being extremely careful. If
you’d like, tangent goes as the argument to leading order, so we have
two things going to zero linearly, hence getting a number back out).
However, looking at the step function, when theta goes to zero, we
likewise require v to be zero to get a value. However, our term goes as
v^2, so we conclude that in our limit, this term goes to zero. Next we
have <mathjax>$$\frac{w}{A_f} v_r^2 \Delta t
\frac{\sin(\theta)\cos(\theta)}{2} H(v-v_r\sin(\theta))$$</mathjax> This
obviously goes to zero, no mitigating circumstances like a division by
zero. The next term is <mathjax>$$\frac{w}{A_f} \frac{v^2 (\Delta t)}{2
\tan(\theta)} H( h\tan(\theta) - v \Delta t)$$</mathjax> This term presents
the same theta = 0 issues as the first term. The resolution is slightly
more subtle and less mathematical than before. Remember that this term
physically represents the rain that hits us when we move forward through
the section that our body hasn’t shielded from the rain (see the drawing
above). I argue from a physical standpoint that when the rain is
vertical, this term would double count the rain we absorb with the next
term (which doesn’t go to zero). I’m going to send this term to zero on
physical principles, even though the mathematics are not explicit about
what should happen. Next we have <mathjax>$$vH(v \Delta t - h \tan(\theta))$$</mathjax>
The argument of the step function makes it clear that to have any chance
at a non-zero value we need theta = 0. The mathematics isn’t completely
clear here, as the value of a step function at zero is usually a matter
of convention (typically .5). Let’s think physically about what this
term represents. This is the rain we absorb beyond the shielded region
(see above figure). This is the term I said the previous term would
double count with when the rain is vertical, so we’re required to keep
it. However, only when theta = 0. I’m going to use another special
function to write that mathematically, the <a href="http://en.wikipedia.org/wiki/Kronecker_delta">Kronecker
delta</a>, which is 1 when
the subscript is zero, and zero otherwise. This is a bit of an odd use
of the Kronecker delta, because it’s typically only used for integers,
but for those purists out there, there is an integral definition which
has the same properties for any (non-integer) value. Thus <mathjax>$$vH(v \Delta
t - h \tan(\theta))=v\delta_{\theta}$$</mathjax> The last term we have to
concern ourselves with is <mathjax>$$- \frac{h \tan(\theta)}{2 \Delta t}H(v
\Delta t - h \tan(\theta))$$</mathjax> Again, there is some mathematical
confusion when theta = 0, so we think physically again. This term
represents the rain in the unblocked triangle (see above). Obviously,
there is no rain in the triangle when theta is zero, because there is no
triangle! We set this term to zero as well. This gives us a much simpler
expression than before,
<mathjax>$$\frac{dW}{dt} =A_f \rho \left[ (v_r \sin(\theta) - v)H(v_r
\sin(\theta) - v)+v\delta_{\theta}+\frac{A_t}{A_f} v_r
\cos(\theta) \right]$$</mathjax>
We can pull out a v and integrate with respect to t, giving
<mathjax>$$W=A_f \rho v t \left[ (\frac{v_r \sin(\theta)}{v} - 1)H(v_r
\sin(\theta) - v)+\delta_{\theta}+\frac{A_t}{A_f} \frac{v_r
\cos(\theta)}{v} \right]$$</mathjax>
As before, we can write this in terms of the wind velocity and the
vertical rain velocity,
<mathjax>$$W=A_f \rho d \left[ (\frac{v_w}{v} - 1)H(v_w -
v)+\delta_{v_w}+\frac{A_t}{A_f} \frac{v_{r,vert}}{v} \right]$$</mathjax>
This is a nice, simple expression that we can easily plot. There is one
thing that bothers me: I feel like there should be another step function
term that kicks in when your velocity exceeds the horizontal rain
velocity, and you start getting more rain on your front. But I’m going
to trust my analysis, and assert that such a term would be at least
second order in our work. If someone does find it, let me know! Using
the reasonable numbers from my last post gives <mathjax>$$W=.2 liters \left[
(\frac{v_w}{v} - 1)H(v_w - v)+\delta_{v_w}+\frac{.72 m/s}{v}
\right]$$</mathjax> Because this post is long enough already, I’ve gone ahead and
plotted this only vs. wind velocity. I’ve also plotted the former least
wet asymptote. Most interesting (and you’ll probably have to click on
the graph to enlarge to see this) is that there no longer is a least wet
asymptote! In theory if you run fast enough you can stay as dry as you want.</p>
<hr />
<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLH8aqysIkI/AAAAAAAAADQ/onOsQT_VMvA/s1600/Caught+in+Rain+II+-+running+with.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TLH8aqysIkI/AAAAAAAAADQ/onOsQT_VMvA/s400/Caught+in+Rain+II+-+running+with.jpg" /></a>
Fig. 6 - How wet you get vs. how fast you run for various wind speeds in mph.</p>
<hr />
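<p>If you want to play with the curve in Fig. 6 yourself, a few lines of python will do it. This is my own quick sketch, not the code that made the plot: the function name and the choice to ignore the δ spike (which only matters at exactly zero wind) are mine; the 0.2 liters and 0.72 m/s are the numbers from above.</p>

```python
# Evaluate W = 0.2 liters * [ (v_w/v - 1) H(v_w - v) + 0.72/v ]
# for a runner at speed v (m/s) in wind-blown rain of horizontal
# speed v_w (m/s). The delta_{v_w} spike at exactly zero wind is ignored.

def wetness(v, v_w, prefactor=0.2, vert_term=0.72):
    """Total water (liters) soaked up over the fixed distance d."""
    step = 1.0 if v_w > v else 0.0          # Heaviside H(v_w - v)
    return prefactor * ((v_w / v - 1.0) * step + vert_term / v)

# Wetness falls off without bound as you run faster -- no asymptote.
for v_w in (2.0, 5.0):
    print([round(wetness(v, v_w), 3) for v in (1.0, 3.0, 10.0)])
```

<p>Cranking v up confirms the claim: there is no least-wet floor, only a slower and slower approach to perfectly dry.</p>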
<p><strong>Comparison</strong>
I will conclude with a comparison of the two results, to each other and
to the vertical case. First, let’s take the appropriate limits.
<mathjax>$$W_{with}=A_f \rho d \left[ (\frac{v_w}{v} - 1)H(v_w -
v)+\delta_{v_w}+\frac{A_t}{A_f} \frac{v_{r,vert}}{v} \right]$$</mathjax>
<mathjax>$$W_{against} = \rho A_f d\left( \left( \frac{A_t}{A_f}\right)
\left(\frac{v_{r,vert}}{v}\right) +\left(\frac{v_w}{v}\right) +
1\right)$$</mathjax>
<mathjax>$$W_{stationary} = \rho t A_f \left(\frac{A_t}{A_f}
v_{r,vert}+v_w\right)$$</mathjax>
<mathjax>$$W_{vert}= \rho d A_f \left(\frac{A_t}{A_f} \frac{v_r}{v} + 1
\right)$$</mathjax>
In the stationary limit, we have to break up the d in our equations into
v t, and that gives
<mathjax>$$\lim_{v \to 0}W_{with}= \lim_{v \to 0} W_{against}=\rho t
A_f \left(\frac{A_t}{A_f} v_{r,vert}+v_w\right)$$</mathjax>
While in the vertical rain limit
<mathjax>$$\lim_{v_w \to 0}W_{with}= \lim_{v_w \to 0} W_{against}
=\rho d A_f \left(\frac{A_t}{A_f} \frac{v_r}{v} + 1 \right)$$</mathjax>
So our limits work. Finally, it’s a little hard to tell the difference
between the forward and backward cases, so I’ve plotted the two lines
together for a few values of v_w. You’ll notice that for zero wind
speed they have the same result (which is good, since our limit was the
same), but for the other wind speeds they are remarkably divergent, more
so as you run faster! (again, click to enlarge)</p>
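<p>The limits are also easy to check numerically. Here’s my own little sketch (the function names and lumped constants are mine): I treat δ<sub>v<sub>w</sub></sub> as an indicator that is 1 only at exactly zero wind, and fold ρA<sub>f</sub>d into 0.2 liters and (A<sub>t</sub>/A<sub>f</sub>)v<sub>r,vert</sub> into 0.72 m/s as before.</p>

```python
# Check that W_with and W_against agree at zero wind (the stationary/vertical
# limit above) and diverge, in ratio, as you run faster.

def w_with(v, v_w, c=0.2, vr=0.72):
    step = 1.0 if v_w > v else 0.0
    delta = 1.0 if v_w == 0.0 else 0.0   # delta_{v_w}: only fires with no wind
    return c * ((v_w / v - 1.0) * step + delta + vr / v)

def w_against(v, v_w, c=0.2, vr=0.72):
    return c * (vr / v + v_w / v + 1.0)

for v in (2.5, 5.0, 10.0):               # running speeds, m/s
    print(round(w_with(v, 2.0), 3), round(w_against(v, 2.0), 3))
```

<p>At v<sub>w</sub> = 0 the two functions return identical values, as the limit demands, while at any finite wind speed the ratio of against to with grows with running speed.</p>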
<hr />
<p><a href="http://3.bp.blogspot.com/_SYZpxZOlcb0/TLH87WlO8aI/AAAAAAAAADU/pOq_rEz2wkI/s1600/Caught+in+Rain+II+-+compare.jpg"><img alt="image" src="http://3.bp.blogspot.com/_SYZpxZOlcb0/TLH87WlO8aI/AAAAAAAAADU/pOq_rEz2wkI/s400/Caught+in+Rain+II+-+compare.jpg" /></a>
Fig. 7 - Solid lines are running with the rain, dashed lines are running against the rain.</p>
<hr />
<p><strong>Conclusions</strong>
Hopefully this has been an interesting exercise for you. I know it
certainly took me longer to work and write than I initially thought.
While you can’t see it in the post, there was a lot of scribbling and
thinking going on before I came to these conclusions. Most of it went
something like: “No, that can’t be right, it doesn’t have the right
(zero velocity/zero angle) limit!”. I think this concludes all of the
running in the rain that I want to do, but if you have more followup
questions, post them below, and I’ll do my best to answer. Also, I admit
that my analysis may be a bit rough, so if you have other approaches,
let me know. Finally, note that everything I’ve found favors running in
the rain, so get yourself some exercise and stay dry!</p>Beards and Pulsars2010-09-28T02:56:00-04:00Corkytag:thephysicsvirtuosi.com,2010-09-28:posts/beards-and-pulsars.html<hr />
<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TKF59ueq6-I/AAAAAAAAAHM/-dRSw4CNM3w/s1600/hulse_postcard.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TKF59ueq6-I/AAAAAAAAAHM/-dRSw4CNM3w/s200/hulse_postcard.jpg" /></a>
The bearded half of Hulse-Taylor</p>
<hr />
<p>A few weeks ago I was on a bus going through Scranton and I read a
super-awesome fun fact regarding the Hulse-Taylor binary pulsar in
<em>Black Holes, White Dwarfs and Neutron Stars</em>. Sadly, I have since
forgotten it and left the book a few thousand miles away. So, let’s just
make up our own! First, we need a little background. What the heck is a
pulsar? A pulsar is a rapidly rotating neutron star that beams
electromagnetic radiation towards us, which is how we can see them.
Typical rotation periods range from a millisecond to a few seconds. So
each time the pulsar rotates, we observe a blip when the radiation beams
towards us. Since these objects are additionally very stable rotators,
they are essentially very accurate clocks with which we may make
astronomical measurements. So what’s the Hulse-Taylor binary pulsar? The
Hulse-Taylor binary is almost exactly what it sounds like: it’s a pulsar
binary where one of the pulsars is pointed towards earth. It was the
first binary of its kind discovered and offers a unique look into a
very high gravity environment. It also provided a very nice test for
General Relativity. General Relativity predicts that two orbiting
massive bodies should emit gravitational waves. This emission of
gravitational waves will then cause the orbit to decay and the two
bodies to move closer together. So does the Hulse-Taylor binary show
this? Take a look:
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TKF5SwXxsXI/AAAAAAAAAHI/wPELMEGlWW0/s1600/PSR_1913_new_large.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TKF5SwXxsXI/AAAAAAAAAHI/wPELMEGlWW0/s400/PSR_1913_new_large.jpg" /></a>The
data fit the prediction of general relativity perfectly! For this
discovery Hulse and Taylor shared the 1993 Nobel prize in Physics. Now
that’s all well and good, but I was promised some fun facts…? Ah, yes!
Well, we mentioned that the Hulse-Taylor binary orbit is decaying. It
turns out that the orbit is decaying at about 3.5 meters per year.
That’s pretty slow. Let’s put it into a more conventional speed, like
meters per second. So <mathjax>$$ 3.5 m/yr = 3.5 m/yr \times \frac{1 year}{3.14
\times 10^7 s} = 1.1 \times 10^{-7} m/s $$</mathjax> or, in less useful units,
<mathjax>$$ 3.5 m/yr = 110 nm/s $$</mathjax> Great, so what to compare this to? Well, all
people who are in the know know that I am a manly man who gained the
ability to grow facial hair sometime after my sophomore year of college.
And since I have to pretend to be an upstanding member of society this
week, I happen to know the last time I shaved. Thus, a few simple
measurements and I can estimate how long hair takes to grow. The last
time I shaved was three days ago and a quick eyeball measurement (sadly
I have no ruler) gives a facial hair length of about 2mm. Thus, a beard
grows at about 0.7 mm/ day. <mathjax>$$ 0.7 mm/day = 0.7 mm/day \times
\frac{10^{-3} m}{mm} \times \frac{1 day}{86400s} = 8 nm/s $$</mathjax> This is
a universal speed constant, which we shall call the speed of beard. Or,
bowing to our oppressive overlord sponsors*, we shall call it “Gillette
Mach 1.” So doing a quick division, we find that the rate at which the
Hulse-Taylor binary’s orbit is shrinking is roughly 14 times beard
speed, or in our commercial units, Gillette Mach 14 (a razor close
shave!). “Well,” I hear you cry (a bit disappointed…?), “that’s a
<em>pretty</em> useless unit, but can’t we be <em>more</em> useless?” Yes, dear
reader, we certainly can! We are currently at
<a href="http://www.youtube.com/watch?v=2xZp-GLMMJ0">Snuggie</a> levels
of uselessness right now, but I think we can just about bump it up to
<a href="http://www.youtube.com/watch?v=0ONJfp95yoE">Member of Congress</a>*
useless if we try. A furlong is a unit of length about 200 meters long.
A fortnight is a unit of time about 14 days long. Therefore, if we want
a speed we just… <mathjax>$$ \frac{furlong}{fortnight} = 1
\frac{furlong}{fortnight} \times \frac{200 m}{furlong} \times
\frac{1 fortnight}{14 \times 86400 s} = 1.6 \times 10^{-4}
\frac{m}{s} $$</mathjax>
So the rate of decay of the Hulse-Taylor binary is:
<mathjax>$$ 3.5 \frac{m}{yr} = 1.1 \times 10^{-7} m/s \times \frac{1
furlong/fortnight}{ 1.6 \times 10^{-4} m/s} = 7 \times 10^{-4}
\frac{furlong}{fortnight} $$</mathjax> Hooray! So now we know the decay rate of
the Hulse-Taylor binary orbit in two horrible units: either 700
microfurlongs per fortnight or 14 times the speed of beard (<span class="caps">AKA</span> Gillette
Mach 14). Please write these in your copybooks now and forever commit
them to memory. * In no way is The Virtuosi affiliated with the
wonderful Gillette Company, which makes the world’s best razors. Since
we aren’t affiliated with this great Gillette company, we are not
obligated to repeat their slogan that it’s “The Best A Man Can Get”
despite its self-evident truth. Nor is the author required to say that
the silky smooth shave I get with a Mach 20 razor is the only reason I
can even muster social interaction. Hooray!</p>Paradigm Shifts 2: Paradigm ShiftER2010-09-19T23:55:00-04:00Samtag:thephysicsvirtuosi.com,2010-09-19:posts/paradigm-shifts-2-paradigm-shifter.html<p>Last time, I presented reasons why it would be economically infeasible
for the <span class="caps">US</span> to switch to the metric system. This time, I’d like to talk
about a change that could relatively easily be brought about soon. A
change that would barely cost a thing, but could improve efficiency
dramatically in many jobs and in everyday life for many people. A
change of this type would be very handy. Puns aside though, what I’m
talking about is this: <span class="caps">DVORAK</span> <span class="caps">SIMPLIFIED</span> <span class="caps">KEYBOARD</span> (again lots from
Wikipedia)
<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/2/25/KB_United_States_Dvorak.svg/500px-KB_United_States_Dvorak.svg.png"><img alt="image" src="http://upload.wikimedia.org/wikipedia/commons/thumb/2/25/KB_United_States_Dvorak.svg/500px-KB_United_States_Dvorak.svg.png" /></a>
Look down at your keyboard. Chances are very good that if you bought
your keyboard in an English speaking country, you’re using the <span class="caps">QWERTY</span>
keyboard layout. You’ll also probably know (or else you’ll learn from
me) that the letter “E” is the most common in the English language. You
might wonder then, why it’s not in the “home row” (the row of keys in
the middle of the keyboard that would be right under your fingers if
you’re typing in the standard way). You might also wonder why other
common letters like “T” were exiled to the top row while less common
letters like “J” and “K” sit right under your fingertips as they rest
idle. You may then think about how slow it is to type words like
“December” which require you to use the same finger for consecutive
letters. You may even get to thinking that a lot of words require using
the same hand for consecutive letters, but it would be much nicer to
alternate hands as you type. The reasons for the slow speed of <span class="caps">QWERTY</span>
are not entirely clear. There are stories floating around about the
inventor of the typewriter deliberately laying out the keyboard this way
to keep typing speeds down so that the mechanical keys wouldn’t jam. The
authenticity of these stories is disputed, but you’d be hard-pressed to
argue that <span class="caps">QWERTY</span> is the most efficient layout there could be. One
competitor for the title that has stood out is called Dvorak, named
after its creator. Some statistics from Wikipedia (sources given there):</p>
<ul>
<li><span class="dquo">“</span>the Dvorak layout uses about 63% of the finger motion required by <span class="caps">QWERTY</span>”</li>
<li><span class="dquo">“</span>the vast majority of the Dvorak layout’s key strokes (70%) are done
in the home row” whereas <span class="caps">QWERTY</span> “has only 32% of the strokes done in
the home row”</li>
<li><span class="dquo">“</span>The <span class="caps">QWERTY</span> layout has more than 3,000 words that are typed on the
left hand alone and about 300 words that are typed on the right hand
alone”, but “with the Dvorak layout, only a few words can be typed
on the left hand and not one syllable can be typed with the right
hand alone, much less a word.”</li>
<li><span class="dquo">“</span>On <span class="caps">QWERTY</span> keyboards, 56% of the typing strokes are done by the left
hand. As the left hand is weaker for the majority of people, the
Dvorak keyboard puts the more often used keys on the right hand
side, thereby having 56% of the typing strokes done by the right hand.”</li>
<li><span class="dquo">“</span>Because the Dvorak layout requires less finger motion from the
typist compared to <span class="caps">QWERTY</span>, many users with repetitive strain
injuries have reported that switching from <span class="caps">QWERTY</span> to Dvorak
alleviated or even eliminated their repetitive strain injury.”</li>
<li><span class="dquo">“</span>The fastest English language typist in the world, according to The
Guinness Book of World Records” achieved “a peak speed of 212 wpm”
using Dvorak</li>
</ul>
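<p>You can get a rough feel for the home-row statistic with a few lines of python. This is my own back-of-the-envelope sketch, not any official measurement: the home-row letter sets are the standard layouts, but the sample text and function are made up by me, so the exact fractions will vary with the text you feed it.</p>

```python
# Rough estimate: what fraction of keystrokes land on the home row
# under QWERTY vs. Dvorak, for some sample English text?

QWERTY_HOME = set("asdfghjkl")   # QWERTY home-row letters
DVORAK_HOME = set("aoeuidhtns")  # Dvorak home-row letters

def home_row_fraction(text, home):
    """Fraction of alphabetic characters typed on the given home row."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in home for c in letters) / len(letters)

sample = "the quick brown fox jumps over the lazy dog" * 3
print(round(home_row_fraction(sample, QWERTY_HOME), 2))
print(round(home_row_fraction(sample, DVORAK_HOME), 2))
```

<p>Even on this tiny, vowel-heavy sample, Dvorak keeps well over twice the fraction of strokes on the home row that <span class="caps">QWERTY</span> does, consistent with the 70% vs. 32% figures quoted above.</p>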
<p>Okay, so maybe by now you see that Dvorak can be more efficient. So why
hasn’t it been implemented yet? Well, back in the typewriter era, it was
one layout or the other, and people picked <span class="caps">QWERTY</span>. Look at Wikipedia for
a summary. But now, assuming you’re not using a typewriter or a 386 or
something equivalently ancient, it’s actually quite easy to switch back
and forth between <span class="caps">QWERTY</span> and Dvorak. Check out your control panel if
you’re on a <span class="caps">PC</span> (or the equivalent on a mac or linux or what have you)
and I bet you’ll find the setting for changing the keyboard layout
pretty easily, and I bet that Dvorak will be one of the layouts you can
choose. So it’s easy enough to switch all the devices to the new system,
with pretty much no cost. The problem is of course that hardly anybody
can type with Dvorak. Everybody is used to <span class="caps">QWERTY</span>! And I’m sure there
are people out there who are happy to try to learn Dvorak, but I’m not
one of them. I’m used to <span class="caps">QWERTY</span> and I’m pretty sure if I tried to
switch, I’d get confused and completely jumble the two systems. I just
can’t see the slight increase in my typing speed being worth the hassle.
Maybe some can, but I’m sure that many feel the same way I do,
especially those who, like me, don’t do a lot of transcribing or the
like, and so rarely need to type very fast. The key? (sorry, no more
puns, I promise) Get ‘em while they’re young. That’s right, all the
young typists out there can be trained on Dvorak instead of <span class="caps">QWERTY</span>. If
they never learn <span class="caps">QWERTY</span>, they’ll never be confused. And they can use the
same keyboards as everyone else. All that we’d have to do is make it
easier to switch the keyboard layout (put something a little closer than
the control panel, maybe right on the keyboard), and print two sets of
characters on the keyboard when they are made (note that I’ve neglected
cell phones and other devices that also have keyboards, but they can be
updated in the same way). It would be as easy as that to implement the
change. Give it maybe 5-10 years to phase in keyboards and operating
systems with the easy-switch built in (in this case, it’s nice that
computers have such short life spans), then have schools start teaching
kids to use Dvorak in typing classes. In a generation, hardly anyone
will even remember <span class="caps">QWERTY</span> except as a weird hiccup along the way to efficiency.</p>Breaking Intuition2010-09-19T16:52:00-04:00Bohntag:thephysicsvirtuosi.com,2010-09-19:posts/breaking-intuition.html<p>When I walked into my first day of physics class in high school, I
carried with me a set of ideas which I learned from simply observing and
interacting with the world. In fact everyone builds up what they believe
to be intuitive concepts, whether it be in science, math, or any other
field. Without any scientific training whatsoever, we begin to build
intuition. If you let go of a ball in the air, what will happen? If you
try to run on the ice of a frozen lake, will it be easier than running
on the sidewalk? If you stand in the sun and on the ground you see a
strange dark misshapen copy of yourself imitating your every move… who
is following you? Unfortunately we run into an issue when our intuition
disagrees with experimental results or someone else’s intuition. At that
point, it is essential to break down and analyze our intuition to find
where any problems in our logic may exist. This process of continually
breaking down and analyzing intuition is key to progressing in science.
<a href="http://4.bp.blogspot.com/_CPJjnXOJ-mQ/TJZ5DVIGsPI/AAAAAAAAABU/-22lS-Cq6_U/s1600/ThreeDice.jpg"><img alt="image" src="http://4.bp.blogspot.com/_CPJjnXOJ-mQ/TJZ5DVIGsPI/AAAAAAAAABU/-22lS-Cq6_U/s320/ThreeDice.jpg" /></a>Let’s
take a look at a simple dice game. The rules of the game dictate that
you pick a die first, then I pick a die, then we roll together 100 times
(we’re really bored, apparently). The winner is the person who rolls a
higher number more times in 100 rolls. The catch is that the numbers are
not the standard 1-6 on each die, but a magic set of numbers which may
repeat any number from 1-6 as many times as desired, for example {1, 2,
3, 4, 5, 5}. “Sounds easy,” you say, as you pick up the yellow die. I
choose blue. We roll, and I win 74 out of 100 times. “Obviously the blue
die is better, give me that one,” you say. I proceed to pick up the
green die, lo and behold, I win 63 out of 100 times. “Okay okay, I’ve
got the hang of it now. Clearly the green die is better than all of the
rest.” I choose the yellow die and win 65 out of 100 times. In a fit of
rage you proclaim “witchcraft” and storm off for your witch-hunt gear.
There is no deception here with the exception of logic, the younger
sister of witchcraft. It is actually an interesting challenge to try to
come up with a set of numbers which will yield the following result: The
probability of the value on the blue die being higher than the value on
the yellow die is greater than 1/2. The probability of the value on the
green die being higher than the value on the blue die is greater than
1/2. The probability of the value on the yellow die being higher than
the value on the green die is greater than 1/2. There are definitely a
multitude of possible solutions, so I encourage you to attempt to find
one using only numbers 1-6 before scrolling down. Got a solution? Let’s
take a look at the following dice: Yellow : {1, 4, 4, 4, 4, 4} Blue :
{2, 2, 2, 5, 5, 5} Green : {3, 3, 3, 3, 3, 6} I should note that my
solution is set up to have no ties, which makes the analysis a bit more
straightforward. It is certainly possible to come up with interesting
solutions which allow ties.
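If you’d rather let the computer do the counting, here’s a quick brute-force check of my claim (the snippet is mine; the die face lists are just the solution above):

```python
# Count, for each pair of dice, how many of the 36 equally likely
# face match-ups the first die wins.

from itertools import product

yellow = [1, 4, 4, 4, 4, 4]
blue   = [2, 2, 2, 5, 5, 5]
green  = [3, 3, 3, 3, 3, 6]

def wins(a, b):
    """Number of the 36 roll combinations in which die a beats die b."""
    return sum(x > y for x, y in product(a, b))

print(wins(blue, yellow), wins(green, blue), wins(yellow, green))
# → 21 21 25, each more than half of 36: a non-transitive cycle
```
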
<a href="http://1.bp.blogspot.com/_CPJjnXOJ-mQ/TJZ436UNErI/AAAAAAAAABM/m-BpJNofQFk/s1600/Screen+shot+2010-09-19+at+4.10.27+PM.png"><img alt="image" src="http://1.bp.blogspot.com/_CPJjnXOJ-mQ/TJZ436UNErI/AAAAAAAAABM/m-BpJNofQFk/s320/Screen+shot+2010-09-19+at+4.10.27+PM.png" /></a>The
chart on the right shows how each die compares to the others. The color
of each square indicates the winner when the number of the same row and
column are compared. We can see that blue beats yellow 21 out of 36
times, green beats blue 21 out of 36 times, and yellow beats green 25
out of 36 times. So this combination of dice will show the
non-transitive effect we were looking for. So I explain this “sorcery”
to you, before you try to burn me at the stake for being a witch, and
you calm down. Now I tell you that I’d like to try a new game. I select
two dice of the same color, then you get to select two dice of the same
color, then we roll both pairs 100 times. The winner this time is the
person who rolls a higher total, the sum of their two dice, more times
in 100 rolls. I select two yellow dice. After learning of my trick, you
decide to pick two blues and proceed to lose 60 out of 100 times. You
declare, “‘tis but a statistical error, let’s have another go!” I select
two blues and you, two greens. I win again! Just to rub it in, I choose
green and you choose yellow, and I win once again. Softly weeping, you
listen as I explain that the probabilities have now switched! The chart
on the right shows the different sums that are possible for a given set
of colored dice. When you
look<a href="http://3.bp.blogspot.com/_CPJjnXOJ-mQ/TJZ4wlSQh4I/AAAAAAAAAA0/p69QSdtqFdc/s1600/Screen+shot+2010-09-19+at+4.10.38+PM.png"><img alt="image" src="http://3.bp.blogspot.com/_CPJjnXOJ-mQ/TJZ4wlSQh4I/AAAAAAAAAA0/p69QSdtqFdc/s320/Screen+shot+2010-09-19+at+4.10.38+PM.png" /></a>
at the possible sum 4 for the blue dice, you see that 4 can meet a
yellow sum of 2 in nine ways, with blue winning each; it can meet a
yellow sum of 5 in 90 ways, with yellow winning each; and it can meet a
yellow sum of 8 in 225 ways, with yellow winning each. So the value in
each cell is the number of ways each match-up can occur, with the color
of the cell showing who wins that match-up. There are 6^4 = 1296
possibilities, so winning half corresponds to 648. This dice trick is an
example of non-transitive logic, which can certainly be a non-intuitive
topic (Stay tuned for some non-transitive logic involving coins!). In
this case, you must break your intuition that there must be one “best”
die. In science, it’s a great idea to try to look for other examples of
the behavior you are observing to help reinforce what you’ve learned. It
turns out that one of the most basic schoolyard games involves
non-transitive logic! In the game of rock, paper, scissors, we find that
rock crushes scissors, scissors cuts paper, and paper covers rock. This
is analogous to the behavior of our special dice, and I believe makes
the logic much easier to understand. Compare against your intuition,
break down and analyze, build up and reinforce.</p>Visualizing Quantum Mechanics2010-09-14T20:06:00-04:00Alemitag:thephysicsvirtuosi.com,2010-09-14:posts/visualizing-quantum-mechanics.html<p>Or how I learned to stop worrying and love the computer. [Note: There’s
a neat video below the fold. ]</p>
<h3>A Confession</h3>
<p>I was recently rereading the <a href="http://books.google.com/books?id=_6XvAAAAMAAJ&q=Feynman+lectures+on+physics&dq=Feynman+lectures+on+physics&hl=en&ei=6wmQTPzsDIG78gbAp_joDQ&sa=X&oi=book_result&ct=result&resnum=2&ved=0CDgQ6AEwAQ">Feynman Lectures on
Physics</a>.
If you haven’t read them lately, I highly recommend them. Feynman is
always a pleasure to read. As usual, I was surprised. This time the
surprise came in lecture 9, which, given the way the course was laid
out, was something like the last lecture of the third week of
university-level physics these students had ever received. The
lecture is on Newton’s laws of dynamics. The start is of course Newton’s
<del>first</del> second law, <mathjax>$$ F = \frac{d }{dt } (mv ) $$</mathjax> which, provided
the mass is constant takes the more familiar form <mathjax>$$ F = ma $$</mathjax> After
discussing the meaning of the equation and how in general it can give
you a set of equations to solve, he naturally uses an example to
illustrate the kinds of problems you can solve. What system does he
choose to use as the first illustration of a dynamical system? The Solar
System. That’s right. Let that settle for a second. The sad thing is
that if you caught me off guard before I read the lecture, caught me in
an honest moment and asked me how you would solve the solar system, I
would probably have launched into a discussion of the <a href="http://en.wikipedia.org/wiki/N-body_problem">N-body
problem</a> and how there is
no closed form solution to newtonian gravity that involves 3 or more
bodies. (Depending on who you are, I might have then mentioned the
<a href="http://adsabs.harvard.edu/abs/1991CeMDA..50...73W">recent caveat</a>,
namely that there is a closed form solution to the N-body problem, but
that it involves a very very very slowly convergent series). Now, how
can Feynman use the Solar System as his first example of solving
Newtonian dynamics and I have told you that it’s impossible as my first
words on the subject? Well, the answer of course is that Feynman was
much smarter than I am. Perhaps another way to say it is that in a lot
of ways Feynman was a more contemporary physicist than I am.</p>
<h3>A Realization</h3>
<p>Physics education has changed very little in the last 50 years or so.
Now in some ways this isn’t a problem. The laws of nature also haven’t
changed in the last 50 years. What’s unfortunate is that the tools
available to physicists to answer their questions have changed
remarkably. Namely, computers. Computers are great. They permeate daily
life nowadays. They are capable of performing millions of computations
per second. This is great for physics. You see, a lot of the time, as
you all know, the way you achieve answers to specific questions about
the evolution of a system is to do a lot of computation. So what did
physicists do before computers? Well, a lot of time they would have to
do a lot of calculations out by hand, but no one enjoys that, so a lot
of times you would have to make sacrifices, make assumptions that meant
that your analytical investigations were simple enough to yield tidy
little equations. This is reflected in the kinds of problems we still
solve in our physics classes. I never solved the solar system in my
mechanics class. I never did it because there isn’t a closed form
analytical solution to the solar system. But you know what… that
doesn’t matter. It doesn’t matter in the least. Because while there
doesn’t exist a closed form solution to the problem, it is very easy to
come up with a numerical approximation scheme (see <a href="http://en.wikipedia.org/wiki/Euler_method">Euler
Method</a>). You see, the point
of physics is to get answers to questions. And the fact of the matter is
that those answers don’t have to be ‘exact’, they don’t have to be
perfect. They need to be good enough that we can’t tell the difference
between them being ‘exact’ and them being an approximation. To do this
numerically with a pad of paper and a pencil is a heroic task. To do
this with a computer takes a couple of lines of python code and a couple of seconds.</p>
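<p>To make that concrete, here is my own minimal sketch of the kind of calculation Feynman walks his students through (it’s not his code, and the initial conditions and units are choices I made): one planet orbiting the Sun, stepped forward with the Euler method.</p>

```python
# One planet around the Sun, integrated with a simple Euler-style step,
# in units where G*M_sun = 4*pi^2 (distances in AU, times in years).

import math

GM = 4.0 * math.pi ** 2          # AU^3 / yr^2
x, y = 1.0, 0.0                  # start 1 AU from the Sun
vx, vy = 0.0, 2.0 * math.pi      # circular-orbit speed, AU/yr
dt = 1e-4                        # time step, yr

for _ in range(int(1.0 / dt)):   # integrate for one year
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # Newtonian gravity
    vx += ax * dt                # update velocity from acceleration,
    vy += ay * dt                # then position from the new velocity
    x += vx * dt                 # (updating v first keeps the orbit stable)
    y += vy * dt

print(x, y)                      # after one year: back near (1, 0)
```

<p>After one simulated year the planet has come back to very nearly its starting point, with no closed-form solution anywhere in sight.</p>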
<h3>An example</h3>
<p>As an example of the neat things you can do with a few lines of python
code and a few minutes on your hands, check this out.
<a href="http://www.youtube.com/watch?v=J4Wg_b8bVm8">and</a>
<a href="http://www.youtube.com/watch?v=idpQOJKOh6Y">there’s</a>
<a href="http://www.youtube.com/watch?v=Z9121zwpbBs">more</a> This video depicts
time dependent quantum mechanics. I set up a gaussian wavepacket, inside
of a potential that includes a hard wall on the sides and is
proportional to x. That sounds fancy but what it means is that this is
the quantum mechanics equivalent of a bouncing ball. The amplitude of
the wave function corresponds to the probability of finding the particle
at any location. That is, if you pick one of the colored pixels at
random and look down at its x position, that is what measuring the
position of the particle would give you. But what are the colors? Quantum mechanical wave functions are
complex. This means you can represent them either with a real and
imaginary part, or with a magnitude and a phase. Here it’s the latter.
Like I said the amplitude is shown with the height (actually the
amplitude squared). The color corresponds to the phase, where the phase
is mapped to a location on the color wheel, just like the one that pops
up in Photoshop or <span class="caps">GIMP</span>. And there’s sound too! The sound is what the
wave function would sound like if it were making noise. It’s the real part
of the wave function played as a sound. To that end, in this video it is
very low frequency, because I made the movie slow enough to see the
colors changing well. It’s fun to watch the video and listen to the
sound. For this movie the sound correlates nicely to when the ‘ball’
reaches its maximum height. What’s also cool is that you can hear the
‘ball’ delocalize after each bounce. The sound and function start off
being nice and sharp, but after a few bounces it starts to spread out.
You can also see how momentum is encoded in quantum mechanics. Funny
thing is that instead of being something separate that you need to
specify like in classical mechanics, in quantum mechanics the wave
function is a complete description of the evolution of the system. I.e.
if I showed you just one frame of this bouncing ball, you would be able
to recreate the entire movie. If I showed you just one frame of a
classical basketball, you’d have no idea what frame came next since
you’d only know its position, not its velocity. In quantum mechanics the
momentum gets encoded in the wave function, and as you can tell it’s
encoded as a complex twist. A phase gradient. A crazy rainbow. If you
look closely, you can even see that you can tell the difference between
whether the particle is falling left or right. When it goes left the
rainbow pattern goes (reading left to right) blue red green. When it’s
moving right it goes blue green red. It twists one way then the other in
the complex plane. The colors are a little hard to see in this one,
they’re a little easier to see in this one: This second one I dressed up
a bit, labelling the axes with units, putting a time counter,
superimposing the potential I was talking about, and marking the average
expected position with a tracer black dot on the bottom.</p>
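<p>The core trick behind these movies fits in a few lines of python. What follows is my own bare-bones reconstruction, not the actual schrod.py (the grid size, potential, and wavepacket parameters are all choices I made): discretize the Hamiltonian, diagonalize it, expand a Gaussian wavepacket in the eigenbasis, and let each eigenmode pick up its phase.</p>

```python
# Eigenbasis time evolution for a 1D "quantum bouncing ball":
# hard walls at the grid edges plus a linear (gravity-like) potential.

import numpy as np

n, L = 400, 40.0                     # grid points, box size (hbar = m = 1)
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
V = x                                # V(x) = x

# Hamiltonian: kinetic energy from the 3-point second derivative, plus V
H = np.diag(V + 1.0 / dx**2)
H += np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
H += np.diag(-0.5 / dx**2 * np.ones(n - 1), -1)
E, psi_n = np.linalg.eigh(H)         # energies and eigenvectors

# Gaussian wavepacket "dropped" from x = 25, expanded in the eigenbasis
psi0 = np.exp(-(x - 25.0) ** 2 / 4.0).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)
c = psi_n.T @ psi0                   # expansion coefficients

def evolve(t):
    """Wave function at time t: each eigenmode just rotates its phase."""
    return psi_n @ (c * np.exp(-1j * E * t))

for t in (0.0, 2.0, 4.0):
    p = evolve(t)
    print(round(np.sum(np.abs(p) ** 2) * dx, 6))   # norm stays 1
```

<p>Each frame of a movie is just <code>evolve(t)</code> at a later t, with the squared amplitude plotted as height and the complex phase mapped to the color wheel.</p>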
<h3>A Call to Arms</h3>
<p>Any student who has taken a first course in quantum mechanics knows
enough physics to make these movies. The physics isn’t complicated. But
the movies are really neat, right? More than neat. Making these videos
taught me things about quantum mechanics I should have learned a long
time ago. I really think computers are underestimated in physics
education. They can be a great tool. A picture is worth a thousand
words, so a movie must be worth millions*. (*: denotes stolen quote)
More than just as an illustrative tool, the fact that even students in
the first introductory mechanics physics course can solve for something
like the solar system shouldn’t be hidden from them. Classical mechanics
after all is the physics of pretty much every object we can see and
touch, but classical mechanics classes only ever talk about <a href="http://en.wikipedia.org/wiki/Atwood_machine">Atwood
machines</a> and frictionless
planes. Often the closest they come to realism is in discussing
projectile motion, where the laws you learn in the book (neglecting air
resistance) are very good at describing the trajectories of very dense
large objects (i.e. cannonballs). I can’t remember the last time I’ve
fired a cannon. But air resistance serves little trouble to my computer.
Or <a href="http://www.wired.com/wiredscience/tag/air-resistance/">Rhett’s</a> (of
Dot Physics, which has just moved to Wired). Basically, if you give a
student an intro physics course and an intro programming course,
suddenly you have a human being who is better equipped to answer
questions about natural phenomena than 99% of the human beings who have
ever lived. So let’s take a tip from Feynman and teach physics students
how to solve the solar system.</p>
<h3>Code</h3>
<p>As per request, here is the python code I used to generate the videos.
It’s rather messy, so I apologize in advance.
<a href="https://docs.google.com/leaf?id=0B8Il0b2saix4NzYzMmRhZDUtODFhZS00YTE1LTgzZWYtMzVhODI5YzRhNWJm&hl=en&authkey=CPrk9IUM">schrod.py</a>
- A general script which finds the eigenvalues and eigenbasis for a 1D
particle with an arbitrary potential.
<a href="https://docs.google.com/leaf?id=0B8Il0b2saix4ZDYxZmFlNzQtYzdkNC00YTVkLWJhNWMtN2IxM2ZmZDg4Mzg4&hl=en&authkey=CJ3m4ogJ">qmsolver-bouncy.py</a>
- Code to generate the movie. You need to create a directory with the
same name as the name in the script, in the same folder as the script.
The last two lines make the sound and the directory full of images. I
used ffmpeg to wrap the two together.</p>Microseconds and Miles2010-09-12T23:04:00-04:00Corkytag:thephysicsvirtuosi.com,2010-09-12:posts/microseconds-and-miles.html<p>The following is an unfinished manuscript found under heaps of rubble
and pizza boxes here at Virtuosi headquarters. It appears to be some
sort of screen play, though one would be hard-pressed to figure this out
solely from the script. The true giveaway was the 100 page addendum (not
published) full of potential titles and acceptance speeches. I dare not
bore you with these vanity pages in their entirety, but just for
completeness and posterity I include some samples.
For possible titles we have: “Dr. Dre, <span class="caps">OR</span>: How I Learned to Stop
Worrying and Love the Metric,” “How to Teach Physics to your Dee Oh
Double G (West Coast Edition),” “Bring Da Ruckus: ODEs by <span class="caps">ODB</span>” and
“Flavor Flav’s Flavor Physics…boooyeeeee!” among other even worse and
less relevant titles.
Among the acceptance speeches we have one that starts: “I would like to
thank the Academy, Scott Bakula and Chuck D. You know what you did.
Here’s a song I wrote…”, etc. It is all very painful.
There is almost no value to this document whatsoever, but it does
present a nice fun fact about <span class="caps">GPS</span>. The legible parts of the script are
thus presented below. The illegible parts appear to have been obscured
by some caustic mixture of Mountain Dew, pizza sauce and tears. <strong> <em>We
descend to the bottom of the abandoned mineshaft recently converted to
the headquarters of General Stanley K. Ripper. He is currently engaged
in a heated discussion with his science advisor, Dr. Vontavious Dre. We
join mid-sentence as a result of lazy writing…</em></strong>
<em><span class="caps">DR</span>. <span class="caps">DRE</span>: …shortage of scientists! What’s that? Yeah, you could
definitely call it a “chronic” shortage if you want. But semantics
aren’t important right now. The “G”-sector needs more funding.</em>
<strong>
<em><span class="caps">GEN</span>. <span class="caps">RIPPER</span>: You are my most trusted science advisor, Dr. Dre, but you
already have one staff scientist…a Dr. Snoop Dogg, I believe…? How
many scientists do we need doing relativity here? This is the military!
We need Moonraker lasers and nuclear hand grenades. So no more funding
unless… That is, unless you figured out the…</em></strong>
<em><span class="caps">DR</span>. <span class="caps">DRE</span>: No sir, we still don’t have a Stargate.</em>
<strong>
<em><span class="caps">GEN</span>. <span class="caps">RIPPER</span>: Well then you are just wasting my time! I want something
useful. Either something that goes boom or something that helps
something that goes boom. What I don’t need is a theoretical money
drain.</em></strong>
<em><span class="caps">DR</span>. <span class="caps">DRE</span>: And how did you find your way to work today, sir?</em>
<strong>
</strong><em><span class="caps">GEN</span>. <span class="caps">RIPPER</span>: What? You know very well I ride my horse, Neigh-braham
Lincoln. He knows how to get here.</em>
<strong>
<em><span class="caps">DR</span>. <span class="caps">DRE</span>: And how does Nifty ‘Nabe know where to go…?</em></strong>
<em><span class="caps">GEN</span> <span class="caps">RIPPER</span>: <span class="caps">GPS</span>, of course!</em>
<strong>
<em><span class="caps">DR</span>. <span class="caps">DRE</span>: Bingo. That’s our department. Without correcting for general
relativistic effects (the specialty of the “G”-sector, I may add), <span class="caps">GPS</span>
would be completely useless. Let me show you.</em></strong>
<em>A large blackboard drops down from the ceiling and a slow steady beat,
just barely audible, seems to come from all directions. Dre writes the
following equation on the board:</em>
<mathjax>$$ ds^2 = -\left(1-\frac{R_s}{r} \right)c^2dt^2+ \left( 1 -
\frac{R_s}{r} \right)^{-1} dr^2 + r^2 \left( d{\theta}^2 +
{\sin^2\theta} \, {d\phi}^2 \right)$$</mathjax>
This equation gives the line element for a Schwarzschild metric. The
R_s in the equation is called the “Schwarzschild Radius” and is given
by
<mathjax>$$ R_s = \frac{2GM}{c^2} .$$</mathjax>
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: Is that Karl Schwarzschild? I remember reading a delightful
biography of him somewhere…? Anyway, what the heck is this “line
element” thing…?
<span class="caps">DR</span>. <span class="caps">DRE</span>: Good question. Essentially what we get is the differential
change in the space-time interval ( ds ) if we change all the
coordinates by a very tiny little bit. What is nice about this is that
although coordinates are a tricky thing in general relativity and can
change from one observer to another, the space-time interval is an
invariant quantity. That is, different observers will measure the same
space-time interval between events even though they may measure
different times and distances.
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: So this space-time interval is just a kind of space-time
distance that different observers will agree on?
<span class="caps">DR</span>. <span class="caps">DRE</span>: Right-o. And this invariance allows us to compare different
reference frames. Eventually, we will use this invariance to get the
frequencies observed by an observer on the surface of the earth and one
traveling along with a satellite in orbit. Since we will only be
considering observers at fixed radius and fixed theta = 90 degrees (i.e.
at the equator), we can simplify things a bit. Since we will have a
fixed radius and theta value we have that:
<mathjax>$$ dr = 0 $$</mathjax>
<mathjax>$$ d\theta = 0 $$</mathjax>
and
<mathjax>$$\sin\theta = 1 $$</mathjax>
Plugging these simplifications into our line element gives:
<mathjax>$$ ds^2 = -\left( 1 - \frac{R_s}{r} \right)c^2 dt^2 + r^2
d\phi^2 $$</mathjax>
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: Neato Toledo!
<span class="caps">DR</span> <span class="caps">DRE</span>: Right, so now we factor out the -c^2 dt^2 term on the
right-hand side. So we have:
<mathjax>$$ ds^2 = -c^2 dt^2 \left[ \left( 1 - \frac{R_s}{r} \right) -
\frac{r^2}{c^2} \frac{d\phi^2}{dt^2} \right] $$</mathjax>
But this is just <mathjax>$$ ds^2 = -c^2 dt^2 \left[ \left( 1 -
\frac{R_s}{r} \right) - \frac{r^2}{c^2} {\left(\frac{d\phi}{dt}
\right)}^2 \right] $$</mathjax>
But what does that d phi / dt term look like?
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: Sure looks like an angular velocity to me. But don’t we need
to be careful with these rates?
<span class="caps">DR</span>. <span class="caps">DRE</span>: Yep, that is an angular velocity term. We need to be a little
careful with these rates. Essentially, in the coordinate system we have
chosen the time is something like the time measured at r = infinity and
thus the rate would also be measured from r = infinity. Plugging in
omega as our angular velocity, we now have:
<mathjax>$$ ds^2 = -c^2 dt^2 \left[ \left( 1 - \frac{R_s}{r} \right)
-\left( \frac{r {\omega}}{c} \right)^2 \right] $$</mathjax>
But we want to figure out the times on the <span class="caps">GPS</span> satellites, so we’ll need
some measure of how time ticks by as measured by the orbiting observer.
In the rest frame of the observer, we have that
<mathjax>$$ ds^2 = -c^2 {d\tau}^2 $$</mathjax>
where the tau is the “proper time” of the observer. It tells us the time
that the observer measures on his clocks. The rate at which the
observer’s clock ticks relative to the coordinate time (in other words,
how many proper seconds go by per coordinate second) is just
<mathjax>$$ f = \frac{d\tau}{dt} $$</mathjax>
And now we have enough background to start talking about <span class="caps">GPS</span>!
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: Hooray! Should I go get my horse…?
<span class="caps">DR</span>. <span class="caps">DRE</span>: I don’t think that’s necessary…? Anyway, let’s model our <span class="caps">GPS</span>
system as a satellite in a 26,000 km orbit [1]. Meanwhile our earth
reference frame will be an observer standing on the surface of the
earth. So let’s write out our line element for each reference frame.
First the satellite frame:
<em>As Dr. Dre works, the beat steadily gets louder.</em>
<mathjax>$$ ds^2 = -c^2 {dt}^2 \left[ \left( 1 - \frac{R_s}{R_{sat}}
\right) -{\left( \frac{\omega_{sat} R_{sat}}{c} \right)}^2
\right] $$</mathjax>
So the rate at which a satellite clock ticks is
<mathjax>$$ f_{sat} = {\left[ \left( 1 - \frac{R_s}{R_{sat}}
\right) -{\left(\frac{\omega_{sat} R_{sat}}{c} \right)}^2
\right]}^{1/2} $$</mathjax> and likewise the earth frame:
<mathjax>$$ ds^2 = -c^2 {dt}^2 \left[ \left( 1 - \frac{R_s}{R_{earth}}
\right) -{\left(\frac{\omega_{earth} R_{earth}}{c} \right)}^2
\right] $$</mathjax>
So the rate at which an earth clock ticks is
<mathjax>$$ f_{earth} = {\left[ \left( 1 -
\frac{R_s}{R_{earth}} \right) -{\left( \frac{\omega_{earth}
R_{earth}}{c} \right)}^2 \right]}^{1/2} $$</mathjax>
Taking the ratio of these rates tells us how quickly the satellite
clocks tick relative to the earth clocks. We get:
<mathjax>$$ \frac{f_{sat}}{f_{earth}} = {\left[ \frac{\left( 1 -
\frac{R_s}{R_{sat}} \right) -{\left( \frac{\omega_{sat}
R_{sat}}{c} \right)}^2 }{\left( 1 - \frac{R_s}{R_{earth}}
\right) -{\left( \frac{\omega_{earth} R_{earth}}{c} \right)}^2 }
\right]}^{1/2} $$</mathjax>
And we’re done. So what do we get? Let’s plug in some numbers:
<mathjax>$$ R_{sat} = 26 \times 10^6 m $$</mathjax>
<mathjax>$$ R_{earth} = 6.3 \times 10^6 m$$</mathjax>
<mathjax>$$\omega_{earth} = 7.3 \times 10^{-5} rad/s $$</mathjax>
<mathjax>$$\omega_{sat}=\sqrt{\frac{GM}{{R_{sat}}^3}}=15 \times 10^{-5}
rad/s $$</mathjax>
<mathjax>$$ c = 3 \times 10^8 m/s $$</mathjax>
We can also find the Schwarzschild radius of the Earth to be
<mathjax>$$ R_s = 8.9 mm $$</mathjax>
Now you still have that fancy calculator, right General? Can you plug
all this stuff in there and tell me what you get?
<em>General Ripper works diligently on the calculator for a few minutes,
then shows the calculator to Dr. Dre.</em>
<strong>
<span class="caps">DR</span>. <span class="caps">DRE</span>: “01134”..? No that can’t be correct. Did you plug in the values
I … ?
<em>General, with a wry smile, turns the calculator upside down and shows
Dr. Dre.</em></strong>
<span class="caps">DR</span>. <span class="caps">DRE</span>: Ah, well “hello” to you too, sir.
<span class="caps">GEN</span>. <span class="caps">RIPPER</span>: And I’ve got another! Now where’s that eight key…?
<span class="caps">DR</span>. <span class="caps">DRE</span>: I guess I’ll just do the math in my head…
<em>The background beat, which had just reached a crescendo immediately
drops out as Dre thinks</em>. <em>Then comes right back in as he begins to
speak again…</em>
<strong>
Alright, I got that
<mathjax>$$\frac{f_{sat}}{f_{earth}} = 1 + 4.4 \times 10^{-10} $$</mathjax>
From this we see that the satellite clock ticks faster (i.e. at a higher
frequency) than does the earth clock. The difference is very very small.
For every second that ticks by on earth, we see that the difference in
the earth and satellite clocks increases by 4.4 * 10^{-10} seconds.
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: But we are talking about half a billionth of a second!
There’s no way that can do much harm.
<span class="caps">DR</span>. <span class="caps">DRE</span>: Well remember, that’s per second. So over the course of a day
(86,400 s), the satellite has picked up 38 microseconds. But that
corresponds to
<mathjax>$$ d = c \times t = ( 3 \times 10^8 m/s ) \times ( 38 \times
10^{-6} s ) \approx 11 km $$</mathjax>
So without correcting for general relativity, <span class="caps">GPS</span> systems would be off
by 11 km per day!
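<em>For readers without General Ripper’s calculator, the final ratio is easy to check numerically. A sketch (mine, not part of the scene) using the rounded constants from the dialogue, with the satellite bracket on top since the satellite clock is the fast one:</em>

```python
# Sanity-check Dr. Dre's number. All constants are the rounded values
# quoted in the dialogue, so only expect ~1 significant figure.
import math

c = 3e8             # speed of light, m/s
R_s = 8.9e-3        # Schwarzschild radius of the Earth, m
R_earth = 6.3e6     # radius of the Earth, m
R_sat = 26e6        # GPS orbital radius, m
w_earth = 7.3e-5    # Earth's angular velocity, rad/s
w_sat = 15e-5       # satellite angular velocity, rad/s

def bracket(r, w):
    """The quantity (1 - R_s/r) - (r*w/c)^2 from the line element."""
    return (1 - R_s / r) - (r * w / c) ** 2

ratio = math.sqrt(bracket(R_sat, w_sat) / bracket(R_earth, w_earth))
drift_per_day = (ratio - 1) * 86400   # seconds gained per day
print(ratio - 1, drift_per_day)       # roughly 4.5e-10 and 3.9e-5 s
```

<em>This reproduces the few-parts-in-10^10 fractional rate difference and the tens-of-microseconds-per-day drift quoted above.</em>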
<span class="caps">GEN</span> <span class="caps">RIPPER</span>: I am impressed! Here’s millions of dollars in funding!
Hooray!
<span class="caps">DR</span>. <span class="caps">DRE</span>: Hooray!
<em>Dr. Dre and General Ripper jump up and give each other a mid-air high
five. We freeze-frame this scene and</em>“Don’t You Forget About Me” <em>plays
as the credits roll.</em></strong>
<em><span class="caps">THE</span> <span class="caps">END</span> (?)</em> [1] Thanks to Tom from <a href="http://blogs.scienceforums.net/swansont/">Swans on
Tea</a> (one of my favorite
physics blogs) for fixing a mistake for me. I had initially done the
calculation assuming the satellite was in a geosynchronous orbit (about
42,000 km) and got a time delay of 48 microseconds. As it turns out, the
<span class="caps">GPS</span> satellites are really at an orbit of about 26,000 km, which gives a
time delay of 38 microseconds. I have made the appropriate changes and
the calculations now reflect this more accurate value.</p>Quantum Chess!2010-09-08T21:48:00-04:00Samtag:thephysicsvirtuosi.com,2010-09-08:posts/quantum-chess-.html<p><a href="http://www.showiphonewallpapers.com/iPhonewallpapers/20102/iphonewallpapers/Chess-20100726.jpg"><img alt="image" src="http://www.showiphonewallpapers.com/iPhonewallpapers/20102/iphonewallpapers/Chess-20100726.jpg" /></a>
Ever find out when you’re playing chess that the Queen you reached for
is actually a pawn? Probably not. But most chess games aren’t affected
by the weirdness of the quantum world. This one is:
<a href="http://research.cs.queensu.ca/Parallel/QuantumChess/QuantumChess.html">http://research.cs.queensu.ca/Parallel/QuantumChess/QuantumChess.html</a></p>Paradigm Shifts 12010-09-07T23:01:00-04:00Samtag:thephysicsvirtuosi.com,2010-09-07:posts/paradigm-shifts-1.html<p>Hi everybody, I’m Sam, and this will be my first contribution to the
blog! (cue applause) It will not be a physical modeling exercise;
instead I will be writing a little bit about paradigm shifts in a
series of a few posts. I hope it will provoke some interesting
discussion. “But Sam,” you ask, “Isn’t ‘paradigm shift’ just a buzzword
that people use to sound important?” Well, maybe, but it’s also a useful
phrase for describing a substantial change in the way something is
done. Consider, for example, <span class="caps">THE</span> <span class="caps">METRIC</span> <span class="caps">SYSTEM</span> (many details from
<a href="http://en.wikipedia.org/wiki/Metric_system">Wikipedia</a>)
<a href="http://www.debateitout.com/wp-content/uploads/2009/11/metric-system.jpg"><img alt="image" src="http://www.debateitout.com/wp-content/uploads/2009/11/metric-system.jpg" /></a>
In 1791, in the wake of revolution, France became the first country to
adopt the recently developed metric system. Since then, every nation in
the world has officially adopted the metric system except Liberia,
Burma, and the United States. It is the standard measurement system for
most physical science, even in the <span class="caps">US</span> (as far as I know, no other unit
system even has a measurement for quantities like electric field or
magnetic field) (also, when I say metric, I of course include cgs and
mks and ignore systems that are not meant for measurement, like natural
units and Planck units). It has the advantages of easy unit conversion
(1 km = 1000 m vs 1 mile = 5280 ft, a value which I had to look up from
<a href="http://thevirtuosi.blogspot.com/2010/09/remembering-two-things.html">Yariv’s
post</a>),
and lack of ambiguity in units (mass = kg, force = N vs mass = lbs_mass
or stones, force = lbs_force). The strong preference of scientists for
the metric system is evident from past experiences: From
<a href="http://www.cnn.com/TECH/space/9909/30/mars.metric.02/"><span class="caps">CNN</span></a>, September
1999: “<span class="caps">NASA</span> lost a $125 million Mars orbiter because a Lockheed Martin
engineering team used English units of measurement while the agency’s
team used the more conventional metric system for a key spacecraft
operation” This story also illustrates the equally strong preference of
engineers for the English system. Ah, and herein lies the problem. You
see, when the metric system was first adopted in Europe, it created a
standardized unit system. This proved useful to merchants selling their
wares by weight, but more relevant to myself as a scientist, it provided
a means for creating scientific recipes, for providing the utterly
essential aspect of reproducibility to scientific experiments. However,
now the standardization exists even with the English system, given a
simple unit conversion. But why not adopt the metric system to avoid
situations like the Mars orbiter and make me less confused when I cross
the border to Canada and see speed limit signs telling me to do 80?
<em>Because it would be too huge of a paradigm shift</em>. Allow me to
illustrate my point.
One of the most (if not the most) strongly affected groups by a change
in measurement systems is manufacturers, ie people who make stuff. I
will use a typical example of a manufacturer, the kind who I interact
with in my lab: the noble machinist (keep in mind that machinists are
very important, as they are required to make many, many products). If
you have ever worked with a machinist in the states, chances are he or
she will be totally confused if you try to give them dimensions in
millimeters (I have done this, and they weren’t very happy with me
because it meant they had to convert all the dimensions I gave them into
English). It would be extremely difficult to retrain people who have
used the English system all their lives. It would be like learning a new
language. Inevitably, it would cause a large number of mistakes. More
significantly, they would have to get <span class="caps">ENTIRELY</span> new equipment. Every
machine shop would have to completely replace their tools (drill bits,
screwdrivers, wrenches etc) and materials (standardized sizes of bolts,
nuts, sheet and bulk material, pipes, connectors, cables, etc etc). You
could say, “Come on, it wouldn’t be so bad! Listen, we could gradually
phase out the old English equipment and just make everything in metric
from now on!” However, I would counter that this is not a realistic
plan. For starters, there’s the problem of having to keep around two
sets of equipment (one for the old English stuff and one for the new
metric stuff), which would require double the space and double the
maintenance. Second, there would be compatibility issues between new and
old equipment (e.g. my old 3/4” ipod port wouldn’t mate with my new 2 cm
connector). Third, the previous two problems would likely be around for
a long time, considering the age of some of the equipment that I’ve seen
in labs and elsewhere.
And I haven’t even mentioned the economics. If basic parts manufacturers
(the people who make the screws, the bolts, and the sheet metal that
will later be made into products) began to offer metric parts (now that
I think about it, maybe they already do?), I doubt anybody would buy
them. It would cost them too much to replace all their machinery
infrastructure. There would be no market for them. Maybe you would then
ask “Well, what if the government made everybody switch to metric?”
Well, other than the backlash this would cause towards whichever
administration suggested this, it would likely hurt and maybe even
bankrupt companies who were forced to switch. As far as I can tell, it
would definitely hurt the <span class="caps">US</span> economy in the short term (but it might
help other countries who could sell their metric wares here) and not
help it at all in the long term. To me, the economic loss (not to
mention the difficulty in convincing the <span class="caps">US</span> population to swallow the
change) outweighs the advantages of switching. At this point, exhausted
from my challenging you at every turn, you may finally say, “Well hey,
<span class="caps">SPEAKING</span> of Canada, they changed to the metric system only in 1973. How
did <span class="caps">THEY</span> do it??” The answer is that, well, they didn’t. Not entirely
anyway. Sure, the country may package food and make road signs in metric
(which the <span class="caps">US</span> could probably do, if people could somehow be convinced to
go along with it), but in fact their engineering materials, which mostly
come from the states, are still in English units. Even Canada couldn’t
justify completely converting to the metric system, which just goes to
show how difficult it is to pull off a paradigm shift. Next time, I’ll
present an example of a paradigm shift that I think <span class="caps">COULD</span> work.</p>Remembering two things2010-09-02T16:32:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-09-02:posts/remembering-two-things.html<p>One of my professors, <a href="http://www.physics.cornell.edu/people/faculty/?page=website/faculty&action=show/id=80" title="who is now mentioned in two blogs in this context">Yuval
Grossman</a>,
was talking about the zoology of particle physics in class the other
day. Trying to get us to remember such trivia as the mass of the B
meson, he noted that it’s easier to remember two things than it is to
remember one - and as it happens, the mass of the B meson is about 5280
MeV, which is also the length of a mile in feet (an equally obscure
piece of trivia, if you ask me). This reminded me of one of my first
calculus classes back home where another professor (Mikhail Sodin)
chided us for not knowing the value of e, 2.71828. This is easy to
remember, he said, because 1828 is the year Lev Tolstoy was born. Then
again, when I came to write this post, I could neither remember e, nor
Tolstoy’s year of birth - or even that it was Tolstoy, rather than
Dostoevsky or some other Russian author. So perhaps two things are not
easier to remember than one after all.</p>Caught In The Rain2010-09-01T22:53:00-04:00Jessetag:thephysicsvirtuosi.com,2010-09-01:posts/caught-in-the-rain.html<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TH8RX3wsh6I/AAAAAAAAAC0/fIrDNg5flzY/s1600/boy+rain.gif"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TH8RX3wsh6I/AAAAAAAAAC0/fIrDNg5flzY/s200/boy+rain.gif" /></a>
There’s an age old question that mankind has pondered. I’m sure that
noble heads such as Aristotle, Newton, and Einstein have pondered it. I
myself have raised it a few times. The question is: do you get more wet
running or walking through the rain? Now, I know that this question was
<a href="http://mythbustersresults.com/episode38">mythbusted</a> a while back. So
this is one of those situations where I know the result I want to get to
with my calculation: according to mythbusters running is better. Still,
I think formulating the question mathematically will be fun, plus if I
fail to agree with experiment everyone can mock me mercilessly. I’ll
begin by stating a few assumptions. I’m going to assuming that the rain
is falling straight down, at a constant rate. I’m also going to assume
that if we are standing still, only our head and shoulders get wet, not
our front or back. With those in place, let’s start by formulating the
expression for how wet we would get if we stood still. Well, take
<mathjax>$$\Delta W_{top} - \text{the change in water (in liters) on a person}
$$</mathjax> <mathjax>$$\rho - \text{the density of water in the air in liters per cubic
meter}$$</mathjax> <mathjax>$$A_t - \text{top area of a person}$$</mathjax> <mathjax>$$\Delta t -
\text{time elapsed}$$</mathjax> Intuition suggests that the rate at which
raindrops hit our top, times the area of our top, times the time we
stand in the rain, will give us the change in water. In an equation, <mathjax>$$
\Delta W_{top} = \rho A_t v_r \Delta t $$</mathjax> Note that whatever
expression we generate for how wet we get when moving will have to
reduce to this form in the limit that we’re not moving. This will be a
good check for us. Next, we need to define a few additional measures:
<mathjax>$$d - \text{distance we have to travel in the rain}$$</mathjax>
<mathjax>$$v_r - \text{raindrop velocity}$$</mathjax>
<mathjax>$$A_f - \text{front area of a person}$$</mathjax>
<mathjax>$$W_{tot} - \text{total amount of water in liters we get hit with} $$</mathjax>
Well, no matter how fast we run, the rain will keep hitting us on the top
of our heads, so we’re going to have our standing still term, plus
another term for how much hits us when running. How do we consider that?
Well, when we run, we’re cutting into the swath of rainy air in front of
ourselves. We’ll get hit on our frontside by all the additional
raindrops in that stretch we carve out. Mathematically, if we travel
some distance delta x in a time delta t, we’ll get hit with an
additional amount of water
<mathjax>$$ \Delta W = \rho A_f \Delta x $$</mathjax>
<mathjax>$$ \Delta W = \rho A_f v \Delta t $$</mathjax>
We combine our two terms to get
<mathjax>$$\Delta W_{tot} = \rho \Delta t (A_t v_r + A_f v)$$</mathjax>
Note that if we stop walking (v goes to zero) we’ll return to our
stationary expression. Next I’ll take the delta t over to the other
side, turn that into a derivative, and integrate to get the total water
hitting us, not just the change for some delta t. Of course, since
everything else in the equation is constant, this is the equivalent of
dropping the deltas,
<mathjax>$$W_{tot} = \rho (A_t v_r + A_f v) t $$</mathjax>
<mathjax>$$W_{tot} = \rho (A_t v_r + A_f v)\frac{d}{v}$$</mathjax>
<mathjax>$$W_{tot}= \rho d (A_t \frac{v_r}{v} + A_f)$$</mathjax>
where I substituted t = d/v, and did some simplification. Now, let’s look
at the qualitative features of this result. First, we have two terms, a
constant and a term that depends inversely on the velocity of motion.
This means that the faster we go, the less wet we get (I’ll plot this in
a bit), but also that there’s a threshold wetness you cannot avoid. This
threshold represents the amount of rain in a human sized channel between
where you start and where you end. Also note that as velocity goes to
zero, i.e. we stop moving, how wet we get goes to infinity. That is, if
we’re going to stand in the rain forever we’re going to keep getting
wet. What is the term in 1/v? It’s the amount of rain that hits you on
the top of your head! So what we’ve derived is that for a fixed distance
how wet you get on the front is fixed, and by moving faster you can make
less rain hit you on the top of your head.
Now, let’s figure out what some reasonable numbers are, and plot this
function. A few months back when discussing <a href="http://thevirtuosi.blogspot.com/2010/05/human-radiation.html">human
radiation</a>
I estimated my area as a cylinder with a height of 1.8 m and a radius of
.14 m. This gives a top area of A_t = .06 m^2 and a front cross
section area (note, this is not the cylinder area, but my cross section
that will be exposed to the rain!) of .5 m^2. As for raindrop velocity,
well in my <a href="http://thevirtuosi.blogspot.com/2010/04/falling-water-hot-or-cold.html">first
post</a>
on this blog I calculated the terminal velocity of what I described as a
medium sized raindrop as 6 m/s, and since water drops reach that while
going over niagara falls, we can assume that our raindrops are falling
at terminal velocity.
Finally, I need to estimate the water density. In a medium-to-heavy rain
I would say it takes about 45 s to get a sidewalk square totally wet.
Let’s assume a raindrop wets an area of sidewalk equal to twice the
cross section of the raindrop. I used a raindrop of 1.5mm radius, so
that’s 7×10^-6 m^2 cross section. Now, a sidewalk square is about 2/3
m x 2/3 m (about 2 ft x 2 ft), so we need ~32000 drops! The volume of a
1.5mm drop is 1.4×10^-8 m^3, so we have a volume of 4.5×10^-4 m^3 =
.45 liters. Now, take our stationary expression from above, with the
sidewalk square (.44 m^2) as the collecting area. This allows us to
solve for rho: set delta W equal to .45 liters and substitute
the rest of the numbers we’ve generated.
<mathjax>$$ \frac{\Delta W_{top}}{A_t v_r \Delta t}= \rho $$</mathjax>
<mathjax>$$ \rho = \frac{.45 liters}{(.44 m^2)( 6 m/s )(45 s)} = .004
liters/m^3 $$</mathjax>
We’ve found our water density, .004 liters/m^3. Having done this, we
can plug numbers into our final equation above and find
<mathjax>$$W_{tot}= d (\frac{(.001 liters/s)}{v} + .002 liters/m)$$</mathjax>
This scales linearly with distance, so lets pick something reasonable,
say 100m, and if you want the result for another other distance just
scale the results appropriately. Thus
<mathjax>$$W_{tot}= (\frac{(.1 liters\text{*}m/s)}{v} + .2 liters)$$</mathjax>
Finally we can plot this.</p>
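<p>Before the plot, a quick numerical check (a sketch of mine, keeping the unrounded coefficient rho·A_t·v_r ≈ 0.00144 rather than the rounded 0.001 above):</p>

```python
# Plug the post's numbers into W_tot = d * (rho*A_t*v_r / v + rho*A_f).
rho = 0.004   # liters of rain per cubic meter of air (estimated above)
A_t = 0.06    # top area of a person, m^2
A_f = 0.5     # front area of a person, m^2
v_r = 6.0     # raindrop terminal velocity, m/s
d = 100.0     # distance to travel, m

def wetness(v):
    """Total liters of water picked up covering distance d at speed v."""
    return d * (rho * A_t * v_r / v + rho * A_f)

walk, sprint = wetness(0.5), wetness(11.0)
print(walk, sprint)   # about 0.49 L at a meander, 0.21 L at a sprint
```

<p>With the unrounded coefficient the meander-to-sprint ratio comes out a bit above two, consistent with the “almost a factor of two” read off the plot below.</p>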
<hr />
<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/TH8Mj6XEkVI/AAAAAAAAACs/xiGu976NrOs/s1600/rain+graph.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/TH8Mj6XEkVI/AAAAAAAAACs/xiGu976NrOs/s400/rain+graph.jpg" /></a>
Plot of how wet you get vs. how fast you run. The blue line is the actual curve and the red line is the theoretical least wet asymptote.</p>
<hr />
<p>I’ve chosen .5 m/s (~1 mph, a meander) and 11 m/s (slightly faster than
the world record for the 100m dash) as my starting and ending points on
the velocity. The blue line is the curve I calculated, and the red line
represents the theoretical minimum, the ‘wetness threshold’ if you will.
So you see that if you are Usain Bolt, you can reduce how wet you get by
almost a factor of two by going from a meander to a sprint!
Now, there’s more I could say about this (what if the rain isn’t coming
straight down? What is my best speed if I have an umbrella?), but I
think that’s enough for tonight. I’ve come out with a theoretically
satisfying result that concurs with experiment. Anytime that happens
that’s a good day for the theorists.</p>Ringing A Bridge2010-08-14T21:16:00-04:00Corkytag:thephysicsvirtuosi.com,2010-08-14:posts/ringing-a-bridge.html<hr />
<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TGc4zKuS4oI/AAAAAAAAAGw/dQHhvZBgASQ/s1600/bridgepic.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TGc4zKuS4oI/AAAAAAAAAGw/dQHhvZBgASQ/s320/bridgepic.png" /></a>
Matt and Jared standing on our experiment</p>
<hr />
<p>When you strike a bell, it rings at a given frequency. This frequency is
called the resonant frequency and is the natural frequency at which the
bell likes to ring. Just about anything that can shake, rattle, or
oscillate will have a resonant frequency. Things like quartz crystals,
wine glasses, and suspension bridges all have a resonant frequency. The
quartz crystals oscillate at frequencies high enough for accurate
timekeeping in watches, the wine glasses at audible frequencies to make
boring dinners more interesting, and bridges at low enough frequencies
that you can feel it when you walk. It is the resonant frequency of
bridges that we decided to measure.
To make our measurements, we “borrowed” Yariv’s fancy phone. One of the
nice things about fancy new phones is that most of them have internal
accelerometers to detect motion. You can do a whole bunch of fun
experiments and take some pretty good data with these accelerometers
(see, for example, physicist and <span class="caps">TV</span> star Rhett Allain’s posts over at
<a href="http://scienceblogs.com/dotphysics/iphone/">dot physics</a>). Placing
Yariv’s phone on the suspension footbridge on campus, Alemi, Matt and I
took data and confused passers-by for about 15 minutes. The
accelerometer in the phone measures acceleration in three coordinate
directions: x is along the width of the bridge, y is along the length of
the bridge, and z is up and down. The raw data is shown below. The z
data is shown in blue, and x and y in green and red.
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TGNrbHEsuvI/AAAAAAAAAGA/4WyG8OuNww0/s1600/bridge_rawtime.png"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TGNrbHEsuvI/AAAAAAAAAGA/4WyG8OuNww0/s400/bridge_rawtime.png" /></a>
The first thing you’ll notice about this data is that the z direction
(blue) has big spikes in it around 180s, 300s, and 800s. The biggest
spikes are when Alemi and I jumped up and down to ring the bridge. The
smaller bumps in the blue data are the result of people walking or
jogging by.
With the raw acceleration data and knowledge of the sample rate of the
accelerometer (90 Hz), we can Fourier transform it to get frequencies.
Doing this to the raw data for each dimension we get the following
spectrograms. Each of the spectrograms illustrates how much of each
frequency is present at each point in time.
The most relevant direction for us is the z direction. We see that at
several points there are strong signals at all frequencies followed by
longer periods where the main signal is around 1 Hz. These events
correspond to when Alemi and I jumped up and down and are analogous to
ringing a bell. The striking of the bell is just a sharp impulse
(roughly a delta function) which is composed of all frequencies. Soon
after the impulse, all of the frequencies die out except for the
resonant frequency, which keeps on ringing. Just looking at this graph,
it looks like the bridge resonant frequency is around 1 Hz.
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TGNvFsSzqGI/AAAAAAAAAGI/m2Tr3GwM1qM/s1600/bridge_Zspec.png"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TGNvFsSzqGI/AAAAAAAAAGI/m2Tr3GwM1qM/s400/bridge_Zspec.png" /></a>
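The spectrogram analysis just described can be sketched in a few lines of numpy. This is not the code we actually ran; it operates on a synthetic 1 Hz damped ring-down standing in for the real z-axis data (which is linked at the end of the post):

```python
import numpy as np

def stft_power(x, fs, window_s=10.0):
    """Short-time power spectra: how much of each frequency is
    present in each 10-second window of the record."""
    n = int(window_s * fs)
    nseg = len(x) // n
    segs = x[: nseg * n].reshape(nseg, n)          # chop into windows
    spec = np.abs(np.fft.rfft(segs * np.hanning(n), axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic stand-in for the z data: a damped 1 Hz ring-down after an
# impulse, like the bridge after a jump, sampled at 90 Hz.
rng = np.random.default_rng(0)
fs = 90.0
t = np.arange(0, 120, 1.0 / fs)
z = np.exp(-t / 30) * np.sin(2 * np.pi * 1.0 * t) + 0.01 * rng.standard_normal(t.size)

freqs, spec = stft_power(z, fs)
peak = freqs[spec.mean(axis=0).argmax()]
print(f"dominant frequency: {peak:.1f} Hz")
```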
We can also make similar graphs for the x and y directions. Remember,
the x direction is the width of the bridge and the y direction is the
length of the bridge. Although there is less motion in these directions,
the spikes where we jumped and people walked by are still clearly
visible.
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TGNxb8WcJMI/AAAAAAAAAGY/tLPA6SbyyeE/s1600/bridge_Xspec.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TGNxb8WcJMI/AAAAAAAAAGY/tLPA6SbyyeE/s400/bridge_Xspec.png" /></a>
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TGNxNPkNSTI/AAAAAAAAAGQ/3Xkx2WKFv6w/s1600/bridge_Yspec.png"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TGNxNPkNSTI/AAAAAAAAAGQ/3Xkx2WKFv6w/s400/bridge_Yspec.png" /></a>
Finally, we can find out how much of a particular frequency is in the
whole signal. To do this we find the power spectral density of the
entire data set (blue is z, green is x, and red is y). The ringdown
frequency of about 1 Hz we saw in the spectrograms above after the jumps
is illustrated in this graph as the first blue peak. There are also some
other peaks at around 15 Hz, 25 Hz and 35 Hz. I am not sure what they
correspond to.
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TGNyNOZDLEI/AAAAAAAAAGg/a3eLidFVn3Y/s1600/bridge_psd.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TGNyNOZDLEI/AAAAAAAAAGg/a3eLidFVn3Y/s400/bridge_psd.png" /></a>
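For the curious, the power spectral density step looks roughly like this in numpy — again run on a synthetic 1 Hz signal rather than the actual bridge data:

```python
import numpy as np

def psd(x, fs):
    """Power spectral density of the whole record via one big FFT."""
    w = np.hanning(len(x))
    power = np.abs(np.fft.rfft(x * w)) ** 2 / (fs * np.sum(w ** 2))
    return np.fft.rfftfreq(len(x), d=1.0 / fs), power

# 300 s of a 1 Hz oscillation plus noise, sampled at 90 Hz.
rng = np.random.default_rng(0)
fs = 90.0
t = np.arange(0, 300, 1.0 / fs)
z = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)

freqs, power = psd(z, fs)
print(f"strongest peak: {freqs[power.argmax()]:.2f} Hz")
```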
To clean this up a bit, we can take just the data without the jumps in
it. Computing a new power spectrum density with just the data from about
400s - 700s, we get the following graph, which also displays a fairly
prominent peak around 1 Hz.
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TGcYbMuUbTI/AAAAAAAAAGo/v2d2bMQNsoE/s1600/psd.png"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TGcYbMuUbTI/AAAAAAAAAGo/v2d2bMQNsoE/s400/psd.png" /></a>
So it seems that there is definitely something going on around 1 Hz.
Initially, I was worried that this is just the rate at which people walk
and therefore it was just showing up because we had people walking the
whole time. However, the strong 1 Hz signal after each ringing in each z
spectrogram seems to indicate that it is intrinsic to the bridge.
Therefore, it seems as though the resonant frequency in the z direction
of the bridge is about 1 Hz. But don’t take our word for it. If you want
to do your own analysis, you can find the raw data
<a href="http://docs.google.com/leaf?id=0Bwd5hrDOxWsrMjZmOTZiZDYtMjBjNC00MjM5LWFiOTktMzg2N2Y3MmQ4NTM1&hl=en&authkey=CNuUt-EH">here</a>.</p>Terminal Velocity 2: A Theorist’s Experimental Experiment2010-08-12T17:23:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-08-12:posts/terminal-velocity-2-a-theorist-s-experimental-experiment.html<p>Yesterday we rode down Ithaca’s hills in an attempt to estimate the
terminal velocity of a bike rider braving the city’s potholes. But
estimations are easy, and we relied on a number of factors - the drag
coefficient and area of the bicyclist, in particular - to get them. To
see how well we did, it’s time to move on to the experimental portion of
this exercise. Our tools? My bike (figure 1) and my beloved
accelerometer (figure 2), with Google’s <a href="http://mytracks.appspot.com/">My
Tracks</a> app installed.</p>
<p><a href="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRzmU_-t0I/AAAAAAAACFA/6JFhG9Z9YiA/s1600/bike.jpg"><img alt="image" src="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRzmU_-t0I/AAAAAAAACFA/6JFhG9Z9YiA/s200/bike.jpg" /></a>Figure
1: Our vehicle</p>
<p><a href="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRn_JN3A6I/AAAAAAAACEQ/b02aLvN9_ys/s1600/Droid+2.png"><img alt="image" src="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRn_JN3A6I/AAAAAAAACEQ/b02aLvN9_ys/s200/Droid+2.png" /></a></p>
<p>Figure 2: Our instrumentation</p>
<p>I took data twelve times while riding down two paths (<a href="http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=University+and+Cornell,+Ithaca+NY&sll=42.44395,-76.485014&sspn=0.016721,0.038581&ie=UTF8&hq=&hnear=University+Ave+%26+Cornell+Ave,+Ithaca+College,+Tompkins,+New+York+14850&ll=42.447243,-76.493082&spn=0.01672,0.038581&t=p&z=15">University
avenue</a>
and <a href="http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=State+and+Stewart,+Ithaca,+NY&sll=42.447243,-76.493082&sspn=0.01672,0.038581&ie=UTF8&hq=&hnear=E+State+St+%26+Stewart+Ave,+Ithaca+College,+Tompkins,+New+York+14850&ll=42.439262,-76.489692&spn=0.016722,0.038581&t=p&z=15">State
street</a>),
measuring both the speed and elevation as a function of time. I came up
with a lot of noisy data, some of it useful and a lot of it not. A
typical plot out of the software looks something like (figure 3); out of
those I identified moments of what seemed to be free acceleration, where
I was not applying the brakes. I then calculated the slope and the
acceleration at each point by subtracting subsequent measurements; this
resulted in much noisier data, as seen on (figure 4).</p>
<p><a href="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRoiavSb4I/AAAAAAAACEY/3Yee3_k-TiY/s1600/University7.png"><img alt="image" src="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRoiavSb4I/AAAAAAAACEY/3Yee3_k-TiY/s400/University7.png" /></a>Figure
3: Typical data riding downhill
<a href="http://1.bp.blogspot.com/_JIGLe2C6VxI/TGRo0pVcbSI/AAAAAAAACEg/qdGFWsUS4X8/s1600/University7DiffFocus.png"><img alt="image" src="http://1.bp.blogspot.com/_JIGLe2C6VxI/TGRo0pVcbSI/AAAAAAAACEg/qdGFWsUS4X8/s400/University7DiffFocus.png" /></a>Figure
4: Derivatives</p>
<p>The next question was what to fit these graphs to. I can’t compare
directly to the formula I had for terminal velocity, since I don’t
believe I achieve it at any point and we never see the velocity graph
plateau. What we do have is the formula for the acceleration, which
depends on both the angle and the velocity: <mathjax>$$ a = g\sin\theta -
\frac{1}{2}\frac{\rho A C_d}{m} v^2. $$</mathjax> It’s a little hard to plot
three-dimensional surfaces like this, but I can try to plot the
acceleration as a function of the velocity squared. Assuming that the
slope of each of my routes is constant and that they are both different,
this should give me two straight lines offset by a constant. Seen in
(figure 5), this yields a less than optimal result. A first correction
would be to account for the differing slope at different measurement
points. Once we do that the data looks a little more linear, and we can
fit a line through it, as seen in (figure 6).</p>
<p><a href="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRpFDSXYbI/AAAAAAAACEo/pce7SNz6s_U/s1600/avv.png"><img alt="image" src="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGRpFDSXYbI/AAAAAAAACEo/pce7SNz6s_U/s400/avv.png" /></a>Figure
5: Acceleration vs. velocity</p>
<p><a href="http://1.bp.blogspot.com/_JIGLe2C6VxI/TGRpIHtielI/AAAAAAAACEw/-MuDiahJbxg/s1600/avvfixed.png"><img alt="image" src="http://1.bp.blogspot.com/_JIGLe2C6VxI/TGRpIHtielI/AAAAAAAACEw/-MuDiahJbxg/s400/avvfixed.png" /></a>Figure
6: Adjusted for slope and fitted</p>
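The straight-line fit in figure 6 is just ordinary least squares on a vs. v². Here is a sketch on made-up numbers — the drag term and grade below are stand-ins, not the measured values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ride: 50 noisy (speed, acceleration) samples standing in
# for the numerical derivatives of figure 4.
v = rng.uniform(2, 12, 50)                     # speeds, m/s
k_true, g_sin = 0.0042, 0.78                   # assumed drag term, ~8% grade
a = g_sin - k_true * v**2 + 0.05 * rng.standard_normal(50)

# Fit a = c0 - c1 v^2 by linear least squares in v^2.
c1, c0 = np.polyfit(v**2, a, 1)                # slope, intercept
v_t = np.sqrt(c0 / -c1)                        # a = 0 at terminal velocity
print(f"v_t ≈ {v_t:.1f} m/s")
```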
<p>The fits are given by: <mathjax>$$ a = (1.022 \rm{\,m/s^2}) - (0.00427
\rm{\,m^{-1}}) v^2\;\; \rm{(University)} $$</mathjax> <mathjax>$$ a = (1.465
\rm{\,m/s^2}) - (0.00572 \rm{\,m^{-1}}) v^2\;\; \rm{(State)} $$</mathjax> and we can
quickly extract the terminal velocity out of the coefficients to get
47.9 km/h for the first line and 41.4 km/h for the second. These both
fall within 20% of our initial estimates, which is quite satisfying
considering how bad the data looks. A few final thoughts:</p>
<ul>
<li>Why is the data so noisy? I can think of a lot of reasons. My Droid
phone is not quite a scientific measuring device to begin with, and
we did some numerical derivation of the initial data we got from it.
On top of that, the way I sit on the bike, the weight of the bag I
carry with me and other factors like the wind changed from ride to ride.</li>
<li>I tried to avoid biasing the analysis and I was quite relieved when
the final numbers came out so close to my original estimate. I did
play around a little with a different presentation that didn’t look
linear at all, but other than that I think what I did was pretty straightforward.</li>
<li>The one thing that I don’t like about the final results is the
constant addition to both acceleration fits, or put another way, the
fact that after subtracting the gravitational pull from the
acceleration I still get positive numbers, while the drag force
should work to reduce it. I suspect this implies that my
cancellation of the sinθ term was less than perfect.</li>
<li>Can you figure out what the trajectory of the bike as a function of
time looks like? There’s a (non-trivial) analytic expression.</li>
</ul>Terminal Velocity2010-08-11T14:04:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-08-11:posts/teminal-velocity.html<p>The impetus for this post lies with three facts. First, I like to bike
to work. Second, Cornell sits on a
<a href="http://www.cornell.edu/search/index.cfm?tab=facts&q=&id=272">hill</a>. And
finally, I’m not very brave. As a result of all of these, along with
Ithaca’s less-than-optimal road maintenance, my semi-daily rides home
tend to produce a lot of wear on my brakes as I cruise downhill at what
appears to me to be very high speeds. I began to ponder just how high
this speed really is, and if I could reduce my use of the brakes or if
I’m going to end up using them anyway at the bottom of the hill.
<img alt="image" src="http://2.bp.blogspot.com/_JIGLe2C6VxI/TGLwOVisUAI/AAAAAAAACD4/V9TvWzmEv9k/s200/200px-Free_body.svg.png" /></p>
<p>Figure 1: An inclined plane</p>
<p>So, I asked myself, what do I remember about bikes going down the hill?
Well, I remember the good old inclined plane (figure 1), and I remember
that air resistance is proportional to velocity, so that the equation of
motion is given by <mathjax>$$ ma = mg\sin\theta - \alpha v. $$</mathjax> I had no idea
what α was, though. My first stop in considering it was naturally
Wikipedia. A quick
<a href="http://en.wikipedia.org/wiki/Terminal_velocity">search</a> came up with
the formula <mathjax>$$m a = mg\sin\theta - \frac{1}{2}\rho A C_d v^2$$</mathjax>
where ρ is the density of air, A the projected area of the body, and C<sub>d</sub>
the drag coefficient. The first thing to notice here is that I was wrong
- drag in a fluid acts like the velocity squared, and not the velocity.
Second, we can easily determine terminal velocity out of this formula -
it’s the speed at which the sum of the forces equals to zero, or <mathjax>$$v_t
= \sqrt{\frac{2mg\sin\theta}{\rho A C_d}}.$$</mathjax> We can throw in some
numbers into that. ρ = 1.2 kg/m<sup>3</sup> for air; Wikipedia estimates C<sub>d</sub> =
0.9 for a <a href="http://en.wikipedia.org/wiki/Drag_coefficient">cyclist</a>. For
the mass, we need to add up mine (~75 kg), the bike’s (15-20 kg) and my
bag’s (let’s say 5 kg). We come to about 100 kg, give or take 5%. A is a
little harder to estimate, but height times width gives me an initial
guess of 0.62 m<sup>2</sup>, which I’ll revise to 0.7 m<sup>2</sup> to account for the
bike, flailing arms and fashionable helmet, up to about 10% accuracy.
We’re left with sinθ, which varies by road, but in general we expect the
terminal velocity to look like <mathjax>$$v_t \approx \left(50 \pm 3
\rm{m/s}\right) \sqrt{\sin\theta}.$$</mathjax> This appears not-unreasonable.
For an 8% grade like we have down University avenue this yields about 50
km/h and for a 13% grade like we have down Buffalo street this will
bring us up to a respectable 65
<a href="http://www.blogger.com/post-edit.g?blogID=8807287158334608095&postID=6743488546064317383#miles" title="That's - sigh - about 30 mph and 40 mph, respectively, in crazy units">km/h</a>.
Both, incidentally, are faster than I’m willing to go down a badly
maintained, not entirely straight road. So we have some numbers, and I
begin to feel justified about pressing those brakes often, but all of
this is really an introduction for the next post, in which I go against
all my theorist instincts and take some data in the field. Stay tuned.</p>
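If you want to play with the numbers yourself, the estimate boils down to a one-liner; the parameter values below are the rough guesses from above, nothing more precise:

```python
import math

def v_terminal(grade, m=100.0, rho=1.2, A=0.7, Cd=0.9, g=9.8):
    """Terminal velocity where gravity along the slope balances drag."""
    sin_theta = grade / math.sqrt(1 + grade**2)   # grade = rise over run
    return math.sqrt(2 * m * g * sin_theta / (rho * A * Cd))

for street, grade in [("University Ave", 0.08), ("Buffalo St", 0.13)]:
    v = v_terminal(grade)
    print(f"{street}: {v:.1f} m/s = {3.6 * v:.0f} km/h")
```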
<ol>
<li>That’s - sigh - about 30 mph and 40 mph, respectively, in crazy units.</li>
</ol>The Wrath of Blotto2010-08-06T11:32:00-04:00Alemitag:thephysicsvirtuosi.com,2010-08-06:posts/the-wrath-of-blotto.html<p>You may remember <a href="http://thevirtuosi.blogspot.com/2010/05/memorial-day-distractions.html">when I
invited</a>
everyone to play <a href="http://pages.physics.cornell.edu/~aalemi/blotto/index.php">my
webform</a>
version of <a href="http://en.wikipedia.org/wiki/Colonel_Blotto">Colonel
Blotto</a>. Well, it’s still up
and has been up for some time, but hasn’t seen any action for a while, so
I thought it might be time to take a look at the results. Colonel Blotto
is an interesting game. It seems to me that much of this interest
derives from the fact that how well your strategy performs is very much
a function of which strategies exist in the pool. There is no clear-cut
winning strategy; you need to feel out the existing pool and adapt
accordingly. So to stir things up a little bit, in what follows I will
share some data from the existing database, refraining from
commenting too much. Basically, stay tuned for a bunch of pretty
pictures which will hopefully get your gears turning. The game is still
up, so feel free to try to game it now that this information is out. It might
be interesting to see what kind of effect releasing the leaderboard will
have on the leaderboard.</p>
<h3>Leader Board</h3>
<p>347 strategies were submitted since the game went live. Let’s take
a look at what kinds of strategies were submitted. Below are the top
25 ranking strategies in the database as of yesterday, along with the
actual strategy, its points, and full record.</p>
<hr />
<table>
<tr><th>Rank</th><th>Name</th><th>Strategy</th><th>Wins</th><th>Losses</th><th>Ties</th><th>Points</th></tr>
<tr><td>1</td><td>PygmyGrouse</td><td>2,3,4,5,19,22,7,20,12,6</td><td>210</td><td>74</td><td>61</td><td>481</td></tr>
<tr><td>2</td><td>eighth</td><td>4,4,19,19,4,19,4,4,19,4</td><td>209</td><td>74</td><td>62</td><td>480</td></tr>
<tr><td>3</td><td>tg1i6</td><td>3,4,5,11,19,18,17,18,4,1</td><td>190</td><td>58</td><td>97</td><td>477</td></tr>
<tr><td>4</td><td>centerfold3</td><td>2,2,17,17,20,20,17,1,2,2</td><td>178</td><td>55</td><td>112</td><td>468</td></tr>
<tr><td>5</td><td>goose</td><td>17,15,5,3,16,18,4,16,3,3</td><td>165</td><td>43</td><td>137</td><td>467</td></tr>
<tr><td>6</td><td>StrawMan2</td><td>2,3,4,5,19,22,7,22,11,5</td><td>202</td><td>81</td><td>62</td><td>466</td></tr>
<tr><td>7</td><td>blackbird</td><td>17,16,5,2,16,18,4,16,3,3</td><td>169</td><td>49</td><td>127</td><td>465</td></tr>
<tr><td>8</td><td>hawk</td><td>3,3,5,3,16,18,17,16,15,4</td><td>157</td><td>38</td><td>150</td><td>464</td></tr>
<tr><td>8</td><td>fairandbalanced</td><td>2,3,4,16,18,18,17,17,3,2</td><td>164</td><td>45</td><td>136</td><td>464</td></tr>
<tr><td>10</td><td>nightingale</td><td>17,14,5,4,16,18,4,16,3,3</td><td>173</td><td>55</td><td>117</td><td>463</td></tr>
<tr><td>10</td><td>finch</td><td>17,3,5,15,16,18,4,16,3,3</td><td>172</td><td>54</td><td>119</td><td>463</td></tr>
<tr><td>12</td><td>foxnews</td><td>1,3,3,17,18,18,18,17,3,2</td><td>159</td><td>44</td><td>142</td><td>460</td></tr>
<tr><td>12</td><td>D</td><td>15,16,17,18,19,1,2,3,4,5</td><td>154</td><td>39</td><td>152</td><td>460</td></tr>
<tr><td>14</td><td>notgonnawin16</td><td>2,2,2,19,19,20,16,16,2,2</td><td>185</td><td>71</td><td>89</td><td>459</td></tr>
<tr><td>15</td><td>bluebird</td><td>17,5,3,15,16,18,4,16,3,3</td><td>171</td><td>58</td><td>116</td><td>458</td></tr>
<tr><td>16</td><td>Poitiers</td><td>3,20,4,3,20,3,20,4,3,20</td><td>200</td><td>89</td><td>56</td><td>456</td></tr>
<tr><td>17</td><td>StrawMan1</td><td>2,3,3,3,22,22,7,20,12,6</td><td>196</td><td>86</td><td>63</td><td>455</td></tr>
<tr><td>17</td><td>tg1e16</td><td>4,16,4,14,2,17,2,17,5,19</td><td>150</td><td>40</td><td>155</td><td>455</td></tr>
<tr><td>19</td><td>Guadalcanal</td><td>18,2,2,18,18,2,18,18,2,2</td><td>146</td><td>37</td><td>162</td><td>454</td></tr>
<tr><td>20</td><td>centerfold2</td><td>2,1,17,18,20,20,18,1,1,2</td><td>156</td><td>48</td><td>141</td><td>453</td></tr>
<tr><td>20</td><td>Culloden</td><td>3,3,21,3,3,20,21,3,20,3</td><td>201</td><td>93</td><td>51</td><td>453</td></tr>
<tr><td>22</td><td>parrot</td><td>3,3,5,3,16,18,4,16,15,17</td><td>142</td><td>35</td><td>168</td><td>452</td></tr>
<tr><td>22</td><td>tg1f1</td><td>4,16,1,14,2,18,2,18,5,20</td><td>154</td><td>47</td><td>144</td><td>452</td></tr>
<tr><td>24</td><td>eagle</td><td>3,3,5,15,16,18,4,16,3,17</td><td>149</td><td>43</td><td>153</td><td>451</td></tr>
<tr><td>25</td><td>robin</td><td>17,16,5,15,16,18,4,3,3,3</td><td>160</td><td>57</td><td>128</td><td>448</td></tr>
</table>
<hr />
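Incidentally, the Points column is consistent with simple bookkeeping: two points per win plus one per tie (2 × 210 + 61 = 481 for PygmyGrouse). The site’s actual scoring code isn’t shown here, but a sketch assuming the standard Blotto rules — whoever takes more slots wins the matchup — looks like:

```python
def duel(a, b):
    """One Blotto matchup: whoever takes more of the 10 slots wins."""
    a_slots = sum(x > y for x, y in zip(a, b))
    b_slots = sum(y > x for x, y in zip(a, b))
    return "win" if a_slots > b_slots else "loss" if a_slots < b_slots else "tie"

def score(strategy, pool):
    """2 points a win, 1 a tie: the bookkeeping behind the Points column."""
    results = [duel(strategy, other) for other in pool]
    w, l, t = (results.count(r) for r in ("win", "loss", "tie"))
    return w, l, t, 2 * w + t

pygmy_grouse = [2, 3, 4, 5, 19, 22, 7, 20, 12, 6]
eighth = [4, 4, 19, 19, 4, 19, 4, 4, 19, 4]
print(duel(pygmy_grouse, eighth))  # the two leaders split the slots 5-5
```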
<h3>Soldier Distribution</h3>
<p>Next, let’s take a look at all the strategies at once, starting
with the soldier distributions among the different slots. I will remind
you that the rules of the game are slot independent; i.e., if machines
were playing this game against one another, you would expect the
soldier distribution to be more or less uniform between slots, so any
deviation from uniformity probably says something deep and profound
about how humans think.</p>
<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/TFrK2Sfe9zI/AAAAAAAAAM4/OTL-Nzonw54/s1600/soldierdistbox.png"><img alt="image" src="http://4.bp.blogspot.com/_YOjDhtygcuA/TFrK2Sfe9zI/AAAAAAAAAM4/OTL-Nzonw54/s400/soldierdistbox.png" /></a></p>
<p>Above is a <a href="http://en.wikipedia.org/wiki/Box_and_whisker">box and whiskers
plot</a> of all strategies,
looking at the number of soldiers in each slot.</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/TFsIEnlnYPI/AAAAAAAAAOA/RjihpzKWRf0/s1600/soldierdist.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/TFsIEnlnYPI/AAAAAAAAAOA/RjihpzKWRf0/s400/soldierdist.png" /></a></p>
<p>This plot is fun. It shows the full distribution of all of the
strategies. I went through the database and, for every strategy, added
one to each box that held true: for each slot (the slots are along
the x axis), the y axis marks how many strategies in the database have
that many soldiers in that slot. It should be fun to think about
what the non-uniformities mean. The colorbar is on the side to make the
colors quantitative.</p>
<h3>Point Distribution</h3>
<p>So, how well do all of the strategies do? Let’s take a look.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrKwJ2wATI/AAAAAAAAAMw/Zm9oCju3964/s1600/pointshist.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrKwJ2wATI/AAAAAAAAAMw/Zm9oCju3964/s400/pointshist.png" /></a></p>
<p>Above is a histogram of all the scores for all the strategies in the
database. It has some interesting features. Definitely not singly
peaked. What do you think is going on on the far left?</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TFsILm1CC9I/AAAAAAAAAOI/jJrgQA3xe4E/s1600/scoredist.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TFsILm1CC9I/AAAAAAAAAOI/jJrgQA3xe4E/s400/scoredist.png" /></a></p>
<p>In this plot, I again went through all of the strategies in the
database, and this time, every square reflects the average score for all
strategies that have that many soldiers (y axis) in that slot (x axis).</p>
<h3>Strategies Layout</h3>
<p>Let’s drill down a bit and look at how each strategy performs.</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TFrK_cjwa6I/AAAAAAAAANA/TM0z0lXeoMA/s1600/winvlossesareatie.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TFrK_cjwa6I/AAAAAAAAANA/TM0z0lXeoMA/s400/winvlossesareatie.png" /></a></p>
<p>This scatter plot has a point for each strategy in the database, the x
coordinate giving its number of wins, the y coordinate its number of
losses. The area of the circle is scaled to its number of ties, and the
color is scaled to its total score. Is there a clear trend? Why does it
fan out?</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/TFrLG9Jz7EI/AAAAAAAAANI/dGD2srkuJjw/s1600/winvtiearea.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/TFrLG9Jz7EI/AAAAAAAAANI/dGD2srkuJjw/s400/winvtiearea.png" /></a></p>
<p>Similar plot as above but this time, x coordinate is Wins, Y is ties,
size is losses and color score.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrLL825x5I/AAAAAAAAANQ/ihuUuyOmoH4/s1600/lossesvtie.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrLL825x5I/AAAAAAAAANQ/ihuUuyOmoH4/s400/lossesvtie.png" /></a></p>
<p>One more, this time x coordinate is losses, y is ties, color is score.</p>
<h3>Fitness Landscapes</h3>
<p>So, what should you do if you wanted to design a winning strategy? Let’s
first take a look at the fitness landscape. This is difficult to do
for the whole game: with 10 slots and something like 40 reasonable
choices for each, we have some huge 10-dimensional space, which is
hard to visualize. So instead let’s look at the fitness
landscape for some lower-dimensional cuts.</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/TFrLR8oPRhI/AAAAAAAAANY/nVG1t0uisq4/s1600/energy456matshowjet.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/TFrLR8oPRhI/AAAAAAAAANY/nVG1t0uisq4/s400/energy456matshowjet.png" /></a></p>
<p>So in the above plot, what I’ve done is constructed a whole bunch of
strategies. First I started with 8 soldiers in all slots but the ones
listed in the title, namely slots 4,5,6 (starting with 1). So with only
3 slots free, and with the constraint that we have to have 100 soldiers
total, I can parametrize a whole bunch of strategies with only two
numbers, in this case the number of soldiers in slot 5 (x axis) and slot
6 (y axis). The color represents the score that the resulting strategy
has when run against all of the previously existing strategies in the
database. This was created without adding all of these strategies to the
database itself as that would have changed the results.</p>
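A sketch of how such a cut can be computed. The two-strategy pool below is a toy stand-in for the real database, and the scoring assumes the standard most-slots-wins rule:

```python
import numpy as np

def duel_points(a, b):
    """2 points if a takes more slots than b, 1 for a tie, 0 for a loss."""
    wins = sum(x > y for x, y in zip(a, b))
    losses = sum(x < y for x, y in zip(a, b))
    return 2 if wins > losses else 1 if wins == losses else 0

def landscape(pool, free=(3, 4, 5), base=8, total=100):
    """Score of every strategy with `base` soldiers in the fixed slots
    and the remaining soldiers split over the three free slots (0-indexed)."""
    rest = total - base * (10 - len(free))       # soldiers to distribute
    grid = np.full((rest + 1, rest + 1), np.nan)
    for x in range(rest + 1):                    # soldiers in second free slot
        for y in range(rest + 1 - x):            # soldiers in third free slot
            s = [base] * 10
            s[free[0]], s[free[1]], s[free[2]] = rest - x - y, x, y
            grid[y, x] = sum(duel_points(s, p) for p in pool)
    return grid

pool = [[2, 3, 4, 5, 19, 22, 7, 20, 12, 6], [4, 4, 19, 19, 4, 19, 4, 4, 19, 4]]
grid = landscape(pool)
print(grid.shape, np.nanmax(grid))
```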
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrLVmy2N3I/AAAAAAAAANg/zID_NyC00Q0/s1600/energy568matshowjet.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrLVmy2N3I/AAAAAAAAANg/zID_NyC00Q0/s400/energy568matshowjet.png" /></a></p>
<p>Similar plot, this time changing the soldiers in slots 5,6 and 8 with x
axis the number of soldiers in slot 6 and the y axis the number of
soldiers in slot 8.</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TFrLZDDNo1I/AAAAAAAAANo/beYQagxYO6A/s1600/energy123matshowjet.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TFrLZDDNo1I/AAAAAAAAANo/beYQagxYO6A/s400/energy123matshowjet.png" /></a></p>
<p>Another one, hopefully my labels make enough sense now that I don’t have
to spell it out. I think this one has an interesting shape. What’s going on?</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrLcW834jI/AAAAAAAAANw/Opwx0CqiqBs/s1600/energy127matshowjet.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TFrLcW834jI/AAAAAAAAANw/Opwx0CqiqBs/s400/energy127matshowjet.png" /></a></p>
<p>One more.</p>
<h3>Random Strategies</h3>
<p>So, let’s say you are trying to construct winning strategies. The first
thing you might try is to construct random strategies by dropping
each of 100 soldiers into a slot chosen at random. Doing so and
running these strategies against the database gave me an idea of how
effective that would be.</p>
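Generating such a random strategy is straightforward; a minimal sketch:

```python
import numpy as np

def random_strategy(n_soldiers=100, n_slots=10, rng=None):
    """Drop each of 100 soldiers into a uniformly random slot."""
    rng = rng if rng is not None else np.random.default_rng()
    strategy = np.zeros(n_slots, dtype=int)
    for slot in rng.integers(0, n_slots, size=n_soldiers):
        strategy[slot] += 1
    return strategy

s = random_strategy(rng=np.random.default_rng(1))
print(s, s.sum())  # the slot counts always sum to 100
```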
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TFrLiuvLW-I/AAAAAAAAAN4/X-RmiGeA7R4/s1600/randomhist.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TFrLiuvLW-I/AAAAAAAAAN4/X-RmiGeA7R4/s400/randomhist.png" /></a></p>
<p>Above is a histogram of the random strategies’ scores. Not so good.
Looks like humans playing the game are better than randomly guessing.</p>
<h3>Best Strategy?</h3>
<p>So let’s say you wanted to make the best strategy. What could you look
at? Well, for starters you might be interested in a question like the
following: “If I put N soldiers in slot X, what percentage of the
existing strategies in the database would I beat in that slot?” The next
graph answers this question.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TFsK-VC2qRI/AAAAAAAAAOY/UTtTyvyxhT4/s1600/strategychooserspectral.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TFsK-VC2qRI/AAAAAAAAAOY/UTtTyvyxhT4/s400/strategychooserspectral.png" /></a></p>
<p>Here I have attempted to show, for every (X, Y) coordinate, the following:
with Y soldiers in slot X, what percentage of the existing strategies do
you beat in slot X? I changed the color scaling to make it more refined,
so you can better read it quantitatively.</p>
<h3>Go Forth</h3>
<p>So, there you have it. You are now fueled with what is probably way more
information than you were hoping for. Hopefully these graphs are more
than just pretty, and get you thinking a bit. That is a lot of what
science is about: make some observations and then attempt to explain the
results with your own models. You can then test your models with
experiment. I’ve provided you with a bunch of observations and invited
you to construct your own explanations. Now I invite you to perform an
experiment. Think you know what’s going on in the game? Then try to beat
it. The link to play is the <a href="http://pages.physics.cornell.edu/~aalemi/blotto/index.php">same as
before</a>.</p>Steak Dinner2010-08-03T21:20:00-04:00Alemitag:thephysicsvirtuosi.com,2010-08-03:posts/steak-dinner.html<p>Sorry about the blog hiatus. During the summer, without teaching
classes, inspiration is harder to come by. But, tonight I cooked a
steak. I recently got a new digital meat thermometer. My plan was to
slowly cook the steak until the internal temperature got to be about 140
degrees Fahrenheit with the oven at 200 degrees, take it out, wrap in
tin foil, crank the oven to 500 degrees, stick it back in, and give it a
nice exterior, reaching an internal temperature of about 150 degrees
which would put it at about medium. After I put the steak into the oven
though, I started to watch the temperature go up on my digital
thermometer and thought, why not take data. And so I did. Here are the results.</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TFi924poROI/AAAAAAAAAMg/VSSuNTCh30g/s1600/steakdinner.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TFi924poROI/AAAAAAAAAMg/VSSuNTCh30g/s400/steakdinner.png" /></a></p>
<p>Above you see the internal temperature of the steak as a function of
time. First some comments about the graph.</p>
<ul>
<li>The steak started at 37 degrees, the temperature of my refrigerator.</li>
<li>I didn’t start taking data until about 20 minutes in.</li>
<li>The red dashed lines mark where I turned up the temperature of the
oven. It started at 200 degrees, then 250, then 300, and in the
final stretch, 500 with the broiler.</li>
<li>The green dotted lines mark where I got impatient and opened the
door to the oven to check on the food.</li>
<li>The yellow background denotes where the steak was outside of the
oven resting in tinfoil.</li>
</ul>
<p>Now some comments on the data</p>
<ul>
<li>You can clearly see a change in the data when I changed the oven temperature.</li>
<li>Opening the oven door really seems to slow down the cooking process</li>
<li>The temperature of the center of the steak continues to rise after
you take it out of the oven.</li>
<li>In fact, curiously enough, the temperature of the center of the
steak seems to have risen the quickest after it was taken out of the oven.</li>
</ul>
<p>Next I decided to look at the heating rate, computed by taking the
finite differences of my data points and propagating the errors.</p>
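Numerically, that computation looks something like the following. The readings here are made up rather than my logged data, and the ±1 °F precision is a guess at the thermometer’s:

```python
import numpy as np

# Hypothetical readings standing in for the logged steak data:
# times in minutes, internal temperatures in °F, thermometer good to ±1 °F.
t = np.array([20.0, 25, 30, 35, 40, 45])
T = np.array([48.0, 58, 67, 75, 82, 88])
sigma_T = 1.0

rate = np.diff(T) / np.diff(t)                 # heating rate, °F per minute
rate_err = np.sqrt(2) * sigma_T / np.diff(t)   # errors of the two readings add in quadrature
for ti, r, e in zip(t[:-1], rate, rate_err):
    print(f"t = {ti:4.0f} min: {r:.2f} ± {e:.2f} °F/min")
```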
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TFi97yIEe9I/AAAAAAAAAMo/EzZ_kvRdb7U/s1600/heatrate.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TFi97yIEe9I/AAAAAAAAAMo/EzZ_kvRdb7U/s400/heatrate.png" /></a></p>
<p>As you’ll see, I really didn’t have enough data points or a precise
enough thermometer to see the changes in the heating rate.
Finally, some comments about the food.</p>
<ul>
<li>The steak was good. You’ll notice I shot past 150, ended up with a
temperature of about 160, and a steak that was nearly medium well.</li>
<li>Cooking the steak slowly like this yielded a pretty soft texture,
akin to a roast. Heck, I essentially roasted the steak.</li>
<li>The steak was served with asparagus and baked potatoes. As an
interesting aside, the baked potatoes were in the oven along with
the steak the whole time, but did not cook all the way through. I
normally bake potatoes at 350 for about an hour, and here they were
in a hot oven for over an hour and a half, half an hour of which was
above 300, but they didn’t cook through.</li>
</ul>
<p>I clearly have too much time on my hands. I have one more steak, and I
think I’ll try a different cooking method. Maybe I’ll have the patience
to take data again, we’ll see.</p>On the Death of Karl Schwarzschild2010-07-24T23:54:00-04:00Corkytag:thephysicsvirtuosi.com,2010-07-24:posts/on-the-death-of-karl-schwarzschild.html
<p><a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/TEevPyrqW6I/AAAAAAAAAF4/nI7Q_2RQZV0/s1600/Karl+Schwarzschild.jpeg"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/TEevPyrqW6I/AAAAAAAAAF4/nI7Q_2RQZV0/s200/Karl+Schwarzschild.jpeg" /></a></p>
<p>Every once in a while, in the study of science, one comes across
biographical snippets that momentarily breathe life into names that
otherwise serve as shorthand for equations and eras. As an obvious
effect of the selection bias involved with including this superfluous
information in technical books, they are bound to be pretty interesting.
Such stories range from the hilarious antics of Feynman [1] or Fermi
[2], to the heartbreaking stories of
<a href="http://en.wikipedia.org/wiki/Ludwig_Boltzmann#Final_years">Boltzmann</a>
and
<a href="http://en.wikipedia.org/wiki/Oppenheimer_security_hearing">Oppenheimer</a>,
and even to the surprisingly scandalous life of Erwin Schrödinger. But
my all time favorite of all these historical “fun facts” is that of the
man who provided the first exact solution to the Einstein field
equations while fighting in the First World War: Karl Schwarzschild
(pictured left impersonating a surprised walrus [3]).</p>
<p>Karl Schwarzschild was an astronomer, physicist and mathematician; an
across the board physical scientist with passions both abstract and
practical. He published his first article regarding the orbits of double
stars at the age of 16 while still in high school. He went on to work on
the mathematical treatment of orbits, constructed several useful
astronomical instruments, and put forward theoretical treatments of
comet tails and stellar atmospheres. His creativity was admired by some
of the greatest scientists of his era. Eddington, chief among them,
offered his praise in a strangely brutal simile [4]:</p>
<blockquote>
<p><em><span class="dquo">“</span>… his joy was to range unrestricted over the pastures of
knowledge, and, like a guerrilla leader, his attacks fell where they
were least expected.”</em></p>
</blockquote>
<p>But Schwarzschild’s greatest contribution to science was finding the
first exact solution to Einstein’s field equations for general
relativity in 1916. This solution, which takes the form of the
<a href="http://en.wikipedia.org/wiki/Schwarzschild_metric">Schwarzschild
metric</a>, describes
the space-time surrounding a non-rotating spherically symmetric object
(the solution for a rotating spherically symmetric object was found in
1963 by Roy Kerr). It was through this metric that I came to meet
Schwarzschild, since every time it is introduced in a class the
instructor is sure to drop the mother of all fun facts: Schwarzschild
discovered his solution while in the army during World War I, a war that
would eventually kill him (among others).</p>
<p>In this age of Wikipedia, I find it hard to believe that I didn’t
immediately go home and check the full story behind this statement. I
may very well have, but over the years I have constructed a myth about
the death of Schwarzschild that I at least half believed until I finally
looked up the full story for this post.</p>
<p>I imagined Schwarzschild as the noble and peaceful scientist immediately
skeptical of the war, but eventually conscripted to fight. There he
fought on the front lines, carrying both a Mauser and a notebook. He
would spend the long lulls between suicidal assaults through
no-man’s-land huddled down in the mud in the trenches scribbling away
like mad at what would eventually become the elegant solution that bears
his name.</p>
<p>Then (and here I will blatantly plagiarize the <em>All Quiet on the Western
Front</em> movie) Schwarzschild began to see the solution; everything started
to fall into place. He became excited and no longer able to sit still. Now
standing, he reached out towards the beauty of nature he saw not in a
<a href="http://www.youtube.com/watch?v=ShscVNkUmy0">butterfly</a> or
<a href="http://www.youtube.com/watch?v=DJcBC2Am-uU">bluebird</a>, but in the
fabric of space-time. Just as he finished his solution, Schwarzschild’s
head briefly peeked above the trench and was caught by a sniper’s bullet.
Both he and his notebook fell to the mud unnoticed, an overly
romanticized symbol for the futility of war or some such nonsense.</p>
<p>Now this version is obviously false (how did he get the solution to
anyone?), but it is the one that has persisted in my mind grapes. So
what really happened? When <a href="http://www.youtube.com/watch?v=YQkaD6fG8mk">war were
declared</a> in 1914,
Schwarzschild volunteered for the German army and manned weather
stations and calculated artillery trajectories in France, Belgium, and
Russia. It was in Russia that he discovered and published his well-known
results in relativity as well as a derivation of the Stark effect using
the ‘old’ quantum mechanics. It was also in Russia that he began to
struggle with pemphigus, an autoimmune disease where the body starts
attacking its own cells. He was sent home, where he died in 1916 at the
age of 42.</p>
<p>The reality is much more sensible than my version, and still a good tale
in its own right: a brilliant scientist cut down in his prime, working
until the very end. So why did I unconsciously elevate a respectable
tale to one of mythical proportions? I think it has something to do with
how we view the great scientists of the past. Since they were
extraordinary at one thing (some scientific field), we assume they must
have been extraordinary in every regard. Therefore, their fates must
carry some deeper meaning or symbolism. But it turns out that the
reality, while certainly less romantic, may be more satisfying (to
some): that these men and women whose names live on tied to equations
were just regular people who happened to be very good at one thing:
being scientists.</p>
<p>Or maybe it’s just my inability to separate movies from real life. Who knows?</p>
<p>[1] <em>Surely You’re Joking, Mr. Feynman</em> is essentially the gag reel for
20th century physics. It can be
<a href="http://www.amazon.com/Surely-Feynman-Adventures-Curious-Character/dp/0393316041">purchased</a>
with money if you’re into the whole capitalism thing. Or for all you
pinko commies and poor grad students there are two free options. One,
assemble approximately one gaggle of undergrad physics majors or
Virtuosi bloggers and proceed to give them the shakedown; at least one
copy should pop out. Two, look into big government
<a href="http://en.wikipedia.org/wiki/Public_library">socialism</a>.</p>
<p>[2] See Virtuosi blog post, Future.</p>
<p>[3] Some scholars use this picture as evidence of Schwarzschild’s
involvement as the nerdiest and most forgotten Marx Brother. There is no
evidence to suggest he was, in fact, Poindexter Marx.</p>
<p>[4] Quote (and most facts) lifted from
<a href="http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Schwarzschild.html">here</a>,
take that Wikipedia!</p>
<p>[5] Apologies for the excessive and irrelevant links! [6]</p>
<p>[6] Apologies for the excessive and unnecessary footnotes!</p>Something Bugging Me2010-07-20T23:24:00-04:00Jessetag:thephysicsvirtuosi.com,2010-07-20:posts/something-bugging-me.html<p><a href="http://4.bp.blogspot.com/_SYZpxZOlcb0/TEZsSZmSnUI/AAAAAAAAAB8/6ArQgAym3Fk/s1600/3063594778_019489ef21.jpg"><img alt="image" src="http://4.bp.blogspot.com/_SYZpxZOlcb0/TEZsSZmSnUI/AAAAAAAAAB8/6ArQgAym3Fk/s200/3063594778_019489ef21.jpg" /></a>
Apparently July is a quiet month here at the Virtuosi. We’re busy with
research, travel, vacation, etc. I, myself, have been busy with only a
few of those things, though I’ve also been studying for my qualifying
exam, which is coming up in less than a month. However, that’s not the
question before us today. Today I’d like to think about the density of
bugs in the air. I was walking outside this past weekend, there was a
fierce wind blowing, and twice in five minutes a bug hit my ear. That
seemed like a lot. But for 1 hour of previous walking no bugs hit my
ear. How many bugs would there have to be per cubic meter of air to
achieve that rate? We’ll restrict ourselves to small bugs, like gnats,
nothing really mobile like house flies. We have an observed bug impact
rate of 2 bugs/hour. I’d estimate that my ear is ~1”x2”, an area of 2
in^2. Let’s convert to metric, that’s about 13 cm^2. Next I need to
know how fast the wind was blowing. I’d guess about 15 miles per hour,
it was a good stiff wind off the lake. From here, we just need to write
down an equation for the bug collision rate. The simplest theory I can
imagine would go something like this: (bug density)(ear cross
section)(wind speed) = (collision rate). In the above I’ve assumed that the
bugs are moving with the wind (hence my initial assumption that the bugs
are small, and thus will more or less move with the wind). If you check
the above, it has the right units, bugs per time on both sides of the
equation. Now we just need to solve for the bug density, we’ll call that
<em>B</em>, in terms of the rest of our knowns: ear cross section area <em>A</em>,
wind speed <em>v</em>, and collision rate <em>R</em>. This gives
<mathjax>$$B=\frac{R}{Av}=\frac{2 bug/hour}{(13cm^2)(15mph)}$$</mathjax> We’ve got a bit
of a units problem with our wind speed in miles per hour and our ear
area in cm^2, so I’ll convert from mph to cm/hour, and come out with a
bug density of <mathjax>$$B=\frac{2 bug/hour}{(13cm^2)(2.4
Mcm/hour)}=6.4\times 10^{-8} bugs/cm^3$$</mathjax> That seems like an absurdly small
number. Let’s convert from cm^3 to m^3. That gives us 0.064 bugs/m^3.
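</p>
<p>As a quick sanity check, here is the same arithmetic in a few lines of
Python, using the same rough numbers as above:</p>

```python
# Back-of-the-envelope bug density: B = R / (A * v)
R = 2.0                    # observed collision rate, bugs per hour
A_cm2 = 2 * 2.54**2        # ear area: ~2 in^2 converted to cm^2 (~12.9 cm^2)
v_cm_per_hr = 15 * 160934  # wind speed: 15 mph in cm/hour (~2.4 Mcm/hour)

B_per_cm3 = R / (A_cm2 * v_cm_per_hr)  # bugs per cm^3
B_per_m3 = B_per_cm3 * 1e6             # bugs per m^3

print(f"{B_per_cm3:.1e} bugs/cm^3 = {B_per_m3:.3f} bugs/m^3")
```
<p>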
This still seems rather low to me. Put another way, we’d need ~16 m^3
to find 1 bug (this is only ~160 bugs per Olympic swimming pool!). What
might be amiss with the estimate? Well, I’m fairly happy with the ear
area. I could be misremembering the length of my walk. More importantly,
I’d wager that not all of my walk was perpendicular to the direction the
wind was blowing. If I were at an angle to the wind I’d have a
commensurately smaller ear surface area presented to the wind. Of
course, this will at best probably gain us around a factor of two. I’m
not very familiar with wind speeds, so maybe we could pick up another
factor of two there. Still, I imagine that the main problem with my
calculation is that I assumed that all of the bugs were stationary and
would be blown against my ear. Even little bugs are very mobile (as you
well know if you’ve ever tried to swat them), and were probably actively
trying to avoid my ear for the most part. Only the really weak,
senseless (literally), or stupid bugs got blown against my ear, and
apparently those aren’t that common. How common are they? You tell me.
Make an estimate for the bug density of air, compare it to my senseless
bug density, and find the percent of bugs that fit my description! Or
not, if you only come here to watch the rest of us toil over our calculations.</p>Zombpocalypse2010-07-01T23:48:00-04:00Matttag:thephysicsvirtuosi.com,2010-07-01:posts/zombpocalypse.html<p>Here at the Virtuosi, we’re concerned. We are concerned that perhaps the
world is really not ready for a zombie apocalypse. You know, the kind of
zombie apocalypse that you may have seen in such classics as “Night of
the Living Dead”, “Shaun of the Dead”, or perhaps the even more recent
“Zombieland” (sweet cameo by the way). The kind of zombpocalypse that
could leave major cities void of life and the country plagued with the
undead.
Well, Alemi and I were curious as to how likely such a pandemic was to
occur and what it would look like in the simplest of models. In typical
Virtuosi fashion, we threw some physics at it and this is what we came
up with.
<strong>The Model</strong>:
Taking cues from the epidemiologists, we feel that the spread of
zombie-ism might fall into a <a href="http://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology">compartmental
model</a>
that is much like the heavily studied <span class="caps">SIR</span> model. Here, there are
susceptibles, S, that get infected, I, and can subsequently infect other
susceptibles or recover, R, and not play a part in the disease dynamics
anymore. Our analogous model is the <span class="caps">SZR</span> model in which there are
susceptibles, S, once again that can be bitten by a zombie to become
another zombie, Z. However, there is a huge difference - a susceptible
must then kill a zombie (by destroying the brain, of course) to make it
removed, R. Hence, we have the <span class="caps">SZR</span> disease model.
Like with the compartmental models, one can write down a set of
differential equations that govern the dynamics of disease in a
population. However, these might not be the best option for simulation
as the population is represented as a continuous variable. After all,
there can’t be 1/100th of a zombie roaming around biting ankles, causing
a new epidemic after all of its partners have been killed off. To get
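</p>
<p>(For the curious: a minimal mass-action version of those continuous
equations, my own sketch using the bite probability <strong>b</strong> and kill
probability <strong>k</strong> defined below, would read
<mathjax>$$\frac{dS}{dt} = -bSZ, \qquad \frac{dZ}{dt} = bSZ - kSZ, \qquad \frac{dR}{dt} = kSZ$$</mathjax>
where every term carries both an S and a Z, since biting and killing each
require a susceptible-zombie encounter.)</p>
<p>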
around this, we set up a discrete contact network where each node is one
of the three states of the model and there are set probabilities of the
three states interacting when they are neighbors. In particular if there
is a susceptible with a zombie neighbor then there is a probability
<strong>b</strong> that the zombie will bite the susceptible turning the node into a
zombie. In the same time step, there is a probability <strong>k</strong> that the
susceptible will kill the zombie putting it in the removed class. Both
of these actions can happen in one time step of the simulation. However,
the two actions can’t happen sequentially to the same node. That is, a
newly bitten susceptible must be a zombie for at least one time step
before it can be removed, adding a latent period for the infection.
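</p>
<p>Those update rules are easy to sketch in code. Here is a minimal toy
version on a square grid with 8 neighbors (matching the average connectivity
used below) and periodic boundaries; this is my own reconstruction of the
rules above, not the actual simulation code:</p>

```python
import random

# 8-cell Moore neighborhood: average connectivity 8, as in the post
NEIGHBORS = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]

def szr_step(grid, b, k, rng):
    """One synchronous SZR update.  For each S-Z neighbor pair, the zombie
    bites with probability b and the susceptible kills with probability k.
    All writes go to a copy of the grid, so a newly bitten susceptible is a
    zombie for at least one full step before it can be removed."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] != 'S':
                continue
            for di, dj in NEIGHBORS:
                ni, nj = (i + di) % n, (j + dj) % n  # periodic boundaries
                if grid[ni][nj] == 'Z':
                    if rng.random() < b:
                        new[i][j] = 'Z'    # bitten
                    if rng.random() < k:
                        new[ni][nj] = 'R'  # zombie's brain destroyed
    return new

def has_sz_pair(grid):
    """True while any susceptible still has a zombie neighbor."""
    n = len(grid)
    return any(grid[i][j] == 'S' and
               grid[(i + di) % n][(j + dj) % n] == 'Z'
               for i in range(n) for j in range(n)
               for di, dj in NEIGHBORS)

def run_szr(n=30, b=0.5, k=0.2, seed=0, max_steps=500):
    rng = random.Random(seed)
    grid = [['S'] * n for _ in range(n)]
    grid[n // 2][n // 2] = 'Z'  # patient zero (fixed location here)
    steps = 0
    while has_sz_pair(grid) and steps < max_steps:
        grid = szr_step(grid, b, k, rng)
        steps += 1
    flat = [c for row in grid for c in row]
    return {s: flat.count(s) for s in 'SZR'}, steps

counts, steps = run_szr()
print(counts, "after", steps, "steps")
```

<p>With b/k well above the percolation threshold reported below, a run like
this typically ends with most of the grid zombified or removed; well below
it, the outbreak tends to die out near patient zero.</p>
<p>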
<strong>Simulations:</strong>
With these rules, we created networks with an average connectivity of 8
(probably way too low) with up to N=40,000 susceptibles. Patient zero
was seeded at a random location and the simulation run until there were
no more susceptible - zombie neighbor pairs.
A typical simulation looks something like:
Here, blue represents the susceptibles and red the zombies. More subtly,
the removed are shown in black. You may also notice that the zombies
actually wrap around as they spread due to periodic boundary conditions.
And so we ran many of these simulations for various values of N, b, and
k to see what type of dynamics arise. As the ratio of b/k is increased
and zombies become proportionally better at biting susceptibles than
they are at killing the zombies, we see a transition from little to no
infection to mass pandemic in what appears to be a sort of percolation
transition.
<a href="http://2.bp.blogspot.com/_qY9DSyjj8Ro/TC1xAXdRNKI/AAAAAAAAB04/4_SC8vFPUF0/s1600/cp.png"><img alt="image" src="http://2.bp.blogspot.com/_qY9DSyjj8Ro/TC1xAXdRNKI/AAAAAAAAB04/4_SC8vFPUF0/s320/cp.png" /></a>
In the figure red is the fraction of zombies in the final population and
green is the amount of time that the simulation took to run. The spike
in time around the critical point shows critical slowing down as also
seen in the length of the movie above. Also, the ending configuration of
zombies shows a fractal like structure near the critical point. Using
the
<a href="http://en.wikipedia.org/wiki/Minkowski%E2%80%93Bouligand_dimension">box-counting</a>
method, we actually calculated the fractal dimension:
<a href="http://3.bp.blogspot.com/_qY9DSyjj8Ro/TC1zP2doE3I/AAAAAAAAB1I/Hfsl9e-TIaI/s1600/fractal.png"><img alt="image" src="http://3.bp.blogspot.com/_qY9DSyjj8Ro/TC1zP2doE3I/AAAAAAAAB1I/Hfsl9e-TIaI/s320/fractal.png" /></a>
This helps us pinpoint the epidemic transition at a ratio of b/k = 1.4.
This means for our particular network type and assumptions on the
infectious nature of zombie-ism that zombies must be 40% better at
biting than we are at lopping heads for an epidemic to take off. Even
then, there must be a much larger ratio of b/k for more than 50% of the
population to become part of the undead.
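</p>
<p>The box-counting estimate is simple enough to sketch. Below is my own
minimal implementation (not the Virtuosi’s code), sanity-checked on a
Sierpinski carpet, whose dimension is known to be log 8 / log 3 ≈ 1.89:</p>

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set by box counting:
    count occupied boxes N(s) at each box size s, then fit the slope of
    log N against log(1/s) by least squares."""
    logs, logN = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logN.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logN) / n
    num = sum((a - mx) * (b - my) for a, b in zip(logs, logN))
    den = sum((a - mx) ** 2 for a in logs)
    return num / den  # least-squares slope = dimension estimate

def carpet(level):
    """Points of a Sierpinski carpet: 8 shifted copies of the previous level."""
    if level == 0:
        return [(0.0, 0.0)]
    pts = carpet(level - 1)
    scale = 3 ** (level - 1)
    return [(x + i * scale, y + j * scale)
            for i in range(3) for j in range(3) if (i, j) != (1, 1)
            for x, y in pts]

pts = carpet(5)  # 8^5 = 32768 points in a 243x243 square
d = box_count_dimension(pts, sizes=[1, 3, 9, 27, 81])
print(round(d, 2))  # prints 1.89
```
<p>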
I trust that we the living would be at least as capable as the undead so
any potential zombie scenario would seem to be stifled in our model. But
then again, what about fast zombies? Now those things are scary.
<strong>Update:</strong>
I had a quick thought that mobile zombies might be a bit more realistic.
I made a quick movie where zombies pursue the closest susceptible while
the susceptibles run away from the zombies at the same speed. You will
see an area clear around the initial infection as the zombies give
chase. By accident, two zombies will collide providing the spark for an
epidemic. An explosion of zombies results. More to follow!</p>The Impossibility of Why2010-07-01T23:44:00-04:00Jessetag:thephysicsvirtuosi.com,2010-07-01:posts/the-impossibility-of-why.html<p>So I think we’ve all been rather busy here. Hence the lack of posts. I’m
going to try to keep this one short but sweet. A lot of people think
that physics tells us why things happen. Why is the sky blue? Why does
the earth orbit the sun? Why does copper transmit electricity so well?
These all seem like perfectly reasonable questions to ask. Questions
that we, as physicists can answer. Yet, I entitled this post the
impossibility of why. In general, questions about why are not good
questions for physicists. More accurately, we answer questions about how.
Or what. What phenomenon causes us to see the sky as blue? What forces
cause the earth to orbit the sun? How does copper transmit electricity
so well? In general, we can’t answer a question of why. A friend once
asked me (at the end of a talk I gave, no less) ‘why can’t two
electrons be in the same quantum state, while two bosons can?’. That’s
an example of a question that we as physicists can’t answer. We have no
idea about why that is. The best answer we can give is ‘I don’t know.
However, experiment tells us that’s what happens.’ In general, physics
cannot be built in a vacuum. We cannot sit down and write down the laws
of nature. Not without looking at nature. That is the difference between
physics and mathematics. Mathematicians can construct arbitrary logical
systems. Anything with a given set of logical axioms that is consistent
can be a valid system of mathematics. Of course, only a few are useful
systems. This allows mathematicians to generate exceedingly interesting
playgrounds for the mind. Physics is meaningless without experiment. We
have to test our theories against the world as we know it. We can no
more explain why F = ma or the Pauli exclusion principle is the way of
the world than we can answer why the universe exists. We can (at least
we hope) tell you how the world works, what to expect, cause and effect.
But why we found this particular set of rules and not another, different
but consistent system, is something we can’t, and usually don’t try to
answer. In many ways, I think this reduces much of the perceived
conflict between religion and science. As long as one doesn’t read God’s
(or gods, depending on your religion) word literally, religion is an
attempt to explain why. Physics is an attempt to explain how. There’s no
inherent conflict in that. At least, that’s my thinking.</p>Life as an Experimenter - Reflections2010-06-19T15:50:00-04:00Jessetag:thephysicsvirtuosi.com,2010-06-19:posts/life-as-an-experimenter-reflections.html<p>I’ve been doing experimental physics for about three years. I started
during my sophomore year in college, went straight to graduate school,
and continued here (albeit not quite immediately). There are people out
there who have been doing this for a lot longer than I have, but I’ve
gained a few insights, by working with said people, and through my own
experiences in the lab. I thought I’d try to share a few of these, to
help illuminate the past few days I blogged about. <strong>Something Always
Goes Wrong</strong> It’s the nature of experimental science. Something always
goes wrong. It might be your equipment, it might be your sample, hell,
it might be something personal. But something will go wrong. Wednesday
was a prime example. Not only did I get soaked by rain and temporarily
lose my bike lock, but it took us 9 hours to really start taking data
due to equipment problems. I think this is endemic in what we’re doing.
We’re trying to investigate something that no one has ever done (at
least not quite exactly what we’re doing). This means that we’re usually
pushing the bounds of our equipment and setup. Not just that, but most
of the equipment is exceedingly complex, giving it many more places it
can break down. <strong>It’s An Emotional Roller Coaster</strong> Sometimes nothing
works. At all. You try and try and you just can’t make it work. Then
sometimes everything is going well. You’re getting data, and not just
that, but the data looks good. Whether it’s confirming or destroying
your expectations, you’re finding out something new about the world.
It’s exciting. You find something interesting and start chasing it.
However, you have to be careful. Often the result is not what you think
you’re seeing. It’s a false signal from your equipment, or attributable
to the background material doing something strange. Or you’ve destroyed
the sample, and that’s why it’s not doing what you expect. You go up and
down quite a bit. <strong>It’s Exhausting</strong> Part of it is the long hours. But
really, 16-hour days are not <em>that</em> bad. Sleeping 6.5-7 hours. I can
usually do that, no problem. So what’s different when it’s research? You
have to be thinking all the time. For 16 hours. In general, you can’t
just push the button and wait. You need to look at the data while it
comes in, to determine what data to take next. Combine that with the
emotional up and down … even though you’re not necessarily
physically exhausted, you’re mentally worn out. <strong>You Get More Questions
Than Answers</strong> My undergraduate advisor would always say that when
you’re doing research you generate three questions for every question
you answer. That’s more or less true. It is very rare to go out and
answer all the questions you have. It is even more rare to not get more
questions. There’s always more to learn. In many ways, that’s great,
though a little frustrating. If we ran out of questions, we’d probably
run out of a job. But there’s always another twist that you didn’t
expect nature to have, something else surprising you. That’s why we do
it. There’s no finality in research. We can’t get all the answers, ever.
A lot of our time isn’t data taking, but making/setting up our
equipment, analyzing data, and such things. Still, I hope that this
series of posts has given you a taste of what we scientists may do when
we’re taking data.</p>Life as an Experimenter - Day Three2010-06-18T13:49:00-04:00Jessetag:thephysicsvirtuosi.com,2010-06-18:posts/life-as-an-experimenter-day-three.html<p>Today marks the third and final day in our beam time at <span class="caps">CHESS</span>. I think
the circles under all of our eyes may take away from the glamor a bit,
but the right makeup specialist could fix that. If they ever make a
movie about us, which they should, I want to be played by some really
awesome British actor. I think that would be about right. Someone with a
strong jaw. In case it’s not obvious, lack of sleep is getting to me a
little bit. Read on for the final few hours of our experiment.
Friday 6/18
<strong>9:07am</strong> - Not sure if I didn’t turn on my alarm, or turned it off
without noticing. But somehow I manage to jerk awake not too much later
than planned. Dragging a little bit. Get some food. No time for shower,
do that in a few hours. Only have to get through to noon.
<strong>9:41</strong> - Leave house. Hop in car. Realize as I drive through that
there’s a stop sign at a corner that I didn’t see last night, mostly
hidden by trees. Very glad there wasn’t any traffic.
<strong>9:54</strong> - Arrive back at <span class="caps">CHESS</span>. Set up laptop to stream the <span class="caps">US</span> game.
Get a report from Matt and Ryan. Looks like the data that was exciting
last night is actually repeatable, and though it is a small effect,
probably real. This is a very nice bonus for our experiment, we were
mostly expecting null results from that phase of things. Plan is to test
that more, run some experiments to make sure we’re not seeing other,
more mundane, effects that would look the same with our collection
methods.
<strong>10:25</strong> - I take over crystal mounting. Matt is busy with data, and
Ryan is not having luck with crystals this morning. That’s how it goes.
<span class="caps">US</span> is down a goal. Morale low for that reason.
<strong>10:45</strong> - Data taking going okay. Still seeing the effect, though
small. <span class="caps">US</span> down 2 goals. Not good at all.
<strong>11:00</strong> - <span class="caps">US</span> down 1 goal! Oh yes. Some experiment. We’re almost done,
not paying too much attention any more. Everyone is slowing down.
<strong>11:20</strong> - <span class="caps">CHESS</span> staff members keep on wandering into our area and
staying to watch the game. Kind of fun. One got us an ethernet cable to
eliminate some of the pauses we were getting in the streaming using the
wireless. Set up a final run, determine if the effect is from what we
suspect (hope?) or from mundane causes. Need to pack up and be out in 40
minutes.
<strong>11:45</strong> - Experiment finishes. <span class="caps">US</span> is robbed! Should have been a
victory.
<strong>11:50</strong> - Mad scramble to pack up equipment. Crystals, mounts,
computers, etc.
<strong>11:58</strong> - Off the beam.
<strong>12:05pm</strong>- Parking permit returned. On my way home. Taking the rest of
the day off.
I hope that you, dear reader, have enjoyed this brief taste of how
experiments go. I tried to let the emotions reflect what I actually was
feeling at the time, but often for the most harrowing/exciting parts
there was no time to write, so much of this has been upon reflection.
Tomorrow I hope to offer a few insights into how experiments usually
seem to go. A lot of people have never done experimental science, and
experimentation in movies is <a href="http://xkcd.com/683/">very different</a> from
what we actually do. Now it’s time to sleep. And maybe watch the
England/Algeria game.</p>Life as an Experimenter - Day Two2010-06-18T01:57:00-04:00Jessetag:thephysicsvirtuosi.com,2010-06-18:posts/life-as-an-experimenter-day-two.html<p>Today I’m continuing my series on the life of an experimenter. Today is
the longest day, since we have beam time for all 24 hours. And after the
setbacks of yesterday, we feel compelled to use it to the utmost bit.
Read on for more tantalizing glimpses of the grit behind the glamor of
the rock-star-like lifestyle of an experimental physicist. Today had a
very different feel. Things were working, and that meant a lot of down
time waiting for data to collect. Despite the excitement of data coming
in, I was a little bored at points. Lots of internet use. Thursday 6/17
<strong>7:15am</strong> - Alarm goes off. 6.5 hours of sleep? Up quickly, and eat and
shower. Run into a couple of my housemates that I normally don’t. I
don’t feel too tired. Which is good, because I don’t usually do caffeine,
if I had to now, it might well mess me up. <strong>7:50</strong> - Out of the house.
Taking my car, since my bike isn’t working. <strong>8:05</strong> - At the F1
station. Matt is there, awake, looking not too much worse for wear. Ryan
comes in right behind me. I have to get a parking permit, which <span class="caps">CHESS</span>
kindly provides for visiting researchers. Since my car isn’t registered
on campus, I qualify as a visiting researcher. <strong>8:10</strong> - Matt briefs us
on the progress of the night. Some good data was taken, and we’ve
switched from the helium to a liquid nitrogen (<span class="caps">LN</span>) cooling. He lays out
the set of experiments we should try to run today. Obvious that he needs
sleep. <strong>8:45</strong>- Matt goes home. Ryan and myself are on our own. Start a
data run. <strong>9-10</strong> - All the staff are checking in on us. After the
problems of yesterday, they want to make sure everything is running
smoothly. Which it is. <strong>10:30</strong> - Data running smoothly. When
everything goes well, for this particular type of data set, there’s
not much to be done while it’s coming in. Put on the Greece vs. Nigeria
world cup game, streaming on my laptop. <strong>11:30</strong> - Data run finished.
Load up a new crystal smooth as can be. Start another data set.
Everything working like a charm. <strong>12:30pm</strong> - Lunch break. Much more
relaxed today. <strong>2:00</strong> - Data run done. Switching to a different type
of data run <strong>2:15</strong> - New run starts. Smooth sailing so far. <strong>2:30</strong> -
Put on Mexico vs. France world cup game. <strong>3:30</strong> - Attempting to start
a new run. Took us three or four tries to find a new crystal that was
good. Run good to go. <strong>4:15</strong> - We’re trying to run at 80K, but we’re
having problems with temperature stability. We talk to Ulrich, and he
thinks that we’ve got a partial ice plug in the <span class="caps">LN</span> line, reducing the <span class="caps">LN</span>
we can draw through. We can either waste two hours having it replaced,
or run with it as it is. No guarantee that it won’t get worse. We decide
to run with it, we’ve lost too much beam time already. Our lowest
temperature seems to be ~90K. <strong>5:15</strong> - Data run finishes. New crystal
mounted. New data run started. <strong>6:30</strong>- Data run finishes. Two attempts
before we get a good crystal. Data flowing smoothly. <strong>7:10</strong> - Matt
returns. There is much rejoicing. <strong>8:00</strong>- Trying room temperature data
taking. Matt wants us to learn all the tricks this run, it seems. Having
trouble getting good crystals. <strong>10:00</strong>- Room temperature data giving
us a few interesting results. Trying a full run, but we’re going to kill
the crystal long before that. X rays will kill proteins (that’s why we
avoid them, ourselves). Faster at room temperature than at 100K.
<strong>10:45</strong> - Dinner from the stuff I packed this morning. I do love
microwaved leftovers. No rush, though. All three of us are in the lab,
and Ryan and Matt can handle whatever comes. <strong>11:25</strong> - Lots of trouble
getting good crystals (4 or 5 attempts?). 220K is a hard temperature.
Finally gave up and went to 240K. <strong>11:55</strong> - Ryan goes home. He’ll be
back around 6 tomorrow. I’m sticking it out for a while. Interesting
data coming in around 240K. If we’re skilled and lucky, we’ll get
something around 220K also. Friday 6/18 <strong>12:20am</strong> - Potentially
exciting results! <strong>12:45</strong> - Reproducible potentially exciting results!
<strong>1:15</strong> - My brain can’t do simple calculations right now, but we’re
changing temperatures and trying out another look for our result at
240K. I hope it’s there! <strong>1:20</strong> - Morale lower. Possibly the effect is
from a perfectly reasonable explainable thing. What a great hour though!
<strong>1:45</strong> - Data inconclusive but leaning towards no. Heading home. Leave
Matt all by his lonesome. <strong>1:55</strong> - Home. Sleep time. Alarm set for 9.</p>Life as an Experimenter - Day One2010-06-17T13:53:00-04:00Jessetag:thephysicsvirtuosi.com,2010-06-17:posts/life-as-an-experimenter-day-one.html<p>I’m an experimental physicist. If you think this sounds like a job
second in glamour only to rock star you would be right. Just like being
a rock star, you have to deal with the people, the shows, the lights,
the groupies … okay, maybe I’m lying about the groupies. Unless
you’re <a href="http://groups.myspace.com/index.cfm?fuseaction=groups.groupprofile&groupID=103575126">Brian
Greene</a>.
Also similar to a rock star, no one really knows what it is we do behind
the scenes (when we’re not touring the nation or publishing papers). I’d
like to pull back that curtain a little bit. A bit of background. For
this data run we’re looking at the structure of protein crystals. The
basic idea is that if you have a bunch of proteins, you can create a
crystal out of them using various synthesis techniques. If they’re
formed right, they look similar to the crystals you are more familiar
with, quartz, diamond, emerald, etc. We’re interested in how the
proteins are held in the crystal, that is, what kind of structure the
protein crystal has. I’ll talk more about this at some other point. Our
method for examining this is using x rays. Similar to a medical x-ray,
we shoot x-rays at our sample, and look at the transmitted images. Of
course, instead of something like this:
<a href="http://2.bp.blogspot.com/_SYZpxZOlcb0/TBpWZFChfhI/AAAAAAAAABs/Cp-4Rf5ca5Y/s1600/X-ray-hands.jpg"><img alt="image" src="http://2.bp.blogspot.com/_SYZpxZOlcb0/TBpWZFChfhI/AAAAAAAAABs/Cp-4Rf5ca5Y/s320/X-ray-hands.jpg" /></a>
we see something like this:
<a href="http://3.bp.blogspot.com/_SYZpxZOlcb0/TBpWqG2OqLI/AAAAAAAAAB0/SncRJvsz1Uk/s1600/20071104200851!X-ray_diffraction_pattern_3clpro.jpg"><img alt="image" src="http://3.bp.blogspot.com/_SYZpxZOlcb0/TBpWqG2OqLI/AAAAAAAAAB0/SncRJvsz1Uk/s320/20071104200851!X-ray_diffraction_pattern_3clpro.jpg" /></a>
which is a little bit harder to interpret. Nevertheless, we’ve got some
pretty nifty software that will do the job for us. The unfortunate part
about using x-rays is that to produce research grade x-rays takes really
big expensive facilities, so we have a limited amount of time we get to
use them for. Right now, we’ve got 48 hours of beam time, so we want to
take data for as many of those 48 hours as possible. For those
interested, we’re using <span class="caps">CHESS</span> at Cornell, station F1. Wednesday, 6/16
<strong>9:30am</strong>- Get up. I was warned by the postdoc in my group to sleep in
as much as I could. Our data run starts at noon, and goes for 48 hours.
Who knows how much sleep we’ll get in those 48 hours. Our beam time
starts at noon today. I eat breakfast, and realize that I’ve left my lab
notebook in the lab. I need to swing by there and pick it up. <strong>10:52</strong>
- Out of the house. My goal is to be at <span class="caps">CHESS</span> by 11. As this is my first
time using the facility, I need to get a safety tour before I can use
the beam line. I hop on my bike and speed toward the physics
building. <strong>10:54</strong> - Downpour. I get soaked. I’m wearing a rain jacket,
but my jeans and shoes are soaked through. <strong>10:55</strong>- A pocket on my
backpack comes unzipped, dumping my water bottle (and, as I find out
later, my bike lock) into the road. I stop and retrieve the water
bottle. <strong>11:02</strong> - Arrive at physics building. Hasten to the lab,
dripping water. Grab notebook. Receive call from Ryan, another graduate
student in the lab, saying he’s at <span class="caps">CHESS</span>, and ready to take the safety
tour when I am. <strong>11:12</strong> - Arrive at <span class="caps">CHESS</span>. Realize I don’t have my
bike lock anymore. Call Ryan and tell him to take the tour without me,
I’ll get to it later. Hop on bike, ride route in reverse. Find lock
right where water bottle fell out. <strong>11:25</strong> - Back at <span class="caps">CHESS</span>. Check in.
Get given safety tour, radiation badge. Meet up with Ryan. Good times.
<strong>11:59</strong> - Matt, the postdoc in my lab, with experience at <span class="caps">CHESS</span> and
the samples and equipment, arrives. <strong>12:00pm</strong> - Beam time starts.
We’re not ready. For this run, we’re using liquid helium to cool our
sample. We have to get that set up first. <strong>1:45</strong> - With the aid of
Ulrich, one of the Research Associates at <span class="caps">CHESS</span>, we have the liquid
helium stream set up to cool our sample. <strong>2:00</strong>- Matt trains Ryan and
myself how to mount samples for the beam. <strong>2:15</strong> - Matt mounts a
sample, and trains Ryan and myself how to operate beam controls.
<strong>2:30</strong>- Start taking data. <strong>2:45</strong>- Crystal turns out to be not very
crystalline. Rinse and repeat. <strong>4:00</strong> - After 7 increasingly
frustrating attempts with bad crystals, the 8th turns out to give us a
good signal. Looks like we’re back in business. <strong>4:15</strong> - Getting
anomalous signals from our sample. Every 5th image has about half the
intensity of the others. Also getting some weird tiling in the image. No
one knows what’s going on. We call in Ulrich and the beam operator.
<strong>4:30</strong>- In the midst of trying to resolve the anomalous signals, the
computer we’re running the experimental control software on goes down.
Requires calling in more tech support. <strong>4:50</strong>- I take a break to eat.
First food since breakfast. Two sandwiches and a half-frozen banana
(kept it near the back of the fridge in the <span class="caps">CHESS</span> lounge. I learned my
lesson). <strong>5:10</strong> - Food eaten. Computer back up. Data taking ready.
Still no resolution for the anomalous signal. While the experts continue
to troubleshoot, Matt attempts to figure out whether we can run any of our
experiments with the data as bad as it currently is. <strong>5:45</strong> - Still no
solution. Matt has determined that none of our planned experiments will
work with the data as bad as it is. Suspicion of a bad detector (a
million dollar piece of equipment). Morale is low. <strong>8:00</strong> - After
continued troubleshooting, Ulrich is convinced that the problem is a
bad shutter for the x-ray beam (just like a camera, you achieve a
certain image exposure by letting the beam hit the sample/detector for a
certain amount of time). <strong>8:30</strong> - Technician arrives to swap shutters.
Operation successful! Data looks good. <strong>9:00</strong> - After some test runs,
we start taking real data. Morale is high. <strong>11:15</strong> - Data coming in
with no sign of stopping. Ryan and I head home. Matt has the night
shift, we’re to relieve him at 8 the next morning. <strong>11:20</strong>- Discover
that my bike has somehow broken while sitting on the bike rack. The
collar that keeps the handlebars in the frame is loose, and the
handlebars no longer connect with the front wheel. Bike is unrideable.
<strong>11:25</strong> - Decide bike is unfixable with current tools. I have to walk
it home. <strong>11:40</strong> - Home. Attempt to fix bike. Discover none of my
wrenches are quite large enough. <strong>11:45</strong>- Food. Microwave leftovers.
Discover one of my housemates has left fresh chocolate chip cookies on
the counter for the rest of the house. Bless her a thousand times over
and eat two. <strong>Thursday 6/17, 12:15am</strong> - Set alarm for 7:15am, and
hop into bed.</p>How Long Can You Balance A (Quantum) Pencil2010-06-16T03:20:00-04:00Alemitag:thephysicsvirtuosi.com,2010-06-16:posts/how-long-can-you-balance-a-quantum-pencil.html<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/TBhmR0BI7oI/AAAAAAAAALw/tqkQP717AX4/s1600/pencil37.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/TBhmR0BI7oI/AAAAAAAAALw/tqkQP717AX4/s200/pencil37.png" /></a></p>
<p>Sorry for the delay between posts. Here in Virtuosi-land, we’ve all
begun our summer research projects and I think we’ve just become a
little bogged down in the rush that is starting a summer research
project. You feel as though you have no idea what the heck is going on,
and just try desperately to keep your head up as you hit the ground
running, but that’s a topic for another post. In this post I’d like to
address a fun physics problem.</p>
<blockquote>
<p>How long can you balance a pencil on its tip? I mean in a perfect
world, how long?</p>
</blockquote>
<p>No really. Think about it a second. Try and come up with an answer
before you proceed. What this question will become by the end of this
post is something like the following:</p>
<blockquote>
<p>Given that Quantum Mechanics exists, what is the longest time you
could conceivably balance a pencil, even in principle?</p>
</blockquote>
<p>I will walk you through my approach to answering this question. I think
it is a good problem to illustrate how to solve non-trivial physics
problems. I will try and go into some detail about how I arrived at my
solution. For most of you this will probably be quite boring, so feel
free to skip ahead to the last section for some numbers and plots.</p>
<h3>Finding an Equation of Motion</h3>
<p>The first thing we need to do is find an equation of motion to describe
our system. Let’s consider the angle theta that the pencil makes with
respect to the vertical, and treat this as a torque problem. Dealing
with rotating systems is almost identical to dealing with free particles
in Newtonian mechanics. Instead of Newton’s second law, relating forces
to acceleration <mathjax>$$ F = m \ddot x $$</mathjax> we just replace it with the
rotational analogue of force - torque, the rotational analogue of
acceleration - rotational acceleration, and the rotational analogue of
mass - the moment of inertia. <mathjax>$$ T = I \ddot \theta $$</mathjax> (I’ve taken the
usual physics notation here, dots represent time derivatives) We need to
determine the torque and moment of inertia of our pencil. At this point
I need to <em>model</em> the system. I need to break up the real world, rather
complicated idea of a pencil, and turn it into an approximation that
retains all of the important bits but enables me to actually proceed.
So, I will model a pencil as a rod, a uniform rod with a constant mass
density. In doing so, I can proceed. The moment of inertia of a rod
about its end is rather easy to calculate. If you are not familiar with
the result I recommend you try the integral yourself. <mathjax>$$ I = \int r^2
\ dm = \frac{1}{3} m l^2 $$</mathjax> where m is the total mass of my pencil
and l is its length. I will take a pencil’s mass to be 5 g and its
length to be 10 cm. Now the torque. The only force the pencil feels is
the force due to gravity, which acts from the center of mass, which for
my model of a pencil occurs at half its length. I additionally wish to
express the force in terms of the parameter I decided would be useful,
namely theta, the angle the pencil makes with the vertical. I obtain <mathjax>$$
T = r \times F = \frac{1}{2} m g l \sin \theta $$</mathjax> Great, putting the
pieces together we obtain an equation of motion for our pencil <mathjax>$$
\frac{1}{2} m g l \sin \theta = \frac{1}{3} m l^2 \ddot \theta $$</mathjax>
rearranging I get this into a nicer form <mathjax>$$ \ddot \theta -
\frac{3}{2} \frac{g}{l} \sin \theta = 0 $$</mathjax> in fact, I’ll utilize
another time honored physics trick of the trade and simplify my
expression further by making up a new symbol. Since I’ve done these
kinds of problems before I can make a rather intelligent replacement <mathjax>$$
\omega^2 = \frac{3}{2} \frac{g}{l} $$</mathjax> obtaining finally <mathjax>$$ \ddot
\theta - \omega^2 \sin \theta = 0 $$</mathjax> And we’ve done it.</p>
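If you don’t feel like doing the moment-of-inertia integral by hand, here is a quick numerical sketch that checks it (plain Python, uniform density assumed as in the model, with the 5 g and 10 cm values chosen above):

```python
# Numerically check I = integral r^2 dm = (1/3) m l^2 for a uniform rod
# pivoted about its end, using a midpoint Riemann sum over thin slices.
m = 0.005   # pencil mass in kg (5 g, as in the text)
l = 0.10    # pencil length in m (10 cm, as in the text)

N = 100000                     # number of slices
dr = l / N                     # slice thickness
dm = m / N                     # mass of each slice (uniform density)
I = sum(((i + 0.5) * dr) ** 2 * dm for i in range(N))

exact = m * l ** 2 / 3
print(I, exact)                # these agree to better than one part in a million
```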
<h3>Looking at the equation of motion</h3>
<p>Now that we’ve found the equation of motion, let’s look at it a bit.
First off, what does an equation of motion tell us? Well, it tells us
all of the physics of our system of interest. That little equation
contains all of the information about how our little model pencil can
move. (Notice that while I haven’t yet been explicit about it, in my
model of the pencil, I also don’t allow the tip to move at all, the
pencil is only able to pivot about its tip). Great. A useful thing to do
when confronting a new equation of motion is to try and find its fixed
points. I.e. try and find states in which your system can be which do
not evolve in time. How can I do that? Sounds complicated. In fact, I’ll
sort of work backwards. I want to know the values that do not evolve in
time, meaning of course that if I were to find such a solution, all of
the terms that depend on time would be zero. So, if such a solution
exists, for that solution the derivative term will vanish. So the
solutions have to be solutions to the much simpler equation <mathjax>$$ \sin
\theta = 0 $$</mathjax> whose solutions we know. In fact, let’s be a little
smart about things and only worry about theta = 0 and theta = pi.
Thinking back to our model, these suggest a pencil being straight up
(theta = 0) and straight down (theta = pi). These are the fixed points
of the equation. The second one you are familiar with. If you instead of
balancing a pencil ‘up’, try to balance it ‘down’, you know that if you
start with the pencil pointing straight down it stays that way and
doesn’t do anything interesting. But what about that first solution
theta =0? That indicates that if you could start this model pencil
exactly straight up, it would stay that way forever, and also not do
anything interesting. Oh no you cry. It seems as though we’ve already
answered the question. How long can you balance a pencil? It looks like
you could do it forever if you did it perfectly. But you and I both know
that is impossible. You can’t ever balance a pencil forever. I’ve never
done it, and tonight I’ve spent a lot of time trying. So what went wrong?</p>
<h3>When your approximations fail</h3>
<p>So what went wrong again? It seems like I’ve gotten an answer, namely in
my model you could, at least in principle, balance your pencil forever.
But you and I both know you can’t. Something is amiss. Hopefully, the
first thought that occurs to you is something along the lines of the following.</p>
<blockquote>
<p>Of course you dummy! You could <em>in principle</em> balance a pencil
forever, but in the real world, you can’t set the pencil up standing
perfectly straight. Even if it’s tilted just a little bit, it’s going to
fall. This is exactly the problem with you physicists: you don’t live
in the real world!</p>
</blockquote>
<p>Whoa, let’s not be so harsh there. I made some rather crude
approximations in order to get such a simple equation. You are allowed
to make approximations provided (1) they are still right to as many
digits as you care about, and (2) you keep in mind the approximations
you made, and think a bit about how they could go wrong. So, before we
do anything too drastic, let’s go with your gut. I agree: it seems like
if the pencil sits at any small angle, it ought to fall. Let’s double
check that our equation does that. So for the moment imagine theta being
some small number. In fact, I will use the usual notation and call it
epsilon. What does our equation say then? <mathjax>$$ \ddot \theta = \omega^2
\sin \epsilon $$</mathjax> Let’s make another approximation (I know, I know, we’ve
already run into trouble, but bear with me). If epsilon is going to be a
really small number, then we can simplify this equation even more. That
sine being in there is really bugging me. Sines are hard. So let’s fix
it. Can we say something about how sine behaves when the angle is
super small? In fact we can. Such an approach is very common in physics.</p>
<h3>A short side comment on Taylor Series</h3>
<p>Imagine a function. Any function. Picture the graph of the function.
I.e. imagine a line on a graph. No matter what function you imagine, if
you zoom in far enough, at any point that function ought to look like a
line. Seriously. Zoom out a little bit and it will look like a line plus
a parabola. Zoom out a little more and it will look like a cubic
polynomial. You can make these statements precise, and that’s the <a href="http://en.wikipedia.org/wiki/Taylor_expansion">Taylor
Expansion</a>. But the idea
isn’t much more complicated than what I’ve described. Taylor expanding
the sine, we obtain <mathjax>$$ \ddot \theta \approx \omega^2 \left(
\epsilon - \frac{\epsilon^3}{3!} + \cdots \right) $$</mathjax> So if you are
at <em>really</em> small angles, sin(x) looks just like x. What’s really small?
Anything for which x^3/3! is too small for you to care about. For me, for
the rest of the problem that will mean angles less than about 0.1
radians, for which that second term is about 0.00017 radians or 0.01
degrees, too small for me to care about.</p>
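To see concretely how good the small-angle replacement is at the 0.1 radian cutoff used here, a minimal check with nothing beyond the standard library:

```python
import math

# Compare sin(x) with its leading Taylor terms at the 0.1 rad cutoff.
x = 0.1
print(math.sin(x))        # sin(0.1)
print(x - x**3 / 6)       # first two Taylor terms
print(x - math.sin(x))    # error of the sin(x) ~ x approximation,
                          # about 0.00017 rad, i.e. roughly 0.01 degrees
```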
<h3>Coming back to the approximation bit</h3>
<p>Anywho, for really small angles, our equation of motion is approximately
<mathjax>$$ \ddot \theta = \omega^2 \theta $$</mathjax> So, notice for a second that
if theta is positive, since omega^2 has to be positive, then our
angular acceleration is going to be positive. So your intuition was
right. If your pencil ever gets to any positive angle, even the smallest
of angles, then our angular acceleration is positive and our pencil will
start to fall down. So the next question becomes: how can we capture
this bit of reality? It looks like my model has an unphysical
solution. How can I make it more real-worldly? Ah, this is the real fun
of physics. You could go in any number of directions. Perhaps you could
try and estimate how good you can actually prepare the angle of the
pencil, perhaps you could ask whether air bouncing into the pencil would
make it fall, perhaps you could wonder whether adding more realism to
the moment of inertia would make the pencil fall easier, maybe you could
wonder whether the thermal motion of the pencil would make it fall?
Maybe you could consider the pencil as an elastic object and consider it
vibrating as well as pivoting. Maybe you could model the tip as being
able to move? Maybe you could introduce the gravitational pull of the
sun? or the moon? or you? or the nearest mountain? The sky’s the limit.
So what am I going to do? Quantum Mechanics. Seriously, bear with me a bit.</p>
<h3>A little preliminaries</h3>
<p>Before proceeding any further, lets actually solve the equation of
motion we just got for the smallest angles. To remind you, the equation
I got for my model of a pencil in the limit of the smallest angles was
<mathjax>$$ \ddot \theta = \omega^2 \theta $$</mathjax> This is an equation I can
solve. It’s a very common differential equation, one that we use and
abuse in physics, so I know the solution by heart. So let’s write it down.
First, just to let you know: this sort of equation, the second derivative
of a thing being linearly proportional to the thing itself, gives you
solutions that always come in pairs; depending on the sign of the constant,
they are written as sines and cosines, or as decaying and growing exponentials.
Naturally of course in order to solve a second order differential
equation, we need to specify two initial conditions. In this case I will
call them theta_0 and dot theta_0, representing the initial position
and initial angular velocity. In this form the solution can actually
most easily be written in terms of the exponential pair associated with
sine and cosine, sinh and cosh (you can read more about them
<a href="http://en.wikipedia.org/wiki/Sinh">here</a>, they are really neat
functions). The solution is <mathjax>$$ \theta(t) = \theta_0 \cosh \omega t
+ \dot \theta_0 /\omega \sinh \omega t $$</mathjax> which as you could
probably convince yourself, exponentially grows for any positive
theta_0 or dot theta_0.</p>
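You can convince yourself numerically that this cosh/sinh combination really does solve the small-angle equation (a quick sketch using the pencil parameters above; the initial conditions here are arbitrary illustrative values, not anything physical):

```python
import math

g, l = 9.8, 0.10
omega = math.sqrt(1.5 * g / l)     # omega^2 = (3/2) g/l, as defined in the text

theta0, thetadot0 = 1e-3, 2e-3     # arbitrary small initial conditions

def theta(t):
    """The claimed solution theta(t) = theta0 cosh(wt) + (thetadot0/w) sinh(wt)."""
    return theta0 * math.cosh(omega * t) + thetadot0 / omega * math.sinh(omega * t)

# Second derivative by central finite difference, compared with omega^2 * theta
t, h = 0.2, 1e-5
second_deriv = (theta(t + h) - 2 * theta(t) + theta(t - h)) / h**2
print(second_deriv, omega**2 * theta(t))   # these two agree closely
```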
<h3>Abandoning Realism</h3>
<p>At this point of considering the question, I turned down a different
route. I don’t really care about balancing pencils on my desk. You see,
I know a curious fact. I know that in quantum mechanics there is an
uncertainty principle which says that you cannot precisely know both the
position and momentum of an object. This of course means that <em>even in
principle</em>, since our world is dominated by quantum mechanics, I could
never actually balance even my model pencil forever, because I could
never prepare it with perfect initial conditions. The <a href="http://en.wikipedia.org/wiki/Uncertainty_principle">uncertainty
principle</a> tells us
that the best possible resolution I could have in the position and
momentum of an object are set by Planck’s constant: <mathjax>$$ \Delta x \Delta
p \geq \frac{\hbar}{2} $$</mathjax> This has to be true for our pencil as well.
In fact I can translate the uncertainty principle into its angular form
<mathjax>$$ \Delta \theta \Delta J \geq \frac{\hbar }{2} $$</mathjax> where theta is
our theta and J is the angular momentum, which for our pencil we know is
<mathjax>$$ J = I \dot \theta = \frac{1}{3} m l^2 \dot \theta $$</mathjax> So the uncertainty
principle for our pencil is, <mathjax>$$ \Delta \theta \Delta \dot \theta
\geq \frac{3 \hbar }{2 m l^2 } \approx 3.2 \times 10^{-30}
\text{ Hz} $$</mathjax> So what? So, I’m going to approximate the effects the
uncertainty relation has on our pencil problem by saying that when I
start off the classical mechanical pencil, I’m going to require that my
initial conditions satisfy the uncertainty relation: <mathjax>$$ \theta_0
\dot\theta_0 = \frac{3 \hbar }{2 ml^2} $$</mathjax> which we decided is
going to mean that our pencil has to fall. The real question is: how
long will it take this pseudo-quantum-mechanical pencil to fall? In
other words, the question I am really trying to answer is:</p>
<blockquote>
<p>Assume a completely rigid pencil which you place in a vacuum and cool
down to a few millikelvin so that it is in its ground state. Roughly
how long will it take this pencil to fall?</p>
</blockquote>
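Before integrating anything, it is worth checking the numbers quoted above (a quick sketch; hbar is the standard CODATA value, mass and length are the 5 g and 10 cm chosen earlier):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 0.005                # pencil mass, kg
l = 0.10                 # pencil length, m
g = 9.8                  # m/s^2

# Angular uncertainty bound for the pencil, as derived in the text
bound = 3 * hbar / (2 * m * l**2)
print(bound)             # ~3.2e-30 Hz, matching the value quoted above

# The natural timescale of the fall, 1/omega, for comparison
omega = math.sqrt(1.5 * g / l)
print(1 / omega)         # a bit under a tenth of a second
```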
<h3>Do it to it</h3>
<p>So let’s do it. This is going to be a bit quick, mostly because it’s
getting late and I want to go to bed. But the procedure is fairly
straightforward now. I need to choose initial conditions subject to the
above constraint, figure out how long a pencil with those initial
conditions takes to get to theta = pi/2 (i.e. fall over), and then do it
over and over again for different values of the initial conditions. So,
the first thing to do is figure out how to pick initial conditions that
satisfy the constraint. I’ll do this systematically by parameterizing
the problem in terms of the ratio of the initial conditions, i.e. let’s
define <mathjax>$$ \log_{10} \frac{\theta_0}{\dot \theta_0} = R $$</mathjax> where
I’ve taken the log for convenience. Now, figuring out how long the pencil
takes to fall in principle is just numerically integrating forward the
full equation of motion <mathjax>$$ \ddot \theta = \omega^2 \sin \theta $$</mathjax>
where I need to do it numerically because the sine makes this equation
too hard to solve analytically. In order to do the numerical integration
I implemented a <a href="http://en.wikipedia.org/wiki/Runge_kutta">Runge-Kutta</a>
algorithm in python. The only problem is that I am dealing with really
small numbers and my algorithm can’t play well with those in any
reasonable amount of time. But, I can solve analytically for the
equations of motion in the small angle limit, so I actually use the
solution <mathjax>$$ \theta(t) = \theta_0 \cosh \omega t + \dot \theta_0
/ \omega \sinh \omega t $$</mathjax> to evolve the system up to an angle of 0.1
radians, and then let the nonlinear equation and Runge-Kutta algorithm
take over. The full python code for my problem is available
<a href="https://docs.google.com/Doc?docid=0AcIl0b2saix4ZGNzMnBuNjZfMTZjdnE0YzZjaw&hl=en">here</a>
(if you want to run it, remove the .txt extension, I did that so that it
would be previewable). And what do I obtain? First looking over 20
orders of magnitude in differential initial conditions:</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TBh5bCxWJeI/AAAAAAAAAL4/RASoK8dhsZk/s1600/biglook.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TBh5bCxWJeI/AAAAAAAAAL4/RASoK8dhsZk/s400/biglook.png" /></a></p>
<p>And second, zooming into the interesting region.</p>
<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TBh5eR3B6lI/AAAAAAAAAMA/MjN-rZbuG8E/s1600/closelook.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TBh5eR3B6lI/AAAAAAAAAMA/MjN-rZbuG8E/s400/closelook.png" /></a></p>
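The linked script is the real thing; as a hedged sketch (not the original code) of the hybrid scheme described above, analytic small-angle evolution handed off to a fixed-step RK4 integrator, the fall time for one choice of R might be computed like this:

```python
import math

g, l = 9.8, 0.10
omega = math.sqrt(1.5 * g / l)            # omega^2 = (3/2) g/l
hbar, m = 1.054571817e-34, 0.005
constraint = 3 * hbar / (2 * m * l**2)    # theta0 * thetadot0 must equal this

def fall_time(R):
    """Time to reach pi/2, with log10(theta0 / thetadot0) = R."""
    # Pick initial conditions on the uncertainty-limited hyperbola.
    thetadot0 = math.sqrt(constraint / 10**R)
    theta0 = 10**R * thetadot0

    # Phase 1: exact small-angle solution up to 0.1 rad. Find the handoff
    # time by bisection (simpler than inverting the closed form by hand).
    def small(t):
        return theta0 * math.cosh(omega * t) + thetadot0 / omega * math.sinh(omega * t)
    lo, hi = 0.0, 1.0
    while small(hi) < 0.1:
        hi *= 2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if small(mid) < 0.1 else (lo, mid)
    t = lo
    th = small(t)
    thd = omega * theta0 * math.sinh(omega * t) + thetadot0 * math.cosh(omega * t)

    # Phase 2: RK4 on the full nonlinear equation until theta = pi/2.
    dt = 1e-5
    def deriv(th, thd):
        return thd, omega**2 * math.sin(th)
    while th < math.pi / 2:
        k1 = deriv(th, thd)
        k2 = deriv(th + 0.5*dt*k1[0], thd + 0.5*dt*k1[1])
        k3 = deriv(th + 0.5*dt*k2[0], thd + 0.5*dt*k2[1])
        k4 = deriv(th + dt*k3[0], thd + dt*k3[1])
        th += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        thd += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return t

print(fall_time(0.0))   # a few seconds, in line with the plots above
```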
<p>So, what is the best time you could balance a quantum mechanical pencil,
i.e. what is the absolute longest time you could hope to balance a
pencil in our universe? About 3.5 seconds. Seriously. Think about that
for a second. Usually you hear about the uncertainty principle, and it
seems like a neat parlor trick, but something that couldn’t influence
your day-to-day life, and here is a remarkable problem where even in
the best case, the uncertainty principle puts a hard limit on
championship pencil balancing that seems tantalizingly close. And there
you have it: a graduate student working through a somewhat nontrivial
problem. I probably went into way too much detail with the basics, but
we are still trying to feel out who our audience is. Please leave
comments and let me know whether I could have left things out or
should have gone into more detail in parts.</p>
<h3><span class="caps">EDIT</span></h3>
<p>As per request, here is how the max fall time scales with the length of
the pencil assuming a pencil with uniform density.</p>
<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/TBqNzN0zuWI/AAAAAAAAAMU/ZDEGu8S2rik/s1600/falltimevslength.png"><img alt="image" src="http://4.bp.blogspot.com/_YOjDhtygcuA/TBqNzN0zuWI/AAAAAAAAAMU/ZDEGu8S2rik/s400/falltimevslength.png" /></a></p>
<p>Plotted on a log-log plot, that is a pretty darn good line. The power
law dependence is <mathjax>$$ t \sim l^{0.514} $$</mathjax> Neat. Strangely enough, if I
trust my numbers here, the longest you could hope to balance a ‘pencil’
1 km long would be about 6 minutes. That’s a very strange mental picture.</p>My Pepsi* Challenge2010-06-08T01:30:00-04:00Corkytag:thephysicsvirtuosi.com,2010-06-08:posts/my-pepsi-challenge.html<p><a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAz2-SbRtAI/AAAAAAAAAFY/eaUWEFQHTDs/s1600/bottles.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAz2-SbRtAI/AAAAAAAAAFY/eaUWEFQHTDs/s200/bottles.png" /></a>The
basement of the Physics building has a Pepsi machine. Over the course of
two semesters Alemi and I have deposited roughly the equivalent of the
<span class="caps">GDP</span> of, say, Monaco to this very same Pepsi machine (see left, with most
of Landau and Lifshitz to scale). It just so happens that Pepsi is now
having a contest, called “Caps for Caps,” in which it is possible to win
a baseball hat. There are several nice things about this contest.
Firstly, I drink a lot of soda. Secondly, I like baseball hats. So far
so good. Lastly (and most important for this post), is that it is fairly
straightforward to calculate the statistics of winning (or at least
simulate them).
So how does the game work? Well, on each soda cap the name of a Major
League Baseball team is printed. All thirty teams are (supposedly)
printed with the same frequency, so the odds of getting any particular
team are 1/30. You can win a hat by collecting three caps with the same
team printed on them. So if I had five caps, the following would be a
win:
Phillies Cubs Tigers Phillies Phillies
whereas the following would not win me anything:
Yankees Rays Blue Jays Orioles Royals
and I would also lose if I had:
Mets Nationals Braves Braves Mets.
In addition, one in eight caps gets you 15% off some $50-or-more
purchase at MLB.com or something like that. For simplicity, I
ignored these 15%-off caps, but all they will do is push back the number
of caps you need by one for every eight purchased. It should not be too
difficult to factor these ones in, but I was lazy and I already made all
these nice graphs, so...
The first thing I tried to do was just simulate the contest. I wrote a
little Python script that randomly generates a team for each cap and
counts my wins over a given number of caps purchased. Running this about
100,000 times for every number of caps between 1 and 61 (with 61
guaranteed to win) and averaging over the number of wins, I could
determine both the expected number of wins for each number of caps
purchased and the probability of winning at least once. The results are
shown below.
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TAwPaC7VV-I/AAAAAAAAAEA/ei6DPQ09bmI/s1600/CapsforCaps(sim1).png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TAwPaC7VV-I/AAAAAAAAAEA/ei6DPQ09bmI/s400/CapsforCaps(sim1).png" /></a>
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAxwinoE0FI/AAAAAAAAAFA/VTOnToOXcmM/s1600/WinsvsSodas.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAxwinoE0FI/AAAAAAAAAFA/VTOnToOXcmM/s400/WinsvsSodas.png" /></a>
But we can also solve this game exactly. This turned out to take longer
for me (I’m bad at probability) than just simulating the darn thing. I
had initially included my derivation in the post but it was long,
muddled, and none too illuminating, so I took it out. But I super-duper
promise I did it and can post it if you really really want to know.
Otherwise, I have just plotted the predicted results below (as a red
curve) along with the simulated data (blue dots). Turns out they agree
pretty well!
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/TAxxKz3m-MI/AAAAAAAAAFI/S6YpmCcO1Fs/s1600/CapsforCapsSimandPred.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/TAxxKz3m-MI/AAAAAAAAAFI/S6YpmCcO1Fs/s400/CapsforCapsSimandPred.png" /></a>
Just eyeballing the graph, we see that after 18 or 19 sodas the chance
of winning is about a half. Beyond about 25 or so it appears to be
almost 90% that you’ll win at least once. In reality, these percentages
would occur about 2 or 3 caps later to compensate for the 15%-off
caps. So now that we have some numbers and can trust our model a
bit, let’s see how worth it this contest is for us.
First, we can ask: is this a good way to get a hat cheaper than retail
value (about $15)? To quantify “worth it” I have chosen to find the
value of winnings (price of hat times expected number of wins) minus the
cost of caps (how much I spend on soda). I am fairly embarrassed to say
that the cost of each soda is $1.75. See plot below.
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAxzBhvw-4I/AAAAAAAAAFQ/RiKEfFBDqig/s1600/ValueDifferential.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAxzBhvw-4I/AAAAAAAAAFQ/RiKEfFBDqig/s400/ValueDifferential.png" /></a>
From this plot, we see that it doesn’t become “worth it” (that is, the
value of winnings is greater than the cost of sodas) until about 40 sodas
purchased. That’s a lot! In fact, we see that just when I start feeling
pretty confident I’ll win something (around 20-25 sodas), I’m right in a
big valley of “totally not worth it.” So if I just want a baseball hat,
I’m better off forking over the $15.
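My script isn’t reproduced in the post, but a minimal sketch of the simulation described above (30 equally likely teams, a hat for every three matching caps, the 15%-off pieces ignored as before) might look like:

```python
import random
from collections import Counter

def wins(n_caps, n_teams=30, needed=3):
    """Number of hats won from n_caps caps: one win per 3 matching caps."""
    counts = Counter(random.randrange(n_teams) for _ in range(n_caps))
    return sum(c // needed for c in counts.values())

def p_win(n_caps, trials=20000):
    """Estimated probability of at least one win with n_caps caps."""
    return sum(wins(n_caps) > 0 for _ in range(trials)) / trials

# By the pigeonhole principle, 61 caps guarantee a win (2 * 30 + 1).
print(p_win(61))   # 1.0
print(p_win(19))   # roughly a half, as eyeballed from the graph above
```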
Although, one does see from this plot that once I get above about 40 or
so sodas, it becomes much more cost effective to just keep buying sodas
and winning hats. However, Pepsi tries to stifle this a bit in the
<a href="http://www.pepsiusa.com/capsforcaps/">rules</a>, stating that “Limit one
(1) Official <span class="caps">MLB</span>® baseball cap per name, address or household.” Unless I
either make a lot of friends real soon or develop a creative definition
of my address, it looks like I’m out of luck.
But what if I want a hat but I don’t want to actually buy soda like a
chump? This contest, like many others, needs to have a “No Purchase
Necessary” clause for some legal reason or another (so they aren’t
lotteries or gambling or something). I had assumed they (the nameless
overlords at Pepsi) would limit the number of caps possible from just
mailing in, but it doesn’t seem that way. From the Official rules,
Chapter Two, verses nine to twenty-one:
“Limit one (1) free game piece per request, per stamped outer envelope.”
That sounds to me like you could get as many as you want, as long as you
use different envelopes. So we can redo our cost analysis with the cost
of getting one cap as the cost of a stamp. Putting the value of a cap
now at the cost of a stamp (44 cents), we get the following:
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/TAz3jfb-KcI/AAAAAAAAAFg/erU1nZU7ZCg/s1600/mailincaps.png"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/TAz3jfb-KcI/AAAAAAAAAFg/erU1nZU7ZCg/s400/mailincaps.png" /></a>
Zooming in:
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAz3tj6kg8I/AAAAAAAAAFo/MDHgjjjS2J0/s1600/mailinZoom.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/TAz3tj6kg8I/AAAAAAAAAFo/MDHgjjjS2J0/s400/mailinZoom.png" /></a>
Hey, that seems worth it! And it should, since from above we saw that
the probability of winning after about 30 caps was well above 90%.
The cost of getting 30 caps this way is the cost of 30 stamps, which is
less than the $15 that the hat is (supposedly) worth. So if I really
wanted a hat from this contest and didn’t feel like drinking all my
money away, I’d just send away for the mail-in pieces.
I may try this method, since it seems to be allowed under the rules.
Although, even a strict constructionist reading of the contest rules
pretty much allows Pepsi to do whatever the heck it wants. Either way,
I’ll be sure to update to see how well my model holds up!</p>
<hr />
<p>*<span class="caps">NOTE</span>: In no way is The Virtuosi blog affiliated in any way with Pepsi.
We may occasionally purchase Pepsi products (like sweet tasting Wild
Cherry Pepsi!), but we don’t do it because we think it makes us look
“cool” or “hip” or “rad” (we <span class="caps">KNOW</span> it does). In fact, drinking too much
soda can have certain adverse health effects (like making you stronger,
faster, and in general more attractive). So if you want to have a Pepsi
product (like sweet tasting Wild Cherry Pepsi!) every now and then
(literally, <span class="caps">EVERY</span> <span class="caps">INSTANT</span>), go ahead. But drinking too many Pepsi
products (like sweet tasting Wild Cherry Pepsi!) could make you sick
(with awesome-itis).</p>Memorial Day Distractions2010-05-31T13:12:00-04:00Alemitag:thephysicsvirtuosi.com,2010-05-31:posts/memorial-day-distractions.html<p><a href="http://1.bp.blogspot.com/_YOjDhtygcuA/TAPt33D8YYI/AAAAAAAAALQ/oUokdNNFupw/s1600/manufactoria.png"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/TAPt33D8YYI/AAAAAAAAALQ/oUokdNNFupw/s200/manufactoria.png" /></a></p>
<p>Summer is upon us. That means summer research, and online games. In
order to help you through this three day weekend and beyond, I thought
I’d share some of the more physics inspired games I’ve been playing
lately to pass the time. I really enjoy physics based games. When done
right, I think they can not only be fun and engaging but also have the
opportunity to teach you something.</p>
<h3>Colonel Blotto</h3>
<p>First up is a web Colonel Blotto tournament that I coded up, which you
can find
<a href="http://pages.physics.cornell.edu/~aalemi/blotto/index.php">here</a>.
<a href="http://en.wikipedia.org/wiki/Colonel_Blotto">Colonel Blotto</a> is a silly
little game theoretic game where you try and assign one hundred soldiers
between 10 different fields. Opponents do the same. Neither side knows
what the other side is going to do. The game turns out to have some very
interesting dynamics. Apparently for more than 12 soldiers, the game
does not have a deterministic optimal strategy, so your guess is as
good as mine for how to beat the other players in the pool. It’s worth
checking out. Feel free to try your hand at the pool. Once the
participation has settled down, I want to try and analyze the strategies
that people employ and see if I can do any interesting statistical
physics with it. I spent a good chunk of this weekend working out the
bugs, so it should run smoothly now, but be sure to let me know if you
discover any errors.</p>
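<p>To make the setup concrete, here is a tiny sketch of how one Blotto matchup might be scored, counting fields won and splitting ties evenly. The scoring convention and the random-allocation helper are made up for illustration; I’m not claiming they match the tournament’s exact rules:</p>

```python
import random

def score(a, b):
    """Fields won by allocation `a` against `b`, counting ties as half."""
    wins = sum(1 for x, y in zip(a, b) if x > y)
    ties = sum(1 for x, y in zip(a, b) if x == y)
    return wins + ties / 2

def random_allocation(soldiers=100, fields=10):
    """A uniformly random split of `soldiers` across `fields` (stars and bars)."""
    cuts = sorted(random.sample(range(soldiers + fields - 1), fields - 1))
    parts, prev = [], -1
    for c in cuts + [soldiers + fields - 1]:
        parts.append(c - prev - 1)
        prev = c
    return parts

uniform = [10] * 10          # spread evenly
rival = random_allocation()  # a random opponent
print(score(uniform, rival), score(rival, uniform))
```

<p>Playing the uniform allocation against a batch of random ones is a decent first experiment before trying anything cleverer.</p>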
<h3>Manufactoria</h3>
<p>This game has been sucking up a lot of my evenings lately.
<a href="http://pleasingfungus.com/">Manufactoria</a> is one of the most pleasing
puzzle games I’ve played in a long time. It pretends to be about
selecting robots that meet specification, but really it’s about
programming finite state machines. I’ve really enjoyed this game a lot.
It has a real engineering type bent, but unlike a lot of games that
require critical thinking to solve a puzzle, once you know how to solve
it, it isn’t tedious to do so. A great game, and I highly recommend it.
One of the best games I’ve played in a while, although maybe I’m so into
it because I’m reading through <a href="http://books.google.com/books?id=-olQAAAAMAAJ&q=feynman+lectures+on+computation&dq=feynman+lectures+on+computation&ei=qOoDTIntCqCszQSc-cyODA&cd=1">The Feynman Lectures on
Computation</a>
right now. He gives as exercises very similar problems to the ones in
the game. It’s also in regard to these types of problems specifically
that he gives <a href="http://thevirtuosi.blogspot.com/2010/04/some-of-best-advice-youll-ever-receive.html">this
quote</a>.
A few tips. (1) You can use asdf or the arrow keys to rotate the pieces.
(2) Space bar switches the chirality of the gates (swaps red for blue).
(3) In order to round corners, you just make the conveyors meet at a T.
(4) You don’t have to enter the gates on the open side. (5) During
tests, move the slider in the bottom left to the right to speed up time.
If you manage to solve all of the puzzles, three bonus ones will appear
at the bottom.</p>
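<p>Since the puzzles really are finite state machines in disguise, here is what one looks like stripped of the conveyor belts. The states, alphabet, and acceptance rule below are a made-up toy example (a machine accepting tapes of R/B marks ending in “BB”), not anything taken from the game itself:</p>

```python
def run_dfa(transitions, start, accepting, tape):
    """Run a deterministic finite automaton over a tape of symbols."""
    state = start
    for symbol in tape:
        state = transitions.get((state, symbol))
        if state is None:  # no transition defined: reject
            return False
    return state in accepting

# Toy machine: accept tapes of R/B marks that end in "BB".
# The state tracks how many trailing B's we've seen (capped at 2).
transitions = {
    ("s0", "R"): "s0", ("s0", "B"): "s1",
    ("s1", "R"): "s0", ("s1", "B"): "s2",
    ("s2", "R"): "s0", ("s2", "B"): "s2",
}
print(run_dfa(transitions, "s0", {"s2"}, "RBRBB"))  # prints True
```

<p>Laying out that transition table with conveyors and gates is, more or less, what the game has you doing.</p>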
<h3>Fantastic Contraption</h3>
<p>This one is old. It’s been on the internet for a while now, but if you
haven’t seen <a href="http://www.kongregate.com/games/inXile_Ent/fantastic-contraption?acomplete=fantastic">Fantastic
Contraption</a>
you should check it out. It’s a very free-form game. You need to get the
pink block into the goal, but in order to do so, you can build any
contraption you desire out of wheels and sticks. It’s a great time. Most
impressive of course is the level that some kids have taken it to. Once
you’ve played with the game a bit, be sure to search <a href="http://www.youtube.com/results?search_query=fantastic+contraption&page=&utm_source=opensearch">youtube for
fantastic
contraption</a>,
and marvel at the engineering insight some of these kids show. I like
this game a lot, but it can get a bit tedious to get your design to work
out just right.</p>
<h3>Splitter 2</h3>
<p>This one is neat. As a kid I read the <a href="http://en.wikipedia.org/wiki/His_Dark_Materials">His Dark
Materials</a> trilogy
(which started with <a href="http://en.wikipedia.org/wiki/Golden_Compass">The Golden
Compass</a>). Anyway the
second book in the series is <a href="http://en.wikipedia.org/wiki/The_Subtle_Knife">The Subtle
Knife</a> which stars a
knife, the subtle knife in fact, which can cut through anything like
butter, and I mean anything, including the fabric between worlds. This
led to many day dreams as a kid, and I’ve spent a lot of time thinking
about what a knife like that could actually do. Anyway, <a href="http://www.kongregate.com/games/CasualCollective/splitter-2">Splitter
2</a> is a
puzzle game that works along those lines. You have to get a ball to the
goal, but do so by cutting through pieces of wood on the stage. A neat
game. Worth a look.</p>
<h3>Phun</h3>
<p>While I’m here, I can’t help but mention
<a href="http://www.phunland.com/wiki/Home">Phun</a>. While not strictly a game,
Phun is a physics sandbox similar in style to Fantastic Contraption, but
with a whole lot more features. This one is loads of fun to play with,
and I think could really serve as a learning tool for some classes.
Worth the download. If you know of any other physics type games, drop
them in the comments. I’m always on the look out for good ones.</p>Cryopreservation2010-05-30T13:42:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-30:posts/cryopreservation.html<p><a href="http://4.bp.blogspot.com/_SYZpxZOlcb0/TAKGwD0rgVI/AAAAAAAAABk/VqmdMGXzMCI/s1600/frozen.jpg"><img alt="image" src="http://4.bp.blogspot.com/_SYZpxZOlcb0/TAKGwD0rgVI/AAAAAAAAABk/VqmdMGXzMCI/s200/frozen.jpg" /></a>
I’d like to start a short series of posts on what I’m doing this summer.
Like most of the first-year physics graduate students, I’ve found a
<a href="http://pages.physics.cornell.edu/~rthorne/">research group</a> at Cornell
for the summer, and if everything goes well, I’ll continue working with
them after the summer is done. This is good for three reasons. First,
I’m getting paid, so I can do things like pay rent. I’m not all about
the money, but having a place to live is a big deal to me. Second, I’m
getting a chance to explore a new area of research, with limited
expectation of commitment on my part. Third, if everything works out,
I’ve found the group I’ll be working with for the next 4-6 years.
I’d like to spend a few posts here on The Virtuosi discussing first the
physics I’m considering, and then what I actually do. Today I’m going to
talk about the most exciting sounding piece of my work,
cryopreservation. <strong>What do we mean when we talk about
cryopreservation?</strong> For many of us our first thought may be Han Solo
frozen in carbonite. This is the basic idea. Cryopreservation is the
freezing and thawing of biological samples in such a way that they retain
biological viability. At least, that’s the idea. At the moment,
cryopreservation is the freezing and thawing of biological samples, and
hoping that some of them remain biologically viable. Even though the
methods have been in development since the 1950s, successes are limited
even for such simple objects as spermatozoa or oocytes (sperm <span class="amp">&</span> eggs),
let alone larger and much more complicated objects like Harrison Ford.
<strong>Why would we want to cryopreserve something? (and did I just invent a
verb?)</strong> The idea is fairly simple. If we cool down biological samples,
biological functions slow down or completely stop. The hope is that if
we get samples sufficiently cold, we can suspend biological (and all
other) functions indefinitely. This would allow indefinite
preservation of many things: eggs and sperm for reproductive purposes
(both human and other animals), plant seeds, blood (for transfusions),
vaccines, drugs (no more pesky shelf life for pharmaceuticals until you
unfreeze them). The perfect frozen strawberries. The list of
possibilities is huge. And according to my dictionary, cryopreserve is a
well established verb. <strong>Why is it so hard?</strong> It’s easy to stick
something in the freezer. That seems to work on my food. However, it
doesn’t work well on biological samples. If we just stick them in the
freezer many of the cells are damaged irreparably. We’d like to know
why. To answer this we’re going to need to take a step down to the
cellular level. I’m not a biologist, so I’m not going to attempt to
explain cell structure to you. For our current purposes it is sufficient
to know that cells have an inside and an outside (then again, doesn’t most
everything?), separated by a semi-permeable membrane, and that both
inside and outside of the cells there is water. We need to take a brief
detour now into the realm of water. Water is a fantastically amazing
substance, and I’m not just saying that because we (and most/all life on
earth) would all die without it. Water is fascinating from a pure
physics perspective. Now, when I refer to water, I’m going to use it to
mean <span class="caps">H2O</span>, in any phase. Colloquially, we usually mean liquid water when
we say water, crystalline frozen water when we say ice, and water vapor
when we say steam. I’m going to call all of these things
‘water’ and I will specify what phase I mean by saying ‘frozen, liquid,
vapor’. I will refer to frozen <em>crystalline</em> water as <em>ice</em>. So what’s
so fascinating about water? Well, first off, it is one of the only
substances that expands when it crystallizes (forms ice). Most objects
become more dense when they transform from a liquid to a solid. Water
doesn’t. This is why ice floats. There are many other fascinating
aspects of water, for example, that liquid water is most dense at 4 C.
This is largely why fish survive the winter. Ice forms
on top of lakes/rivers, and the 4C water stays at the bottom of the lake
because it is the densest. But for our purposes, what we really need to
know is that water expands on forming ice. When we freeze our cells,
this expansion of water can cause massive problems. Everything in the
cell besides the water contracts on cooling. The water crystallizes and
expands (by about 9%). If you’ve ever frozen something liquid in a very
full container and seen it split the container open, the idea is very
similar. The intracellular water can crystallize and split open the cell
membrane. Once you’ve split the cell membrane, there’s no way (that I’ve
seen) to successfully revive that cell. It turns out that ice formation
outside the cell isn’t such a big deal, it usually just pushes the cells
around. The water in the cells is what is most likely to cause damage to
our biological tissues when we freeze them. <a href="http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WD5-4KXF5HB-1&_user=10&_coverDate=08%2F31%2F1968&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=41501096aa949fd2c441dc760bc6513a">Early
studies</a>
showed survival rates between 10^-6% and ~50%. Those are not good
odds. <strong>How do we prevent this damage?</strong> There are two methods currently
in practice to prevent damage on cooling. The first of these is slow
cooling. It was discovered that if you cool your sample at a rate of ~1
C/min, very little to no intracellular ice forms, though intercellular
ice still forms. The water in and between cells is not pure water, but
contains all kinds of things (proteins, salts, etc). This means that it
starts to crystallize not at 0 C, but somewhere lower. This is the
reason we salt sidewalks in the winter (at least, in places where we get
snow). The salt lowers the freezing point of water, causing the ice/snow
to melt even if the temperature is below 0 C. Of course, if the
temperature gets low enough it will still freeze. So our intra and
intercellular water crystallizes below 0 C. It turns out that the
intercellular water starts to crystallize before the intracellular
water. This means we have ice formation outside the cell while the water
inside the cell is still liquid. Earlier I told you that we needed to
know that the membrane of a cell was semi-permeable. This is why. Some
water diffuses through the membrane. At room temperature, the liquid
water outside the cells diffuses into the cell at the same rate that the
liquid water inside the cell diffuses out of the cell. Now, if we start
crystallizing the intercellular water, there will be less liquid water
moving into the cell. But since the intracellular water is still liquid
it will still be diffusing out at the same rate. This results in a
partial dehydration of the cell, more liquid water moving out than in.
This is a basic example of osmotic pressure. With less liquid water in
the cell, when it does crystallize, the ice crystals that form are
smaller, and less likely to damage the cell membrane. This method is not
perfect, because extreme dehydration of cells will also damage them, due
to cell shrinkage and extremely high solute concentration (this
phenomenon is called osmotic shock). The second method that is considered
is ultrafast cooling. Here we must take another diversion into physics
land. You may have noticed my emphasis on ice as crystalline water. It
turns out that when you freeze water, forming a crystal is not the only
option available to it (and we’re not even going to discuss the variety
of crystals it can form). It can also form a glass. Now, if you’re not a
physicist, you’re probably not familiar with glass as a type of
substance, but rather as the thing you put in your windows and drink out
of. A glass, to a physicist, is, essentially, a solid without long range
order. The best way to think about it would be to imagine you took
liquid water and stopped time for the water (<em>not</em> freezing it in the
conventional sense). That’s a glass. It looks like liquid water, has the
same density as liquid water, but is a solid. It’s a very strange
concept, and I’m going to save a more thorough discussion for a later
post, because that takes me into a whole different research area I’m
working on this summer. For our purposes, it is sufficient to know that
we can reach a solid state of water at cold temperatures that has the
same density as liquid water. Based on our discussion above, the
advantages should be immediately obvious. Without the expansion of ice
to worry about, the solidification of water is nowhere near as damaging
to our cells. By vitrifying (turning to a glass) our liquid water, we
should see much less damage to the cells. To achieve vitrification, you
have to cool down the cells much faster (depending on the substance
>1000 C/min). How do we do this? The conventional method is to dunk the
cells directly into liquid nitrogen, or put them into a very cold stream
of gas (around liquid nitrogen temperatures). I should note that I’m
skipping over a very large field of putting cryoprotectants into cells
and freezing them. However, the effect of cryoprotectants is to prevent
ice crystallization, and so this is just a method of vitrification. It
just makes it easier to achieve by lowering the rate you have to cool
the sample at. <strong>So that’s it?</strong> It may seem like that’s the solution.
Cooling fast enough to vitrify the liquid water, and we’ll get good
cryopreservation of our cells. The rest is just technology development.
Unfortunately, it’s not that easy. It turns out that warming is a harder
problem to solve than cooling. When we warm up glassy water, it turns
back into supercooled liquid water (liquid water way below the freezing
point of water). I’ll discuss this in more detail in my next post on my
research. So now we’ve got really cold water, and it will start forming
ice, unless we warm up fast enough to prevent that, very similar to how
we had to cool down very quickly to prevent ice formation. The problem
is, experiments suggest that ‘fast enough’ is at least 10-100 times
faster than cooling down. Not only that, but on cooling down, we’re
really trying to get very very cold. On warming up, we don’t want to get
much over room temperature, otherwise we’ll fry the cells. So we have to
warm really really quickly, in an extremely well controlled manner. So
far, this has proved very challenging to do. A recent study with mouse
oocytes achieved a ~90% success rate with their freezing/thawing
cycles, and that is the best that I’ve seen. 90% is pretty good, but
we’d like to do better. <strong>So what’s your research this summer?</strong> It
turns out that my research isn’t so much into cryopreservation. As I
will talk about next post, it is more into the physics of
crystallization and glass formation of aqueous substances, where there
is a lot of fundamental physics still to be understood. However, one of
the most exciting eventual applications is cryopreservation. We’ve
already developed a fast cooling system, and we’re working on a fast
warming system that will allow us to study crystallization and glass
formation, and hopefully extend the techniques we develop into the field
of cryopreservation. <strong>That’s way cool!</strong> Yes. Yes it is. Even if that
is a horrible pun. <em>Note: The question and answer format was inspired by
Chad over at <a href="http://scienceblogs.com/principles/">Uncertain Principles</a>
and his research blogging.</em></p>I was born on Wednesday2010-05-26T03:17:00-04:00Alemitag:thephysicsvirtuosi.com,2010-05-26:posts/i-was-born-on-wednesday.html<p>Probability is a tricky thing. There are a lot of nonsensical answers to
be had. I just read <a href="http://www.newscientist.com/article/dn18950-magic-numbers-a-meeting-of-mathemagical-tricksters.html?full=true">an
article</a>
about the recent <a href="http://www.g4g4.com/">Gathering for Gardner</a> meeting
that took place. Gathering for Gardner is a unique meeting for
mathematicians, magicians and puzzle makers where they get together and
talk about interesting things. The meetings were inspired by <a href="http://en.wikipedia.org/wiki/Martin_Gardner">Martin
Gardner</a>, one of the
awesomest dudes of our time, who unfortunately just passed away. The
question put to the floor was the following:</p>
<blockquote>
<p><span class="dquo">“</span>I have two children. One is a boy born on a Tuesday. What is the
probability I have two boys?”</p>
</blockquote>
<p>Think about that for a moment. Not too hard though. The answer turns out
to be surprising. Upon reading the question, I thought about it for a
long time and managed to confuse myself entirely. Thinking I had gone
crazy, I wrote a little python script to test the riddle, which only
left me more convinced I had gone insane. I’ve spent most of the night
thinking about it, and after making it halfway to crazy, I’ve come
around and am momentarily convinced the puzzle makes perfect sense. I’m
going to attempt to convince you it makes perfect sense, but I plan on
doing it in steps so as to reduce the bewilderment.</p>
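<p>For the record, the little sanity-check script can be this simple (assuming each child’s sex and birth weekday are uniform and independent; the function name and trial count are my own). I’ll hold off on quoting its output so as not to spoil the punchline:</p>

```python
import random

def family():
    """One (sex, weekday) pair per child, uniform and independent."""
    return [(random.choice("BG"), random.randrange(7)) for _ in range(2)]

def conditional_probability(trials=200_000, tuesday=2):
    """Estimate P(two boys | at least one boy born on weekday `tuesday`)."""
    matches = both_boys = 0
    for _ in range(trials):
        kids = family()
        if any(sex == "B" and day == tuesday for sex, day in kids):
            matches += 1
            both_boys += all(sex == "B" for sex, _ in kids)
    return both_boys / matches

print(conditional_probability())
```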
<h3>Playing Cards</h3>
<p>Forget the question. Let’s play a game of cards. You shuffle a deck and
deal me two cards:</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a></p>
<p>I accidentally flip one of them over.</p>
<p><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/S_zCuZ4KiQI/AAAAAAAAALA/cNa3n-rjuTg/s1600/23.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/S_zCuZ4KiQI/AAAAAAAAALA/cNa3n-rjuTg/s320/23.png" /></a><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a></p>
<p>What’s the probability that my other card is red? Well, that one’s easy:
it’s about half. Sure, it’s not exactly a half. Since the deck is
finite and the draws are done without replacement, knowing that the
card showing is a red one means that there are only 52/2 - 1 = 26 - 1 = 25
red cards left out of a deck of 52 - 1 = 51 cards, giving a probability of 49%.
But it’s basically a half. Let’s do it over; deal me two cards:</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a></p>
<p>Darn, I flipped one of them over again:</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a><a href="http://2.bp.blogspot.com/_YOjDhtygcuA/S_zDlNQLJyI/AAAAAAAAALI/WPBCX9i-Pk0/s1600/5.png"><img alt="image" src="http://2.bp.blogspot.com/_YOjDhtygcuA/S_zDlNQLJyI/AAAAAAAAALI/WPBCX9i-Pk0/s320/5.png" /></a></p>
<p>What’s the probability that my other card is red? About a half still.
(Sure, this time it’s really 26/51 = 51%.) Nothing mysterious going on.
Do over again. Deal me two cards:</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s1600/b2fv.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_zCNBPs3KI/AAAAAAAAAK4/WdtzaW_A6pk/s320/b2fv.png" /></a></p>
<p>This time I’ll ask a slightly trickier question. What’s the probability
that both my cards are red? Ah, well, it’s about 1/2 * 1/2, or about 1/4 =
25%. (The real answer is 24.5%.) Alright, smarty pants: what’s the
probability that I had a red card and a black one? Well, that ought to
be about 1/2 (real answer 51%). All in all, I could have a red card,
then a black one (<span class="caps">RB</span>), or a black one, then a red one (<span class="caps">BR</span>), or a red one
then a red one (<span class="caps">RR</span>), or a black one then a black one (<span class="caps">BB</span>). 4 distinct
possibilities, each of which is equally likely, so the above two
answers make complete sense. There is only one way in four to get both
red cards, but two ways out of four to have both a red and a black. So
far so good. Let’s ask a different question. Now I’m going to get a bit
obtuse. You deal me two cards. Now you ask me:</p>
<blockquote>
<p>Hey Alemi, do you have a red card?</p>
</blockquote>
<p>Meaning: do I have at least one red card? I respond, “Yes.” Now, go with
your gut. You know I have at least one red card. What do you reckon the
color of the other one is? Probably black, you say? You’d be correct.
Looking at our breakdown above, I could have gotten <span class="caps">RR</span>, <span class="caps">RB</span>, <span class="caps">BR</span>, or <span class="caps">BB</span> as
my cards dealt. Each was equally likely, but now you know something
else. You know that I have at least one red card, so we only have three
possibilities left: I either have <span class="caps">RR</span>, <span class="caps">RB</span>, or <span class="caps">BR</span>, each of which was
equally likely. So what’s the probability that my other card is black?
About 2/3, or 67%. (Real answer: 67.5%.) Alright, same situation. You deal
me two cards, and I reveal that I have at least one red one. What’s the
probability that my other card is red? Well, obviously 1/3, or 33%
(actually 32.5%), since this is the opposite question to the one directly
above and follows from the same reasoning. Fine. No problems. All of
this makes sense.</p>
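<p>If your gut still protests, a quick Monte Carlo in the spirit of the deals above settles it. The function name and trial count here are my own; this is just a sanity-check sketch:</p>

```python
import random

# A 52-card deck: 26 red ('R') and 26 black ('B').
deck = ["R"] * 26 + ["B"] * 26

def other_card_black_given_one_red(trials=200_000):
    """Estimate P(other card is black | at least one card is red)."""
    matches = other_black = 0
    for _ in range(trials):
        a, b = random.sample(deck, 2)  # deal two cards without replacement
        if "R" in (a, b):              # "do you have a red card?" -- yes
            matches += 1
            # the "other" card is black exactly when the deal was mixed
            other_black += (a != b)
    return other_black / matches

print(other_card_black_given_one_red())  # ≈ 0.675
```

<p>Run it a few times: the estimate hovers right around the exact 67.5% worked out above.</p>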
<h3>Offspring</h3>
<p>Instead of playing cards, let’s return to offspring. Let’s first look at a
classic probability riddle:</p>
<blockquote>
<p>I have exactly two children. At least one of them is a boy. What is
the probability that the other one is a boy?</p>
</blockquote>
<p>If I were to give you this question straightaway, most people would have
said the probability would be a 1/2. Their reasoning being that boys and
girls are equally likely. But having just led you through the playing
cards, hopefully now it makes some sense how the true answer to this
question is 1/3 or 33%. Originally my family could have been <span class="caps">BB</span>, <span class="caps">BG</span>, <span class="caps">GB</span>, or
<span class="caps">GG</span>. Each of which was equally likely. Telling you I have at least one
boy means now we are dealing with only the situations <span class="caps">BB</span>, <span class="caps">BG</span> or <span class="caps">GB</span>,
still all equally likely, making the probability 1/3. Fine. Now, let’s
reexamine the true question at hand:</p>
<blockquote>
<p><span class="dquo">“</span>I have two children. One is a boy born on a Tuesday. What is the
probability I have two boys?”</p>
</blockquote>
<p>Is it a half? Is it 1/3? What do you reckon? At first thought, it seems
like the Tuesday bit shouldn’t enter into it at all, but on second
thought, I’ve just revealed a lot more information than I did in the
previous question. I’ve told you something specific about one of my
children. This is analogous to when I accidentally flipped over one of my
cards, revealing not only its color but its rank as well. Hopefully it
makes sense that the probability ought to be much closer to a half than
to a third. In fact the answer is 13/27 = 48.1%. With a little thought,
you should be able to come up with that number yourself. Otherwise, see
<a href="http://www.newscientist.com/article/dn18950-magic-numbers-a-meeting-of-mathemagical-tricksters.html?full=true">the
article</a>
I mentioned at the beginning of this post. They have a nice breakdown at
the bottom. Hopefully, if you’ve read this far, you
should be wondering why this question was so mysterious to begin with,
and if that’s the case, I did my job. If you think the question is
obvious, and think it would have been obvious even without the card
analogy, try and ask the boy-born-on-a-tuesday question by itself to one
of your friends. I guarantee they’ll be bewildered. It’s a fun problem,
and one that illustrates just how strange and counterintuitive
probability can be. If you want some other mind twisting mathematical
puzzles, try your hand at the <a href="http://en.wikipedia.org/wiki/Two_envelopes_problem">Two envelopes
problem</a>, or
<a href="http://en.wikipedia.org/wiki/Bertrand's_box_paradox">Bertrand’s box
paradox</a>, or <a href="http://en.wikipedia.org/wiki/Birthday_problem">the
Birthday problem</a>, or
everyone’s favorite <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">the Monty Hall
problem</a>. <a href="http://thevirtuosi.blogspot.com/2010/04/some-of-best-advice-youll-ever-receive.html">Remember
though</a>,
try your hand at the problem before reading the answer. Super fun bonus
homework question: let’s do cards again. You deal me two cards, and I
reveal that I have a red heart. What’s the probability that my other card
is red?</p>Why is the Grass Green?2010-05-25T23:29:00-04:00Corkytag:thephysicsvirtuosi.com,2010-05-25:posts/why-is-the-grass-green-.html<p>I was outside talking with Alemi last week and we were both startled to
realize that the frozen white tundras of Ithaca had somehow transformed
into fields of green. Apparently the snow was a temporary fixture that
covered real live grass. Neato, gang! The joy at seeing green grass led
quickly to surprise, confusion and then anger. Why the heck is grass
green? Well, things look a color when they reflect back that color. So
grass is green because its pigments (chlorophyll) absorb only a certain
range of the visible spectrum, reflecting back the greenish bits. But if
I know anything about approximating the sun as a blackbody, I know that
it has a peak output of around 550 nm (i.e. green) light. So what’s
going on? Why are plants blatantly rejecting the most abundant kind of
light? Since my initial confusion rested on the assumption of the sun as
a blackbody, I decided to take a closer look at the actual spectrum of
the sun. Below is a graph showing the frequency dependence of solar
radiation incident on the top of the atmosphere and at sea level. Since
plants typically don’t live in space, we are most concerned with the sea
level plot.
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_xmXr6HTaI/AAAAAAAAACw/weKj6XTaf4M/s1600/Solar_Spectrum.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_xmXr6HTaI/AAAAAAAAACw/weKj6XTaf4M/s400/Solar_Spectrum.png" /></a>
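</p>
<p>As a sanity check on the blackbody claim above, Wien’s displacement law says a blackbody at temperature T has its peak output at wavelength b/T, with b ≈ 2.898×10^-3 m·K. A quick back-of-the-envelope sketch, taking the Sun’s effective temperature as roughly 5800 K:</p>

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in m*K
T_SUN = 5800.0     # approximate effective surface temperature of the Sun, in K

peak_nm = (WIEN_B / T_SUN) * 1e9  # peak wavelength, converted to nanometers
print(round(peak_nm))  # prints 500
```

<p>That lands around 500 nm, a bit below the figure quoted above but still in the blue-green part of the visible spectrum, so the puzzle stands either way: the solar output peaks right around the light that grass reflects rather than absorbs.</p>
<p>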
From this plot, it looks like the incident radiation from the sun is
fairly level beyond about 450nm or so. Just going on this graph alone,
it looks like plants could just absorb reddish light and do alright for
themselves. But do they? Let’s take a look at the absorption spectrum of
chlorophyll. As it turns out there are a bunch of different “flavors” of
chlorophyll (chlorophyll a, b, c, and d). As far as I could gather, only
a and b are important (a is found in just about everything that plays
the photosynthesis game and b is found in plants and green algae). So we
need to find the absorption spectrum of chlorophyll a and b. After
looking for a while at very qualitative drawings, I found this
<a href="http://omlc.ogi.edu/spectra/PhotochemCAD/html/alpha.html">neato-toledo
site</a>, which
actually gives real live data. Plotting the results, we get the figure
below.
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_xpZIEtjkI/AAAAAAAAAC4/B7RknYpI63w/s1600/chlorophyll.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_xpZIEtjkI/AAAAAAAAAC4/B7RknYpI63w/s400/chlorophyll.png" /></a>
Comparing with our handy dandy wavelength-to-color converter below we
see that there is a big peak in absorption of both chlorophyll a and b
in the dark blue and lesser peaks in the red.
<a href="http://2.bp.blogspot.com/_fa6AZDCsHnY/S_xqlbrKz0I/AAAAAAAAADA/fsx_JmlD5Ks/s1600/spectrum.gif"><img alt="image" src="http://2.bp.blogspot.com/_fa6AZDCsHnY/S_xqlbrKz0I/AAAAAAAAADA/fsx_JmlD5Ks/s320/spectrum.gif" /></a>
So how do these absorption lines correspond to the incident light at sea
level given above? Well, it’s kind of tough to check by eye, but it looks
like chlorophyll has its biggest peaks right below the plateau in the
solar spectrum. Marking the wavelengths of the absorption peaks makes
this clear (a = blue, b = green). It seems like plants are using a
sub-optimal band of the spectrum!
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_xrLFfWBXI/AAAAAAAAADI/NwShTicac9A/s1600/solarwithchlorphyll.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_xrLFfWBXI/AAAAAAAAADI/NwShTicac9A/s400/solarwithchlorphyll.png" /></a>
So how do we reconcile this? Well, let’s first start at The Beginning.
The first photosynthetic organisms developed and spent the first billion
or so years of their existence living and evolving in the oceans. The
solar spectrum we have been using so far has been accurate only in air
at sea level. Presumably it would be favorable for the organism to be
able to survive at some finite depth in the ocean and not merely at the
surface. Thus we must consider the effects of water on our incident
solar radiation. A plot of the absorption spectrum of water is shown
below (note the log-log scale).
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/S_xwl48ZUSI/AAAAAAAAADQ/kPVlOauL1o4/s1600/water+absorption.gif"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/S_xwl48ZUSI/AAAAAAAAADQ/kPVlOauL1o4/s400/water+absorption.gif" /></a>
Lo and behold, the minimum absorption of visible light in water occurs
towards the far end in the blue. And this is exactly where our biggest
absorption peak of chlorophyll is! Comparing our two graphs, we see
that the ratio of incident blue light to incident green light at sea
level is at worst about a third. But we see that for each meter traveled
in water, green light is absorbed almost ten times more than blue light.
Thus, an organism that lives a few meters underwater and wants to
harness solar energy would probably do best to focus on that blue light.
[<span class="caps">WARNING</span>: The next bit is speculative and I haven’t taken a bio class
since high school] As best I can gather about chlorophyll from Wikipedia
and other semi-reputable sites, chlorophyll a is found in just about
anything that photosynthesizes, <span class="caps">BUT</span> chlorophyll b is only found in
plants and green algae. And apparently, land plants are largely
descended from green algae (which are aquatic, but typically around the
surface and around the shoreline). Now take a look at the chlorophyll
absorption spectra again. The chlorophyll b spectrum is sort of squished
in more towards the middle (towards 550 nm maybe?) than chlorophyll a.
In fact, on the graph where I have drawn lines on the solar spectra
graph where the chlorophyll peaks are, we see that chlorophyll b just
barely gets up to that plateau region. This suggests to me that
chlorophyll a was working just fine for the early aquatic plants, but
once they reached land and got out of all that water it became an
advantage to utilize light closer to the peak solar output. Thus, plants
that had chlorophyll b in addition to a had a slight advantage over
their b-less brethren. Or so I shall continue to shamelessly speculate
(and, apparently, alliterate). Anyway, I thought that was kind of cool,
but if I have made some horrible error or mangled some biology, please
let me know!</p>Flying back2010-05-25T09:04:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-05-25:posts/flying-back.html<p>Those of us originating on the right side of the <a href="http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=Atlantic+Ocean&sll=39.368279,-9.316406&sspn=57.758311,144.316406&dirflg=w&ie=UTF8&hq=&hnear=Atlantic+Ocean&ll=37.857507,-22.675781&spn=62.051619,144.316406&z=3">atlantic
ocean</a>
are familiar with a little quirk of international flights: the flights
home are shorter. Specifically, going from Tel Aviv to New York takes
about one hour longer than going the other way around. This is an
oddity, and the very first explanation that comes to mind is the rotation
of the Earth. After all, our naive image of a plane going up in the air
might be something a little like a rock being thrown up from a moving
cart, and we would imagine the plane to pick up some relative speed by
not rotating as fast as the Earth. Is this a factor in the plane’s
movement? This gives us a perfect chance to use the Earth Units we
<a href="http://thevirtuosi.blogspot.com/2010/04/earth-day-earth-units.html">introduced</a>
a few weeks ago. Specifically I’ll use the Earth meter (equal to the
radius of the Earth, which I’ll dub e-m) and Earth second (one day,
e-s). First we want to figure out the velocity of the airplane compared to
the ground. When it is grounded, the plane and the Earth’s surface both
have an angular velocity of 2π 1/e-s; they do one revolution per day.
This means the plane’s linear velocity is 2π e-m/e-s, and its angular
velocity once it’s airborne is <mathjax>$$2 \pi \cdot \frac{(1\;
\rm{m_\oplus})}{(1\; \rm{m_\oplus}+A)}
\;\rm{s_\oplus^{-1}},$$</mathjax> where A is the altitude. That’s the one
number I’m going to pull out of thin air here; that being the thin air
of the cabin where they always announce that we have attained a cruising
altitude of 30,000 feet. In real people units, that’s about 9,000
meters, or 9 kilometers - round it up to 10. Going back to the Earth day
<a href="http://thevirtuosi.blogspot.com/2010/04/earth-day-earth-units.html">post</a>
1 e-m is about 6380 km, so that the angular velocity of the airplane is
about 0.9984 (2π) 1/e-s, and relative to the ground it is <mathjax>$$0.0016
\cdot 2 \pi \;\rm{s_\oplus^{-1}}$$</mathjax> So, over a journey of length
of about 0.5 e-s, the overall distance traveled due to this effect would
come to about <mathjax>$$0.0008 \cdot 2\pi \;\rm{m_\oplus}.$$</mathjax> Tel Aviv and
New York are both in the mid-northern hemisphere and seven time zones
apart, so a first-order estimate of the distance between them would be
about <mathjax>$$\frac{7}{24}\cdot 2\pi \;\rm{m_\oplus} \approx
0.29\cdot 2\pi \;\rm{m_\oplus}.$$</mathjax> Overall, it looks like this
effect is negligible. Indeed, anyone who gives the matter a second
thought would notice that the planes should go faster when traveling
westwards, as the Earth spins eastwards toward the rising sun. Anyone
who looks even further into the matter finds that eastbound and
westbound planes simply take different routes across the Atlantic,
leaving us with a rather more mundane and less exciting explanation.
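For the curious, the back-of-the-envelope numbers above check out in a few lines of Python (a sketch in ordinary SI units rather than Earth Units):

```python
import math

# Rough check of the Earth-rotation effect on a Tel Aviv - New York flight.
# All numbers are order-of-magnitude estimates from the post.
R_earth = 6.38e6         # Earth radius in meters (1 "Earth meter")
altitude = 1.0e4         # cruising altitude, rounded up to 10 km
day = 86400.0            # one "Earth second" in ordinary seconds

# Angular velocity of the ground, and of the airborne plane if it kept
# its linear speed while climbing (like the rock thrown from the cart).
omega_ground = 2 * math.pi / day
omega_plane = omega_ground * R_earth / (R_earth + altitude)

# Fractional lag relative to the ground, accumulated over a half-day flight.
lag = (omega_ground - omega_plane) / omega_ground   # ~0.0016
distance_fraction = lag * 0.5                       # ~0.0008 of 2*pi Earth-meters

# Compare with the 7-time-zone separation of the two cities.
trip_fraction = 7 / 24                              # ~0.29 of 2*pi Earth-meters
print(round(lag, 4), round(distance_fraction, 4), round(trip_fraction, 2))
```

The trip distance is several hundred times larger than the rotation effect, confirming that it is negligible.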
Still, I won’t complain if it makes my flight any shorter. Now, if
you’ll excuse me, I have some beaches to catch up with.</p>Fishy Calculation Followup & New Contest2010-05-25T03:03:00-04:00Alemitag:thephysicsvirtuosi.com,2010-05-25:posts/fishy-calculation-followup-new-contest.html<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/S_t2IHW-7EI/AAAAAAAAAKw/nbtBQnQnpWI/s1600/hamster.jpg"><img alt="image" src="http://4.bp.blogspot.com/_YOjDhtygcuA/S_t2IHW-7EI/AAAAAAAAAKw/nbtBQnQnpWI/s200/hamster.jpg" /></a></p>
<p>So, some of you may remember when I attempted to calculate how much the
oceans would lower if you took out all of the fish in an <a href="http://thevirtuosi.blogspot.com/2010/04/fishy-calculation.html">earlier
post</a>.
Well, the results came in a while ago, but I forgot to mention that <a href="http://diaryofnumbers.blogspot.com/2010/05/we-have-winner.html">I
lost</a>
the contest. I was about two orders of magnitude off from the winning
answer. In light of my failure, I’m going to try again in the <a href="http://diaryofnumbers.blogspot.com/2010/05/hamster-powered-mansions.html">newest
contest</a>.
This time the question is a bit stranger:</p>
<blockquote>
<p>How many buff hamsters would it take to completely power a mansion?</p>
</blockquote>
<p>I encourage all of The Virtuosi readers to enter as well; it only takes
a minute to come up with some number. Good luck, one and all.</p>Esoteric Physics I - The Hall Effect2010-05-23T23:31:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-23:posts/esoteric-physics-i-the-hall-effect.html<p>What we usually do here at The Virtuosi is take an interesting
problem, and work out the physical principles behind what we’re seeing.
Or pose a question and try to answer it. Now, I’m a big fan of this kind
of thing, which is why I’ve done so much of it. But I worry that it
might give a slightly skewed view of physics. Sure, physics explains
things. That’s why we do it. But not everything in physics is <a href="http://thevirtuosi.blogspot.com/2010/04/laser-gun-recoil-follow-up.html">laser
guns</a>
and <a href="http://thevirtuosi.blogspot.com/2010/05/solar-sails-i.html">solar
sails</a>.
There are a lot of interesting physics phenomena that the general public
will never hear about, because they’re just too, well, esoteric. What
I’m going to do is occasionally talk about such effects, and, for some
of them, point out applications you might see
on a day-to-day basis. Today I’m going to examine the Hall effect. The
Hall effect is simple, as these things go, once we understand the
pieces. The first piece is that magnetic fields deflect moving
electrically charged particles. I don’t think I can give you a good
simple reason for this; you’re just going to have to trust me (for those
interested, I’d argue that the relativistic transformation of a magnetic
field is an electric field, and that will certainly deflect an
electrically charged particle). This is a piece of the Lorentz force.
The next piece that we need to know is that opposite electrically
charged particles attract. So a positively charged particle attracts a
negatively charged particle. Knowing those two things we can detail the
Hall effect.
<a href="http://4.bp.blogspot.com/_SYZpxZOlcb0/S_nxJSw3bQI/AAAAAAAAABU/Mi2_jDtskus/s1600/Hall+effect+pos.JPG"><img alt="image" src="http://4.bp.blogspot.com/_SYZpxZOlcb0/S_nxJSw3bQI/AAAAAAAAABU/Mi2_jDtskus/s320/Hall+effect+pos.JPG" /></a>
Take the slab pictured above. We run an electric current through it.
Conventionally we take current as moving positively charged particles.
There is a magnetic field into the screen. This deflects the positive
charges up the screen, as shown, with some upward force. Over some time,
we will accumulate positive charges at the top. Because there is no net
charge in our slab, this must leave a region of negative charge at the
bottom. These regions of charge will attract, and cancel out the force
from the magnetic field. This charge separation results in a voltage
differential between the two sides of the slab, which is what we
actually measure. The Hall effect has some nifty consequences
physically. I mentioned that conventionally we take current to be
positive particles moving. A microscopic picture of our conductors will
tell us that, in general, electrons are what we consider to be flowing
in an electric current. Now, it turns out that our magnetic field will
deflect electrons moving opposite our current direction (negative
current moving backwards is the same as positive current moving
forwards) to the same side as our hypothetical positive particles got
deflected to. This generates a charge differential with negative and
positive charges on the opposite sides of the slab (shown below), which
means the voltage is the negative of what we would have measured above! This
means we expect to get a certain sign of the measured Hall voltage,
which we can predict. This sign would correspond to negative particles
(electrons) being the moving charge carriers in substances. It turns out
that there are some substances (some semiconductors) where the sign of
the Hall voltage is opposite what we expect from electrons. This means
that in those substances the current is being carried by positive
particles! I won’t explain what that means here (I may address that in a
later post), but I hope you can see why that would be fascinating. We
expected to have electrons moving, and it turns out that something else
is really doing the moving. The Hall effect is an experimental result
that helped suggest a whole new way of thinking about conduction in
materials.
<a href="http://2.bp.blogspot.com/_SYZpxZOlcb0/S_nxRTxhhoI/AAAAAAAAABc/AJGpkjd-iWU/s1600/Hall+effect+neg.JPG"><img alt="image" src="http://2.bp.blogspot.com/_SYZpxZOlcb0/S_nxRTxhhoI/AAAAAAAAABc/AJGpkjd-iWU/s320/Hall+effect+neg.JPG" /></a>
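The sign argument above can be made concrete with a small Python sketch. It uses the textbook Hall voltage formula V_H = IB/(nqt) for a slab of thickness t, carrier density n, and carrier charge q; that formula is not derived in the post, and the copper-like carrier density below is an assumed illustrative value:

```python
def hall_voltage(current, b_field, carrier_density, carrier_charge, thickness):
    """Textbook Hall voltage for a conducting slab.

    The sign of the result follows the sign of the carrier charge,
    which is exactly the measurement described in the post.
    """
    return current * b_field / (carrier_density * carrier_charge * thickness)

e = 1.602e-19             # elementary charge, C
n = 8.5e28                # roughly copper's carrier density, m^-3 (assumed)
I, B, t = 1.0, 1.0, 1e-3  # 1 A, 1 T, 1 mm slab (illustrative numbers)

v_electrons = hall_voltage(I, B, n, -e, t)   # electron carriers: negative V_H
v_holes = hall_voltage(I, B, n, +e, t)       # positive carriers: positive V_H
print(v_electrons, v_holes)
```

Note how tiny the voltage is for a good metal (tenths of a microvolt here), which is part of why the effect stayed esoteric for so long.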
Beyond being very interesting physics, there are some applications to
this effect. It is an easy way to create a magnetic field sensor. Take a
slab of material, run a current through it, and measure the voltage on
the sides. Where do we use magnetic field sensors? Well, they sometimes
show up as a way to tell if something is open or closed. Put a magnet in
your lid, and a Hall effect sensor in the lip the lid rests on. When it
is closed, you’ll measure a voltage, and when it is open you won’t. Now
you can tell if it is open or closed. A little imagination, and you can
see how this would be useful for all kinds of switches. Hit the switch,
move your magnet, and change your voltage. According to Wikipedia, Hall
effect switches are used in things as diverse as paintball guns and
go-cart speed controls. They could also be used as a speed or
acceleration measurement in a rotating system. Attach a magnet to the
rotating object, put a sensor at a fixed location, and measure how the
voltage in your sensor changes as the object sweeps past it. There are
many more applications, but this is just to give you a taste of how this
seemingly esoteric physics concept may show up in your everyday life.
It’s not just the interesting problems we often work on this blog:
physics is everywhere, in many different guises.</p>Physics of Baseball: Batting2010-05-19T18:21:00-04:00Alemitag:thephysicsvirtuosi.com,2010-05-19:posts/physics-of-baseball-batting.html<p><a href="http://4.bp.blogspot.com/_YOjDhtygcuA/S_RkXXs4DpI/AAAAAAAAAKo/jPSgwpl4qHA/s1600/baseball.jpg"><img alt="image" src="http://4.bp.blogspot.com/_YOjDhtygcuA/S_RkXXs4DpI/AAAAAAAAAKo/jPSgwpl4qHA/s320/baseball.jpg" /></a></p>
<p>Summer is upon us, and so that means that we here at the Virtuosi have
started talking about baseball. In fact, Corky and I did some simple
calculations that illuminate just how impressive batting in baseball can
be. We were interested in just how hard it is to hit a pitch with the
bat. So we thought we’d model hitting the ball with a rather simple
approximation of a robot swinging a cylindrical bat, horizontally with
some rotational speed and at a random height. The question then becomes,
if the robot chooses a random height and a random time to swing, what
are the chances that it gets a hit?</p>
<h3>Spatial Resolution</h3>
<p>So the first thing to consider is how much of the strike zone the bat
takes up. In order to be a strike, the ball needs to be over home plate,
which is 17 inches wide, and between the knees and the logo on the batter’s
jersey. Estimating this height as 0.7 m or 28 inches or so, we have the
area of the strike zone <mathjax>$$ A_S = (17") \times (0.7 m) = 0.3 \text{
m}^2 $$</mathjax> When you swing, how much of this area does the bat take up?
Well, treat it as a cylinder with a diameter of 10 cm, and assume it
runs the length of the strike zone; then the area that the bat takes up
is <mathjax>$$ A_B = (10\text{ cm}) \times (17" ) = 0.043 \text{ m}^2 $$</mathjax> So
that the fractional area that the bat takes up during our idealized
swing is <mathjax>$$ \frac{A_B}{A_S} \approx 14\% $$</mathjax> So already, if our
robot is guessing where inside the strike zone to place the bat, and
doing so randomly, assuming the pitch is a strike to begin with, it will
be able to bunt successfully about 14% of the time.</p>
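The area estimate above is easy to reproduce; a quick Python sketch with the post’s numbers:

```python
# Fraction of the strike zone covered by a horizontal bat (post's numbers).
INCH = 0.0254  # meters per inch

plate_width = 17 * INCH        # strike zone width, m
zone_height = 0.7              # strike zone height estimate, m
bat_diameter = 0.10            # bat treated as a 10 cm cylinder, m

strike_zone_area = plate_width * zone_height     # ~0.30 m^2
bat_area = bat_diameter * plate_width            # ~0.043 m^2
coverage = bat_area / strike_zone_area           # ~0.14
print(round(strike_zone_area, 2), round(bat_area, 3), round(coverage, 2))
```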
<h3>Time Resolution</h3>
<p>But getting a hit on a swing is different than getting a bunt. Not only
do you have to have your bat at the right height, but you need to time
the swing correctly. Let’s first look at how much time we are dealing
with here. Most major league pitchers throw the ball at about 90 mph or
so. The pitcher’s mound is 60.5 feet away from home plate. This means that
the pitch is in the air for <mathjax>$$ t = \frac{ 60.5 \text{ ft} }{ 90
\text{ mph} } \approx \frac{1}{2} \text{ second} $$</mathjax> i.e. from the
time the pitcher releases the ball to the time it crosses home plate is
only about half a second. Compare this with human reaction times. My
driver’s ed course told me that human reaction times are typically a
third of a second or so. So, baseball happens quick! Alright, but we
were interested in how well you have to time your swing. Successfully
hitting the ball means that you’ve made contact with the ball such that
it lands somewhere in the field. That is, you’ve got about a 90 degree
window in when you make contact. How does this translate to time? We would need to
know how fast you swing.</p>
<h4>Estimating the speed of a swing</h4>
<p>I don’t know how fast you can swing a baseball bat, but I can estimate
it. I know that if you land your swing just right, you have a pretty
good shot at a home run. Fields are typically 300 feet long. So, I can
ask, if I launch a projectile at a 45 degree angle, how fast does it
need to be going in order to make it 300 feet. Well, we can solve this
projectile problem if we remember some of our introductory physics. We
decouple the horizontal and vertical motions of the ball, the ball
travels horizontally 300 feet, so we know <mathjax>$$ v_x t = 300 \text{ ft} $$</mathjax>
where t is the time the ball is in the air, similarly we know that it is
gravity that makes the ball fall, and so as far the vertical motion is
concerned, in half the total flight time, we need the vertical velocity
to go from its initial value to zero, i.e. <mathjax>$$ g \frac{t}{2} = v_y $$</mathjax>
where g is the acceleration due to gravity. Furthermore, I’m assuming
that I am launching this projectile at a 45 degree angle, for which I
know from trig that <mathjax>$$ v_x = v_y = \frac{v}{\sqrt 2} $$</mathjax> So I can
stick these equations into one another and solve for the velocity needed
to get the ball going 300 feet: <mathjax>$$ \frac{v^2}{g} = 300 \text{ ft} \approx
91 \text{ m} $$</mathjax> <mathjax>$$ v = \sqrt{(9.8 \text{ m/s}^2)(91 \text{ m})} \approx 30 \text{ m/s}
\sim 67 \text{ mph}$$</mathjax> So it looks like the ball needs to leave the bat
going about 70 mph in order to clear the park. ( This was of course
neglecting air resistance, which ought to be important for baseballs ).
Great that tells us how fast the ball needs to be going when it leaves
the bat, but how fast was the bat going in order to get the ball going
that fast? Well, let’s work the worst case and assume that the baseball-bat
interaction is inelastic. That is, I reckon that if I throw a baseball at
about 100 mph towards a wooden wall, it doesn’t bounce a whole lot. In
that case, the bat needs to take the ball from coming at it at 90 mph to
leaving at 70 mph or so, i.e. the place where the ball hits the bat
needs to be going at about 160 mph. That seems fast, but when you think
about it, if a pitcher can pitch a ball at 90 mph, that means their hand
is moving at 90 mph during the last bits of the pitch, so you expect
that a batter can move their hands about that fast, and we have the
added advantage of the bat being a lever.</p>
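The home run estimate can be sketched numerically. This is the same 45-degree, no-air-resistance projectile model used above, plus the post’s rough worst-case picture that the contact point of the bat must make up the incoming and outgoing speeds:

```python
import math

G = 9.8                      # m/s^2
FT = 0.3048                  # meters per foot
MPH = 0.44704                # m/s per mph

fence = 300 * FT             # ~91 m to clear the park

# Range of a 45-degree projectile with no air resistance: R = v^2 / g.
v_ball = math.sqrt(G * fence)            # launch speed off the bat, m/s
pitch = 90 * MPH                         # incoming pitch speed, m/s

# Worst-case inelastic picture from the post: the sweet spot of the bat
# must make up the ~90 mph incoming plus the ~70 mph outgoing speed.
v_bat = pitch + v_ball                   # roughly 160 mph
print(round(v_ball / MPH), round(v_bat / MPH))
```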
<h4>Coming back to timing</h4>
<p>So, we have an estimate for how fast the bat is going. Knowing this and
estimating the length between the sweet spot and the pivot point of the
bat to be about 0.75 m or so, we can obtain the angular frequency of the
bat. <mathjax>$$ v = \omega r $$</mathjax> <mathjax>$$ \omega = \frac{ 160 \text{ mph} }{ 0.75
\text{ m} } \approx 100 \text{ Hz} $$</mathjax> So, if we need to have a 90
degree resolution in our swing timing to hit the ball in the park, this
means that if our swing near the end is happening at 100 Hz, we
need to get the timing down to within <mathjax>$$ t = \frac{ 90 \text{
degree}}{ 100 \text{ Hz} } \sim 15 \text{ ms} $$</mathjax> So we need to get
the timing of our swing to within about 15 milliseconds to land the hit.
So if our robot randomly swung at some point during the duration of the
pitch, it would only hit with probability <mathjax>$$ \frac{\text{time to land
hit} }{ \text{time of pitch}} = \frac{ 15 \text{ ms}}{500 \text{
ms}} \sim 3\% $$</mathjax> or only 3% of the time. If we take both the spatial
placement, and timing of the swing as independent, the probability that
our robot gets a hit would be something about <mathjax>$$ p = 0.03 \times 0.14 =
0.004 = 0.4 \% $$</mathjax> or our robot would only get a hit 1 time out of 250
tries. Suddenly hitting looks pretty impressive.</p>
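Putting the pieces together, the random-robot hit probability is just a product of the two resolutions. A sketch (the 160 mph sweet-spot speed is converted to SI, and the 90 degree arc to radians):

```python
import math

# Timing window: a 90-degree arc swept at the bat's angular speed.
omega = 71.5 / 0.75              # 160 mph (~71.5 m/s) over a 0.75 m lever, rad/s
window = (math.pi / 2) / omega   # ~15 ms to land the ball in fair territory

pitch_time = 0.5                 # seconds from release to the plate
p_time = window / pitch_time     # ~3% chance of swinging inside the window
p_space = 0.14                   # bat's share of the strike zone (from above)

p_hit = p_time * p_space         # ~0.4%, about 1 in 250
print(round(window * 1000), round(p_hit, 4))
```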
<h3>Experiment</h3>
<p>Saying that the robot swings at some random time during the duration of
the pitch is pretty bad. So I decided to do a little experiment to see
how good people are at judging times on half second scales. I had some
friends of mine start a stop watch and while looking try to stop it as
close as they could at the half second mark. Collecting their
deviations, I obtained a standard deviation of about 41 milliseconds,
which suggests a window of about 100 milliseconds over which people can
reliably judge half second intervals. Now, I have to admit, this wasn’t
done in any very rigorous sort of way (I had them do this while walking
to dinner), but it ought to give a rough estimate of the relevant time
scale for landing a hit. So instead of comparing our 15 millisecond ‘get
a hit’ window to the full half second pitch duration, let’s compare it
instead to the 100 millisecond ‘humans trying to judge when to hit’ window. This
gives us a temporal resolution of about <mathjax>$$ p = \frac{ 15}{100} = 15
\%. $$</mathjax> So now we obtain an overall hit probability of <mathjax>$$ p = 0.15
\times 0.14 = 0.021 = 2 \% $$</mathjax> So it seems like a poor baseball player,
more or less randomly swinging should have a batting average of about
2%. Compare this with typical baseball batting averages of .250 or so,
denoting 0.25 or 25% probability of a hit. I think this is a much better
estimate of how much better than random baseball players can do with
training. So it looks like practice can improve your ability to do a
task by about an order of magnitude or so. Either way, baseball is
pretty darn impressive when you think about it.</p>Cell Phone Brain Damage: Part Deux2010-05-19T16:03:00-04:00Alemitag:thephysicsvirtuosi.com,2010-05-19:posts/cell-phone-brain-damage-part-deux.html<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S_RDhF0ciII/AAAAAAAAAKg/XexPpWRmpg4/s1600/cell-phone-21.jpg"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S_RDhF0ciII/AAAAAAAAAKg/XexPpWRmpg4/s320/cell-phone-21.jpg" /></a></p>
<p>I thought I’d take another look at cell phone damage, coming at it from
a different direction than my colleague. Mostly I just want to consider
the energy of the radiation that cell phones produce, and compare that
with the other relevant energy scales for molecules.</p>
<h3>Cell Phone Energy</h3>
<p>So, let’s start with cell phones. I looked at my cell phone battery, and
it looks like it is rated for 1 A at 3.5 V. So when it is running at
its peak it should put out about 3.5 W of power in electromagnetic waves
(assuming it reaches its rating and all of that energy is fully
converted into radiation). But what form does this energy take? Well,
it’s electromagnetic radiation, so it’s in the form of a bunch of photons.
In order to determine the energy of each photon, we need to know the
frequency of the radiation. Surfing around a bit on Wikipedia, I
discovered that most cell phones operate in the 33 cm radio band, or
somewhere between about 800-900 MHz. How much energy does each ~1
GHz photon have? We know that the energy of a photon is: <mathjax>$$ E = h \nu
\sim 7 \times 10^{-25} \text{ J} \sim 4 \times 10^{-6} \text{
eV} $$</mathjax> it will be convenient to know the photon energy in “eV’s”. 1 eV
is the energy of a single electron accelerated through a potential of 1
volt, or <mathjax>$$ 1 eV = (1 \text{ electron charge} ) * ( 1 \text{ Volt} )
= 1.6 \times 10^{-19} \text{ J} $$</mathjax> So my cell phone is sending out
signals using a bunch of photons, each of which has an energy of about 4
micro eVs. Lets consider the energy scales involved in most molecular
processes and compare those scales with this energy.</p>
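The photon-energy arithmetic is a one-liner; here it is as a quick sketch:

```python
H = 6.626e-34        # Planck's constant, J*s
EV = 1.602e-19       # joules per electron volt

nu = 1e9                 # ~1 GHz cell phone photon
E_joules = H * nu        # ~7e-25 J
E_ev = E_joules / EV     # ~4 micro-eV per photon
print(E_joules, round(E_ev * 1e6, 1))
```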
<h3>Molecules</h3>
<p>Great. We have a number. But what does it mean? A number in physics
means little without some context. Let’s try to consider what photons
can do to molecules. I can think of three different processes: first, a
photon could knock out an electron (i.e. ionize the molecule), second
the photon could make the molecule vibrate or wiggle, or third the
photon could make the molecule rotate. Let’s see if we can estimate the
energies for these three different types of processes. Let’s first
collect some of the information we know about atoms and molecules so
that we can continue our estimations. I know that most atoms are about
an angstrom big, or 10<sup>-10</sup> meters. I know the charge of an electron
and proton.
<h4>Ionization</h4>
<p>What are typical molecular ionization energies? Well we could try and
estimate it. What’s the energy stored in an electron and proton being
about an angstrom apart? Well, remembering some of our electrostatics we
have <mathjax>$$ E = \frac{ k q_1 q_2 }{ r} \sim (9 \times 10^9) \frac{
(1.6 \times 10^{-19} \text{ C} )^2 }{ (1 \ \AA)} \sim 14 \text{
eV} $$</mathjax> which is pretty darn close to the ionization energy of hydrogen
at 13.6 eV. So I will claim that since all atoms are about the same
size, typical ionization energies across the board are about 10 eV, in
order of magnitude.</p>
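As a check on the Coulomb estimate:

```python
K = 8.99e9           # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19 # elementary charge, C
EV = 1.602e-19       # joules per eV
ANGSTROM = 1e-10     # m

# Electrostatic energy of an electron and proton one angstrom apart.
E_ion = K * E_CHARGE**2 / ANGSTROM / EV   # in eV
print(round(E_ion, 1))
```

This comes out a bit above the hydrogen ionization energy of 13.6 eV, as the post says.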
<h4>Vibration</h4>
<p>What about making our molecules vibrate? Well what are the energies of
molecular bonds? They ought to be quite similar to the ionization
energies of molecules, but as we know they are a tad weaker. Bond
energies for most molecules are on the order of a few eV. The
oxygen-hydrogen bond in water for example has a binding energy of 5.2
eV. What does that have to do with vibration? Well, if we consider a
material as made up by a bunch of atoms all stuck together with springs,
we can estimate the spring constant. Assuming that a typical binding
energy of 3 eV or so, and a typical atomic separation of 1 angstrom or
so, we can estimate the spring constant for atoms, knowing <mathjax>$$ U =
\frac{1}{2} k x^2 $$</mathjax> <mathjax>$$ 3 \text{ eV } \approx \frac{1}{2} k ( 1 \text{ \AA}
)^2 $$</mathjax> <mathjax>$$ k \approx 100 \text{ N/m} $$</mathjax> And now having estimated the
spring constant, we can estimate how much energy there is in a quanta of
atomic vibration, i.e. figure out the corresponding frequency from <mathjax>$$
\omega = \sqrt{ k / m } $$</mathjax> and quantize it in units of hbar. We
discover that a quanta of atomic vibration typically has energies on the
order of <mathjax>$$ U = \hbar \omega = \hbar \sqrt{ \frac{ k }{ m } } $$</mathjax> <mathjax>$$
= 6.6 \times 10^{-16} \text{ eV s } \sqrt{ \frac{ 100 \text{ N/m}
}{ 2 \times \text{ mass of a proton } } } \sim 0.1 \text{ eV} $$</mathjax> So
molecular vibration energies are about a tenth of an electron volt.</p>
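The spring-constant estimate and the vibrational quantum, in Python:

```python
import math

EV = 1.602e-19        # J per eV
HBAR = 1.055e-34      # J*s
M_PROTON = 1.673e-27  # kg
ANGSTROM = 1e-10      # m

# Spring constant from U = (1/2) k x^2 with U ~ 3 eV and x ~ 1 angstrom.
k = 2 * 3 * EV / ANGSTROM**2          # ~100 N/m

# Vibrational quantum hbar*omega with omega = sqrt(k/m),
# taking m ~ 2 proton masses as in the post.
omega = math.sqrt(k / (2 * M_PROTON))
E_vib = HBAR * omega / EV             # ~0.1 eV
print(round(k), round(E_vib, 2))
```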
<h4>Rotation</h4>
<p>I can also make molecules rotate. What is the energy of the lowest
rotational mode of a molecule? Well, Bohr taught us that angular
momentum is quantized in units of h, Planck’s constant. Imagine a two
atom molecule, with two atoms separated by an angstrom. The energy of a
rotating object can be written <mathjax>$$ E = \frac{ L^2 }{2 I } $$</mathjax> in analogy
to the energy of a moving object <mathjax>$$ E = \frac{ p^2 }{2m} $$</mathjax> where I is
the moment of inertia for a molecule. We will estimate <mathjax>$$ I = 2 m r^2
\sim 2 \times (\text{ mass of proton}) \times ( 1 \text{ \AA} )^2 \sim 3
\times 10^{-47} \text{ kg m}^2 $$</mathjax> So we can estimate the rotational
energy for a small molecule <mathjax>$$ E = \frac{ h^2 }{ 2 I } \sim 40
\text{ meV} $$</mathjax> and this is for a small molecule, and will only go down
for larger molecules, as I will increase. So I will call typical
rotational energies 1 meV for medium sized molecules.</p>
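And the one-quantum rotational estimate (order of magnitude only; factor-of-two choices in the moment of inertia don’t matter much here):

```python
H = 6.626e-34         # Planck's constant, J*s
EV = 1.602e-19        # J per eV
M_PROTON = 1.673e-27  # kg
ANGSTROM = 1e-10      # m

# Moment of inertia of a two-atom molecule, atoms ~1 angstrom from the axis.
I = 2 * M_PROTON * ANGSTROM**2        # ~3e-47 kg m^2

# Lowest rotational energy, with angular momentum quantized in units of h.
E_rot = H**2 / (2 * I) / EV           # a few tens of meV for a light molecule
print(I, round(E_rot * 1000))
```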
<h3>Heat</h3>
<p>Another relevant energy scale to discuss when we are talking about
brains is the energy due to the fact that our brain is rather warm. Body
temperature is about 98 degrees Fahrenheit, or 37 degrees Celsius, or
310 Kelvin. Statistical Mechanics tells us that temperature is an
average energy for a system, and in fact the <a href="http://en.wikipedia.org/wiki/Equipartition_theorem">Equipartition
theorem</a> tells us
that when a body is in thermal equilibrium, every mode of it has <mathjax>$$
\frac{1}{2} k_B T $$</mathjax> amount of energy in it. For our brain that means
<mathjax>$$ E = \frac{1}{2} k_B T = 2 \times 10^{-21} \text{ J } = 13
\text{ meV} $$</mathjax> i.e. just the fact that our brains are hot means that
every degree of freedom in our brain already has 13 millielectron volts
associated with it. Comparing to our results above, this is comparable
to the rotational energies of molecules, but a tad less than their
vibrational energies, which means that we should expect most of the
molecules in our head to already be rotating.</p>
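Finally, the thermal energy per degree of freedom at body temperature, and how it compares to a cell phone photon:

```python
K_B = 1.381e-23   # Boltzmann constant, J/K
EV = 1.602e-19    # J per eV

T_body = 310.0                        # body temperature, K
E_thermal = 0.5 * K_B * T_body        # equipartition: (1/2) k_B T per mode
E_thermal_mev = E_thermal / EV * 1000 # ~13 meV

E_photon_ev = 4e-6                    # ~1 GHz cell phone photon, from the estimate above
ratio = (E_thermal / EV) / E_photon_ev
print(round(E_thermal_mev), round(ratio))
```

The thermal energy already sitting in each degree of freedom is a few thousand times larger than the energy of a single cell phone photon.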
<h3>Results</h3>
<p>So going through some very rough calculations, we discovered that for
molecules, there are three obvious ways you can get them hot and
bothered, you can ionize them, make them wiggle, or make them rotate.
There are some typical energy scales for these things, ionization
energies are about 10 eV, vibrational energies are about 0.1 eV, and
rotational energies are about 0.001 eV. In addition, our brain already
has about 13 meV of energy in every one of its degrees of freedom, and
cell phone photons have an energy some 1/3300 of this. And what was the
energy for each cell phone photon? 0.000004 eV. Notice that this energy
is about a hundred times weaker than typical rotational energies, and
some 3300 thousandth times less than the natural thermal energy in our
brains. Now you can begin to understand why most physicists are not too
worried about the effects of cell phones. The radiation from cell phones
is just not on the kind of energy scales that affect molecules in ways
that could potentially harm us. So I’m not too worried.</p>Solar Sails III (because two just isn’t enough)2010-05-17T22:24:00-04:00Corkytag:thephysicsvirtuosi.com,2010-05-17:posts/solar-sails-iii-because-two-just-isn-t-enough-.html<p><a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_IKFerS7nI/AAAAAAAAACo/DPNAyMeuMaQ/s1600/leezle+pon+justice.jpg"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_IKFerS7nI/AAAAAAAAACo/DPNAyMeuMaQ/s320/leezle+pon+justice.jpg" /></a>
One thing that I’ve wanted to quantify since reading <em>Intelligent Life
in the Universe</em>, an outstanding book by Carl Sagan and <span class="caps">I.S.</span> Shklovskii,
is the idea of exogenesis. Exogenesis is the hypothesis that life formed
elsewhere in the universe and was somehow transferred to earth in the
form of some small seed or spore. Now since <span class="caps">E.T. E.</span> coli presumably do
not have little tiny jetpacks or other means of active transport, they
would need to traverse the cosmos in some passive way. One such way
would be solar sailing.
Way back in Solar Sails I, we derived equations describing the maximum
speeds and time-of-travel for various distances for a given solar sail.
Each of these equations was a function of the surface mass density of
the sail, which is just the amount of mass per unit cross-sectional
area. All we need to know is the cross-sectional area and mass of a
given object and we can apply these equations to just about anything!
Assume we have some spherical blob with the density of water (1g/cm^3).
The effective sigma of this blob would just be the mass divided by the
cross-sectional area. In other words,
<mathjax>$$ \sigma = \frac{m}{Area} = \frac{\frac{4}{3}\pi r^3 \rho}{\pi
r^2} = \frac{4}{3}\rho r .$$</mathjax>
Rearranging to get r in terms of the other variables, we have
<mathjax>$$ r = \frac{3\sigma}{4\rho} . $$</mathjax>
Plugging in our density of 1g/cm^3 and a suitable sigma (10^-4
g/cm^2), we get
<mathjax>$$ r \le 0.75 \times 10^{-4} cm = 0.75 \mu m .$$</mathjax>
Check out
<a href="http://learn.genetics.utah.edu/content/begin/cells/scale/">this</a> fun
site to see what kind of critters can fit in this blob.
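The blob-radius inversion is simple enough to sketch:

```python
# Maximum radius of a water-density sphere that still sails like a
# sigma = 1e-4 g/cm^2 solar sail (numbers from the post, CGS units).
sigma = 1e-4      # surface mass density budget, g/cm^2
rho = 1.0         # density of water, g/cm^3

# From sigma = (4/3) * rho * r, invert for the radius.
r_cm = 3 * sigma / (4 * rho)      # 0.75e-4 cm
r_micron = r_cm * 1e4             # 0.75 microns
print(r_micron)
```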
From the previous post, we saw that for a sigma of 10^-4 g/cm^2, our
sail would get to the nearest stars on a timescale of order 10,000
years. Thus if our blob has a radius of less than about a micron, it
could spread to hundreds of stars in around 10,000-100,000 years. Even
if it would take millions of years, that would be almost no time at all
on the cosmic scale. Just based on this calculation it all seems fairly
feasible.
In making these calculations I have neglected several important aspects
of the problem. First, in no way have I actually calculated any sort of
probability of this happening. Additionally, I would have to see how
likely it is for some blob to reach planetary escape velocity
(presumably just by riding that tail of the Boltzmann distribution).
Finally, and perhaps most important of all, I have not given any sort of
motivation or mechanism by which some living body could survive hundreds
of thousands of years in the vacuum of space with constant radiation
exposure. But I have heard that some forms of life are totally
<a href="http://en.wikipedia.org/wiki/Extremophile">extreme</a> (especially if they
drink <a href="http://en.wikipedia.org/wiki/Mountain_Dew">this</a>).
Even though such a process seems possible, it certainly doesn’t seem
like the easiest way to get life on earth. I prefer the much more
satisfying “amino acids + lightning + magic =
<a href="http://en.wikipedia.org/wiki/Abiogenesis">life</a>” model. But it does
offer some interesting possibilities. Suppose we as people think that
people are super awesome and therefore people should be everywhere. We
do some bio magic and put whatever <span class="caps">DNA</span> we want into viruses, which we
then pack into as many micron spheres as we can make. We then point them
at the nearest stars and have them disperse.
What would the probabilities be that they land somewhere habitable? Are
there any ethical considerations in doing this? Is it a galactic faux pas?</p>Ask a Physicist: Volume I2010-05-17T11:56:00-04:00Jaredtag:thephysicsvirtuosi.com,2010-05-17:posts/ask-a-physicist-volume-i.html<p>We
have received our first <em>Ask a Physicist</em> e-query! An entity known only
to us as “Hungry” writes:
“We had a dispute at a dinner party about whether blowing on hot food
actually makes it cool down faster, or only gives you something to do
while you wait for your food to cool.”
While it is questionable whether or not I am indeed a physicist (but
people do pay me to do physics) and whether I will answer definitively
(there’ll be some hand-waving), I’ll give it my darndest.
Let’s imagine a hot baked potato you just sliced open. The mass of
potato is capable of transferring energy as heat to the surrounding air
(and therefore cooling down) via a number of mechanisms. The most common
we hear about are conduction, radiation, and convection. Conduction is
the process by which two objects in intimate contact exchange heat.
Radiation is a bit more mysterious, but in the same way that a poker in
the fire can glow white hot, all things at non-zero temperature emit
electromagnetic radiation, which carries away energy from your potato.
Convection is even more complicated, but is one of the most important
processes by which solids exchange heat with gases. Convection in our
case occurs when the potato heats up the gas at its surface. The random
motion of air molecules (now a little hotter) will “randomly” dissipate
heat. However, the hotter gas is less dense than the room temperature
gas, and buoyant forces (think Helium balloon) will create a current of
hot gas upwards from the potato surface. Both of these processes
constitute convection.
Convection is <span class="caps">REALLY</span> hard to describe with physical models. You can do
some computer simulations of the motion of the gas in convection. I
found this cool picture on Wikipedia of someone who did just that:
<a href="http://upload.wikimedia.org/wikipedia/commons/6/67/Convection-snapshot.gif"><img alt="image" src="http://upload.wikimedia.org/wikipedia/commons/6/67/Convection-snapshot.gif" /></a>
Pretty cool. Now! To actually answer your question. To get a qualitative
answer, we don’t even need to consider models of convection, etc. Isaac
Newton was kind enough to find an empirical relation for the process of
cooling, aptly named “Newton’s law of cooling”. If we call the amount of
heat transferred from the potato Q, he found that the rate of heat
being lost by the potato is proportional to the temperature difference
between the surrounding gas and the potato itself:
<mathjax>$$ \text{Rate of Heat Loss}=\frac{\text{Amount of Heat
Loss}}{\text{Amount of time}}=\frac{\Delta Q}{\Delta t} \propto
T_{\text{Potato}}-T_{\text{Air }} $$</mathjax> So the quantity which is of
primary importance is the temperature difference between your potato and
the surrounding air. Now we can answer the question. If you let your
potato cool down naturally, then the potato heats the air which then
starts to convect. This means that the air immediately above the potato
is quite hot. If you blow on the potato, however, you get the air
circulating faster than it would by convection alone, effectively
replacing the hot air with slightly cooler air. And since the cooling
rate is proportional to the temperature difference, this will result in a
faster cooling rate. To first order, blowing on your food should help.
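To see the effect concretely, here is a toy numerical version of Newton's law of cooling. The rate constants are invented for illustration (blowing is modeled simply as a larger heat-transfer coefficient), so only the comparison matters, not the absolute numbers:

```python
# Toy Newton's-law-of-cooling comparison. The rate constants k are
# made-up illustrative values: blowing on the food is modeled as a
# larger k (faster heat exchange with the surrounding air).

def cool(T0, T_air, k, dt=1.0, t_max=600.0):
    """Euler-integrate dT/dt = -k (T - T_air); return T after t_max seconds."""
    T = T0
    t = 0.0
    while t < t_max:
        T -= k * (T - T_air) * dt
        t += dt
    return T

still = cool(T0=90.0, T_air=25.0, k=0.002)  # natural convection only
blown = cool(T0=90.0, T_air=25.0, k=0.006)  # forced convection (blowing)
```

With these made-up constants the blown-on potato ends up markedly cooler after ten minutes, which is all the qualitative argument above claims.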
So, Hungry, feel free to blow on your food, Miss Manners be damned.</p>Solar Sails II2010-05-17T01:15:00-04:00Corkytag:thephysicsvirtuosi.com,2010-05-17:posts/solar-sails-ii.html<p>[<span class="caps">NOTE</span>: In my hurry to make up for weeks of non-posts, I managed to
almost immediately knock Nic’s <a href="http://thevirtuosi.blogspot.com/2010/05/why-black-holes-from-large-hadron.html">first
post</a>
from the top of the page. It’s got the <span class="caps">LHC</span>, black holes, and about 3
full cups of metric awesome, so make sure you check it out (after
reading this one, of course).]
Last time we did some calculations on how fast and far our solar sails
can go, but those calculations were just for the sail itself. If you are
going to do any science with it, you’re going to need a payload. Let’s
take it a step further and make it an actual spaceship (with people and
everything!).
Comparing it with a typical people-carrying space hotel (the
International Space Station), let’s give our payload a mass of 300,000
kg. Remembering from the last post that a sigma of less than about
10^-4 g/cm^2 gave fairly nice results, we can make an effective sigma
of our payload carrying sail as
<mathjax>$$ \sigma_{eff} = \frac{m_{total}}{Area} = \frac{m_s +
m_p}{Area}, $$</mathjax>
where m_s is the mass of the sail and m_p is the mass of our payload
(the ship). Assuming the sail has some surface density of sigma and the
sail is circular with some radius r, we have
<mathjax>$$ \sigma_{eff} = \frac{\pi r^2 {\sigma}_s + m_p}{\pi r^2} =
{\sigma}_s + \frac{m_p}{\pi r^2} . $$</mathjax>
Now we can find the radius of our sail such that
<mathjax>$$ \sigma_{eff} \le 10^{-4} \frac{g}{{cm}^2}. $$</mathjax>
Rearranging our equation above and solving for radius, we find that
<mathjax>$$ r \ge \left[\frac{m_p}{\pi \left( 10^{-4}\frac{g}{cm^2} -
\sigma_s \right)} \right]^{1/2} cm. $$</mathjax>
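As a quick sanity check on that bound, here is a sketch of the same calculation in Python (CGS units, same 300,000 kg payload; the numbers are the rough ones used above):

```python
import math

# Minimum sail radius such that sigma_eff <= 1e-4 g/cm^2,
# following the rearranged inequality above. CGS units throughout.
SIGMA_MAX = 1e-4   # g/cm^2, target effective surface density
M_PAYLOAD = 3e8    # g (300,000 kg, roughly the ISS)

def min_radius_km(sigma_sail):
    """Smallest sail radius (km) for a sail surface density sigma_sail [g/cm^2]."""
    r_cm = math.sqrt(M_PAYLOAD / (math.pi * (SIGMA_MAX - sigma_sail)))
    return r_cm / 1e5  # cm -> km

print(min_radius_km(1e-5))  # a sail much lighter than the bound
```

For a sail density well below the bound this gives a radius of about 10 km, consistent with the plot below.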
Below is a plot of sail radius (in meters) against sail surface density
(g/cm^2).
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_DXvcSsDXI/AAAAAAAAACg/jK_N-B4mOME/s1600/ssradius.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_DXvcSsDXI/AAAAAAAAACg/jK_N-B4mOME/s400/ssradius.png" /></a>
From this plot, we see that we will need our sail radius to be <span class="caps">AT</span> <span class="caps">LEAST</span>
10 km and the surface density of our sail must be less than 10^-4
g/cm^2. Now that’s a big sail, but it’s not obscenely big (depending,
of course, on your definitions of obscenity). One could certainly
imagine such a sail being built, but it would be an impressive
engineering feat.
So get to work engineers! I’ve already made a whole plot in Mathematica,
I can’t do everything.</p>Solar Sails I2010-05-16T23:29:00-04:00Corkytag:thephysicsvirtuosi.com,2010-05-16:posts/solar-sails-i.html<p><a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_C9Ll986gI/AAAAAAAAACY/eTykcbU6PTE/s1600/solarsail.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_C9Ll986gI/AAAAAAAAACY/eTykcbU6PTE/s200/solarsail.jpg" /></a>
Solar sails are in the news again, and this time not just for <a href="http://www.cbsnews.com/stories/2005/06/22/tech/main703405.shtml">blowing
up</a>.
The Japanese space agency is
<a href="http://www.space.com/businesstechnology/japan-venus-double-mission-100516.html">launching</a>
what they hope to be the first successful solar sail tomorrow. In honor
of that, we will be discussing the physics of solar sails.
First of all, what the heck are solar sails? Solar sails are a means of
propulsion based on the simple observation that “Hey, sails work on
boats. Therefore, they should work on interplanetary spacecraft (in
space).” Boat sails work when air molecules hit the sail and bounce
back. By conservation of momentum, this gives the boat sail an itty
bitty boost in momentum. Summing over the large number of air molecules
moving as wind, the boat gets pushed along in the water. A similar
process works with solar sails, but instead of air molecules doing the
hitting, it’s photons. Since each photon of a given wavelength has some
momentum, by reflecting that photon the solar sail can gain a tiny bit
of momentum. Summing over the large number of photons coming from the
sun over a long time frame we can get a considerable boost. So let’s see
how good solar sails are.
First we need to find the net force on our sail. We will certainly have
to deal with gravitational forces (which will slow us down):
<mathjax>$$ F_{g} = \frac{-GM_{\odot}m}{r^2} $$</mathjax>
where big M is the mass of the sun and little m is the mass of the sail.
Now we need to find the radiation force on the sail. Since force is just
rate of change of momentum, we can find the change of momentum of one
photon per unit time, then find how many photons are hitting our sail.
So for one elastic collision of a photon with the sail, the change in
momentum will be
<mathjax>$$ \Delta p = 2 \frac{h\nu}{c} $$</mathjax>
and by conservation of momentum, this will also be the momentum gained
by the sail. Now we want to find the number of photons incident on a
given area in a given time. This will just be the energy flux output by
the sun (energy/m^2/s) divided by the energy per photon. In other
words:
<mathjax>$$ f_n = \frac{L_{\odot}}{4\pi r^2}\frac{1}{h\nu} .$$</mathjax>
So now we can get a force by
<mathjax>$$ \text{Force} = \left(\frac{\Delta p}{\text{1 photon}} \right)
\times \left(\frac{\text{number of photons}}{area \times
time}\right) \times \left( Area\right) $$</mathjax>
which is just
<mathjax>$$ F_{rad} = 2 \frac{h\nu}{c} \times \frac{L_\odot}{4\pi r^2
h\nu} \times \pi R^2 = \frac{L_{\odot} R^2}{2cr^2} .$$</mathjax>
So combining the radiation force with the gravitation force, we have a
net force on the sail of
<mathjax>$$ F = \left( \frac{L_{\odot} R^2}{2c} - GM_{\odot}m \right)
\frac{1}{r^2} .$$</mathjax>
This can then be integrated over r to find an effective potential,
giving:
<mathjax>$$ U = \left( \frac{L_{\odot} R^2}{2c} -
GM_{\odot}m\right)\frac{1}{r} .$$</mathjax>
For simplicity, let’s just write that
<mathjax>$$ \alpha = \frac{L_{\odot} R^2}{2c} - GM_{\odot}m $$</mathjax>
so
<mathjax>$$ U = \frac{\alpha}{r} .$$</mathjax>
Now we can start saying some things about this sail. The most
straightforward quantity to find would be the maximum velocity. By
conservation of energy (and starting from some r_0 at rest), we have
that
<mathjax>$$ v_f = \left[\frac{2\alpha}{m} \left(\frac{1}{r_0} -
\frac{1}{r_f} \right) \right]^{1/2} $$</mathjax>
So as r_f goes to very large values, the subtracted piece gets smaller
and smaller. In the limit that r_f goes to infinity we have that
<mathjax>$$ v_{max} = \left(\frac{2\alpha}{mr_0}\right)^{1/2} .$$</mathjax>
Plugging back in our long term for alpha and plugging in some numbers we
get:
<mathjax>$$ v_{max} = 42,000 m/s \left( \frac{1.5 \times 10^{-4}}{\sigma} -
1\right)^{1/2} $$</mathjax>
where sigma is just the surface mass density [g/cm^2] of the sail.
Below is a plot of maximum velocity ( m/s) plotted against surface mass
density (g/cm^2). For a sigma of 10^-4 g/cm^2, we get a max velocity
of about 30,000 m/s. Not bad.
<a href="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_CjbwxG-JI/AAAAAAAAAB4/PS7tTqbmLUE/s1600/maxvel.png"><img alt="image" src="http://4.bp.blogspot.com/_fa6AZDCsHnY/S_CjbwxG-JI/AAAAAAAAAB4/PS7tTqbmLUE/s400/maxvel.png" /></a>
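For anyone who wants to check the 30,000 m/s figure, here is a sketch of the same calculation in SI units. Once you write m = sigma·pi·R^2, the sail radius R cancels out of v_max, leaving sigma as the only sail parameter (constants rounded to two figures):

```python
import math

# v_max = sqrt(2*alpha/(m*r0)) with alpha = L*R^2/(2c) - G*M_sun*m
# and m = sigma*pi*R^2; R cancels, leaving v_max as a function of sigma.
L_SUN = 3.8e26    # W, solar luminosity
M_SUN = 2.0e30    # kg, solar mass
G     = 6.67e-11  # m^3 kg^-1 s^-2
C     = 3.0e8     # m/s
R0    = 1.5e11    # m, starting distance (1 AU)

def v_max(sigma):
    """Max coasting speed (m/s) for sail surface density sigma [kg/m^2]."""
    term = L_SUN / (math.pi * C * sigma * R0) - 2 * G * M_SUN / R0
    return math.sqrt(term) if term > 0 else 0.0  # too-heavy sails never escape

print(v_max(1e-3))  # 1e-4 g/cm^2 expressed in kg/m^2
```

Plugging in sigma = 10^-4 g/cm^2 (which is 10^-3 kg/m^2) reproduces the ~30,000 m/s quoted above.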
From this graph we see that there must be some maximum surface density,
above which we don’t get any (forward) motion at all. This makes sense
since we want our radiation forces (which scale with area) to overcome
our gravitational forces (which scale with mass). And below this maximal
surface density we see a power law behavior. Cool.
We can also find the distance traveled as a function of time. Taking the
final velocity equation above and writing v as dr/dt, we see that
<mathjax>$$ \frac{dr}{dt} = \left[ \frac{2\alpha}{m} \left( \frac{1}{r_0}
- \frac{1}{r_f} \right)\right]^{1/2} $$</mathjax>
Rearranging and integrating, we can get time (in years) as a function of
distance r (in <span class="caps">AU</span>):
<mathjax>$$ t = \frac{0.11\left[\sqrt{r(r-1)}+\ln\left(\sqrt{r}+\sqrt{r-1}\right)\right]}{\sqrt{\frac{1.5\times 10^{-4}}{\sigma}-1}} $$</mathjax>
A plot of t vs. r is shown below for typical solar system distances and
a sigma of 10^-4 g/cm^2. We assume that we are launching from earth (1
<span class="caps">AU</span>). Since Pluto is at a distance of about 40 <span class="caps">AU</span>, we see that our sail
could get there in less than 7 years. For comparison, the <a href="http://pluto.jhuapl.edu/">New
Horizons</a> probe will use conventional
propulsion to get to Pluto in 9.5 years (and it is the fastest
spacecraft ever made).
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_Ck6enlpVI/AAAAAAAAACI/_gjIHLCm_G8/s1600/ssplutolong.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_Ck6enlpVI/AAAAAAAAACI/_gjIHLCm_G8/s400/ssplutolong.png" /></a>
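If you want to evaluate the travel-time formula without firing up Mathematica, here is a direct transcription (t in years, r in AU, sigma in g/cm^2; it is only valid for sigma below the critical 1.5×10^-4 g/cm^2):

```python
import math

# Travel time from rest at 1 AU out to r [AU], transcribing the
# formula above. Valid only for sigma < 1.5e-4 g/cm^2.
def travel_time_years(r_au, sigma=1e-4):
    k = 0.11 / math.sqrt(1.5e-4 / sigma - 1.0)  # prefactor, in years
    u = r_au  # u = r/r0 with r0 = 1 AU
    return k * (math.sqrt(u * (u - 1.0)) + math.log(math.sqrt(u) + math.sqrt(u - 1.0)))

print(travel_time_years(40))  # Pluto's orbit, ~40 AU
```

Plugging in 40 AU reproduces the under-7-years figure quoted above.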
Zooming in to our starting point around 1 <span class="caps">AU</span>, we see that there is a
period of acceleration and then the maximum velocity is reached after a
few months. Just eyeballing it, it looks like it takes at least a month
to reach appreciable speed. That it takes so long is a result of the
very small forces involved due to radiation pressure. But even a small
acceleration amounts to a considerable speed if applied for long enough!
<a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_CopsUAIrI/AAAAAAAAACQ/_-IqCnf8xs8/s1600/sscloseup.png"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/S_CopsUAIrI/AAAAAAAAACQ/_-IqCnf8xs8/s400/sscloseup.png" /></a>
Now Pluto is fine I guess (it’s the second largest dwarf-planet!), but
how about some interstellar flight? Well, the nearest star is Proxima
Centauri which is about a parsec away. A parsec is 3*10^16 m, or about
200,000 <span class="caps">AU</span>. From the plot below (or plugging in to the equation above),
we see that such a trip would take of order 10,000 years. That’s a long
time, but it’s not too shabby considering this craft uses no fuel of its
own.
<a href="http://3.bp.blogspot.com/_fa6AZDCsHnY/S_Cgqa_fqDI/AAAAAAAAABw/xxgjj5VEZww/s1600/ssTOTHESTARS.png"><img alt="image" src="http://3.bp.blogspot.com/_fa6AZDCsHnY/S_Cgqa_fqDI/AAAAAAAAABw/xxgjj5VEZww/s400/ssTOTHESTARS.png" /></a>
So solar sails can do some fairly impressive things simply by harnessing
the free energy of the sun. Though this only provides a very small
acceleration, it can be taken over a long enough time to be useful.
However, since the radiation pressure of the sun falls off as 1/r^2, we
start to observe diminishing returns and the sail reaches a max
velocity. But overall the numbers seem fairly impressive. All that
remains now is whether they are feasible to construct. Right now my only
data point for feasibility was that it was in <a href="http://www.starwars.com/databank/starship/solarsailer/">Star
Wars</a>, but as I
recall that was a <em>long</em> time ago.</p>Solar Sails Addendum I2010-05-16T23:28:00-04:00Corkytag:thephysicsvirtuosi.com,2010-05-16:posts/solar-sails-addendum-i.html<p>As requested, below is an explicit evaluation of the silly looking
integral in <a href="http://thevirtuosi.blogspot.com/2010/05/solar-sails-i.html">Solar Sails
I</a>. If you
just want some hints to do the integral, see <a href="http://thevirtuosi.blogspot.com/2010/05/solar-sails-addendum-ii.html">Solar Sails Addendum
<span class="caps">II</span></a>.
Here we present a step-by-step solution of the differential equation: <mathjax>$$
\frac{dr}{dt} = \left[ \frac{2\alpha}{m} \left( \frac{1}{r_0} -
\frac{1}{r} \right)\right]^{1/2} $$</mathjax> This is just a separable
equation, so we rearrange to get an integral equation: <mathjax>$$
\int_{r_0}^{r_f} \frac{dr}{\left[ \frac{2\alpha}{m} \left(
\frac{1}{r_0} - \frac{1}{r} \right)\right]^{1/2}}
=\int_{0}^{t}dt$$</mathjax> We see that the right hand side here will just
evaluate to t. Let’s rearrange the left hand side to get the integration
variable to be dimensionless. This is important because it allows the
integral to just be a number, with all the unit-dependent terms pulled
outside. So we have <mathjax>$$ \int_{r_0}^{r_f} \frac{dr}{\left[
\frac{2\alpha}{mr_0} \left( 1 - \frac{r_0}{r}
\right)\right]^{1/2}} =t$$</mathjax> Now we can do a change of
variables to get the dimensionless variable u = r/r0. This is just
giving us our distance in terms of our initial distance. In the problem
I took r_0 to be 1 <span class="caps">AU</span>. So u just gives our distance now in terms of <span class="caps">AU</span>:
<mathjax>$$ \int_{1}^{u_f} \frac{du}{\left[ \frac{2\alpha}{m{r_0}^3}
\left( 1 - \frac{1}{u} \right)\right]^{1/2}} =t$$</mathjax> So let’s set <mathjax>$$ k
= \left(\frac{m{r_0}^3}{2 \alpha}\right)^{1/2} ,$$</mathjax> which just gives <mathjax>$$
k\int_{1}^{u_f} \frac{du}{\left[ \left( 1 - \frac{1}{u}
\right)\right]^{1/2}} =t$$</mathjax> Now we are ready to get started!
Typically, when I see something with a square root in the denominator
that’s giving me trouble, I just blindly try trig substitutions. Let’s
try u = [csc(x)]^2, so 1/u = [sin(x)]^2 and du = -2 [csc(x)]^2 cot(x) dx,
and <mathjax>$$ \int_{1}^{u_f} \frac{du}{\left[ \left( 1 - \frac{1}{u}
\right)\right]^{1/2}} =\int_{x_0}^{x_f} \frac{-2\csc^2x \cot
x dx}{\left(1-\sin^2x \right)^{1/2}} = \int_{x_0}^{x_f}
-2\csc^3x dx ,$$</mathjax> where x_0 = pi/2 and x_f = arcsin(1/sqrt(u_f)). The last
equality above just comes from simplifying the trig expressions. So now
how do we solve this “easier” problem? As a wise man once said, “When in
doubt, integrate by parts.” So let’s try that. Expanding out a bit we
see that: <mathjax>$$ -\int_{x_0}^{x_f} \csc^3x dx =
\int_{x_0}^{x_f}\csc x \left(-\csc^2x \right)dx $$</mathjax> Remembering
that integration by parts goes like <mathjax>$$ \int u dv = uv - \int v du $$</mathjax>
we can set u = csc x and dv = -csc^2(x) dx (so v = cot x) to get
<mathjax>$$ \int_{x_0}^{x_f}\csc x \left(-\csc^2x \right)dx = \csc x \cot x \Big |_{x_0}^{x_f} - \int_{x_0}^{x_f} \cot x \left(-\csc x \cot x \right) dx $$</mathjax>
which is just
<mathjax>$$ -\int_{x_0}^{x_f}\csc^3x dx = \csc x \cot x \Big |_{x_0}^{x_f} + \int_{x_0}^{x_f} \cot^2 x \csc x dx $$</mathjax>
Remembering that
<mathjax>$$ \cot^2 x = \csc^2 x -1 ,$$</mathjax>
we have
<mathjax>$$ -\int_{x_0}^{x_f}\csc^3x dx = \csc x \cot x \Big |_{x_0}^{x_f} + \int_{x_0}^{x_f} \left(\csc^2 x - 1 \right) \csc x dx $$</mathjax>
which we can expand to
<mathjax>$$ -\int_{x_0}^{x_f}\csc^3x dx = \csc x \cot x \Big |_{x_0}^{x_f} - \int_{x_0}^{x_f} \csc x dx + \int_{x_0}^{x_f} \csc^3 x dx $$</mathjax>
But this is just what we want! Rearranging, we now have that
<mathjax>$$ -2\int_{x_0}^{x_f}\csc^3x dx = \csc x \cot x \Big |_{x_0}^{x_f} - \int_{x_0}^{x_f} \csc x dx $$</mathjax>
Remembering that the integral of csc is
<mathjax>$$ \int \csc x dx =-\ln | \csc x + \cot x| + C $$</mathjax>
and evaluating our limits, we have that
<mathjax>$$ -2\int_{x_0}^{x_f}\csc^3x dx = \csc x_f \cot x_f - \csc x_0 \cot x_0 +\ln \left( \frac{| \csc x_f + \cot x_f|}{| \csc x_0 + \cot x_0|} \right) $$</mathjax>
Now we just need to evaluate at x_0 = pi/2 and x_f = arcsin(1/sqrt(u_f)). We can draw a right triangle with far angle x_f, hypotenuse of length sqrt(u), and legs of length 1 and sqrt(u-1) to see that csc x_f = sqrt(u) and cot x_f = sqrt(u-1), so
<mathjax>$$ -2\int_{x_0}^{x_f}\csc^3x dx = \sqrt{u}\sqrt{u-1} + \ln{ | \sqrt{u} + \sqrt{u-1}|} $$</mathjax>
And this is what we have sought from the beginning. Using the last two equations above, we see that
<mathjax>$$ t = k\int_{1}^{u_f} \frac{du}{\left[ \left( 1 - \frac{1}{u} \right)\right]^{1/2}} = k\left[-2\int_{x_0}^{x_f}\csc^3x dx \right] = k \left[\sqrt{u}\sqrt{u-1} + \ln{ | \sqrt{u} + \sqrt{u-1}|} \right] $$</mathjax>
Plugging back in our value of k, we have
<mathjax>$$ t = \left(\frac{m{r_0}^3}{2\alpha} \right)^{1/2}\left[\sqrt{u}\sqrt{u-1} + \ln{ | \sqrt{u} + \sqrt{u-1}|} \right] $$</mathjax>
Simplifying a bit and dropping the absolute value bars (their arguments are never negative here), we have our final answer:
<mathjax>$$ t = \left(\frac{m{r_0}^3}{2\alpha} \right)^{1/2}\left[\sqrt{u(u-1)} + \ln{ \left( \sqrt{u} + \sqrt{u-1}\right)} \right] $$</mathjax>
where u = r / r_0 is our
non-dimensional distance measurement. And this is (up to some
rearranging) exactly what we get in the initial post.</p>Solar Sails Addendum II2010-05-16T23:27:00-04:00Corkytag:thephysicsvirtuosi.com,2010-05-16:posts/solar-sails-addendum-ii.html<p>This is the schematic version, if you just wanted hints. The full
solution is given in <a href="http://thevirtuosi.blogspot.com/2010/05/solar-sails-addendum-i.html">Solar Sails Addendum
I</a>.
Here we present a schematic solution of the differential equation: <mathjax>$$
\frac{dr}{dt} = \left[ \frac{2\alpha}{m} \left( \frac{1}{r_0} -
\frac{1}{r} \right)\right]^{1/2} $$</mathjax> This is just a separable
equation, so we rearrange to get an integral equation: <mathjax>$$
\int_{r_0}^{r_f} \frac{dr}{\left[ \frac{2\alpha}{m} \left(
\frac{1}{r_0} - \frac{1}{r} \right)\right]^{1/2}} =
\int_{0}^{t}dt$$</mathjax> From here it’s nice to non-dimensionalize, so our
integration variable is just a number (with no units attached). This
allows us to get the integral into a form like <mathjax>$$ k\int_{1}^{u_f}
\frac{du}{\left[ \left( 1 - \frac{1}{u} \right)\right]^{1/2}} =
t$$</mathjax> for appropriate values of k and u. Now we are ready to get started!
Typically, when I see something with a square root in the denominator
that’s giving me trouble, I just blindly try trig substitutions. After
an appropriate trig substitution, we get something of the form <mathjax>$$
\int_{1}^{u_f} \frac{du}{\left[ \left( 1 - \frac{1}{u}
\right)\right]^{1/2}} = \int_{x_0}^{x_f} -2\csc^3x dx$$</mathjax> So
now how do we solve this “easier” problem? As a wise man once said,
“When in doubt, integrate by parts.” So let’s try that. <mathjax>$$ \left[
\mbox{HINT:} -\int_{x_0}^{x_f} \csc^3x dx =
\int_{x_0}^{x_f}\csc x \left(-\csc^2x \right)dx \right]$$</mathjax>
Remembering that integration by parts goes like <mathjax>$$ \int u dv = uv -
\int v du $$</mathjax> we can pick appropriate values of u and v to get something
nice, which eventually leads to <mathjax>$$ -\int_{x_0}^{x_f}\csc^3x dx =
\csc x \cot x \Big |_{x_0}^{x_f} - \int_{x_0}^{x_f} \csc x
dx + \int_{x_0}^{x_f} \csc^3 x dx$$</mathjax> But this is just what we
want! Rearranging we now have that <mathjax>$$ -2\int_{x_0}^{x_f}\csc^3x
dx = \csc x \cot x \Big |_{x_0}^{x_f} - \int_{x_0}^{x_f}
\csc x dx$$</mathjax> Evaluating our integrals, we see that <mathjax>$$
-2\int_{x_0}^{x_f}\csc^3x dx = \sqrt{u}\sqrt{u-1} + \ln{ |
\sqrt{u} + \sqrt{u-1}|} $$</mathjax> And this is what we have sought from the
beginning. Plugging back in to our earlier equations and rearranging
gives <mathjax>$$ t = \left(\frac{m{r_0}^3}{2\alpha}
\right)^{1/2}\left[\sqrt{u(u-1)} + \ln{\left( \sqrt{u} +
\sqrt{u-1}\right)} \right] $$</mathjax> where u = r / r_0 is our
non-dimensional distance measurement. And this is (up to some algebra)
exactly what we get in the initial post.</p>Why Black Holes from the Large Hadron Collider Won’t Destroy the World2010-05-16T19:53:00-04:00Nic Eggerttag:thephysicsvirtuosi.com,2010-05-16:posts/why-black-holes-from-the-large-hadron-collider-won-t-destroy-the-world.html<p>Hi everyone. As this is my first post, I thought I’d introduce myself.
Like the rest of the Virtuosi, I’m a graduate student in physics at
Cornell University. I work in experimental particle physics, in
particular on the Compact Muon Solenoid, one of the detectors at the
Large Hadron Collider. I’ll post more on what I actually do at some
point in the future, but I thought I’d start with a post in the spirit
of some of the other fun calculations that we’ve done. My goal is to
convince you that black holes created by the <span class="caps">LHC</span> cannot possibly destroy
the world.
To start with, the main reason no one working on the <span class="caps">LHC</span> is too
concerned about black holes is because of <a href="http://en.wikipedia.org/wiki/Hawking_radiation">Hawking
radiation</a>. While we
usually think of black holes as objects that nothing can escape from,
Stephen Hawking predicted that black holes actually do emit some light,
losing energy (and mass) in the process. In the case of the little bitty
black holes that the <span class="caps">LHC</span> could produce, they should just evaporate in a
shower of Hawking radiation.
That’s great you say, but Hawking radiation has never actually been
observed. What if Hawking is wrong and the black holes won’t evaporate?
Well, the usual next argument is that <a href="http://en.wikipedia.org/wiki/Cosmic_ray">cosmic
rays</a> from space bombard the
earth all the time, producing collisions many times more energetic than
what we’ll be able to produce at the <span class="caps">LHC</span>. To me, this is a fairly
convincing argument. However, let’s pretend we don’t know about these
cosmic rays and that there’s no Hawking radiation. We can calculate what
effect black holes produced by the <span class="caps">LHC</span> would have on the earth if they
do stick around.
To start out with, the most massive black hole the <span class="caps">LHC</span> could produce
would be around 10 Tera-electron-volts, or TeV. We’re probably
overestimating here. The eventual goal is for the <span class="caps">LHC</span> collisions to be
14 TeV, but producing a particle with the entire collision energy is
incredibly unlikely (see <a href="http://www.scientificblogging.com/quantum_diaries_survivor/fascinating_new_higgs_boson_search_dzero_experiment">Tomasso Dorigo’s
post</a>
for more details on why, along with more details than you probably
wanted to know about hadron colliders). However, we want to think about
the worst case scenario here, and we’re just going to do an order of
magnitude calculation, so 10 TeV is a good number. Note that I’m using a
particle physics convention here of giving masses in terms of energies
using E=mc^2. For reference, 10 TeV is about 1000 times lighter than a
small virus.
Now from the mass of our black hole, we can get its size by calculating
something called the <a href="http://en.wikipedia.org/wiki/Schwarzschild_radius">Schwarzschild
radius</a>. The
Schwarzchild radius for a black hole of mass m is given by
<mathjax>$$r = \frac{2Gm}{c^2}\text{.}$$</mathjax>
Here G is Newton’s gravitational constant and c is the speed of light.
Plugging our mass in gives us
<mathjax>$$r = 10^{-50} \text{meters.}$$</mathjax>
This is incredibly small! In fact as I was writing this, I realized that
it’s actually smaller than the Planck length, which means our equation
for the Schwarzschild radius may be somewhat suspect. Nonetheless, let’s
hope that if we ever figure out quantum gravity, it gives us a
correction of order one and proceed with our calculation, which is just
an order-of-magnitude affair anyway.
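Here is that arithmetic spelled out (constants rounded to a few figures, which is all an order-of-magnitude estimate deserves):

```python
# Schwarzschild radius r = 2*G*m/c^2 for a 10 TeV black hole.
G  = 6.67e-11  # m^3 kg^-1 s^-2
C  = 3.0e8     # m/s
EV = 1.6e-19   # J per eV

m_bh = 10e12 * EV / C**2   # 10 TeV converted to kg via E = m c^2
r_s  = 2 * G * m_bh / C**2
print(r_s)                 # a few times 1e-50 m
```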
Now, anything that enters the Schwarzschild radius of the black hole is
absorbed by it. The lightest thing that we could imagine the black hole
swallowing is an electron. Let’s figure out how long on average a black
hole would have to travel through material with the density of the earth
before it absorbs an electron. In the spirit of considering the worst
case scenario, we’ll have the black hole travel at the speed of light,
and consider the earth to be the density of lead.
We could do a complicated cross-section calculation to find the rate at
which the black hole accumulates mass, but we can also get it right up
to factors of pi through unit analysis. We know that the answer should
involve the area of the black hole, the density of the earth, and the
speed of the black hole. We want our answer to have units of mass per
time to represent the mass accumulation rate of the black hole. The only
combination that gives the right units is
<mathjax>$$a=\frac{\text{mass}}{\text{time}} = \rho c
r^2=\frac{10,000\text{ kg}}{\text{m}^3}\times\frac{3\times 10^8
\text{ m}}{\text{s}}\times(10^{-50}\text{ m})^2 =
10^{-88}\text{ kg/s}\text{.}$$</mathjax>
Alright, now that we know how fast our black hole accumulates mass,
let’s figure out how long it takes it to accumulate an electron. The
electron mass is
<mathjax>$$m_e = 10^{-30}\text{kg,}$$</mathjax>
so the time to accumulate an electron is
<mathjax>$$t = \frac{m_e}{a} = 10^{58}\text{s.}$$</mathjax>
Now, the current age of the universe is 10^17 seconds. The time it
takes our black hole to accumulate an electron is longer than the age of
the universe by many orders of magnitude! So, if the <span class="caps">LHC</span> produces black
holes, and if Hawking is wrong, the black holes will just fly straight
through the earth without interacting with anything. Even if we take the
size of the black hole to be the Planck length, our black hole
accumulates an electron in about 10^27 seconds, which is still much longer
than the age of the universe.
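The same accretion estimate in code (round numbers throughout; the two radii are the ones discussed above):

```python
# Mass accumulation rate a = rho * c * r^2 from the unit analysis above;
# the time to sweep up one electron mass is then t = m_e / a.
RHO = 1e4    # kg/m^3, roughly the density of lead
C   = 3.0e8  # m/s, black hole speed (worst case: light speed)
M_E = 1e-30  # kg, electron mass to one digit

def accretion_time_s(r_m):
    """Seconds for a black hole of radius r_m [m] to absorb one electron mass."""
    return M_E / (RHO * C * r_m**2)

AGE_OF_UNIVERSE = 1e17  # s

t_schwarzschild = accretion_time_s(1e-50)    # the ~1e-50 m radius found above
t_planck        = accretion_time_s(1.6e-35)  # if the hole were Planck-sized
```

Either way, the black hole would take vastly longer than the age of the universe to swallow even a single electron.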
So the moral of the story is that you should be excited about the new
discoveries that the <span class="caps">LHC</span> might produce, and you don’t need to worry
about black holes.</p>Freezing in Space II - Turn On The Sun!2010-05-13T23:38:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-13:posts/freezing-in-space-ii-turn-on-the-sun-.html<p>Yesterday I considered how long it would take a human to <a href="http://thevirtuosi.blogspot.com/2010/05/freezing-in-space-i-blackest-night.html">freeze in
space</a>.
However, I considered only what would happen if you were not absorbing
any radiation from nearby sources. Today we consider what happens if you
do have hot objects nearby. Namely, the sun. The sun provides a lot of
energy, even as far away from it as we are. It keeps the earth at a
comfortable ~20 C, good for us humans, and provides the energy for life
on earth, also good for us humans. That’s a lot of energy. So maybe the
sun can keep you alive when you’re adrift in space. Or at least keep you
warm. I still think you’ll asphyxiate. From here on out we’re going to
assume that we are adrift in space near earth. You were out for a
joyride in that new spaceship of yours and something went horribly
wrong. We could go through a whole song and dance of calculating how
much power the sun delivers to the earth, but we won’t (if you’re
interested, let me know, and I can do that later). Instead, we’ll quote
the known result, that the sun delivers ~1370 W/m^2 in the vicinity of
the earth. To find out what temperature we would cool to we set the
power we absorb from the sun equal to the power we radiate
<mathjax>$$P_{sun}=P_{rad}$$</mathjax>
<mathjax>$$1370W/m^2*A_{ab}*e_{ab}=e_{rad}*A_{rad}*\sigma*T^4$$</mathjax> Where
A_ab is the surface area absorbing the suns power, e_ab is a factor
between 0 and 1 that indicates how much of the incident power we
actually absorb, and e_rad is the emissivity of us, while A_rad is our
radiating area. Note that the emitting and absorbing areas are not the
same! Take a simple example. If you put a sheet in space, and face the
flat side towards the sun, it will only absorb energy from the sun on
one side, but it will radiate energy from both sides. Likewise e_ab and
e_rad are not necessarily equal because we are radiating and absorbing
at different wavelengths. We can solve the above equation for T, giving
<mathjax>$$T=\left(\frac{A_{ab}}{A_{rad}}\right)^{1/4}\left(\frac{e_{ab}}{e_{rad}}\right)^{1/4}\left(\frac{1370W/m^2}{\sigma}\right)^{1/4}$$</mathjax>
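This formula is easy to play with numerically. A sketch, taking the simplifying assumption e_ab = e_rad (which the next step also makes):

```python
# Equilibrium temperature as a function of the absorbing/radiating
# area ratio: T = (ratio * S / sigma_SB)^(1/4), assuming e_ab = e_rad.
S        = 1370.0   # W/m^2, solar flux near Earth
SIGMA_SB = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def t_eq(area_ratio):
    """Equilibrium temperature (K) for a given A_ab/A_rad."""
    return (area_ratio * S / SIGMA_SB) ** 0.25

print(t_eq(0.25))  # a sphere, like the earth
print(t_eq(0.3))   # a cylinder-person facing the sun
print(t_eq(0.5))   # a two-sided flat sheet
```

These three ratios reproduce the ~5 C, ~19 C, and ~58 C figures worked out below.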
For a first pass, we’ll make the simplifying assumption that
e_ab=e_rad. Given this,
<mathjax>$$T=394K*\left(\frac{A_{ab}}{A_{rad}}\right)^{1/4}$$</mathjax> Now, the
absorption area of an object is just the shape of the object flattened
into the plane the incident radiation is perpendicular to. That is, the
absorption area of a sphere is a circle (a sphere projected to 2D is
always a circle), while the absorption area of a cylinder could be a
sheet or a circle, or something stranger. The best area ratio we can
ever have is that of a flat sheet, which gives 1/2. For a sphere, like
the earth, the ratio is 1/4. As an aside, this gives an equilibrium
temperature of the earth as ~5 C, which is too cold. It turns out that
we shouldn’t neglect either the emissivity ratio or the natural
greenhouse effect in the case of the earth. Now, we need to figure out
the area ratio for us. In a <a href="http://thevirtuosi.blogspot.com/2010/05/human-radiation.html">previous
post</a> I
modeled myself as a cylinder with height 1.8 m and radius .14 m. Let us
assume we are facing the sun dead on, its rays beating down on our chests. This
gives the cross-sectional area of a sheet with width 2*.14 m = .28 m and
height 1.8 m. This is an area of .5 m^2, while my total surface area is
1.7 m^2. This gives an area ratio of ~.3, or an equilibrium
temperature of <mathjax>$$T=394K*(.3)^{1/4}\approx292K$$</mathjax> That is an
equilibrium temperature of 19 C. Not too cold, but certainly not body
temperature! So the sun will not save us. We also have to factor in the
fact that we reflect better in the visible than we do in the infrared, so
the emissivity ratio we set to 1 is probably less than that, reducing
our equilibrium temperature even more. It is interesting to note,
though, that if we model a human as a two sided sheet instead of a
cylinder, we can bring our equilibrium temperature up to 331 K. That’s
~58 C! So in our model, our geometrical assumptions determine whether
we freeze or die of heat stroke. Finally, since it looks like the
sun may not save us, let’s see how much it might slow down our
temperature loss. Instead of a net loss of 860W at body temperature, as
we calculated yesterday, sticking with our cylindrical human we’ll have
a net loss of <mathjax>$$860W-1370W/m^2(.5m^2)=175W$$</mathjax> Similarly at our lowered
body temperature of ~30 C, we’ll be losing a net of ~105 W. Once again
taking a geometric average gives an average power loss of ~135 W. Using
the energy to cool that we found yesterday, it would take 16300 s, or 4.5
hours to freeze! Also note that if you’re getting too hot or cold, given
how much the geometry plays into things, by changing your orientation to
the sun you’ll be able to have a certain amount of control over how much
you heat up or cool down. Also, make sure you rotate yourself so that
you end up evenly heated, and not roasted on one side and frozen on the other!</p>Freezing in Space I - Blackest Night2010-05-12T22:26:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-12:posts/freezing-in-space-i-blackest-night.html<p>In the last post I made, I discussed the fact that <a href="http://thevirtuosi.blogspot.com/2010/05/human-radiation.html">humans radiate
energy</a>.
In that post I calculated that we actually radiate quite a lot of power.
This immediately raises a few questions, the most obvious one being: How
long would it take you to freeze in space? This question is
multifaceted, and I’m going to split it between two parts. This first
part, ‘Blackest Night’ is how quickly we’d freeze if we were completely
lost in space, nothing anywhere near. The second part, ‘Turn On The
Sun!’ will address what would happen in near earth orbit. We need to
clarify what we mean when we say freezing in space. The fatal
temperature change for a human is (according to the all-knowing
internet) roughly a drop of 7 C. If you remember my previous post, we
calculated that we would radiate about 860 W. Now, we have to ask how
much energy it takes to change our temperature by 7 C. Well, as I’ve
discussed a
<a href="http://thevirtuosi.blogspot.com/2010/04/beer-diet.html#more">few</a>
<a href="http://thevirtuosi.blogspot.com/2010/04/falling-water-hot-or-cold.html">times</a>,
the energy it takes to change the temperature is given by <mathjax>$$Q=mc\Delta
T$$</mathjax> The mass of a human is ~75 kg. We’re mostly water, so let’s just
assume that we are all water. This gives us a specific heat of 4.2
kJ/kg*K. The energy needed for our 7 C temperature change is then
<mathjax>$$Q=(75kg)(4.2kJ/kg*K)(7K)=2.2 MJ$$</mathjax> By the time we have dropped to 30
C, we are only radiating a power of <mathjax>$$P=eA\sigma
T^4=(.97)(1.7m^2)(5.67\cdot10^{-8}W/m^2K^4)(303K)^4=790W$$</mathjax> Let
us assume that the average power radiated is the geometric average of
these two powers, <mathjax>$$P_{avg}=\sqrt{(860W)(790W)}\approx825W$$</mathjax> This
gives us a time to freeze of <mathjax>$$t=\frac{2.2MJ}{825W}\approx2700s$$</mathjax> This
is ~45 minutes. So you’ve got 45 minutes until a deadly freeze in deep
space. Seems a rather long time, does it not? I’m fairly certain that
you’d freeze much faster in Antarctica than in deep space. Why? Because in
Antarctica you have more cooling mechanisms than just radiation: you
have conduction to the air around you and convection of that warmer air
away from your skin. If you’re adrift in space, for all that it is
rather cold, the good news is that you’ll asphyxiate before you freeze!
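The whole estimate chains together in a few lines of code. Here’s a sketch (mine) using the post’s assumed numbers: a 75 kg all-water human with 1.7 m^2 of skin at emissivity .97.

```python
# Time to radiatively cool by a fatal 7 K in deep space, following the
# post's estimate (all inputs are the post's assumed values).
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
e, A = 0.97, 1.7     # emissivity of skin, surface area (m^2)
m, c = 75.0, 4200.0  # body mass (kg), specific heat of water (J/kg/K)

def radiated_power(T):
    """Power (W) radiated by the body at absolute temperature T."""
    return e * A * SIGMA * T**4

Q = m * c * 7.0  # energy lost in dropping 7 K: ~2.2 MJ
P_avg = (radiated_power(310.0) * radiated_power(303.0)) ** 0.5  # geometric mean
print(Q / P_avg / 60.0)  # ~45 minutes to a deadly freeze
```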
All of this was done assuming that there’s no energy gain from anywhere.
That is, that we’re stuck somewhere in the deepest space, the blackest
night. Tomorrow we’ll consider what would happen if you were in near
earth orbit, with all of these lovely energy sources around,
particularly the sun.</p>Human Radiation2010-05-09T00:24:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-09:posts/human-radiation.html<p>Things are still busy here at the Virtuosi. Hopefully in a week or so
we’ll be back to normal, and much more active than we’ve been recently.
Anyways, today I’d like to consider human radiation. It is well known
that any object will radiate energy based on its temperature. Even more
interesting, we radiate at all wavelengths, though at the human body
temperature our radiation is sharply peaked in the infrared. Even so, we
still put out some x-ray radiation. As a professor of mine once said,
consider that next time you sleep with someone! Given all this, the
question on my mind today is: how does the energy we radiate daily
compare to the energy we consume? That is, why don’t I lose weight
sitting here typing on the computer? We physicists call perfect
radiators black bodies (something that radiates perfectly also must
absorb perfectly). For perfect radiators, the power radiated is given by
<mathjax>$$P=\sigma A T^4$$</mathjax> where sigma is the Stefan-Boltzmann constant, A is
the surface area of the radiator, and T is the temperature of the
radiator. For objects that are not perfect emitters or perfect
absorbers, we throw in a fudge factor, e, the emissivity, which is
between 0 and 1. This makes the power emitted <mathjax>$$P=e \sigma A T^4$$</mathjax> To
figure out the power radiated by a human, we need to know three things.
The first is the emissivity of human skin. It turns out this is .97. The
second is the temperature of a human body, ~37 C. The third is the
surface area of a human. This requires a little estimation. I’m about
180 cm tall, and I wear 35” waist pants, so my radius is ~14 cm.
Modeling myself as a cylinder, I have a surface area of ~1.7 m^2. Now
we can estimate my power output: <mathjax>$$P=.97*5.67\cdot 10^{-8}W/m^2K^4
* 1.7 m^2 * (310K)^4$$</mathjax> <mathjax>$$P \approx 860W$$</mathjax> I’m powerful! That’s
about 14 (60W) lightbulbs! We’d like to compare that to our daily energy
intake, so we need to turn this power into an energy. Well, there are
86400 s in a day. So we radiate 74*10^6 J per day. If you read my
<a href="http://thevirtuosi.blogspot.com/2010/04/beer-diet.html">beer diet</a>
post, you’ll know the conversion between J and Calories (note the
capital C). If not, suffice it to say that 1 Calorie = 4.2*10^3 J. If
I consume about 2000 Calories a day (typical, right?), then I take in
about 8.4*10^6 J per day. So, dear reader, why haven’t I lost weight
while typing this post? There are a few answers. I’m wearing insulating
layers, clothing which keeps in some of my radiated heat. Also, our skin
temperature is lower than our internal temperature of 37 C. But more
than that, I’m absorbing energy from the surroundings. The earth is a
fairly good blackbody radiator, with an average temperature of ~20 C.
This means that my net power loss, with no clothing, would be about
<mathjax>$$\Delta P \approx (5.67\cdot 10^{-8}
W/m^2K^4)(1.7m^2)((310K)^4-(293K)^4)$$</mathjax> <mathjax>$$\Delta P = 180W$$</mathjax> This is
~15*10^6 J per day.
closer, and you can imagine that the rest of the difference can be made
up by our lower skin temperature, and clothing and such instruments of
men. Of course, these numbers are rough, so I don’t recommend the ‘naked
diet’, where you try to lose weight by walking around naked. Or if you
do try it, don’t say I told you to when you’re taken in for indecent exposure!</p>Letting Air Out of Tires II2010-05-04T14:52:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-04:posts/letting-air-out-of-tires-ii.html<p>In a <a href="http://thevirtuosi.blogspot.com/2010/05/letting-air-out-of-tires.html">recent
post</a>
I calculated how cold air coming out of bike tires should feel. However,
at the end of the post, I did note that there are competing explanations
for why the air cools. There’s the approach I took, which is adiabatic
cooling, but there’s also something called the Joule-Thomson effect. The
Joule-Thomson effect has the interesting property that helium being let
out of a bike tire would actually be warmer, which suggests an immediate
way to test which effect is dominant. We pressurize a bike tire with
helium, and see if the valve gets cold or hot. This is exactly what I
did. With the help of Mark and Vince, our local equipment gurus, I was
able to pressurize a bike tire to 26 psi with helium. Using one of those
little thermal measurers you can buy at radio shack, we measured the
initial temperature of the valve as 80 F. We then released the helium,
and measured the temperature of the valve as 73 F. The adiabatic
approach is the winner! Our experiment confirms that the dominant effect
of the cooling is the adiabatic cooling I talked about yesterday. The
Joule-Thomson effect may be at play, but if so it takes a secondary role
to the adiabatic cooling. Now, some of you may be saying: wait a second,
you predicted the air would be -100 F! It doesn’t feel that cold! Nor
did your valve cool down to -100 F! To which I reply: Yes, I did predict
very cold air. But you have to remember that it is mixing with a lot of
room temperature air, so it won’t feel as cold as I predicted. Nor will
it transfer much heat to the valve (recall, we predicted this would be
an adiabatic process, with absolutely no heat transfer, something that
is obviously false). Also, I didn’t have 60 psi of pressure. If we do
the calculation, 26 psi only gives a temperature of 250 K = -10 F.
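If you want to redo this prediction for other pressures or gases, the adiabatic formula from the earlier post is one line of code. A quick sketch (mine, not from the post), using gamma = 7/5 for diatomic air and gamma = 5/3 for monatomic helium:

```python
# Reversible adiabatic expansion: T_f = T_i * (P_i/P_f)^((1-gamma)/gamma).
def adiabatic_final_temp(T_i, P_i, P_f, gamma):
    return T_i * (P_i / P_f) ** ((1 - gamma) / gamma)

# A 26 psi tire venting to 1 atm (~15 psi), starting at room temperature:
print(adiabatic_final_temp(293.0, 26.0, 15.0, 7/5))  # air: ~250 K
print(adiabatic_final_temp(293.0, 26.0, 15.0, 5/3))  # helium: ~235 K
```

Note that in this purely adiabatic model helium would cool even more than air; the warming predicted by the Joule-Thomson effect is a separate mechanism, which is what made the helium test discriminating.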
Hopefully that answers your question. And now, dear reader, as I’ve
wanted to say for a while: Myth Busted!</p>Physics as Magic?2010-05-03T23:46:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-03:posts/physics-as-magic-.html<p>There’s a <a href="http://physicsbuzz.physicscentral.com/2010/05/iron-man-2-and-myth-of-physicist.html">nice
post</a>
over at Physics Buzz that I thought I might draw your attention to. The
central quote for me is: “Speaking strictly about technology - which is
often the knowledge attained by physicists put into practical use by
engineers - physics has created some pretty amazing things. Cars,
planes, iphones, medical treatments, lasers, 3-D movies, and the Large
Hadron Collider. We are constantly <span class="caps">WOWED</span> by science. Unfortunately, the
less someone understands how these things work, the more they begin to
believe anything is possible. In other words, if you don’t understand
the parameters that allow for amazing things (like jets!) you also don’t
understand the parameters that would prevent other things (like energy
generating heart replacements). If you don’t understand anything about
physics and technology, then it appears to be nothing short of magic,
and magic has no bounds…” I think this gets at some of the heart of
what we’re interested in doing here at the Virtuosi. By trying to strip
away some of the mysticism around physics, we hope to bring people to a
better understanding of what we do. Sure, we work fun problems, and discuss
interesting topics (at least, so I hope). More importantly though, while
doing so we display both the tools of physics, and how physicists think.
The reason I got started on this blog is because I feel there is a huge
gap in understanding between what physicists do and how we do it, and
what the general public perceives us as doing. I don’t think this gap is
good for anyone, and I think part of the reason for that is very well
articulated in the above quote. Of course, part of my hope for this blog
is that if more people are comfortable with physics, when I tell people
at a dinner party that I’m a physicist the response won’t be either “Oh,
I hated physics in high school” or “Oh, that’s nice.” End of conversation.</p>Letting Air Out of Tires2010-05-03T23:21:00-04:00Jessetag:thephysicsvirtuosi.com,2010-05-03:posts/letting-air-out-of-tires.html<p>Have you ever noticed how when you let air out of a bike tire (or, I
suppose, a car tire) it feels rather cold? Today we’re going to explore
why that is, and just how cold it is. Many people consider the air
escaping from a tire as a classic example of an adiabatic process. What
is an adiabatic process? It is a process that happens so quickly there
is no time for heat flow to occur. For our air in the bike tire this
means we’re letting it out of the tire so quickly that no energy can
move into it from the surrounding air. This may not be exactly true;
there may be a little energy flow, but there is little enough that we
can ignore it. Given that, how do we talk about temperature change?
Let’s give a physical motivation first. Imagine a gas as a collection of
hard spheres, like baseballs. Envision this bunch of baseballs in a box.
Suppose you make the volume of the box smaller by moving the walls in.
The baseballs will start to bounce around faster. Having trouble
thinking of this? Think of a single baseball in a box. Imagine it hits a
wall moving towards it. What happens? That’s just like what happens when
a baseball hits a baseball bat moving towards it: it goes flying. That
is, it starts moving faster. The speed with which these gas particles
are moving is what we measure as temperature. So shrinking our box
increases the temperature. Likewise, expanding our box will decrease the
temperature. The same principle holds here. Our gas is expanding from a
small volume (the bike tire) into a larger volume (the surrounding
world). Thus we expect the temperature to decrease.
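The baseball-in-a-box picture can be coded up directly. In this toy 1-D sketch (mine, not from the post), a ball bounces elastically between a fixed wall and a wall creeping inward; each head-on bounce off the moving wall adds twice the wall speed to the ball, so shrinking the box speeds the ball up.

```python
# Toy 1-D model of adiabatic compression: a ball bouncing between a fixed
# wall and a wall moving toward it at speed u. An elastic bounce off the
# moving wall takes the ball's speed from v to v + 2u (bounces off the
# fixed wall leave the speed unchanged).
def speed_after_bounces(v, u, n_moving_wall_bounces):
    for _ in range(n_moving_wall_bounces):
        v += 2.0 * u
    return v

# A 10 m/s "gas particle" after 50 bounces off a wall creeping in at 0.1 m/s:
print(speed_after_bounces(10.0, 0.1, 50))  # ~20 m/s: compression heats the gas
```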
Got all that? Good. Now for some math. It turns out that using the ideal
gas law, we can derive that for a (reversible) adiabatic process
<mathjax>$$PV^\gamma=constant$$</mathjax>
where P is the pressure of the gas, V is the volume of the gas, and
gamma is (for our purposes) just a number. The ideal gas law states that
<mathjax>$$PV=NkT$$</mathjax>
where N is the number of gas molecules we have, T is the temperature of
the gas, and k is a constant. Since N doesn’t change in the process
we’re considering, we can use this to rewrite the above equation, by
substituting for V. This gives
<mathjax>$$P^{1-\gamma}T^{\gamma}=constant$$</mathjax>
where the constant is not the same as above.
Because this is equal to a constant, we can say that our initial
pressure and temperature are related to our final pressure and
temperature by
<mathjax>$$P_i^{1-\gamma}T_i^{\gamma}=P_f^{1-\gamma}T_f^{\gamma}$$</mathjax>
We can solve this for the final temperature giving
<mathjax>$$T_f=T_i\left(\frac{P_i}{P_f}\right)^{\tfrac{1-\gamma}{\gamma}}$$</mathjax>
Finally, we can plug in some numbers. Gamma is 7/5 for diatomic gases
(which is most of air). If we assume the air is about room temperature,
20 C, and the tire is at 60 psi, this gives (1 atmosphere of pressure is
15 psi):
<mathjax>$$T_f=293K\left(\frac{60 psi}{15
psi}\right)^{\tfrac{1-7/5}{7/5}}=197K$$</mathjax>
Converting back from Kelvin to C, this is -76 C or -105 F. That’s cold!
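For readers who want to try other temperatures and pressures, here’s the same plug-and-chug as a short script (a sketch, mine; same numbers as above):

```python
# Adiabatic cooling of air let out of a tire:
# T_f = T_i * (P_i/P_f)^((1-gamma)/gamma), with gamma = 7/5 for diatomic air.
def final_temp(T_i, P_i, P_f, gamma=7/5):
    return T_i * (P_i / P_f) ** ((1 - gamma) / gamma)

T_f = final_temp(293.0, 60.0, 15.0)  # 60 psi tire venting to a 15 psi atmosphere
print(T_f)           # ~197 K
print(T_f - 273.15)  # ~ -76 C
```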
<strong>For the Expert:</strong>
There is actually a debate as to whether or not adiabatic cooling is
responsible for the chill of air upon being let out of a tire. The
argument for it is fairly straightforward. The released air does work on
the surrounding atmosphere as it leaves, lowering the energy of the gas.
If this is the primary effect, then the change in temperature is given
above. However, it is possible that we can consider this a free
adiabatic expansion. In a free adiabatic expansion (like a gas expanding
into a vacuum), there is no work done, because the gas is not acting
‘against’ anything.
The other possibility is the <a href="http://en.wikipedia.org/wiki/Joule%E2%80%93Thomson_effect">Joule-Thomson
effect</a>. I
don’t claim to understand this effect very thoroughly, but it is another
mechanism for cooling when air is let out of a well insulated valve.
I’ve seen claims both ways as to which process is actually responsible
for cooling.
Fortunately, a simple experiment suggests itself. Helium heats up
through the Joule-Thomson effect (when it starts above ~50 K). It will
cool down through the above-described adiabatic cooling. So, fill a bike
tire with Helium gas, and let it out. See if the valve/gas feels hot or
cold. This will determine the dominant effect. As an experimentalist,
this <a href="http://www.brightlywound.com/?comic=42">appeals greatly to me</a>.
But if any theorists out there have ideas, please speak up.</p>The Beer Diet2010-04-30T11:35:00-04:00Jessetag:thephysicsvirtuosi.com,2010-04-30:posts/the-beer-diet.html<p>I know it’s been pretty quiet over here this week. The semester is
winding down (a week of classes left), and that means that things have
been kicked up into another gear. We’ve got four or five ideas bouncing
around at the moment, so hopefully we’ll get some up soon. Today I’d
like to talk about the beer diet. A while back, there was a rumor going
around that if you drank ice cold beer your body would burn more
calories heating the beer than the beer contained. It turns out that
this is false, and I think the claim relied on a lack of knowledge that the
American food Calorie is actually one thousand calories (note the
difference in capitalization). Let’s prove this to ourselves. Let us
consider 12 fluid oz of beer. To any good approximation we can treat
this as 12 oz of water. Let us assume that the beer starts at 0C and our
body raises it to body temperature, 37C. This takes energy Q given by
<mathjax>$$Q=mc\Delta T$$</mathjax> Where m is the mass of the beer, c the specific heat
of water, and the last term the change in temperature. We convert out of
the archaic units (thanks google!) to get 12 oz = .35 liters. The
density of water is 1000kg/m^3, which is 1kg/liter. This gives us the
mass of the beer as .35 kg. The specific heat of water is 1 kcal/kg*C
(note: kcal = kilocalorie = 1 Calorie, i.e. 1000 calories), so the
energy it takes to change the temperature of our beer is
<mathjax>$$Q=(.35kg)(1kcal/kg*C)(37C-0C)$$</mathjax> <mathjax>$$Q=13kcal$$</mathjax> If memory serves, some
beer company recently had ads touting how their light beer was ‘under 100
Calories’. So, I imagine 100 kcal is a fairly good lower bound on the
calories in a 12 oz beer. So you can’t lose weight just by drinking
beer. Sad, isn’t it? Finally, we can see that if we didn’t understand
the distinction between Calorie and calorie, we might think that, since
it took 1300 calories to heat up the beer, and beer only contains 100
Calories, this would be a great way to lose weight! I leave it as a
question for you, dear readers, as to how much ice water you’d have to
drink to have an effective ice water diet. If it does become the next
fad diet, you heard about it here first!</p>Cell Phone Brain Damage… or not.2010-04-27T19:30:00-04:00Jstag:thephysicsvirtuosi.com,2010-04-27:posts/cell-phone-brain-damage-or-not-.html<p>For better or worse, cell phones are a part of our lives. I say this
because they can be convenient when needed, but there’s nothing more
annoying than a dropped call or an inconveniently timed ring tone. Since
many people carry their phones with them daily, there have been a number
of studies asking about the long-term health effects of cell phone
usage. While the <a href="http://www.google.com/search?q=cell+phone+tumor&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a">controversy
rages</a>
among medical researchers, I decided to find my own answers by doing a
calculation based on the power output of a simple hand set. The key
physical idea in this calculation is Faraday’s law. You can check
Wikipedia for a lengthy
<a href="http://en.wikipedia.org/wiki/Faraday%27s_law_of_induction">article</a>
describing it, but I’m more interested in getting down to business. Now,
from the antenna of every cell phone, an electromagnetic wave is
emitted. This <span class="caps">EM</span> wave has magnetic and electric components which
oscillate back and forth across space and through time. To quantify the
effects of this <span class="caps">EM</span> wave on a human being, I’m going to focus on just the
brain. In the simplest case, I would imagine it as a homogeneous blob.
Unfortunately, this model is too simplistic and loses all the
interesting physiology. Instead, I’m going to treat the brain as a
collection of loops made by neurons. In this model, I imagine that every
neuron is connected to many other neurons by electrically conducting
dendrites and axons. Some neurons will be connected to each other in two
places, thus creating a small continuous loop. This loop has some area
<mathjax>$$ A $$</mathjax> and as the <span class="caps">EM</span> wave from the cell phone passes by, it induces an
<span class="caps">EMF</span> a la Faraday’s law: <mathjax>$$ \epsilon = \frac{ d\phi}{dt}. $$</mathjax> Well,
since the flux <mathjax>$$ \phi = \textbf{B} \cdot \textbf{A} \approx \pi
r^2 E / c,$$</mathjax> and the electric field is an oscillatory function <mathjax>$$ E =
E_0 \cos \omega t,$$</mathjax> we have <mathjax>$$ \epsilon = \frac{\pi r^2 \omega
E_0}{c} \cos \omega t, $$</mathjax> where <mathjax>$$r$$</mathjax> is the radius of the neuron
loop, <mathjax>$$\omega = 2 \pi f$$</mathjax> is the frequency of cell phone carrier
waves and <mathjax>$$c $$</mathjax> is the speed of light. Taking the root mean square gets
rid of the cosine factor and gives <mathjax>$$ \epsilon_{rms} = \frac{ \pi
r^2 \omega}{c} E_{rms}. $$</mathjax> So, if I know the <span class="caps">RMS</span> amplitude of the <span class="caps">EM</span>
waves coming from my cell phone, then I will be able to calculate the
induced voltage in a given neuron loop. Fortunately, there’s a
convenient set of formulae which relates power output to the <span class="caps">RMS</span>
amplitude: <mathjax>$$ \frac{ \langle P \rangle}{A} = I = \langle u_E
\rangle c = \epsilon_0 c E^2_{RMS} \quad \rightarrow \quad
E_{RMS} = \left( \frac{ \langle P \rangle}{\epsilon_0 c A}
\right)^{1/2}. $$</mathjax> This line of gibberish is relating the intensity <mathjax>$$
I $$</mathjax> to the power per unit area as well as the average energy density,
which in turn is expressed in terms of <mathjax>$$E_{RMS}. $$</mathjax> The final
expression can now be substituted to find <mathjax>$$ \epsilon_{rms} = \frac{2
l^2 f}{c} \left( \frac{ \langle P \rangle}{\epsilon_0 c 4 \pi
R^2} \right)^{1/2}. $$</mathjax> Here, I’ve made a few additional substitutions
to get the formula in terms of real numbers. In particular, I’ve
converted from <mathjax>$$r, $$</mathjax> the radius of a neuron loop to <mathjax>$$l, $$</mathjax> the length
of an individual neuron when two of them form a loop. Also, I’m using
<mathjax>$$R$$</mathjax> to denote the distance from the cell phone antenna to the neuron
loop in your head. I think a reasonable number for this should be about
10 cm, yeah? Okay, now here comes the fun part. I asked around to look
at people’s cell phones, and I found that, roughly, they all run at 1 W of
power. Furthermore, according to an introductory physics text, cell
phones operate at around <mathjax>$$ 10^9 Hz $$</mathjax> and from a little web browsing,
<mathjax>$$ l = 10^{-3} \sim 10^1 $$</mathjax> meters. Since the length of neurons seems
to vary quite a bit, I’ve written the expression as <mathjax>$$ \epsilon = l^2
\times (360 V/m^2). $$</mathjax> So for two neurons about a mm long in the same
vicinity, there’s a pretty decent chance they’ll make contact with each
other at two points. Thus, the induced voltage is about 0.36 mV. On the
off chance two longer neurons form a loop (taking <mathjax>$$l \sim 10^{-2} $$</mathjax>
meters), the induced voltage is about 36 mV. Finally if two <span class="caps">REALLY</span> long
neurons (<mathjax>$$ l \sim 1$$</mathjax> meter) make a loop (mind you, the chances of
this are zero since the circular area they would form is bigger than
your body!), then the induced voltage is 360 V. Electrifying! To compare
these numbers to something useful, consider that the brain is constantly
sending electrical signals at around 100 mV (this is called the Action
Potential). Since the most probable induced voltage (0.36 mV) is
significantly smaller than the action potential (less than 1%), we can
conclude that the cell phone <span class="caps">EM</span> waves are an essentially negligible
effect. Like I said earlier, this is an extremely simple model for the
brain, but it does illustrate a particular mechanism by which common
technology can interfere with our biology. And yes, I trust that the
calculation is correct, but while the experiments are showing higher
incidence of tumors coinciding with cell phone use, well, I’ll just limit
my talk time.</p>Some of the Best Advice You’ll Ever Receive2010-04-25T20:29:00-04:00Alemitag:thephysicsvirtuosi.com,2010-04-25:posts/some-of-the-best-advice-you-ll-ever-receive.html<p>I came across what might be the best advice any student (nay any human
being) could possibly ever receive reading a book today…</p>
<blockquote>
<p>Now there are two ways in which you can increase your understanding of
these issues. One way is to remember the general ideas and then go
home and try to figure out what commands you need and make sure you
don’t leave one out. Make the set shorter or longer for convenience and
try to understand the tradeoffs by trying to do problems with your
choice. This is the way I would do it because I have that kind of
personality! It’s the way I study — to understand something by trying
to work it out or, in other words, to understand something by creating
it. Not creating it one hundred percent, of course; but taking a hint
as to which direction to go but not remembering the details. These you
work out for yourself. The other way, which is also valuable, is to
read carefully how someone else did it. I find the first method best
for me, once I have understood the basic idea. If I get stuck I look
at a book that tells me how someone else did it. I turn the pages and
then I say “Oh, I forgot that bit”, then close the book and carry on.
Finally, after you’ve figured out how to do it you read how they did
it and find out how dumb your solution is and how much more clever and
efficient theirs is! But this way you can understand the cleverness of
their ideas and have a framework in which to think about the problem.
When I start straight off to read someone else’s solution I find it
boring and uninteresting, with no way of putting the whole picture
together. At least, that’s the way it works for me! Throughout the
book, I will suggest some problems for you to play with. You might
feel tempted to skip them. If they’re too hard, fine. Some of them are
pretty difficult! But you might skip them thinking that, well, they’ve
probably been done by somebody else; so what’s the point? Well, of
<em>course</em> they’ve been done! But so what? Do them for the <em>fun</em> of it.
That’s how to learn the knack of doing things when you have to do
them. Let me give you an example. Suppose I wanted to add up a series
of numbers, 1 + 2 + 3 + 4 + 5 + 6 + 7 … up to say, 62. No doubt you
know how to do it; but when you play with this sort of problem as a
kid, and you haven’t been shown the answer … it’s <em>fun</em> trying to
figure out how to do it. Then, as you go into adulthood, you develop a
certain confidence that you can discover things; but if they’ve
already been discovered, that shouldn’t bother you at all. <strong>What one
fool can do, so can another, and the fact that some other fool beat
you to it shouldn’t disturb you: you should get a kick out of having
discovered something.</strong> Most of the problems I give you in this book
have been worked over many times, and many ingenious solutions have
been devised for them. But if you keep proving stuff that others have
done, getting confidence, increasing the complexities of your solution
for the fun of it — then one day you’ll turn around and discover that
<em>nobody actually did that one!</em> And that’s the way to become a scientist.</p>
</blockquote>
<p>The quote is from none other than <a href="http://en.wikipedia.org/wiki/Richard_feynman">Richard
Feynman</a> in his <a href="http://books.google.com/books?id=-olQAAAAMAAJ&q=feynman+lectures+on+computation&dq=feynman+lectures+on+computation&ei=4dLUS4HBC43aygSIqaXoCQ&cd=1">Lectures
on
Computation</a>
(a great read by the way). [reproduced without permission] Honestly I
think that’s one of the biggest problems with science and math education
in this country. They have taken something which is fun and challenging,
a path of discovery, and reduced it to memorizing a list of dogmatic
equations handed down from on high. Students are all but taught that the
men and women that discovered these laws are somehow above the rest of
us. One of the best kept secrets of science is that scientists are in
fact mere mortals. They have their faults and deficiencies, just like
the rest of us. You don’t need anything special to do science, all you
need is a sense of curiosity and wonder, and the resolve to spend some
time working at it. The more time you work at it, the better you get.
Science is not a list of facts. It is not plugging numbers into
equations. It is neither history nor accounting. It is, in its purest
form, the recognition that as human beings we are each endowed with the
power to come to understand the world around us. I firmly believe that
anyone of a sane mind is capable of becoming a physicist. I’m not saying
that everyone should be; people should follow their passions, but I
worry that too many students are turned away thinking they aren’t cut
out to be scientists, and that’s rubbish. And that’s my two cents.</p>End of the Earth Physics III- Asteroids!2010-04-22T18:45:00-04:00Corkytag:thephysicsvirtuosi.com,2010-04-22:posts/end-of-the-earth-physics-iii-asteroids-.html<p><a href="http://1.bp.blogspot.com/_fa6AZDCsHnY/S9EZLq8fRRI/AAAAAAAAAAc/pg5n-zg70QE/s1600/armageddon.jpg"><img alt="image" src="http://1.bp.blogspot.com/_fa6AZDCsHnY/S9EZLq8fRRI/AAAAAAAAAAc/pg5n-zg70QE/s200/armageddon.jpg" /></a>No
day of earth-destroying celebration would be complete without that
apocalyptic all-time favorite: the asteroid. And to be fair, it deserves
to be the favorite. Of all the doomsday predictions out there (nuclear
holocaust for the cynics, death by snoo-snoo for the optimists) it is
the only one that is just about certain to occur at some point in the
geological near future. On top of that, it’s one that we could
potentially avoid with enough time and some <a href="http://en.wikipedia.org/wiki/Asteroid_deflection_strategies">neat
ideas</a> .
Let’s model an asteroid impact on the earth. We will assume an asteroid
starting from rest at infinity and falling to the surface of the earth
due to earth’s gravity only. Now obviously these assumptions are not
exactly correct. We can imagine our asteroid not starting from rest or
feeling some force due to the sun. Each of these would certainly change
our answer, but should still be within an order of magnitude of our
result. By conservation of energy, we have that <mathjax>$$ \text{KE}_{i} +
\text{PE}_{i} = \text{KE}_{f} + \text{PE}_{f} $$</mathjax> Plugging in the
formulas for kinetic and potential energy, we have <mathjax>$$
\frac{1}{2}m{v_i}^{2} + \frac{-GM_{\oplus}m_a}{{r_{i}}} =
\text{KE}_f + \frac{-GM_{\oplus}m_a}{{r_{f}}}$$</mathjax> Now, plugging in
our initial conditions of starting at rest at infinity and rearranging,
we have that <mathjax>$$ \text{KE}_f =
\frac{GM_{\oplus}m_a}{{R_{\oplus}}} $$</mathjax> Assuming that all of this
kinetic energy is deposited into the earth as heat, we have that our
total energy added to the earth is <mathjax>$$ \text{E} =
\frac{GM_{\oplus}m_a}{{R_{\oplus}}} = \frac{(7 \times 10^{-11}
J m kg^{-2})(6 \times 10^{24} kg)}{6 \times 10^{6} m}m_a \approx
(10^{8} J)(\frac{m_a}{1kg}) $$</mathjax> Now let’s consider an asteroid made
out of iron. Iron is about 10 times denser than water, so it has a
density of 10^4 kg/m^3. If our asteroid is a sphere, we can find its
radius given <mathjax>$$ m_a = \frac{4}{3} \pi {R_a}^{3} \rho $$</mathjax> Plugging
this into our energy equation gives energy as a function of asteroid
radius: <mathjax>$$ \text{E} \approx (4 \times 10^{12} J) \times (
\frac{R_a}{1m} )^{3} $$</mathjax>. Now we have the energy released during an
asteroid impact in terms of either the size or the mass. So let’s do
some destruction. We have heard in the last decade or so that even
relatively small changes in average global temperatures can result in
catastrophe. So let’s see how big of an asteroid we would need to
deposit enough energy to raise global temperatures by 1 degree (<span class="caps">NOTE</span>:
this is very much a toy model. For a more realistic and sadistically
addicting model, check out
<a href="http://www.lpl.arizona.edu/impacteffects/">this</a> site). To do this,
let’s pretend increasing global temperatures is the same as increasing
ocean temperatures. Since water has a much much higher specific heat
than air, this seems like a reasonable assumption. Plus, we have already
made this calculation before. In the first End of the Earth post a bit
ago, we found that the amount of energy required to raise the
temperature of the world’s oceans by 100 degrees is <mathjax>$$ E_{\text{100}}
\approx 10^{26} J $$</mathjax> Since the scaling is linear, we can see that the
energy to heat up the oceans by 1 degree is just one hundredth of this
value, or <mathjax>$$ E_{\text{heat}} \approx 10^{24} J $$</mathjax> So how big of an
asteroid is this? Using our equation from before, setting E = Eheat, we
have that <mathjax>$$ R_a = \left(\frac{E_\text{heat}}{4 \times
10^{12}\ \text{J}}\right)^{1/3} \text{m} =
\left(\frac{10^{24}}{4 \times 10^{12}}\right)^{1/3} \text{m}
\approx 6 \times 10^{3}\ \text{m} \approx 6\ \text{km} $$</mathjax> For comparison, the asteroid that
killed the dinosaurs is thought to have been about 10 km across, so this
is still a pretty big rock. <span class="caps">FUN</span> <span class="caps">FACT</span> <span class="caps">FINALE</span>: We all know that an asteroid killed the
dinosaurs. But what would happen if instead of an asteroid, we dropped a
dinosaur from infinity? According to wikipedia, a brontosaurus was about
30 tons, which is about 30,000 kg. Using the equation for energy as a
function of mass derived earlier, we have that <mathjax>$$\text{E} = (10^{8}
J)(\frac{m_a}{1kg}) = 3 \times 10^{12} J \approx 1 \text{kiloton}
\text{TNT} $$</mathjax> For comparison, the Hiroshima bomb was about 15 kilotons.
So to reproduce the Hiroshima energy yield we need about 15
brontosauruses falling from infinity.</p>Earth Day - Earth Units2010-04-22T16:18:00-04:00Alemitag:thephysicsvirtuosi.com,2010-04-22:posts/earth-day-earth-units.html<p>In honor of Earth day, I thought I would take a look at what it would
mean to do physics in ‘Earth’ units. What do I mean by that? Well, let’s
be anti-Copernican here; in fact, let’s assume the opposite of the
<a href="http://en.wikipedia.org/wiki/Copernican_principle">Copernican
principle</a>, and state
that the Earth is privileged in the universe and define all of our units
around the Earth.</p>
<p><a href="http://3.bp.blogspot.com/_YOjDhtygcuA/S9Cu1nxGXDI/AAAAAAAAAKU/JTtr_QKqdR8/s1600/50px-Earth_symbol.svg.png"><img alt="image" src="http://3.bp.blogspot.com/_YOjDhtygcuA/S9Cu1nxGXDI/AAAAAAAAAKU/JTtr_QKqdR8/s320/50px-Earth_symbol.svg.png" /></a></p>
<p>So, I will put a little subscript earth on all of the ‘earth’ units.
They are to be read as ‘earth meters’ or ‘earth amps’, etc. We will take
as our starting point the mass, radius and day of the earth, normalizing
all of our standards to that. This gives us our initial conversion
factors <mathjax>$$ 1 g_{\oplus} = M_{\oplus} = 5.9742 \times 10^{24}
\text{ kg} $$</mathjax> <mathjax>$$ 1 m_{\oplus} = R_{\oplus} = 6378.1 \text{ km} $$</mathjax>
<mathjax>$$ 1 s_{\oplus} = T_{\oplus} = 86,400 \text{ s} $$</mathjax> From this, we
can figure out what all of the other ‘earth’ units would be.</p>
<h3>Lengths</h3>
<p>Some things we are used to talking about start to look a little simpler
in these units. The surface area of the earth would be <mathjax>$$ 4 \pi\
m_{\oplus}^2 \sim 12.6\ m_{\oplus}^2 $$</mathjax> and the volume of the
earth would be <mathjax>$$ \frac{4\pi}{3}\ m_{\oplus}^3 \sim 4.2\
m_{\oplus}^3 $$</mathjax> One earth velocity would be <mathjax>$$ 1\ \frac{
m_{\oplus} }{ s_{\oplus} } = 73.8\ \frac{ m }{ s} $$</mathjax> so that the
velocity of a person standing at the equator would be about <mathjax>$$ 2 \pi\
\frac{ m_{\oplus }}{s_{\oplus} } \sim 6.3\ \frac{ m_{\oplus }
}{ s_{\oplus } } $$</mathjax> and one earth acceleration would be <mathjax>$$ 1\ \frac{
m_{\oplus} }{ s_{\oplus}^2 } = 8.5 \times 10^{-4} \frac{
m}{s^2} $$</mathjax> so that the gravitational acceleration we feel on the earth
in earth accelerations would be <mathjax>$$ g \sim 1.15 \times 10^4\
\frac{m_{\oplus}}{s_{\oplus}^2 } $$</mathjax> which is a little more awkward
than the 10 that it is in <span class="caps">SI</span>. After this all of the numbers start to get
pretty silly.</p>
<h3>Mechanics</h3>
<p>One earth energy is <mathjax>$$ 1\ J_{\oplus} = 3.3 \times 10^{28}\ J $$</mathjax>
and earth force <mathjax>$$ 1\ N_{\oplus} = 5.1\times 10^{21}\ N $$</mathjax> the
gravitational constant becomes <mathjax>$$ G = 1.15 \times 10^{4} \
\frac{m_{\oplus}^3 }{ g_{\oplus} s_{\oplus}^2} $$</mathjax> earth power
<mathjax>$$ 1\ W_{\oplus} = 3.8 \times 10^{23} \ W $$</mathjax> earth pressure <mathjax>$$ 1\
Pa_{\oplus} = 1.2 \times 10^8 \ Pa $$</mathjax></p>
<h3>Electrical and Thermal</h3>
<p>Additionally, if I take the Boltzmann constant and the electric constant as
fundamental dimensionful quantities and set them equal to 1 (i.e.
<span class="caps">CGS</span>-type Earth units), I can use them to derive earth units for
electrical and thermal phenomena. An earth kelvin is <mathjax>$$ 1\
K_{\oplus} = 3.2 \times 10^{28} \ K $$</mathjax> earth coulomb <mathjax>$$ 1\
C_{\oplus} = 4.6 \times 10^{17} \ C $$</mathjax> earth amp <mathjax>$$ 1\
A_{\oplus} = 5.3 \times 10^{12} \ A $$</mathjax> an earth volt <mathjax>$$ 1\
V_{\oplus} = 7.1 \times 10^{10} \ V $$</mathjax> an earth farad <mathjax>$$ 1\
F_{\oplus} = 6.4\times 10^6 \ F $$</mathjax> an earth ohm <mathjax>$$ 1\
\Omega_{\oplus} = 14 m\Omega$$</mathjax> an earth henry <mathjax>$$ 1\ H_{\oplus} =
1170 \ H $$</mathjax> an earth electric field <mathjax>$$ 1\
\frac{V_{\oplus}}{m_{\oplus}} = 11200\ \frac{V}{m} $$</mathjax> an earth
tesla <mathjax>$$ 1\ T_{\oplus} = 152 \ T $$</mathjax> etc….</p>
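The mechanical conversions above follow directly from the three base definitions, so they are easy to check with a short script (values as defined at the top of the post):

```python
# Derive the mechanical 'earth unit' conversion factors from the three
# base definitions used in this post.
M_E = 5.9742e24   # kg, one 'earth gram'
R_E = 6.3781e6    # m, one 'earth meter'
T_E = 86400.0     # s, one 'earth second'

velocity = R_E / T_E               # 1 earth velocity, in m/s
accel    = R_E / T_E**2            # 1 earth acceleration, in m/s^2
energy   = M_E * R_E**2 / T_E**2   # 1 earth joule, in J
force    = M_E * R_E / T_E**2      # 1 earth newton, in N
power    = energy / T_E            # 1 earth watt, in W
pressure = force / R_E**2          # 1 earth pascal, in Pa

print(f"1 earth velocity = {velocity:.1f} m/s")   # 73.8
print(f"g in earth units = {9.81 / accel:.3g}")   # ~1.15e4
print(f"1 earth joule    = {energy:.2e} J")       # ~3.3e28
print(f"1 earth newton   = {force:.2e} N")        # ~5.1e21
print(f"1 earth watt     = {power:.2e} W")        # ~3.8e23
print(f"1 earth pascal   = {pressure:.2e} Pa")    # ~1.2e8
```

Note that since the mass, radius, and day of the earth are all exactly 1 in these units, G in earth units comes out numerically equal to the surface gravity in earth accelerations.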
<h3>Lessons</h3>
<p>So, it looks like if we really decide to fly in the face of the
Copernican principle and look to the earth as something fundamental in
the universe, these considerations can suggest a bunch of other relevant
values for other kinds of dimensionful quantities in the world. If the
earth really was something super special in the universe, and if somehow
its design was intimately connected with the properties of physics at
large, then all of these different values ought to have some kind of
deep meaning. Unfortunately, as far as I can tell, they are just a bunch
of random numbers. Looks like the Copernican principle wins again.
Nobody should tell the earth. Its feelings might get hurt.</p>Nobody Really Gets Quantum2010-04-22T14:29:00-04:00Yarivtag:thephysicsvirtuosi.com,2010-04-22:posts/nobody-really-gets-quantum.html<p>Nobody Really Gets Quantum / <a href="http://www.etgarkeret.com/">Etgar Keret</a>
On <a href="http://en.wikipedia.org/wiki/Yom_Kippur#Eve">Yom Kippur eve</a> Quantum
walked over to Einstein’s house to seek forgiveness. “I’m not home,”
shouted Einstein from behind a closed door. On the way home people
taunted him and somebody even hit him with an empty can of coke. Quantum
pretended not to care, but deep inside he was really hurt. Nobody really
gets Quantum, everybody hates him. “You parasite” people cry out when
he’s walking down the street, “why are you dodging the draft?” - “I
wanted to enlist,” Quantum tries to say, “but they wouldn’t take me,
because I’m so small.” Not that anybody listens to Quantum. Nobody
listens to Quantum when he tries to speak up for himself, but when he
says something that can be misconstrued, oh, then suddenly everybody’s
paying attention. Quantum can say something innocent like “wow, what a
cat!” and right away the news says he’s making provocations and runs off
to talk to Schrodinger. And anyway, the media hates Quantum most,
because once when he was interviewed in Scientific American Quantum said
that the observer affects the observed event, and all the journalists
thought he was talking about the coverage of the
<a href="http://en.wikipedia.org/wiki/First_Intifada">Intifada</a> and claimed he
was deliberately inciting the masses. And Quantum can keep talking until
tomorrow about how he didn’t mean it and he has no political
affiliation, nobody believes him anyway. Everybody knows he’s friends
with <a href="http://en.wikipedia.org/wiki/Yuval_Neeman#Political_career">Yuval
Ne’eman.</a> A
lot of people think Quantum is heartless, that he has no feelings, but
that’s not true at all. On Friday, after a documentary on Hiroshima, he
was on the expert panel. And he couldn’t even speak. Just sat in front
of the open mic and cried, and all of the viewers at home, that don’t
really know Quantum, couldn’t understand that Quantum was crying, they
just thought he was avoiding the question. And the sad thing about it
is, even if Quantum writes dozens of letters to the editors of all the
scientific journals in the world and proves beyond any doubt that for
the whole atomic bomb thing he was just being used and he never thought
it would end this way, it wouldn’t help him, because nobody really gets
quantum. Least of all the physicists. Translated from the original Hebrew by me;
posted without permission. Originally from <a href="http://www.amazon.com/Girl-Fridge-Stories-Etgar-Keret/dp/0374531056/ref=sr_1_1?ie=UTF8&s=books&qid=1271963967&sr=8-1">The Girl on the
Fridge</a>, where
you can find somebody else’s translation of this and other surreal short
stories by the very talented Etgar Keret. Incidentally, the original
story is written in the plural because in Hebrew we call quantum
mechanics, roughly, “theory of the quantas;” I switched it to singular
here because I think it works better this way in English. I’m not sure
I’ve done it justice - I’m not sure you can actually do Keret’s writing
justice reading it out of the Israeli cultural context (for instance,
many physicists will know Ne’eman for his work on <span class="caps">QCD</span> but only Israelis
know he was politically active in a far-right party) - but I thought it
was worth a shot.</p>End of the Earth II - Blaze of Glory2010-04-22T14:26:00-04:00Jessetag:thephysicsvirtuosi.com,2010-04-22:posts/end-of-the-earth-ii-blaze-of-glory.html<p><a href="http://1.bp.blogspot.com/_SYZpxZOlcb0/S9CFFAkGooI/AAAAAAAAABM/2SpaTtw4ivI/s1600/Marvin_the_Martian.jpg"><img alt="image" src="http://1.bp.blogspot.com/_SYZpxZOlcb0/S9CFFAkGooI/AAAAAAAAABM/2SpaTtw4ivI/s200/Marvin_the_Martian.jpg" /></a>
In honor of earth day today, many bloggers are posting things about how
to save the earth, or retrospectives on earth days past. We here at The
Virtuosi decided, what better way to celebrate the earth than to figure
out how to destroy it? So that is exactly what we intend to do. This
post will focus on the destruction of the earth by a laser beam. This is
a familiar concept. Whether it is Marvin the Martian or the Death Star,
destroying planets with lasers (or threatening to) is a common theme. We
will be considering two questions today. The first is fairly obvious:
how much power would the death star need to destroy the earth? The
second relates to a topic of continuing interest of mine: how much would
the death star recoil upon firing? First, the power we would need. Most
estimates of the death star’s power seem to rely on simply overcoming
the gravitational binding energy of the earth. We’re going to assume the
earth has a uniform density. We can easily calculate the gravitational
binding energy to be <mathjax>$$U_G=\frac{3GM_{earth}^2}{5R_{earth}}$$</mathjax>
<mathjax>$$U=2.3\cdot 10^{32} J$$</mathjax> However, looking at the video of the death
star blast, it looks like the earth was more than just gravitationally
unbound. It looks like we’ve actually managed to atomize some of the
constituent molecules. Let us assume that about half of the earth gets
atomized. We’re also going to assume that the earth is made entirely of
iron (not <em>too</em> bad an assumption). In a lattice, iron will have ~6
bonds, and the energy of each bond will be on the order of ~2 eV. We
can calculate the number of iron atoms, N, needed to give the mass of
the earth by: <mathjax>$$N=\frac{M_{earth}}{m_{iron}}$$</mathjax> <mathjax>$$N=6\cdot 10^{49}
atoms$$</mathjax> We assume half of these have all of their bonds broken; this
gives an energy required to break the bonds of <mathjax>$$E \approx
3N(2eV)=6\cdot 10^{31} J$$</mathjax> This gives us a total destruction energy
of <mathjax>$$E_{tot}=2.9\cdot 10^{32} J$$</mathjax> Analyzing the video of the firing
of the first death star tells us that the laser fired for about 4 s, so
this is a power of 7×10^31 W! As an aside, that much energy is about
the total energy output of the sun over a week. That means the force
this laser exerts on the death star (the momentum it delivers per second)
must be the power over c, the speed of light (see my <a href="http://thevirtuosi.blogspot.com/2010/04/today-id-like-to-approach-question-near.html">earlier
post</a>
for a discussion of laser gun recoil). This is a force of 2.4×10^23 N.
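As a sanity check, here is the binding-energy piece of the estimate as a short script; the bond-breaking term in the text adds an amount of the same order, and the 4 s beam duration is the value read off the movie:

```python
# Sketch: gravitational binding energy of a uniform-density Earth, and
# the photon-beam recoil force it implies over an assumed 4 s shot.
G   = 6.674e-11   # m^3 kg^-1 s^-2
M_E = 6.0e24      # kg, mass of the Earth
R_E = 6.4e6       # m, radius of the Earth
c   = 3.0e8       # m/s

U_grav = 3 * G * M_E**2 / (5 * R_E)   # gravitational binding energy
print(f"binding energy ~ {U_grav:.1e} J")   # ~2.3e32 J

# For a photon beam, the recoil force is the beam power divided by c.
power = U_grav / 4.0    # W, if delivered over 4 s
force = power / c       # N
print(f"recoil force (binding term only) ~ {force:.1e} N")
```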
We now need to estimate the death star mass. According to <a href="http://starwars.wikia.com/wiki/Death_Star">confidential
sources</a> the death star had a
diameter of 150 km. The death star is made of metal, but it also has a
lot of empty space inside of it. We’ll go ahead and assume it has the
density of water. This gives us a mass of <mathjax>$$M_{ds}=\tfrac{4}{3}\pi
r^3\rho=1.8\cdot 10^{18} kg$$</mathjax> The total momentum change is the
force times the 4 s burn time, about 10^24 kg m/s. To be safe we can use
the relativistic relation between momentum and velocity,
<mathjax>$$v=\sqrt{\frac{p^2}{m^2+p^2/c^2}}$$</mathjax> but plugging in the
numbers gives a recoil velocity of a few times 10^5 m/s, well under 1%
of the speed of light, so relativity turns out not to matter here after
all. Still, that is a recoil of hundreds of kilometers per second. I
think we can safely say that this laser recoil
would be noticed!</p>Laser Launching2010-04-21T22:22:00-04:00Jessetag:thephysicsvirtuosi.com,2010-04-21:posts/laser-launching.html<p><a href="http://2.bp.blogspot.com/_SYZpxZOlcb0/S8-y4RmjAhI/AAAAAAAAABE/CueT0Sq1ZYc/s1600/space-war-laser.jpg"><img alt="image" src="http://2.bp.blogspot.com/_SYZpxZOlcb0/S8-y4RmjAhI/AAAAAAAAABE/CueT0Sq1ZYc/s200/space-war-laser.jpg" /></a>
Lasers seem to be on my mind recently. Just yesterday, the class I <span class="caps">TA</span>
for (E&M for engineers) talked about the momentum carried by E&M waves.
This called to mind a discussion I had with a housemate a few weeks
back. He had heard somewhere that ‘they’ were thinking of launching
satellites with lasers. <em>No way</em>, I thought to myself. <em>Satellites are
too heavy</em>. However, his question has been hovering around in my mind,
so I’ve decided to try and answer it: can we use a laser to launch a
satellite into orbit? Let us begin with a few simplifying assumptions.
I’m going to assume that we want our launched satellite to reach the
escape velocity of the earth. Of course, we don’t want a satellite to
escape orbit, but escape velocity is calculated without considering any
kind of drag forces on our launched object. To first order then, I
expect achieving escape velocity will get our satellite into a
relatively high orbit without actually escaping earth’s gravity well.
Second, I’m going to assume our laser is on the ground (reusable
launching device!), and that our satellite is perfectly reflective, so
we’re not going to be melting it. Finally, I’m going to assume that the
laser will remain effective at targeting our object up to 15 km above
the surface, around the effective range for lasers used as guides for
<a href="http://en.wikipedia.org/wiki/Adaptive_optics">adaptive optics</a> in
telescopes.
Now, to the meat of the problem. First, we need to find the escape
velocity from the earth. This is defined as the velocity we would need
to give an object to overcome the gravitational energy of the earth upon
the object. This is found by setting the kinetic energy of the object
equal to the change in gravitational potential energy from the earth’s
surface to infinity:
<mathjax>$$ KE=\Delta PE $$</mathjax> so
<mathjax>$$\tfrac{1}{2}mv_{esc}^2 = \frac{GM_{earth}m}{R_{earth}}$$</mathjax>
(for those interested, the gravitational potential energy at infinity is
defined to be zero). We can easily solve this for v,
<mathjax>$$ v_{esc}=\sqrt{\frac{2GM_{earth}}{R_{earth}}}$$</mathjax>
To me, the most remarkable feature of the escape velocity is that it is
independent of the mass of the object! We can now plug in numbers, and
find an escape velocity of ~11.2 km/s.
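That number is easy to verify; a quick sketch with standard values:

```python
# Quick numerical check of the escape velocity from the Earth's surface.
from math import sqrt

G   = 6.674e-11   # m^3 kg^-1 s^-2
M_E = 5.97e24     # kg, mass of the Earth
R_E = 6.37e6      # m, radius of the Earth

v_esc = sqrt(2 * G * M_E / R_E)
print(f"escape velocity ~ {v_esc / 1e3:.1f} km/s")   # ~11.2 km/s
```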
We now make use of our constraint, that we have to achieve this escape
velocity over 15 km. We assume the laser outputs a constant number of
photons per second, n. The force the laser provides the satellite is the
change in momentum with time,
<mathjax>$$F=\frac{dp}{dt}$$</mathjax>
How is our laser imparting momentum to our system? Well, light is
composed of photons, tiny packets of light. Each photon has momentum.
Assume each photon reflects perfectly, that is, straight backwards and
with no energy loss. Then the photon has reversed its momentum. Since
momentum is conserved, the satellite has gained a momentum of 2p. The
momentum of a photon is
<mathjax>$$p=\frac{h}{\lambda}$$</mathjax>
where h is Planck’s constant. This gives
<mathjax>$$F=\frac{2nh}{\lambda}$$</mathjax>
Given a constant force, the time taken to reach some velocity is just
<mathjax>$$t=mv/F$$</mathjax>
In our case the v is the escape velocity. Now, given a constant force,
and no initial velocity, an object of mass m travels a distance d in a
time t given by
<mathjax>$$d=\frac{F}{2m}t^2$$</mathjax>
We can substitute our expression for t into this expression, giving
<mathjax>$$d=\left(\frac{F}{2m}\right)\left(\frac{mv}{F}\right)^2$$</mathjax>
<mathjax>$$d=\frac{mv^2}{2F}$$</mathjax>
What we are really interested in is n, the number of photons per second
this process takes, so we substitute our expression for the force, and
solve for n:
<mathjax>$$d=\frac{mv^2\lambda}{4nh}$$</mathjax>
so
<mathjax>$$n=\frac{mv^2\lambda}{4hd}$$</mathjax>
Now that we have the number of photons per second our laser supplies,
all that is left is to find the power of laser this would take. The
power of a laser will be given by the number of photons per second times
the energy per photon. The energy per photon is given by E=hf where f is
the frequency of our light. So, the power, P, of our laser is
<mathjax>$$P=nE$$</mathjax> or
<mathjax>$$P=\frac{mv_{esc}^2\lambda f}{4d}$$</mathjax> so
<mathjax>$$P=\frac{mv_{esc}^2 c}{4d}$$</mathjax>
Where we have recognized that the frequency times the wavelength of a
photon is just the speed of light, c.
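The final formula can be evaluated in a couple of lines; the 750 kg satellite mass and 15 km targeting range are the assumed values used below:

```python
# Sketch: evaluate P = m * v_esc^2 * c / (4 * d) with the post's numbers.
v_esc = 11.2e3   # m/s, escape velocity from above
c     = 3.0e8    # m/s, speed of light
d     = 15e3     # m, assumed effective laser targeting range
m     = 750.0    # kg, a medium-sized satellite

P = m * v_esc**2 * c / (4 * d)
print(f"required laser power ~ {P:.1e} W")   # ~4.7e14 W

# Reversing it: the payload a 1 kW continuous laser could launch.
m_max = 4 * 1e3 * d / (v_esc**2 * c)
print(f"payload for a 1 kW laser ~ {m_max:.1e} kg")   # ~1.6e-9 kg
```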
All that remains is to plug in numbers. A medium sized satellite might
be about 750 kg, giving a laser power of 4.7×10^14 W. This is about 0.5 <span class="caps">PW</span>.
According to wikipedia, the greatest power output of a continuous
operation laser is on the order of 1 kW, or ~10^12 times less than our
necessary power! There’s no way we’re getting a normal sized satellite
into orbit with a laser. What about a smaller satellite? In recent years
picosatellites have been proposed, with masses of ~0.1 kg. This gives a
necessary power of 6.2×10^10 W, or 62 <span class="caps">GW</span>. This is still ~10^8 times
greater than our most powerful continuous laser. In fact, reversing the
calculation, our laser could launch a satellite with a mass of ~1.6 µg,
roughly the mass of a fine grain of sand! Simply put, we’re not going to be launching satellites with
lasers anytime soon.</p>Laser Gun Recoil: Follow-up2010-04-21T16:31:00-04:00Jessetag:thephysicsvirtuosi.com,2010-04-21:posts/laser-gun-recoil-follow-up.html<p>Matt Springer over at <a href="http://scienceblogs.com/builtonfacts/">Built on
Facts</a> has a <a href="http://scienceblogs.com/builtonfacts/2010/04/laser_rifle_recoil.php">very nice
post</a>
following up on my earlier analysis of <a href="http://thevirtuosi.blogspot.com/2010/04/today-id-like-to-approach-question-near.html">whether or not a laser gun would
recoil</a>.
In my analysis I came to the conclusion that the momentum delivered was
much less than that of a conventional gun, but that the impulse, and
hence the force delivered, was about the same. However, that force was
delivered over a ~30 ns timescale, which left open the question of
whether or not the wielder would be able to feel that recoil. While I
had thought to possibly return to this issue at some point, Matt has
beat me to it. This is fortuitous, because I wasn’t sure quite where to
start with the question, while he approaches it in a very sensible,
clear manner. The gist of his post is that he compares our response to
short time scale forces to our ability to sense sound. I won’t go
through his calculations, you can check those out for yourself, but he
concludes that it is very unlikely that we could feel the recoil of our
laser gun. Case closed. Though, like any good scientist, we may choose
to reopen it later if more evidence comes to light.</p>Q Factors2010-04-21T00:38:00-04:00Alemitag:thephysicsvirtuosi.com,2010-04-21:posts/q-factors.html<p>When I walk in my door when I get home, I hook my keys, which I keep on
a carabiner, onto a binder clip that I’ve clipped onto my window sill.
It’s a great way to never lose your keys. But one thing I always notice
is that when I hook it on, it swings, and every time it swings it makes
a click. This you might expect. What always surprises me is how long the
keys keep swinging: they keep going for minutes.
It always catches me off guard. In order to explain why, I get to talk
about <a href="http://en.wikipedia.org/wiki/Q_factor">Q factors</a>.
The Q factor stands for quality factor. It’s a dimensionless parameter
(my favorite kind) that tells you how pure your oscillator is.
Let’s back up a step. Lots of things in the world oscillate. Think about
a swing. If you get going on the swing and then stop rocking, you swing
back and forth, back and forth, but eventually you come to a stop.
Imagine swinging on a rusted old swing set. Now give the joint where the
swing swings from a nice shot of <span class="caps">WD</span>-40. You can imagine that if you
repeated the experiment (get swinging to some height and then stop
pumping), you’d continue to swing longer. Why? Because the Q factor has
increased. You’re swinging on a higher quality swing.
Mathematically it’s defined to be
<mathjax>$$ Q = 2 \pi \times \frac{ U }{ \Delta U } $$</mathjax>
or 2 pi times the total energy stored in the oscillator divided by the
energy lost in a cycle.
But another way to gauge the Q factor is that it tells you
something about how fast the oscillations get damped. As a number,
it tells you how many periods need to go by for the energy of the
oscillations to be damped by a factor of
<mathjax>$$ \frac{1}{e^{2\pi}} \sim \frac{1}{535} $$</mathjax>
This allows you to estimate Q factors for everyday objects. A factor of
1/535 is pretty near to my threshold for observing a lot of things. What
does a factor of 535 mean in terms of sound, one of the most common ways
I interact with things around me? Well, sound is measured in decibels,
which is a logarithmic scale, where a factor of 535 in the power output
by something corresponds to a change in the decibels of
<mathjax>$$ dB = 10 \log_{10} \frac{1}{535} \sim -27 $$</mathjax>
What does a change of 27 dB mean? Well,
<a href="http://en.wikipedia.org/wiki/Sound_pressure#Examples_of_sound_pressure_and_sound_pressure_levels">wikipedia</a>
tells me that a calm room is somewhere between 20 and 30 decibels,
whereas a <span class="caps">TV</span> set about a meter away is at about 60 dB. So that tells me that
if something like my keys start off making a sound comparable to the
volume I set my <span class="caps">TV</span> at, I can listen to it until it just gets drowned out
by the room and that should give me some estimate for the Q of my keys.
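That chain of reasoning fits in a few lines; a sketch, with my audibility threshold (the ~27 dB gap between TV volume and a calm room) as the assumption:

```python
# Sketch: relate the Q-factor energy decay to a decibel drop.
from math import exp, log10, pi

# After Q periods, the stored energy of an oscillator falls by e^(-2*pi).
drop = exp(-2 * pi)
print(f"energy drop over Q periods: 1/{1 / drop:.0f}")   # 1/535

# Expressed in decibels (a logarithmic power scale):
dB = 10 * log10(drop)
print(f"decibel change: {dB:.1f} dB")   # about -27 dB

# So if a sound stays audible over roughly a 27 dB drop, the number of
# audible swings is itself a rough estimate of Q.
```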
I’ll keep you in suspense just a bit longer. I said I was surprised how
long the keys swing. In order to put the Q that I measured in context,
I’ll tell you about a few other Qs of things you might have some
experience with.
Most swinging things that I seem to remember coming in contact with have
quality factors of about 10 or so. Swings, or things letting a meter
stick swing, stuff like that. Tuning forks, which are built to be
accurate resonators will have quality factors of about a thousand or so.
The quartz crystal in your watch, which is really supposed to be a good
oscillator has a quality factor of 10 thousand or so. One of the best Q
factors achieved by man is 10^14.
So, what was the Q factor of my keys? I counted the times I could hear
them swinging and got a count of 435. This number isn’t to be taken too
seriously, but it indicates that my swinging keys have a quality factor
of something between 400 and 500, which is pretty darn good for
something that wasn’t engineered. That explains why it always surprises
me, the keys always seem to swing much longer than I would anticipate.</p>The End of Earth Physics I2010-04-17T16:25:00-04:00Corkytag:thephysicsvirtuosi.com,2010-04-17:posts/the-end-of-earth-physics-i.html<p>I was reading the Wikipedia page for the Hitchhiker’s Guide books the
other day and found that it started as a series of radio shows called
“The Ends of the Earth.” At the end of each episode, the Earth would be
destroyed. Since I feel like this is the best way to end any <span class="caps">TV</span>
show/movie/book/news broadcast/Mayan calendar, I will shamelessly steal
the idea.
Since this is the first End of the Earth post, we will start small and
just consider the boiling off of all the world’s oceans. To be precise,
we will consider how much energy it would take to turn all the world’s
water at 0 degrees Celsius to water vapor at 100 degrees Celsius.
First we need to estimate the mass of all the water on earth. I will
make the assumptions that all water is fresh water and that the oceans
account for all water on earth. These are obviously not exactly true,
but will give the proper order of magnitude. The volume of a spherical
shell is given by
<mathjax>$$ \text{V} = 4 \pi R^{2} \Delta R $$</mathjax>
Taking the radius of the earth to be 6000 km, the ocean depth to be 1
km, and the fraction of earth covered by water to be 7/10 , we see that
<mathjax>$$ V_\text{water} = 4 \pi R_{\text{earth}}^{2} \times
\text{height} \times \text{fraction}$$</mathjax>
<mathjax>$$ V_\text{water} = 4 \pi (6 \times 10^{6} \text{m})^{2} \times
10^{3} \text{m} \times (\frac{7}{10}) = 3 \times 10^{17}
\text{m}^{3}$$</mathjax>
Now we have the volume of the ocean. Since
<mathjax>$$ \text{Density} = \frac{\text{Mass}}{\text{Volume}}$$</mathjax>
we can calculate the mass of the oceans using the density of water to be
1000 kg / m^3.
<mathjax>$$ \text{Mass} = (\rho_\text{water})(V_\text{water}) = (1000
\frac{\text{kg}}{\text{m}^{3}})(3 \times 10^{17} \text{m}^{3}) =
3 \times 10^{20} \text{kg}$$</mathjax>
<span class="caps">FUN</span> <span class="caps">FACT</span>: The mass of all the water on earth is about 2% the mass of
that celestial punching bag known as Pluto.
Alright, now that we have the mass of the oceans, we can start
calculating how much energy it would take to heat it up and boil it
away. The amount of heat Q required to raise the temperature of a given
material is
<mathjax>$$ Q_\text{heat} = \text{mc}\Delta\text{T} $$</mathjax>
where m is the mass of the thing we are heating, c is the specific heat
(the amount of energy required to heat a kilogram of our material up 1 degree
Celsius) and delta T is our change in temperature. This equation only
holds when there are no phase transitions, so we can use it to calculate
how much energy is needed to heat up the oceans from 0 degrees to 100
degrees:
<mathjax>$$ Q_\text{heat} = (3 \times 10^{20} \text{kg}) \times (4000
\frac{\text{J}}{\text{kg C}}) \times (100 \text{deg C}) = 10^{26}
\text{J} $$</mathjax>
where we have used the fact that the specific heat of water is about
4000 J/kg deg C.
So now we know how much energy it takes to bring water to its boiling
point, but this is not the same as boiling it off. To find that, we need
to calculate how much energy we have to add to liquid water at a
constant 100 degrees to turn it into water vapor. This is given by
<mathjax>$$ Q_{\text{boil}} = \text{mL} $$</mathjax>
where m is the mass again and L is the “latent heat of vaporization.” It
tells us how much energy we need to add to turn a kilogram of liquid
water at 100 degrees to a kilogram of water vapor at 100 degrees. For
water, L is about 2×10^6 J/kg, so
<mathjax>$$ Q_{\text{boil}} = ( 3 \times 10^{20} \text{kg} ) \times (2
\times 10^{6} \frac{\text{J}}{\text{kg}}) = 6 \times 10^{26}
\text{J} $$</mathjax>
Comparing this to the energy calculated before, we see that amount of
energy needed to vaporize water at 100 degrees is about six times larger
than the amount of energy needed to heat up the water by 100 degrees!
Now we can calculate the total amount of energy to boil away the oceans:
<mathjax>$$ Q_{\text{total}} = Q_{\text{heat}} + Q_{\text{boil}} = 7
\times 10^{26} \text{J} $$</mathjax>
So how much energy is that really? Well the total power of the sun is
<mathjax>$$ L_{\text{sun}} = 4 \times 10^{26} \frac{\text{J}}{\text{s}} $$</mathjax>
Thus, it would take the entire energy output of the sun for about 2
seconds to boil away all the earth’s oceans.
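The whole estimate fits in a short script, using the same rounded values as above (1 km mean ocean depth, 70% coverage, rounded material constants):

```python
# Sketch of the ocean-boiling estimate, with the post's rounded values.
import math

rho_w = 1000.0   # kg/m^3, density of water
R_E   = 6e6      # m, radius of the earth (rounded)
depth = 1e3      # m, assumed mean ocean depth
frac  = 0.7      # fraction of the earth covered by water
c_w   = 4000.0   # J/(kg C), specific heat of water (rounded)
L_vap = 2e6      # J/kg, latent heat of vaporization (rounded)
L_sun = 4e26     # W, total power output of the sun

V = 4 * math.pi * R_E**2 * depth * frac   # thin-shell ocean volume
m = rho_w * V                             # ocean mass
Q_heat = m * c_w * 100                    # heat from 0 C to 100 C
Q_boil = m * L_vap                        # vaporize at 100 C
Q_tot  = Q_heat + Q_boil

print(f"ocean mass   ~ {m:.1e} kg")           # ~3e20 kg
print(f"total energy ~ {Q_tot:.1e} J")        # ~7.6e26 J
print(f"sun-seconds  ~ {Q_tot / L_sun:.1f}")  # ~2 s
```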
Now that’s a lot of energy. To put this into the conventional unit of
destruction, this is equivalent to 10^11 Megatons of <span class="caps">TNT</span>. Typical
hydrogen bombs are 10 Megatons. So if you want to boil away the oceans
with H-bombs, you’d need about ten billion hydrogen bombs.
Luckily, short of constructing a reflecting Dyson sphere with a hole
that directs all the sun’s radiation at the earth, these energies are
unobtainable. So don’t panic.</p>Entropy and the Arrow of Time2010-04-16T21:27:00-04:00Nik Zhelevtag:thephysicsvirtuosi.com,2010-04-16:posts/entropy-and-the-arrow-of-time.html<p>Somebody I talked to yesterday about what the topic of my post should be
suggested that I write something about cosmology, in particular
the Big Bang and the ultimate fate of the universe. Being mostly
interested in condensed matter, I am by no means an expert in cosmology
(I’ll leave that to Corky). However, I will not abandon the request but
will instead spin it in a more condensed-matter direction by talking
about entropy.
Entropy is a quantity that measures disorder. The more ordered your
system is, i.e. the fewer ways you can arrange the constituents of your
system, the less entropy you have, and vice versa. I like to use the
example of how messy my room is to visualize this. When I clean my room,
I pick up the random articles of clothing, books, etc. and put them
in the places where they belong. Now imagine I stop cleaning my room
for a couple of weeks. The result is clothes on the ground, books
on my bed, empty bottles on my desk; anything can be anywhere. This
means that there are more ways for my personal belongings to be arranged
when my room is messy than when my room is clean (when everything is
in its place), so the entropy increases as the messiness of my room
increases. Now, notice that left on its own (i.e. if I don’t make a
conscious effort to put some order in my room once in a while) the
entropy of the system that is my room increases with time. This is, in
fact, the essence of one of the most fundamental (if not the most
fundamental) laws of nature – the Second Law of Thermodynamics, which
states that the entropy of a system left on its own never decreases
with time.
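The counting argument above can be made quantitative with Boltzmann’s formula S = k_B ln Ω, where Ω is the number of ways a system can be arranged. Here is a minimal sketch; the toy numbers (20 belongings, 10 possible spots per item in a messy room) are invented purely for illustration:

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant

def entropy(num_arrangements: int) -> float:
    """Boltzmann entropy S = k_B * ln(Omega)."""
    return K_B * math.log(num_arrangements)

# Toy model: 20 belongings. In a clean room each item has exactly one
# allowed spot; in a messy room each item can sit in any of 10 spots.
n_items = 20
omega_clean = 1**n_items   # exactly one arrangement
omega_messy = 10**n_items  # 10^20 arrangements

print(entropy(omega_clean))  # 0.0 -- a perfectly ordered room
print(entropy(omega_messy))  # small in joules per kelvin, but strictly larger
```

The clean room, with a single allowed arrangement, has zero entropy, while the messy room’s entropy grows with the logarithm of its (exponentially large) number of arrangements.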
I know what you are thinking: “What does this talk about the messiness
of your room and the law of increasing entropy have to do with the
universe, the Big Bang, and the universe’s ultimate fate?” Ok, here you
go, read this short story by the greatest sci-fi writer Isaac Asimov:
<a href="http://bit.ly/84LN">The Last Question.</a>
Now that you’ve read this story (if you haven’t, please do), we can
discuss some physics (or actually, some philosophy, but you know,
sometimes there isn’t that much difference between the two). Einstein,
with his Theory of Relativity, showed that the concept of time is not
absolute, and time can be treated simply as another coordinate, much
like the three spatial coordinates that define the three dimensional
space we are used to. (A more in-depth explanation of the Theory of
Relativity will be the topic of one of the next posts.) However, despite
the relativity of time, the arrow of time is always well-defined. We can
always tell future from past by measuring the entropy and applying the
Second Law of Thermodynamics. At the instant of the Big Bang, the
entropy was essentially zero. Since then (about 13.8 billion years ago) the entropy
of the universe has been increasing, and it will eventually reach its
maximum value, at which point all the matter in the universe will be
evenly distributed and no physical process will be possible. Time
effectively stops, resulting in the “heat death” of the universe.
I believe that one of the most unsettling parts of the Big Bang theory
is the question of what caused the Big Bang, and what was there before
it. Having the arrow of time defined with regard to entropy,
however, makes this question nonsensical… unless, of course, a Cosmic <span class="caps">AC</span>
(or God) has managed to find the answer of how the entropy can be reversed.</p>Physics for non-physicists2010-04-16T16:36:00-04:00Nik Zhelevtag:thephysicsvirtuosi.com,2010-04-16:posts/physics-for-non-physicists.html<p>This is my first (but certainly not the last) post on this blog, so let
me introduce myself. My name is Nikolay, and, like most of the others
contributing to the blog, I am a first-year graduate student in
physics at Cornell.
So, what is my motivation for joining the team headed by Mr. Alemi in
contributing to the blog? I happen to have many friends who are not in
physics. In fact, probably because I graduated from a
liberal arts college, many of my friends have never taken any physics
beyond that one class in high school, which, due to both the inherent
difficulty of teaching physics at the introductory level and the lack of
good high school physics teachers, was often an unpleasant experience
that scared them away from physics for life. Now, imagine my
difficulty when, on a daily basis, I have to answer the question: “So
what do you study/work on as a graduate student?” I usually try to come
up with a sentence or two describing the essence of what I am doing
without going into too many details, but even that is a daunting task.
There seems to be a disconnect between the world in which a physicist
lives and the general public. As Chad Orzel pointed out in the talk that
motivated the creation of this blog, this is not the general public’s
fault, but our fault as physicists for not committing enough
effort to relating our knowledge to the rest of the world.
In short, my goal is to create a series of posts about physics geared
toward people with no physics background. The posts will build upon each
other and culminate in one answering the question of what my
research project actually is, beyond the generic words I often resort to,
which tend to leave most people confused. I plan to use as little
math as possible, which will not be an easy task considering that math
is the language of choice for physics. However, mathematics is just a
tool; physics is not about equations and complicated algebra but
about how nature works, which we should be able to formulate in plain
English. I know I am embarking on a difficult task, so wish me luck, and
please leave your thoughts in the comments section of the blog, since any
feedback would be appreciated.</p>Onsager’s Tour de Force2010-04-15T20:22:00-04:00Alemitag:thephysicsvirtuosi.com,2010-04-15:posts/onsager-s-tour-de-force.html<p>In 1944, in a <em>tour de force</em> of mathematical physics, Lars Onsager
<a href="http://prola.aps.org/abstract/PR/v65/i3-4/p117_1">solved</a> the 2D <a href="http://en.wikipedia.org/wiki/Ising_model">Ising
Model</a>.
<a href="http://1.bp.blogspot.com/_YOjDhtygcuA/S8esZ-lw1aI/AAAAAAAAAJw/2UTC_JAHg1Q/s1600/onsager.jpg"><img alt="image" src="http://1.bp.blogspot.com/_YOjDhtygcuA/S8esZ-lw1aI/AAAAAAAAAJw/2UTC_JAHg1Q/s320/onsager.jpg" /></a>
His solution has proved crucial in furthering statistical mechanics,
allowing theorists to check all of their approximation schemes against
analytical results. I call his effort a <em>tour de force</em> because it was a
huge mathematical exercise, his solution spanning 33 pages. I also call
it a ‘<em>tour de force</em>’ because I have seen it referenced as such in no
less than 5 different sources, as well as numerous times in speech. This
got me wondering, just how many times is Onsager’s solution called a
<em>tour de force</em>… So, first I started with a Google books search, and
turned up <a href="http://www.google.com/search?q=%22tour%20de%20force%22%20onsager%20ising&num=30&hl=en&newwindow=1&safe=off&tbo=s&tbs=bks:1&ei=HaTHS_qSNYa0lQer-8jEAQ&sa=X&oi=tool&resnum=0&ct=tlink&ved=0CDAQpwU4Hg">39
Books</a>,
among the ones that Google has indexed, which surely represent only the
tip of the iceberg. The books search turns up some of the more popular
statistical mechanics books including Kadanoff and Goldenfeld. And I
happen to know it’s also called a <em>tour de force</em> in Sethna’s book and
Cardy’s. Interestingly, the earliest mention in the book search is
<a href="http://books.google.com/books?ei=HaTHS_qSNYa0lQer-8jEAQ&ct=result&id=McHvAAAAMAAJ&dq=%22tour+de+force%22+onsager+ising&q=%22tour+de+force%22#search_anchor">Magnetism, Volume 2, Part 1 By George Tibor Rado, Harry
Suhl</a>,
from 1963, 19 years after Onsager’s paper. Next I used Google Scholar to
try and turn up some references in papers as well. I got <a href="http://scholar.google.com/scholar?hl=en&q=%22tour+de+force%22+onsager+ising&btnG=Search&as_sdt=20000000000&as_ylo=&as_vis=0">73
results</a>,
the earliest of which I have access to is: <a href="http://adsabs.harvard.edu/abs/1973JSP.....8..265F">Field theory of the
t