The Virtuosi (Posts by Brian)https://thephysicsvirtuosi.com/enContents © 2019 <a href="mailto:thephysicsvirtuosi@gmail.com">The Virtuosi</a> Thu, 24 Jan 2019 15:05:00 GMTNikola (getnikola.com)http://blogs.law.harvard.edu/tech/rss- Tragedy of Great Power Politics? Modeling International Warhttps://thephysicsvirtuosi.com/posts/old/modeling-international-war/Brian<div><div style="float: right;">
<img src="https://thephysicsvirtuosi.com/images/tgpp.jpg">
</div>
<p>Recently I finished reading John Mearsheimer's excellent political science book
The Tragedy of Great Power Politics. In this book, Mearsheimer lays out his
"offensive realism" theory of how countries interact with each other in the
world. The book is quite readable and well-thought-out -- I'd recommend it to
anyone who has an inkling for political history and geopolitics. However, as I
was reading this book, I decided that there was a point of Mearsheimer's
argument which could be improved by a little mathematical analysis.</p>
<p>The main tenet of the book is that states are rational actors who act to
maximize their standing in the international system. However, states don't seek
to maximize their absolute power, but instead their relative power as compared
to the other states in the system. In other words, according to this logic the
United Kingdom's position in the early 19th century -- when its army and navy
could trounce most of the other countries on the globe -- was better than it is
now -- when many other countries' armies and navies are comparable to those of
the UK -- even though the UK's army and navy are much better now than they
were in the early 19th century. According to Mearsheimer, the main determinant
of a state's international actions is simply maximizing its relative power in its
region. All other considerations -- capitalist or communist economy, democratic
or totalitarian government, even desire for economic growth -- matter little in
a state's choice of what actions it will take. (Perhaps it was this
simplification of the problem which made the book really appeal to me as a
physicist.)</p>
<!-- more -->
<p>Most of Mearsheimer's book is spent exploring the logical corollaries of his
main tenet, along with some historical examples. He claims that his idea has
three different predictions for three different possible systems. 1) A balanced
bipolar system (one where two states have roughly the same amount of power and
no other state has much to speak of) is the most stable. War will probably not
break out since, according to Mearsheimer, each state has little to gain from a
war. (His example is the Cold War, which didn't see any direct conflict between
the US and the USSR.) 2) A balanced multipolar system ($N>2$ states each share
roughly the same amount of power) is more prone to war than a bipolar system,
since a) there is a higher chance that two states are mismatched in power,
allowing the more powerful to push the less around, and b) there are more
states to fight. (One of his examples is Europe between 1815 and 1900, when
there were several great-power wars but nothing that involved the entire
continent at once.) 3) An unbalanced multipolar system ($N>2$ states with power,
but one that has more power than the rest) is the most prone to war of all. In
this case, the biggest state on the block is almost able to push all the other
states around. The other states don't want that, so two or more of them collude
to stop the big state from becoming a hegemon -- i.e. they start a war.
Likewise, the big state is also looking to make itself more relatively
powerful, so it tries to start wars with the little states, one at a time, to
reduce their power. (His examples here are Europe immediately before and
leading up to the Napoleonic Wars, WWI, and WWII.) There is another case, which
is unipolarity -- one state has all the power -- but there's nothing
interesting there. The big state does what it wants.</p>
<p>While I liked Mearsheimer's argument in general, something irked me about the
statement about bipolarity being stable. I didn't think that the stability of
bipolarity (corollary 1 above) actually followed from his main hypothesis.
After spending some extra time thinking in the shower, I worked out how I could
model Mearsheimer's main tenet quantitatively -- and the model suggests
that bipolarity is actually unstable!</p>
<p><a id="note1"></a>
Let's see if we can't quantify Mearsheimer's ideas with a model. Each state in
the system has some power, which we'll call $P_i$. Obviously in reality there are
plenty of different definitions of power, but in accordance with Mearsheimer's
definition, we'll define power simply in a way that if State 1 has power
$P_1 > P_2$, the power of State 2, then State 1 can beat State 2 in a
war<a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#fnote1"><sup>[1]</sup></a>.
Each state seeks to maximize not its total power $P_i$, but its
relative power $R_i$, measured against the total power of all the states in the system. So
the relative power $R_i$ would be</p>
<p>$$ R_i = P_i / \left( \sum_{j=1}^N P_j \right) \qquad , $$</p>
<p>where we take the sum over the relevant players in the system. If there was
some action that changed the power of some of the players in the system (say a
war), then the relative power would also change with time $t$:</p>
<p>$$ \frac{dR_i}{dt} = \frac{dP_i}{dt} \times \left( \sum_{j=1}^N P_j \right)^{-1} - P_i \times \left( \sum_{j=1}^N P_j \right)^{-2} \times \left(\sum_{j=1}^N \frac{dP_j}{dt} \right) \qquad (1) $$</p>
<p>A state will pursue an action that increases its relative power $R_i$. So if we
want to decide whether or not State A will go to war with State B, we need to
know how war affects a state's individual powers. While this seems intractable,
since we can't even precisely define power, a few observations will help us
narrow down the allowed possibilities to make definitive statements on when war
is beneficial to a state:</p>
<ol>
<li>War always reduces a state's absolute power. This is simply a statement that
in general, war is destructive. Many people die and buildings are bombed,
neither of which is good for a state. Mathematically, this statement is that in
wartime, $dP_i/dt < 0$ always. Note that this doesn't imply that $dR_i/dt$
is always negative.</li>
</ol>
<p><a id="note2"></a></p>
<ol start="2">
<li>
<p>The change in power of two states A & B in a war should depend only on
how much power A & B have. In addition, it should be independent of the
labeling of states. Mathematically, $dP_a / dt = f(P_a, P_b)$, and
$dP_b/dt = f(P_b, P_a)$ with the same function $f$<a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#fnote2"><sup>[2]</sup></a>.</p>
</li>
<li>
<p>If State A has more absolute power than State B, and both states are in a
war, then State B will lose power more rapidly than State A. This is almost a
re-statement of our definition of power. We defined power such that if State A
has more absolute power than State B, then State A will win a war against State
B. So we'd expect that power translates to the ability to reduce another
state's power, and more power means the ability to reduce another state's power
more rapidly.</p>
</li>
<li>
<p>For simplicity, we'll also assume that the decrease of State A's absolute
power in wartime depends largely on the power of State B attacking it, and
not so much on how much power State A itself has.</p>
</li>
</ol>
<p>In general, I think that assumptions 1-3 are usually true, and assumption 4 is
pretty reasonable. But to simplify the math a little more, I'm going to pick a
definite form for the change of power. The simplest possible behavior that
captures all 4 of the above assumptions is:</p>
<p>$$ \frac{dx}{dt} = -y \qquad \frac{dy}{dt} = -x \qquad (2) $$</p>
<p><a id="note3"></a>
where $x$ is the absolute power of State X and $y$ is the absolute power of State
Y. (I'm switching notation because I want to avoid using too many
subscripts<a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#fnote3"><sup>[3]</sup></a>.) Here I'm assuming that the rate of
change of State X's power is directly
proportional to State Y's power, and depends on nothing else (including how
much power State X itself has). <a id="note4"></a>
We'll also call $r$ the relative power of State
X, and $s$ the relative power of State Y<a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#fnote4"><sup>[4]</sup></a>.
Now we're equipped to see when war
is a good idea, according to our hypotheses.</p>
<p>Let's examine the case that was bothering me most -- a balanced bipolar system.
Now we have only two states in the system, X and Y. For starters, let's address
the case where both states start out with equal power $(x = y)$. If State X goes
to war with State Y, how will the relative powers $r =x/(x+y)$ & $s=y/(x+y)$
change? Looking at Eq. (2), we see that by symmetry both states have to lose
absolute power equally, so $x(t) = y(t)$ always, and thus $r(t) = s(t)$ always. In
other words, from a relative power perspective it doesn't matter whether the
states go to war! For our system to be stable against war, we'd expect that a
state will get punished if it goes to war, which isn't what we have! So our
system is a neutral equilibrium at best.</p>
<p>But it gets worse. For a real balanced bipolar system, both states won't have
exactly the same power, but will instead be approximately equal. Let's say that
the absolute powers of the two states differ by some small (positive)
number $e$, such that $x(0) = x_0$ and $y(0) = x_0 + e$. Now what will happen? Looking
at Eqs. (1) and (2), we see that, at $t=0$,</p>
<p>$$ \frac{dr}{dt} = -(x_0 + e) / (2x_0 + e) + x_0(2x_0 + e) / (2x_0 + e)^2 = -e/(2x_0 + e) $$</p>
<p>$$ \frac{ds}{dt} = -(x_0) / (2x_0 + e) + (x_0+e)(2x_0 + e) / (2x_0 + e)^2 = + e/(2x_0 + e) \qquad . $$</p>
<p>In other words, if the power balance is slightly upset, even by an
infinitesimal amount, then the more powerful state should go to war! For a
balanced bipolar system, peace is unstable, and the two countries should always
go to war according to this simple model of Mearsheimer's realist world.</p>
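This conclusion is easy to sanity-check numerically. Below is a minimal sketch that Euler-integrates Eq. (2) for two states with a small power imbalance; the initial powers, time step, and duration are arbitrary illustrative choices, not anything fixed by the model.

```python
# Sanity check of the bipolar instability: Euler-integrate Eq. (2),
# x' = -y, y' = -x, for two states with a small power imbalance e.
# Initial powers, time step, and step count are arbitrary choices.

def simulate(x, y, dt=1e-3, steps=100):
    """Integrate the war dynamics of Eq. (2) forward in time."""
    for _ in range(steps):
        x, y = x + dt * (-y), y + dt * (-x)
    return x, y

x0, e = 1.0, 0.05                  # State Y starts slightly stronger than X
x, y = simulate(x0, x0 + e)

r_initial = x0 / (2 * x0 + e)      # State X's relative power before the war
r_final = x / (x + y)              # ... and after fighting for a while
print(r_initial, r_final)          # the weaker state's share shrinks
```

Both absolute powers fall during the war, but the weaker state's *relative* power falls too, exactly as the $dr/dt$ expression above predicts.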
<p>Of course, we've just considered the simplest possible case -- only two states
in the system (whereas even in a bipolar world there are other, smaller states
around) who act with perfect information (i.e. both know the power of the other
state) and can control when they go to war. Also, we've assumed that relative
power can change only through a decrease of absolute power, and in a
deterministic way (as opposed to something like economic growth). To really say
whether bipolarity is stable against war, we'd need to address all of these in
our model. A little thought should convince you which of these a) make
a bipolar system stable against war, and b) make a bipolar system more or less
stable compared to a multipolar system. Maybe I'll address these, as well as
balanced and unbalanced multipolar systems, in another blog post if people are
interested.</p>
<p><a id="fnote1"></a>
1. <a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#note1">^</a> $P_i$ has some units (not watts). My definition of power is strictly
comparative, so it might seem that any new scale of power $p_i = f(P_i)$ with an
arbitrary monotonic function $f(x)$ would also be an appropriate definition.
However, we would like a scale that facilitates power comparisons if multiple
states gang up on another. So we would need a new scale such that </p>
<p>$$ p_{i+j} = f(P_i + P_j) = f(P_i) + f(P_j) = p_i + p_j $$ </p>
<p>for all $P_i, P_j$ . The only function that behaves like this is a linear function of
$P(p_i) = A \times P_i $, where A is some constant. So our definition of power is
basically fixed up to what "units" we choose. Of course, defining $P_i$ in terms
of tangibles (e.g. army size or GDP or population size or number of nuclear warheads)
would be a difficult task. Incidentally, I've also implicitly assumed here that there is a power scale,
such that if $P_1 > P_2$, and $P_2 > P_3$, then $P_1 > P_3$. But I think
that's a fairly benign assumption.</p>
<p><a id="fnote2"></a>
2. <a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#note2">^</a> This implicitly assumes that it doesn't matter which state attacked the
other, or where the war is taking place, or other things like that.</p>
<p><a id="fnote3"></a>
3. <a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#note3">^</a> Incidentally this form for the rate-of-change of the power also has the
advantage that it is scale-free, which we might expect since there is no
intrinsic "power scale" to the problem. Of course there are other forms with
this property that follow some or all of the assumptions above. For instance,
something of the form $dx/dt = -xy = dy/dt$ would also be i) scale-invariant, and
ii) in line with assumptions 1 & 2 and partially in line with assumption 3.
However I didn't use this since a) it's nonlinear, which makes the resulting
differential equations a little harder to solve analytically, and b) the rate of
decrease of both states' power is the same, in contrast to my intuitive feeling
that the state with less power should lose power more rapidly.</p>
<p><a id="fnote4"></a>
4. <a href="https://thephysicsvirtuosi.com/posts/old/modeling-international-war/#note4">^</a> Homework for those who are not satisfied with my assumptions: Show that any
functional form for $dP_i/dt$ that follows assumptions 1-3 above does not change
the stability of a balanced bipolar system.</p></div>booksmodelingwarhttps://thephysicsvirtuosi.com/posts/old/modeling-international-war/Sun, 23 Jun 2013 23:48:00 GMT
- When will the Earth fall into the Sun? https://thephysicsvirtuosi.com/posts/old/when-will-the-earth-fall-into-the-sun-/Brian<div><div style="float: center;">
<figure>
<img src="https://thephysicsvirtuosi.com/images/earth-fall-sun/BrianWastesHisTime.png" alt="the sun giveth, the sun taketh away" width="50%">
<figcaption>The time I spent making this poster could have been spent doing research</figcaption>
</figure>
</div>
<p>Since December 2012 is coming up, I thought I'd help the Mayans out with
a look at a possible end of the world scenario. (I know, it's not Earth
Day yet, but we at the Virtuosi can only go so long without fantasizing
about wanton destruction.) As the Earth zips around the Sun, it moves
through the <a href="http://en.wikipedia.org/wiki/Heliosphere">heliosphere</a>,
which is a collection of charged particles emitted by the Sun. Like any
other fluid, this will exert drag on the Earth, slowly causing it to
spiral into the Sun. Eventually, it will burn in a blaze of glory, in a
bad-action-movie combination of Sunshine meets Armageddon. Before I get
started, let me preface this by saying that I have no idea what the hell
I'm talking about. But, in the spirit of being an arrogant physicist,
I'm going to go ahead and make some back-of-the-envelope calculations,
and expect that this post will be accurate to within a few orders of
magnitude.</p>
<p>Well, how long will the Earth orbit the Sun before
drag from the heliosphere stops it? This seems like a problem for fluid
dynamics. How do we calculate the drag on the Earth? Rather than
solve the fluid dynamics equations, let's make some arguments based on
dimensional analysis. What can the drag on the Earth depend on? It
certainly depends on the speed of the Earth $v$ -- if an object isn't
moving, there can't be any drag. We also expect that a wider object
feels more drag, so the drag force should depend on the radius of the
Earth $R$. Finally, the density $\rho$ of the heliosphere might have something to
do with it. If we fudge around with these, we see that there is only one
combination that gives units of force: </p>
<p>$$ F_{drag} \sim \rho v^2 R^2 $$ </p>
<p>Now that we have the force, the energy dissipated from the Earth
to the heliosphere after moving a distance $d$ is $E_\textrm{lost} = F\times d$. If
the Earth moves with speed v for time t, then we can write
$E_\textrm{lost} = F v t$. So we can get an idea of the time scale over which the Earth
starts to fall into the Sun by taking </p>
<p>$E_\textrm{lost} = E_\textrm{Earth} \sim 1/2 M_\textrm{Earth} v^2$.
Rearranging and dropping factors of 1/2 gives </p>
<p>$$ T_\textrm{Earth burns} \sim M_\textrm{Earth} v^2 / (F_\textrm{drag}\times v) \sim M_\textrm{Earth} / (\rho R^2 v) $$ </p>
<p>Using the velocity of the Earth as $2\pi \times 1 \mbox{Astronomical unit/year}$,
Googlin' for some numbers, and taking the
<a href="http://web.mit.edu/space/www/helio.review/axford.suess.html">density of the heliosphere</a>
to be $10^{-23}$ g/cc we get... </p>
<p>$$ T \approx 10^{19} \textrm{ years} $$</p>
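If you want to check the arithmetic, here's a quick back-of-the-envelope sketch in CGS units; all inputs are the same order-of-magnitude values used above, not precise constants.

```python
# Rough numerical check of T ~ M_earth / (rho R^2 v), in CGS units.
# All inputs are order-of-magnitude values from the post.
import math

M_earth = 6.0e27                     # g
R_earth = 6.4e8                      # cm
rho     = 1.0e-23                    # g/cm^3, heliosphere density
AU      = 1.5e13                     # cm
year    = 3.15e7                     # s
v       = 2 * math.pi * AU / year    # orbital speed, ~3e6 cm/s

T_years = M_earth / (rho * R_earth**2 * v) / year
print(f"T ~ 10^{math.log10(T_years):.0f} years")   # prints T ~ 10^19 years
```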
<p>Looks like this won't be the cause of the Mayan apocalypse. (By comparison, the
<a href="http://en.wikipedia.org/wiki/Sun#Life_cycle">Sun will burn out</a>
after only $\sim10^9$ years.)</p></div>https://thephysicsvirtuosi.com/posts/old/when-will-the-earth-fall-into-the-sun-/Thu, 29 Nov 2012 22:58:00 GMT
- Creating an Earthhttps://thephysicsvirtuosi.com/posts/old/creating-an-earth/Brian<div><div style="float: center;">
<figure>
<img src="https://thephysicsvirtuosi.com/images/creating-an-earth/116.png" alt="GAH!" width="50%">
<figcaption style="text-align: center;">GAAAAAAAH</figcaption>
</figure>
</div>
<p>A while ago I decided I wanted to create something that looks like the
surface of a planet, complete with continents & oceans and all. Since
I've only been on a small handful of planets, I decided that I'd
approximate this by creating something like the Earth on the computer
(without cheating and just copying the real Earth). Where should I
start? Well, let's see what the facts we know about the Earth tell us
about how to create a new planet on the computer. </p>
<p><strong>Observation 1</strong>:
Looking at a map of the Earth, we only see the heights of the surface.
So let's describe just the heights of the Earth's surface. </p>
<p><strong>Observation 2</strong>:
The Earth is a sphere. So (wait for it) we need to describe the
height on a spherical surface. Now we can recast our problem of making
an Earth more precisely mathematically. We want to know the heights of
the planet's surface at each point on the Earth. So we're looking for a
field (the height of the planet) defined on the surface of a sphere (the
different spots on the planet). Just like a function on the real line
can be expanded in terms of its Fourier components, almost any function
on the surface of a sphere can be expanded as a sum of spherical
harmonics $Y_{lm}$. This means we can write the height $h$ of our planet's
surface as </p>
<p>$$ h(\theta, \phi) = \sum_{l,m} A_{lm}Y_l^m(\theta, \phi) \quad (1) $$ </p>
<p>If we figure out what the coefficients $A$ of the sum should
be, then we can start making some Earths! Let's see if we can use some
other facts about the Earth's surface to get a handle on what
coefficients to use. </p>
<p><strong>Observation 3</strong>:
I don't know every detail of the Earth's surface, whose history
is impossibly complicated. I'll capture
this lack-of-knowledge by describing the surface of our imaginary planet
as some sort of random variable. Equation (1) suggests that we can do
this by making the coefficients $A$ random variables. At some point we
need to make an executive decision on what type of random variable we'll
use. <a id="note1"></a>For various reasons,<a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#fnote1"><sup>[1]</sup></a>
I decided I'd use a Gaussian
random variable with mean 0 and standard deviation $a_{lm}$: </p>
<p>$$ A_{lm} = a_{lm} N(0,1) $$ </p>
<p>(Here I'm using the notation that $N(m,v)$ is a normal
or Gaussian random variable with mean $m$ and variance $v$. If you
multiply a Gaussian random variable by a constant $a$, it's the same as
multiplying the variance by $a^2$, so $a N(0,1)$ and $N(0,a^2)$ are
the same thing.) </p>
<p><strong>Observation 4</strong>:
The heights of the surface of the
Earth are more-or-less independent of their position on the Earth. In
keeping with this, I'll try to use coefficients $a_{lm}$ that will give me
a random field that is isotropic on average. This seems hard at
first, so let's just make a hand-waving argument. Looking at some
<a href="http://en.wikipedia.org/wiki/Spherical_harmonics">pretty pictures</a>
of spherical harmonics, we can see that each spherical harmonic of degree $l$
has about $l$ stripes on it, independent of $m$. <a id="note2"></a>
So let's try using $a_{lm}$'s
that depend only on $l$, and are constant if just
$m$ changes<a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#fnote2"><sup>[2]</sup></a>. Just for convenience,
we'll pick this constant to be $l$ to some power $-p$: </p>
<p>$$ a_{lm} = l^{-p} \quad \textrm{ or} $$</p>
<p>$$ h(\theta, \phi) = \sum_{l,m} N_{lm}(0,1) l^{-p} Y_l^m(\theta, \phi) \quad (2) $$ </p>
<p>At this point I got bored & decided to see what a
planet would look like if we didn't know what value of $p$ to use. So
below is a movie of a randomly generated "planet" with a fixed choice of
random numbers, but with the power $p$ changing.</p>
<p><a id="note3"></a>
As the movie starts ($p=0$), we see random uncorrelated heights on the
surface.<a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#fnote3"><sup>[3]</sup></a> As the movie continues and $p$ increases, we see
the surface smooth out rapidly. Eventually, after $p=2$ or so, the planet
becomes very smooth and doesn't look at all like a planet. So the
"correct" value for $p$ is somewhere above 0 (too bumpy) and below 2 (too
smooth). Can we use more observations about Earth to predict what a good
value of $p$ should be? </p>
<p><strong>Observation 5</strong>:
The elevation of the Earth's
surface exists everywhere on Earth (duh). So we're going to need our sum
to exist. How the hell are we going to sum that series though! Not only
is it random, but it also depends on where we are on the planet! Rather
than try to evaluate that sum everywhere on the sphere, I decided that
it would be easiest to evaluate the sum at the "North Pole" at
$\theta=0$. Then, if we picked our coefficients right, this should be
statistically the same as any other point on the planet. Why do we want
to look at $\theta = 0$? Well, if we look back at the
<a href="http://en.wikipedia.org/wiki/Spherical_harmonics">wikipedia entry</a>
for spherical harmonics, we see that </p>
<p>$$ Y_l^m = \sqrt{ \frac{2l +1}{4\pi}\frac{(l-m)!}{(l+m)!}} e^{im\phi}P^m_l(\cos\theta) \quad (3)$$ </p>
<p>That doesn't look too helpful -- we've just picked up
another special function $P_l^m$ that we need to worry about. But there is a
trick with these special functions $P_l^m$: at $\theta = 0$, $P_l^m$ is 0 if $m$
isn't 0, and $P_l^0$ is 1. So at $\theta = 0$ this is simply: </p>
<p>$$ Y_l^m(\theta = 0) = \begin{cases} \sqrt{(2l+1)/4\pi}, & m=0 \\ 0, & m \ne 0 \end{cases} $$ </p>
<p>Now we just have, from every equation we've written down: </p>
<p>$$ h(\theta = 0) = \sum_l l^{-p} \times \sqrt{(2l+1)/4\pi }\times N(0,1) $$</p>
<p>$$ \quad \qquad = \frac{1}{\sqrt{4\pi}} \times \sum_l N(0,\,l^{-2p}(2l+1)) $$</p>
<p>$$ \quad \qquad = \frac{1}{\sqrt{4\pi}} \times N\left(0,\,\sum_l l^{-2p}(2l+1) \right) $$ </p>
<p>$$ \quad \qquad = \frac{1}{\sqrt{4\pi}} \sqrt{\sum_l l^{-2p}(2l+1)} \times N(0,1) $$ </p>
<p>$$ \quad \qquad \sim \sqrt{\sum_l l^{-2p+1}}\, N(0,1) \qquad (4) $$ </p>
<p>So for the surface of our imaginary planet to exist, we had better have that sum
converge, or $-2p+1 < -1$, i.e. $p > 1$. And we've also learned something
else! Our model always gives back a Gaussian height distribution on
the surface. Changing the coefficients changes the variance of
distribution of heights, but that's all it does to the distribution.
Evidently if we want to get a non-Gaussian distribution of heights, we'd
need to stretch our surface after evaluating the sum. Well, what does
the height distribution look like from my simulated planets? Just for
the hell of it, I went ahead and generated ${\sim}400$ independent surfaces at
${\sim}40$ different values for the exponent $p$, looking at the first 22,499
terms in the series. From these surfaces I reconstructed the measured
distributions; I've combined them into a movie which you can see below.</p>
<p>As you can see from the movie, the distributions look like Gaussians.
The fits from Eq. (4) are overlaid in black dotted lines. (Since I can't
sum an infinite number of spherical harmonics with a computer, I've
plotted the fit I'd expect from just the terms I've summed.) As you can
see, they are all close to Gaussians. Not bad. Let's see what else we
can get. </p>
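The Gaussian claim can also be spot-checked without any spherical-harmonic machinery: at the pole, Eq. (4) says the height is a single Gaussian whose variance is the coefficient sum. Here's a rough Monte Carlo sketch; $p=1.3$ and $l \le 150$ mirror the choices used elsewhere in the post, and the sample count is arbitrary.

```python
# Spot-check Eq. (4): at the pole the height is a single Gaussian with
# variance (1/4pi) * sum_l l^(-2p) (2l+1).  p = 1.3 and l <= 150 mirror
# the post's choices; the number of samples is an arbitrary choice.
import math
import random

p, l_max = 1.3, 150
sigma2 = sum(l**(-2*p) * (2*l + 1) for l in range(1, l_max + 1)) / (4 * math.pi)

random.seed(0)
samples = [
    sum(l**(-p) * math.sqrt((2*l + 1) / (4 * math.pi)) * random.gauss(0, 1)
        for l in range(1, l_max + 1))
    for _ in range(2000)
]
sample_var = sum(h * h for h in samples) / len(samples)
print(sigma2, sample_var)   # the two variances should roughly agree
```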
<p><strong>Observation 6</strong>:
According to some famous people, the Earth's surface is
<a href="http://en.wikipedia.org/wiki/How_Long_Is_the_Coast_of_Britain%3F_Statistical_Self-Similarity_and_Fractional_Dimension">probably a fractal</a>
whose coastlines are non-differentiable.
This means that we want a value of $p$ that will make our surface rough
enough so that its gradient doesn't exist (the derivative of the sum of
Eq. (2) doesn't converge). At this point I'm getting bored with writing
out calculations, so I'm just going to make some scaling arguments. From
Eq. (3), we know that each of the spherical harmonics $Y_l^m$ is related to
a polynomial of degree $l$ in $\cos \theta$. So if we take a derivative, I'd
expect us to pick up another factor of $l$ each time. Following through
all the steps of Eq. (4) we find </p>
<p>$$ \vec{\nabla}h \sim \sqrt{\sum_l l^{-2p+3}}\vec{N}(0,1) \quad , $$ </p>
<p>which converges for $p > 2$. So for our planet to be "fractal," we want $1<p<2$.
Looking at the first movie, this seems reasonable. </p>
<p><strong>Observation 7</strong>:
70% of the Earth's surface is under water. On Earth, we can think of the points
underwater as all those points below a certain threshold height. So
let's threshold the heights on our sphere. If we want 70% of our
generated planet's surface to be under water, Eq (4) and the
<a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cumulative distribution function</a>
of a
<a href="http://en.wikipedia.org/wiki/Normal_distribution">Gaussian distribution</a>
tells us that we want to pick a critical height $H$ such that </p>
<p>$$ \frac{1}{2} \left[ 1 + \textrm{erf}\left(H / \sqrt{2\sigma^2}\right) \right] = 0.7 \quad \textrm{or} $$ </p>
<p>$$ H = \sqrt{2\sigma^2}\textrm{erf}^{-1}(0.4) $$ </p>
<p>$$\sigma^2 = \frac 1 {4\pi} \sum_l l^{-2p}(2l+1) \quad (5)\, , $$</p>
<p>where $\textrm{erf}()$ is a special function called the error function,
and $\textrm{erf}^{-1}$ is its inverse. We can evaluate these numerically (or by using some
<a href="http://en.wikipedia.org/wiki/Error_Function#Asymptotic_expansion">dirty tricks</a>
if we're feeling especially masochistic). So for our generated planet,
let's call all the points with a height larger than $H$ "land," and all
the points with a height less than $H$ "ocean." Here is what it looks like
for a planet with $p=0$, $p=1$, and $p=2$, plotted with the same
<a href="http://en.wikipedia.org/wiki/Sinusoidal_projection">Sanson projection</a>
as before.</p>
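For the numerical evaluation mentioned above, here's a hedged sketch that finds the threshold $H$ by bisecting the Gaussian cumulative distribution built from `math.erf`; the values $p=1.3$ and $l \le 150$ are assumptions carried over from earlier in the post.

```python
# Sketch of Eq. (5): solve (1/2)(1 + erf(H / sqrt(2 sigma^2))) = 0.7
# for H by bisection.  p = 1.3 and l <= 150 are assumed values.
import math

p, l_max = 1.3, 150
sigma2 = sum(l**(-2*p) * (2*l + 1) for l in range(1, l_max + 1)) / (4 * math.pi)

def water_fraction(H):
    """Fraction of a Gaussian height field lying below height H."""
    return 0.5 * (1 + math.erf(H / math.sqrt(2 * sigma2)))

lo, hi = 0.0, 10.0 * math.sqrt(sigma2)
for _ in range(60):                    # bisect down to machine precision
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if water_fraction(mid) < 0.7 else (lo, mid)
H = 0.5 * (lo + hi)
print(H, water_fraction(H))            # ~70% of heights fall below H
```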
<p>
<img src="https://thephysicsvirtuosi.com/images/creating-an-earth/allContinents.png" width="50%" alt="allContinents" align="center">
</p>
<p><sub>
Top to bottom: p=0, p=1, and p=2. I've colored all the "water" (positions with heights < $H$ as given in Eq. (5) ) blue and all the land (heights > $H$) green.
</sub></p>
<p>You can see that the total amount of land area is roughly constant
among the three images, but we haven't fixed how it's distributed.
Looking at the map above for $p=0$, there are lots of small "islands"
but no large contiguous land masses. For $p=2$, we see only one
contiguous land mass (plus one 5-pixel island), and $p=1$ sits somewhere
in between the two extremes. None of these look like the Earth, where
there are a few large landmasses but many small islands. From our
previous arguments, we'd expect something between $p=1$ and $p=2$ to look
like the Earth, which is in line with the above picture. But how do we
decide which value of $p$ to use? </p>
<p><strong>Observation 8</strong>:
The Earth has 7 continents. This one is more vague than the others, but I think it's the
coolest of all the arguments. How do we compare our generated planets to
the Earth? The Earth has 7 continents that comprise 4 different
contiguous landmasses. In order, these are 1) Europe-Asia-Africa, 2)
North and South America, 3) Antarctica, and 4) Australia, with a 5th,
Greenland, barely missing out. In terms of fractions of the Earth's
surface, Google tells us that Australia covers 0.15% of the Earth's
total surface area, and Greenland covers 0.04%. So let's define a
"continent" as any contiguous landmass that accounts for more than 0.1%
of the planet's total area. Then we can ask: What value of $p$ gives us
a planet with 4 continents? I have no idea how to calculate exactly what
that number would be from our model, but I can certainly measure it from
the simulated results. I went ahead and counted the number of continents
in the generated planets.</p>
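The counting step can be sketched as a simple flood fill over the thresholded land/ocean map. This toy version works on a flat 2D array with no spherical wraparound, so it only illustrates the idea, not the full spherical calculation; the example grid is made up.

```python
# Hedged sketch of continent counting: flood-fill connected landmasses on
# a boolean land/ocean grid, keeping those above 0.1% of the total area.
# A flat 4-connected grid stands in for the real spherical surface.

def count_continents(land, min_frac=0.001):
    """Count 4-connected land components covering > min_frac of the grid."""
    rows, cols = len(land), len(land[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if land[i][j] and not seen[i][j]:
                stack, size = [(i, j)], 0      # flood fill this landmass
                seen[i][j] = True
                while stack:
                    a, b = stack.pop()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < rows and 0 <= nb < cols \
                                and land[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            stack.append((na, nb))
                if size > min_frac * rows * cols:
                    count += 1
    return count

grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
print(count_continents(grid))   # two landmasses large enough to count
```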
<p>
<img src="https://thephysicsvirtuosi.com/images/creating-an-earth/numContinents.png" width="50%" alt="allContinents" align="center">
</p>
<p>The results are plotted above. The solid red line is the median values
of the number of continents, as measured over 400 distinct worlds at 40
different values of $p$. The red shaded region around it is the band
containing the upper and lower quartiles of the number of continents.
For comparison, in black on the right y-axis I've also plotted the log
of the total number of landmasses at the resolution I've used. The
number of continents has a resonant value of $p$ -- if $p$ is too small,
then there are many landmasses, but none are big enough to be
continents. Conversely, if $p$ is too large, then there is only one huge
landmass. Somewhere in the middle, around $p=0.5$, there are about 20
continents, at least when only the first ${\sim}23000$ terms in the series are
summed. Looking at the curve, we see that there are roughly two places
where there are 4 continents in the world -- at $p=0.1$ and at $p=1.3$.
Since $p=0.1$ doesn't converge, and will have way too many
landmasses, it looks like a generated Earth will look the best if we use
a value of $p=1.3$. And that's it. <a id="note4"></a>
For your viewing pleasure, here is a video of three of these planets below,
complete with water, continents, and mountains.<a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#fnote4"><sup>[4]</sup></a></p>
<hr>
<p><strong>Notes</strong></p>
<p><a id="fnote1"></a>
1. <a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#note1">^</a> Since I wanted a random surface, I wanted to make the mean of each
coefficient 0. Otherwise we'd get a deterministic part of our surface
heights. I picked a distribution that's symmetric about 0 because on
Earth the bottom of the oceans seem roughly similar in terms of changes
in elevation. I wanted to pick a stable distribution & independent
coefficients because it makes the sums that come up easier to evaluate.
Finally, I picked a Gaussian, as opposed to another stable distribution
like a Lorentzian, because the tallest points on Earth are finite, and I
wanted the variance of the planet's height to be defined.</p>
<p><a id="fnote2"></a>
2. <a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#note2">^</a> We could make this rigorous by showing that a rotated spherical
harmonic is orthogonal to other spherical harmonics of a different
degree $l$, but you don't want to see me try.</p>
<p><a id="fnote3"></a>
3. <a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#note3">^</a> Actually $p=0$ should correspond to completely uncorrelated
delta-function noise. (You can convince yourself by looking at the
spherical harmonic expansion for a delta-function.) The reason that the
bumps have a finite width is that I only summed the first 22,499 terms
in the series ($l=150$ and below). So the size of the bumps gives a rough
idea of my resolution.</p>
<p><a id="fnote4"></a>
4. <a href="https://thephysicsvirtuosi.com/posts/old/creating-an-earth/#note4">^</a> For those of you keeping score at home, it took me more than 6 days
to figure out how to make these planets.</p></div>https://thephysicsvirtuosi.com/posts/old/creating-an-earth/Sat, 27 Oct 2012 19:07:00 GMT
- A Homemade Viscometer Ihttps://thephysicsvirtuosi.com/posts/old/a-homemade-viscometer-i/Brian<div><p>Stirring a bowl of honey is much more difficult than stirring a bowl of
water. But why? The mass density of the honey is about the same as that
of water, so we aren't moving more material. If we were to write out
Newton's equation, $ma$ would be about the same, yet we still need
to put in much more force. Why? And can we measure it? </p>
<p>The reason that
honey is harder to stir is of course that the drag on our spoon depends
on more than just the density of the fluid. The drag also depends on the
viscosity of the fluid -- loosely speaking, how thick it is -- and the
viscosity of honey is about 400 times that of water, depending on the
conditions. In fact, a quick perusal of the Wikipedia article on
<a href="http://en.wikipedia.org/wiki/Viscosity">viscosity</a>
shows that viscosities can vary by a fantastic amount -- some 13 orders of
magnitude, from easy-to-move gases to
<a href="http://en.wikipedia.org/wiki/Pitch_drop_experiment">thick pitch</a>
that behaves like a solid except on
long time scales. The situation is even more complicated than this, as
<a href="http://en.wikipedia.org/wiki/Non-Newtonian_fluid">some fluids</a>
can have a viscosity that changes depending on the flow. I wanted to
find a way to measure the viscosities of the stuff around me, so I made
the <a href="http://en.wikipedia.org/wiki/Viscometer">viscometer</a> pictured below
for about $1.75 (the vending machines in Clark Hall are pretty
expensive).</p>
<div style="float: center">
<figure>
<img src="https://thephysicsvirtuosi.com/images/viscometer1/visc_fig1.jpg" alt="A PLOT???" width="40%">
<figcaption> My homemade viscometer, taking data on the viscosity of water. </figcaption>
</figure>
</div>
<p>To do this, I </p>
<ol>
<li>
<p>Enjoyed the crisp, refreshing taste of Diet Pepsi from a 20 oz
bottle (come on, sponsorships).</p>
</li>
<li>
<p>Cut the top and bottom off the bottle, so all that was left was a
straight tube.</p>
</li>
<li>
<p>Mounted the bottle on top of a small piece of flat plastic.</p>
</li>
<li>
<p>Mounted a single-tubed coffee stirrer horizontally out of the bottle
(I placed the end towards the middle of the bottle to avoid end
effects).</p>
</li>
<li>
<p>Epoxied or glued the entire edge shut.</p>
</li>
<li>
<p>Marked evenly-spaced lines on the side of the bottle.</p>
</li>
</ol>
<p>I can load my "sample" fluid in the top of the Pepsi bottle, and time
how long it takes for the sample level to drop to a certain point. A
more viscous fluid will take more time to leave the bottle, with the
time directly proportional to the viscosity. (This is a consequence of
Stokes flow and the equation for flow in a pipe. It will always be true,
as long as my fluid is viscous enough and my apparatus isn't too big.)</p>
<p>So we're done! All we need to do is calibrate our viscometer with one
sample, measure the times, and then we can go out and measure stuff in
the world! No need to stick around for the boring calculations! We can
do some fun science over the next few blog posts! </p>
<p>But this is a physics
blog written by a bunch of grad students, so I'm assuming that a few of
you want to see the details. (I won't judge you if you don't though.) If
we think about the problem for a bit, we basically have flow of a liquid
through a pipe (i.e. the coffee stirrer), plus a bunch of other crap
which hopefully doesn't matter much. </p>
<p>We first need to think about how
the fluid moves. We want to find the velocity of the fluid at every
position. This is best cast in the language of vector calculus -- we
have a (vector) velocity field $\vec{u}$ at each position $\vec{x}$.
There are two things we know: 1) We don't (globally) gain or lose any
fluid, and 2) Newton's laws $F=ma$ hold. We can write these equations as
the Navier-Stokes equations: </p>
<p>$$ \vec{\nabla}\cdot \vec{u} = 0 \quad (1) $$ </p>
<p>$$ \rho \left( \frac {\partial \vec{u}} {\partial t} + (\vec{u}\cdot\vec{\nabla})\vec{u} \right) = - \vec{\nabla}p + \eta \nabla^2 \vec{u} \quad (2) $$ </p>
<p>The first equation basically
says that we don't have any fluid appearing or disappearing out of
nowhere, and the second is basically $m \vec{a}=\vec{F}$, except written per
unit volume. (The fluid's mass-per-unit-volume is $\rho$, the rate of
change of our velocity is $\frac{d\vec{u}}{dt}$, our force per unit volume is
the pressure gradient $-\vec{\nabla}p$, and there is a viscous term
$\eta \nabla^2 \vec{u}$.) The only complication is that $\frac{d\vec{u}}{dt}$
is a total derivative, which we need to
write as </p>
<p>$$ \frac{d\vec{u}}{dt} = \frac{\partial \vec{u}}{\partial t} + \left( \frac{d\vec{x}}{dt}\cdot\vec{\nabla} \right) \vec{u} = \frac{\partial \vec{u}}{\partial t} + (\vec{u}\cdot\vec{\nabla})\vec{u}$$</p>
<p>I won't drag you through the
<a href="http://www.4shared.com/office/y9ay-fNh/Homemade_viscometer_-gory_sect.html?refurl=d1url">gory details</a>,
unless you want to see them, but it turns out that for my system the
height of the fluid $h$ (measured from the coffee stirrer) versus time
$t$ is </p>
<p>$$ h(t) = h(0)e^{-t/T}, \quad T= 60.7 \textrm{sec} \times [\eta / \textrm{1 mPa s}] \times [\textrm{ 1 g/cc} / \rho] $$ </p>
<p>[For my viscometer, the coffee stirrer has length 13.34 cm and inside
diameter 2.4 mm, and the Pepsi bottle has a cross-sectional area of 36.3
square centimeters (3.4 cm inner radius). You can see how the timescale
scales with these properties in the
<a href="http://www.4shared.com/office/y9ay-fNh/Homemade_viscometer_-gory_sect.html?refurl=d1url">gory details section</a>.]</p>
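<p>For the curious, the quoted timescale can be reproduced in a few lines, assuming (as in the gory details) Poiseuille flow through the stirrer driven by the hydrostatic pressure of the fluid column:</p>

```python
import math

# Dimensions quoted above, in SI units
L_pipe = 0.1334   # coffee-stirrer length, m
R_pipe = 1.2e-3   # stirrer inner radius, m (2.4 mm diameter)
A_bot = 36.3e-4   # bottle cross-sectional area, m^2
g = 9.8           # gravitational acceleration, m/s^2
eta = 1.0e-3      # viscosity, Pa*s (water)
rho = 1000.0      # density, kg/m^3

# Poiseuille flow: Q = pi R^4 dp / (8 eta L_pipe), with dp = rho g h.
# Conservation, A_bot dh/dt = -Q, then gives h(t) = h(0) exp(-t/T) with:
T = 8 * eta * L_pipe * A_bot / (math.pi * R_pipe**4 * rho * g)
print(round(T, 1))  # ~60.7 s for water, matching the formula above
```

<p>Since $T$ scales as $\eta/\rho$, timing the same level drop for two different fluids gives their viscosity ratio directly.</p>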
<div style="float: center;">
<figure>
<img src="https://thephysicsvirtuosi.com/images/viscometer1/visc_fig2.png" alt="A PLOT" width="70%">
<figcaption style="font-size:small;">
A run with measured heights vs times & error bars. The
majority of the uncertainty turns out to come from not knowing the
exact proportions of the viscometer. I don't know exactly why the
heights are systematically deviating from the fit, but I suspect it's
that my gridlines aren't perfectly lined up with the bottom of my
viscometer (it looks like $\sim 5$ mm off would do it, which I can totally
believe looking at the picture of my viscometer). However, because of
the linearity of the equations for steady flow in a pipe, we know that
the time scales linearly with the viscosity, so we should be able to
accurately measure relative viscosities.
</figcaption>
</figure>
</div>
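<p>A fit like the one in the figure can be sketched with scipy; the height readings below are synthetic stand-ins (the raw measurements aren't published), generated from the water timescale with a little noise:</p>

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.linspace(0.0, 180.0, 13)                               # timing marks, s
h = 10.0 * np.exp(-t / 60.7) + rng.normal(0.0, 0.05, t.size)  # heights, cm

def model(t, h0, T):
    # Exponential drainage, h(t) = h(0) exp(-t/T)
    return h0 * np.exp(-t / T)

(h0_fit, T_fit), cov = curve_fit(model, t, h, p0=(10.0, 50.0))
print(T_fit)  # should land near the 60.7 s put into the synthetic data
```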
<p>Well, how well does it work? Above is a plot of the height of water in
my viscometer versus time, with a best-fit value from the equations
above. To get a sense of my random errors (such as how good I am at
timing the flow), I measured this curve 5 separate times. If I take into
account the uncertainties in my apparatus setup as systematic errors, I
find a value for my viscosity as </p>
<p>$$ \eta \approx 1.429~\textrm{mPa s} \pm 0.5\%~\textrm{Rand.} \pm 55\%~\textrm{Syst.} $$ </p>
<p>The actual value of the viscosity of water at room temperature (T=25 C) is about
$0.86~\textrm{mPa s}$, which is more-or-less within my systematic errors. So it
looks like I won't be able to measure absolute values of viscosity
accurately without a more precise apparatus. But if I look at the
variation of my measured viscosity, I see that I should probably be able
to measure changes in viscosity to 0.5% !! That's pretty good! Hopefully
over the next couple weeks I'll try to use my viscometer to measure some
interesting physics in the viscosity of fluids.</p></div>https://thephysicsvirtuosi.com/posts/old/a-homemade-viscometer-i/Tue, 24 Jul 2012 22:32:00 GMT
- How Cold is the Ground IIhttps://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/Brian<div><div style="float: right; margin: 0px 0px 0px 10px">
<figure>
<img src="https://thephysicsvirtuosi.com/images/how-cold-is-the-ground-ii/mainImage.png" alt="GAH!" width="40%">
<figcaption style="text-align:center;">
Images <a href="http://en.wikipedia.org/wiki/File:Ithaca_Hemlock_Gorge.JPG">from</a>
<a href="http://en.wikipedia.org/wiki/File:Mercury_in_color_-_Prockter07_centered.jpg"> Wikipedia
</a>
</figcaption>
</figure>
</div>
<p><a href="http://thephysicsvirtuosi.com/posts/how-cold-is-the-ground-.html">Last week</a>
(ok, it was a little more than a few days ago...) I used
dimensional analysis to figure out how the ground's temperature changes
with time. But although dimensional analysis can give us information
about the length scales in the problem, it doesn't tell us what the
solution looks like. From dimensional analysis, we don't even know what
the solution does at large times and distances. (Although we can usually
see the asymptotic behavior directly from the equation.) So let's go
ahead and solve the heat equation exactly: </p>
<p>$$ \frac {\partial T}{\partial t} = a \frac {\partial ^2 T}{\partial x^2} \quad (1)$$ </p>
<p>Well, what type of solution do we want to this equation? We want the
temperature at the Earth's surface $x=0$ to change with the days or the
seasons. So let's start out modeling this with a sinusoidal dependence
-- we'll look for a solution of the form </p>
<p>$$ T(x,t) = A(x)e^{i \omega t} $$</p>
<p>for some function $A(x)$, then we can take the real part for our
solution. Plugging this into Eq. (1) gives
$A^{\prime\prime} = i\omega/a \times A$, or </p>
<p>$$ A(x) = e^{ \pm \sqrt{\omega/2a} (1+i) x} $$ </p>
<p>Since we have a second-order
ordinary differential equation for $A$, we have two possible solutions,
which are like $\exp(+x)$ or $\exp(-x)$. Which one do we choose? </p>
<p><a id="note1"></a>
Well, we want the temperature very far away from the surface of the ground to be
constant, so we need the solution that decays with distance,
$A\exp(-x)$. Taking the real part of this solution, we
find<a href="https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/#fnote1"><sup>[1]</sup></a> </p>
<p>$$ T(x,t) = T_0 \cos (\omega t - \sqrt{\omega/2a}\, x ) e^{-\sqrt{\omega/2a}\, x} \quad (2) $$ </p>
<p>Well, what does this solution <em>say</em>?
As we expected from our scaling arguments last week, the distance scale
depends on the <em>square root</em> of the time scale -- if we decrease our
frequency by a factor of 4 (say, looking at changes over a season vs. over
a month), the temperature variation penetrates only $2{\times}$ deeper. We
also see that the temperature oscillation drops off quite rapidly as we go
deeper into the ground, and that there is a "lag" that grows the farther
you go into the ground. In particular, we see that deep into the ground,
the temperature settles at the time-averaged value of the surface
temperature. You can see all of this in the pretty
plot below (generated with Python):</p>
<div style="float: center;">
<figure>
<img src="https://thephysicsvirtuosi.com/images/how-cold-is-the-ground-ii/SingleFrequency.png" alt="GAH!" width="60%">
<figcaption style="text-align:center;">
Single frequency plot
</figcaption>
</figure>
</div>
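<p>Here's a numpy sketch of the field behind a plot like this one; the diffusivity, annual period, and surface amplitude below are illustrative assumptions:</p>

```python
import numpy as np

a = 0.5e-6                # thermal diffusivity, m^2/s (assumed)
P = 365.25 * 86400.0      # forcing period: one year, s
w = 2 * np.pi / P         # angular frequency
k = np.sqrt(w / (2 * a))  # inverse decay length, 1/m

x = np.linspace(0.0, 20.0, 201)[:, None]  # depth below the surface, m
t = np.linspace(0.0, P, 400)[None, :]     # time over one period, s
T0 = 15.0                                 # surface oscillation amplitude, deg C

# The decaying, lagging thermal wave; feed T to matplotlib's pcolormesh to plot.
T = T0 * np.cos(w * t - k * x) * np.exp(-k * x)

print(T[0].max(), T[-1].max())  # full swing at the surface, essentially none at 20 m
```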
<p>Let's recap. To model the temperature of the ground, we looked for a
solution to the heat equation which had a sinusoidally oscillating
temperature at $x=0$, and decayed to 0 at large $x$. We found such a
solution, and it shows that the temperature oscillation decays rapidly as
we go far into the ground. At this point, there are two questions that pop
into mind: </p>
<p>1) Is the solution that we found <em>unique</em>? Or are there other
possible solutions? </p>
<p>2) This is all well and good, but what if our days
or seasons <em>aren't perfect sines</em>? Can we find a solution that describes
this behavior? </p>
<p><a id="note2"></a>
I'll give one (1) VirtuosiPoint to the first commenter
who can prove to what extent the above solution is
unique<a href="https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/#fnote2"><sup>[2]</sup></a>. But how about the second point? Can we solve this
for non-sinusoidal time variations? Well, at this point most of the
readers are rolling their eyes and shouting "Use a
<a href="http://en.wikipedia.org/wiki/Fourier_series">Fourier series</a> and move on." So I
will. Briefly, it turns out that (more or less) <em>any</em> periodic function
can be written as a sum of sines & cosines. So we can just add a bunch
of sines and cosines together and construct our final solution. So just
for fun, here is a plot of the temperature of the ground in Ithaca (data
from <a href="http://en.wikipedia.org/wiki/Ithaca,_New_York">Wikipedia</a>) over a
year. (I used a discrete Fourier transform to compute the coefficients.)</p>
<div style="float: center;">
<figure>
<img src="https://thephysicsvirtuosi.com/images/how-cold-is-the-ground-ii/IthacaTemp.png" alt="ithaca temp!" width="60%">
<figcaption style="text-align:center;">
The temperature (colorbar) is in degrees C, assuming a=0.5 mm^2/s from <a href="http://thephysicsvirtuosi.com/posts/how-cold-is-the-ground-.html">before</a>.
</figcaption>
</figure>
</div>
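<p>Here's a minimal version of that construction: take the DFT of the monthly temperatures, then let each Fourier mode decay and lag with its own $\sqrt{\omega/2a}$. The monthly values below are rough stand-ins for the Wikipedia data:</p>

```python
import numpy as np

# Approximate monthly mean temperatures for Ithaca, deg C (stand-in data)
T_month = np.array([-4.7, -3.9, 0.6, 7.3, 13.4, 18.4,
                    20.8, 19.9, 15.6, 9.4, 3.8, -2.2])
N = len(T_month)

a = 0.5e-6               # thermal diffusivity, m^2/s (as before)
year = 365.25 * 86400.0  # s
w0 = 2 * np.pi / year    # fundamental (annual) frequency

c = np.fft.rfft(T_month) / N  # one-sided Fourier coefficients

def T(x, t):
    """Ground temperature at depth x (m) and time t (s)."""
    out = c[0].real
    for n in range(1, len(c)):
        scale = 1.0 if 2 * n == N else 2.0  # the Nyquist term isn't doubled
        k = np.sqrt(n * w0 / (2 * a))       # decay/lag rate for mode n
        out += scale * (c[n] * np.exp(1j * n * w0 * t - (1 + 1j) * k * x)).real
    return out

# Sanity checks: the surface reproduces the data; 20 m down it's the mean.
print(T(0.0, 0.0), T(20.0, 0.0))
```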
<p>Looks pretty boring, but I swear that all the frequencies are in that
plot. It just turns out that the seasons in Ithaca are pretty
sinusoidal. So about 20 meters below Ithaca, the temperature is a pretty
constant 8 C. While I was postponing writing this, I wondered what the
temperature on Mercury's rocks would be. If we dig deep enough, can we
find an area with habitable temperatures? Some
<a href="http://hypertextbook.com/facts/2000/OlesyaNisanov.shtml">quick</a>
<a href="http://en.wikipedia.org/wiki/Mercury_%28planet%29#Surface_conditions_and_.22atmosphere.22_.28exosphere.29">Googlin</a>'
shows that the daytime and nighttime temperatures on Mercury are
${\sim}550-700~{\rm K}$ and ${\sim}110~{\rm K}$ at the "equator."
While I don't think that Mercury's temperature varies symmetrically, let's assume so for lack of
better data.<a href="https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/#fnote3"><sup>[3]</sup></a> Then we'd expect that deep into the
surface, the temperature would be fairly constant in time, at the
average of these two extremes. Plugging in the numbers
(assuming $a\approx0.52~{\rm mm}^2/{\rm s}$ and taking Mercury's solar day to be 176 Earth days), we get</p>
<p>$T=94~{\rm C}$ at 2.75 meters into the surface.</p>
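<p>A quick numerical check of that claim (625 K is the midpoint of the quoted daytime range; this reproduces the quoted depth and temperature to within a few percent):</p>

```python
import math

a = 0.52e-6                    # thermal diffusivity, m^2/s
P = 176 * 86400.0              # Mercury's solar day, s (176 Earth days)
T_day, T_night = 625.0, 110.0  # K (625 K = midpoint of 550-700 K)

l = math.sqrt(a * P)              # depth scale, l = sqrt(w a)
T_deep = 0.5 * (T_day + T_night)  # symmetric variation -> deep temp = average

print(l, T_deep - 273.15)  # a few meters down, a steady ~94 C
```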
<p><a id="fnote1"></a>
1. <a href="https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/#note1">^</a> More precisely, since the heat equation is linear and real, if
$T(x,t)$ is a solution to the equation, then so are $\frac{1}{2}(T+T^{*})$ or
$\frac{1}{2i}(T-T^{*})$.</p>
<p><a id="fnote2"></a>
2. <a href="https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/#note2">^</a> Hint: It's not unique. For instance, here is another solution that
satisfies the constraints, with no internal heat sources or sinks
(I'll call it the "freshly buried" solution):</p>
<div style="float: center;">
<figure>
<img src="https://thephysicsvirtuosi.com/images/how-cold-is-the-ground-ii/buriedAlive.png" alt="buried alive" width="60%">
<figcaption style="text-align:center;">
Freshly buried.
</figcaption>
</figure>
</div>
<p>Can you prove that all the other solutions decay to the original
solution? Or is there a second or even a spectrum of steady state
solutions?</p>
<p><a id="fnote3"></a>
3. <a href="https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/#note3">^</a> If someone provides me with better data of the time variation of
Mercury's surface at some specific latitude, I'll update with a full
plot of the temperature as a function of depth and time.</p></div>https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-ii/Sat, 26 May 2012 21:28:00 GMT
- How Cold is the Ground?https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-/Brian<div><p>It snowed in Ithaca a few weeks ago. Which sucked. But fortunately, it
had been warm for the previous few days, and the ground was still warm
so the snow melted fast. Aside from letting me enjoy the absurd
arguments against global warming that snow in April birthed, this got me
thinking: How cold is the ground throughout the year? At night vs.
during the day? And the corollary: How cold is my basement? If I dig a
deeper basement, can I save on heating and cooling? (I'm very cheap.)</p>
<p>Well, we want to know the temperature distribution $T$ of the ground as
a function of time $t$ and position $x$. So some googlin' or previous
knowledge shows that we need to solve the
<a href="http://en.wikipedia.org/wiki/Heat_equation">heat equation</a>. </p>
<p>For our purposes,
we can treat the Earth as flat (I don't plan on digging a basement deep
enough to see the curvature of the Earth), so we can assume the
temperature only changes with the depth into the ground $x$: </p>
<p>$$ \frac{\partial T}{\partial t} = a \frac{\partial^2 T}{\partial x^2}\qquad (1) $$ </p>
<p>where $a$ is the thermal diffusivity of the material, in
units of square meters per second. It looks like we're going to have to
solve some partial differential equations! Or will we? We can get a very
good estimate of how much the temperature changes with depth just by
dimensional analysis. </p>
<p>Let's measure our time $t$ in terms of a
characteristic time of our problem $w$
(it could be 1 year if we were trying to see the change in the ground's temperature from summer to winter,
or 1 day if we were looking at the change from day to night).
Then we can write: </p>
<p>$$ \frac{\partial T }{\partial t} = \frac{1}{w} \frac {\partial T} {\partial t/w} $$ </p>
<p>plugging this in Eq. (1),
rearranging, and calling $l= \sqrt{wa}$ gives.... </p>
<p>$$ \frac{\partial T}{\partial (t/w)} = \frac{\partial ^2 T}{\partial (x/l )^2} $$ </p>
<p>Now let's say we didn't know how to or didn't want to solve
this equation. (Don't worry, we do & we will). From rearranging this
equation, we see right away there is only one "length scale" in the
problem, $l$. So if we had to guess, we could guess that the ground
changes temperature a distance $l$ into the ground. A quick look at
Wikipedia for
<a href="http://en.wikipedia.org/wiki/Thermal_diffusivity">thermal diffusivities</a>
gives us the following table, for materials we'd find in the ground:</p>
<table>
<thead>
<tr>
<th>Material</th>
<th align="center">$a$ (${\rm mm}^2 / {\rm s}$)</th>
<th align="center">$l$ ($\rm{cm}$, $1~{\rm day}$)</th>
<th align="center">$l$ ($\rm{m}$, $1~{\rm year}$)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Polycrystalline Silica (glass, sand)</td>
<td align="center">0.83</td>
<td align="center">27</td>
<td align="center">5.1</td>
</tr>
<tr>
<td>Crystalline Silica (quartz)</td>
<td align="center">1.4</td>
<td align="center">35</td>
<td align="center">6.6</td>
</tr>
<tr>
<td>Sandstone</td>
<td align="center">1.15</td>
<td align="center">32</td>
<td align="center">6.0</td>
</tr>
<tr>
<td>Brick</td>
<td align="center">0.52</td>
<td align="center">21</td>
<td align="center">4.0</td>
</tr>
<tr>
<td><a href="http://soilphysics.okstate.edu/software/SoilTemperature/document.pdf">Soil</a></td>
<td align="center">0.3-1.25</td>
<td align="center">16-33</td>
<td align="center">3.1-6.3</td>
</tr>
</tbody>
</table>
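<p>The last two columns of this table are just $l=\sqrt{wa}$ evaluated at $w = 1$ day and $w = 1$ year; a few lines reproduce them:</p>

```python
import math

day, year = 86400.0, 3.156e7  # seconds

materials = {  # a in mm^2/s, from the table above
    "polycrystalline silica": 0.83,
    "crystalline silica": 1.4,
    "sandstone": 1.15,
    "brick": 0.52,
}

for name, a_mm2 in materials.items():
    a = a_mm2 * 1e-6                    # convert to m^2/s
    l_day = math.sqrt(a * day) * 100.0  # daily depth scale, cm
    l_year = math.sqrt(a * year)        # annual depth scale, m
    print(f"{name}: {l_day:.0f} cm (day), {l_year:.1f} m (year)")
```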
<p>So we would expect that the temperature of the ground doesn't change
much on a daily basis a foot or so below the ground, and barely changes
over the year about 15-20 feet into the ground. Just to pat ourselves on
the back for our skills at dimensional analysis, a quick check shows that
<a href="http://en.wikipedia.org/wiki/Permafrost#Time_to_form_deep_permafrost">permafrost</a>
penetrates 14.6 feet into the ground after 1 year. So our dimensional
estimate looks pretty good! In the next few days I'll solve this
equation exactly and throw up a few pretty graphs, and maybe talk a
little about PDE's and Fourier series in the process.</p></div>https://thephysicsvirtuosi.com/posts/old/how-cold-is-the-ground-/Fri, 18 May 2012 00:20:00 GMT
- Earth Day 2012: Escape to the Moonhttps://thephysicsvirtuosi.com/posts/old/earth-day-2012-escape-to-the-moon/Brian<p>It is now Earth Day 2012, and, according to the Mayan predictions, <a href="http://thevirtuosi.blogspot.com/search/label/end%20of%20the%20earth">The
Virtuosi will destroy the
earth</a>.
In a futile attempt to fight my own mortality, I decided to send
something to the Moon. It seems, for a poor graduate student trying to
get to the Moon, the most difficult part is the Earth holding me back.
So first I'll focus on escaping the Earth's gravitational potential
well, and if that's possible, then I'll worry about more technical
problems, such as actually hitting the moon. Moreover, in honor of the
destructive spirit of The Virtuosi near Earth Day, I'll try to do this
in the most Wile E. Coyote-esque way possible.</p>
<p><strong>Preliminaries</strong></p>
<p>If we
want to get to the Moon, we need to first figure out how much energy
we'll need to escape the Earth's gravitational pull. "That's easy!" you
say. "We need to escape a gravitational well, and we know from Newton's
law that the potential from a spherical mass $M_E$'s gravity for a test
mass $m$ is</p>
<p>$$ \Phi = - \frac {G M_E m}{r} $$</p>
<p>We're currently sitting at the radius of the Earth $R_E$, so we simply
need to plug this value in and we'll find out how much energy we need."
This is all well and good, but i) I can never remember what the
gravitational constant $G$ is, and ii) I have no idea what the mass of
the Earth $M_E$ is. So let's see if we can recast this in a form that's
easier to do mental arithmetic in. Well, we know that the force of
gravity is related to the potential by</p>
<p>$$ \vec{F}(r) = - \vec{\nabla} \Phi = - \frac {d\Phi}{dr} \hat{r} = - \frac {G m M_E}{r^2} \hat{r} $$</p>
<p>Moreover, we all know that the force of gravity at the Earth's surface
is $F(r=R_E) = -mg$. Substituting this in gives</p>
<p>$$ \frac {G m M_E}{R_E^2} = m g \quad \textrm{, or} \quad \frac {G m M_E}{R_E} = m g R_E \quad . $$</p>
<p>So the depth of the Earth's potential well at the Earth's surface is
$m g R_E$. If we use $g = 9.8~\textrm{m/s}^2 \approx 10~\textrm{m/s}^2$ and
$R_E = 6378~\textrm{km} \approx 6 \times 10^6~\textrm{m}$, then we can
write this as</p>
<p>$$ \Delta \Phi = m g R_E \approx m \times 6 \times 10^7~\textrm{m}^2/\textrm{s}^2 \quad (1) $$</p>
<p>give or take. How fast do we need to go if we're going to make it to
the Moon? Well, at the minimum, we need the kinetic energy of our object
to be equal to the depth of the potential
well<a href="https://thephysicsvirtuosi.com/posts/old/earth-day-2012-escape-to-the-moon/#footnote-1">[1]</a>, or</p>
<p>$$ \frac 1 2 m v^2 = 6 m \times 10^7~\textrm{m}^2/\textrm{s}^2 \quad \textrm{or} \quad v \approx 1.1 \times 10^4~\textrm{m/s} \quad (2) $$</p>
<p>So we need to go pretty fast -- this is about Mach 33 (33 times the
speed of sound in air). At this speed, we'd get from NYC to LA in under
7 minutes. Looks difficult, but let's see just how difficult it is.</p>
<p><strong>Attempt I: Shoot to the Moon</strong></p>
<p>
What goes fast? Bullets go fast. Can we shoot our payload to the Moon?
Let's make some quick estimates. First, can we shoot a regular bullet to
the Moon? Well, we said that we need to go about Mach 33, and a fast
bullet only goes about Mach 2, so we won't even get close. Since energy
is proportional to velocity squared, we'll only have $(2/33)^2 \approx 0.4\%$
of the necessary kinetic
energy.<a href="https://thephysicsvirtuosi.com/posts/old/earth-day-2012-escape-to-the-moon/#footnote-2">[2]</a> So let's make a
bigger bullet. How big does it need to be? Well, loosely speaking, we
have the chemical potential energy of the powder being converted into
kinetic energy of the bullet. Let's assume that the kinetic-energy
transfer ratio of the powder is constant. If a bullet receives kinetic
energy $\frac 1 2 m_b v_b^2$ from a mass $m_P$ of powder, then for our
payload to have kinetic energy $\frac 1 2 M V^2$, we need a mass of powder
$M_P$ such that</p>
<p>$$ \frac {M_P}{m_P} = \frac {M}{m_b} \times \frac {V^2}{v_b^2} $$</p>
<p>A quick reference to Wikipedia for a
<a href="http://en.wikipedia.org/wiki/7.62%C3%9751mm_NATO">7.62x51mm NATO bullet</a>
shows that ${\sim}25$ grams of powder propels a ${\sim}10$ gram bullet at a
speed of ${\sim}$Mach 2.5. We need to get our payload moving at Mach 33,
so $(V/v_b)^2 \approx 175$. If we send a 10 kg payload to the Moon, we have
$M/m_b \approx 1000$. So we'll need about $1.75 \times 10^5$ times the powder
of a single bullet to get us to the Moon, or about 4400 kg, which is 4.8
tons (English) of powder. That's a lot of gunpowder to get us to the Moon.
For comparison, if we are going to construct a tube-like "case" for our
10 kg bullet-to-the-Moon, it will have to be about half a meter in diameter
and 17 feet tall. So I'm not going to be able to shoot anything to the
Moon anytime soon.</p>
<p><strong>Attempt II: Charge to the Moon</strong></p>
<p>OK, shooting
something to the Moon is out. Can we use an electric field to propel
something to the Moon? Well, we would need to pass a charged object
through a potential difference such that</p>
<p>$$ q \, \Delta \Phi_E = m g R_E = 6 m \times 10^7~\textrm{m}^2/\textrm{s}^2 \quad . $$</p>
<p>After the humiliation of the last section, let's start out small. Can
we send an electron to the Moon? We could plug numbers into this equation,
but I'm too lazy to look up all those values. Instead, we know that we
need to get our electron (rest mass 511 keV) to a speed which is (Eq. 2)</p>
<p>$$ v \approx 1.1 \times 10^4~\textrm{m/s} \approx 4 \times 10^{-5} c . $$</p>
<p>So an electron moving at this velocity will have a kinetic energy of</p>
<p>$$ \textrm{KE} = m c^2 \times \frac 1 2 \frac {v^2}{c^2} = 511~\textrm{keV} \times \frac 1 2 \frac {v^2}{c^2} \approx 511~\textrm{keV} \times 0.8 \times 10^{-9} \approx 0.4 \times 10^{-3}~\textrm{eV} . $$</p>
<p>So we can give an electron enough kinetic energy to get to the Moon
with a voltage difference of 0.4 mV, assuming it doesn't hit anything on
the way up (it will). We can send an electron to the Moon! How about a
proton? Well, the mass of a proton is 1836 times that of an electron, but
with the same charge, so we'd need $1836 \times 0.4~\textrm{mV} \approx 0.73~\textrm{V}$
to get a proton to the Moon -- again, pretty easy. Continuing this
logic, we can send a singly-charged ion with mass 12 amu (<em>i.e.</em> $\textrm{C}^-$)
with a 9 V battery, and a singly-charged ion with mass 150 amu (something
like caprylic acid) using a 110 V voltage drop. (Again, assuming these
don't hit anything on the way up.) How about our 10 kg object? Let's say
we can miraculously charge it with 0.01 C of
charge.<a href="https://thephysicsvirtuosi.com/posts/old/earth-day-2012-escape-to-the-moon/#footnote-3">[3]</a>
Then from Eq. (1), we'd need</p>
<p>$$ 0.01~\textrm{C} \times \Delta \Phi_E \approx 6 \times 10^8~\textrm{J ,} $$</p>
<p>or a potential difference of</p>
<p>$$ \Delta \Phi_E = 6 \times 10^{10}~\textrm{V .} $$</p>
<p>That is a HUGE potential drop. For comparison, if we have two parallel
plates with a surface charge of $0.01~\textrm{C/m}^2$ (again, a huge charge
density), they'd have to be a distance</p>
<p>$$ d = 6 \times 10^{10}~\textrm{V} \times \epsilon_0 / (0.01~\textrm{C/m}^2) \approx 53~\textrm{meters apart} $$</p>
<p>It looks like I won't be able to send something to the Moon
using tools from my basement anytime soon.
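All three back-of-the-envelope numbers above (the Mach-33 speed, the roughly 4400 kg of powder, and the sub-millivolt electron) can be checked in a few lines:

```python
import math

g, R_E = 9.8, 6.378e6       # m/s^2, m
v = math.sqrt(2 * g * R_E)  # KE = m g R_E  =>  v = sqrt(2 g R_E)
mach = v / 340.0            # speed of sound ~340 m/s

# Attempt I: scale a 7.62x51mm NATO round (25 g powder, 10 g bullet, Mach 2.5)
M, m_b, m_P = 10.0, 0.010, 0.025  # payload, bullet, powder masses in kg
powder = m_P * (M / m_b) * (v / (2.5 * 340.0))**2

# Attempt II: voltage giving one electron that speed (non-relativistic)
m_e, q_e = 9.109e-31, 1.602e-19
volts = 0.5 * m_e * v**2 / q_e

print(mach, powder, volts)  # about Mach 33, about 4400 kg, about 0.4 mV
```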
<a id="footnote-1"></a>[1] We'll ignore both air resistance and the Moon's
gravitational attraction for simplicity.</p>
<p><a id="footnote-2"></a>[2] Since the potential $U \sim -1/r$, if we
increase our potential energy by 0.4%, this is (to 1st order) the same as
increasing $r$ by 0.4%. So we'll get $0.004 \times 6378~\textrm{km} \approx 25~\textrm{km}$
above the Earth's surface. Of course
<a href="http://scienceblogs.com/dotphysics/2009/09/how-high-does-a-bullet-go.php">air resistance slows it down a lot</a>.</p>
<p><a id="footnote-3"></a>[3] According to Wikipedia, this is
<a href="http://en.wikipedia.org/wiki/Orders_of_magnitude_%28charge%29">0.04% of the total charge of a thundercloud</a>.
And if our object is uniformly charged with a radius of 1 m, it will
have an electrical self-energy of
$$ U = \frac 1 2 \int \epsilon_0 E^2 \, dV \approx 36~\textrm{kJ} $$</p>https://thephysicsvirtuosi.com/posts/old/earth-day-2012-escape-to-the-moon/Sun, 22 Apr 2012 15:12:00 GMT