Memorial Day Distractions

Summer is upon us. That means summer research and online games. To help you through this three-day weekend and beyond, I thought I'd share some of the more physics-inspired games I've been playing lately to pass the time. I really enjoy physics-based games. When done right, I think they can not only be fun and engaging but also have the opportunity to teach you something.

Colonel Blotto

First up is a web Colonel Blotto tournament that I coded up, which you can find here. Colonel Blotto is a silly little game-theoretic game where you try to assign one hundred soldiers among 10 different fields. Opponents do the same. Neither side knows what the other side is going to do. The game turns out to have some very interesting dynamics. Apparently for more than 12 soldiers, the game has no deterministic optimal strategy, so your guess is as good as mine for how to beat the other players in the pool. It's worth checking out. Feel free to try your hand at the pool. Once participation has settled down, I want to try to analyze the strategies that people employ and see if I can do any interesting statistical physics with them. I spent a good chunk of this weekend working out the bugs, so it should run smoothly now, but be sure to let me know if you discover any errors.
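To get a feel for the dynamics, here's a minimal Python sketch of how one might score a few Blotto matchups under the 100-soldiers, 10-fields rules. The three allocations are made-up examples for illustration, not strategies from the actual tournament pool.

```python
import itertools

def blotto_score(a, b):
    """Score one Blotto matchup: win more fields than your opponent.

    Returns +1 if allocation a wins, -1 if b wins, 0 on a draw.
    """
    wins_a = sum(1 for x, y in zip(a, b) if x > y)
    wins_b = sum(1 for x, y in zip(a, b) if x < y)
    if wins_a > wins_b:
        return 1
    if wins_b > wins_a:
        return -1
    return 0

# A few hypothetical 100-soldier, 10-field allocations.
strategies = {
    "uniform":    [10] * 10,
    "front-load": [20, 20, 20, 20, 20, 0, 0, 0, 0, 0],
    "tilted":     [4, 6, 8, 10, 12, 12, 10, 8, 6, 24],
}

for (n1, s1), (n2, s2) in itertools.combinations(strategies.items(), 2):
    assert sum(s1) == sum(s2) == 100   # legal allocations only
    print(n1, "vs", n2, "->", blotto_score(s1, s2))
```

Even this toy round-robin shows the rock-paper-scissors flavor of the game: no one allocation dominates the others.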

Manufactoria

This game has been sucking up a lot of my evenings lately. Manufactoria is one of the most pleasing puzzle games I've played in a long time. It pretends to be about selecting robots that meet specifications, but really it's about programming finite state machines. I've enjoyed this game a lot. It has a real engineering bent, but unlike a lot of games that require critical thinking to solve a puzzle, once you know how to solve it, it isn't tedious to do so. A great game, and I highly recommend it. One of the best games I've played in a while, although maybe I'm so into it because I'm reading through The Feynman Lectures on Computation right now. He gives as exercises very similar problems to the ones in the game, and it's in regard to exactly these types of problems that he gives this quote. A few tips: (1) You can use asdf or the arrow keys to rotate the pieces. (2) The space bar switches the chirality of the gates (swaps red for blue). (3) To round corners, just make the conveyors meet at a T. (4) You don't have to enter the gates on the open side. (5) During tests, move the slider in the bottom left to the right to speed up time. If you manage to solve all of the puzzles, three bonus ones will appear at the bottom.
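Since the game really is finite state machines in disguise, here's a tiny Python sketch of one. The puzzle spec here ("accept robots whose tape has an even number of red marks") is my own made-up example in the game's style, not one of its actual levels.

```python
# A minimal finite-state machine, Manufactoria-style: the tape is a
# string of 'R' (red) and 'B' (blue) marks, and we accept or reject it.
def accepts(tape):
    state = "even"               # start state: zero reds seen so far
    transitions = {
        ("even", "R"): "odd",
        ("odd",  "R"): "even",
        ("even", "B"): "even",   # blue marks never change the count
        ("odd",  "B"): "odd",
    }
    for symbol in tape:
        state = transitions[(state, symbol)]
    return state == "even"       # accept iff the reds came in pairs

print(accepts("RBRB"))   # True  -- two reds
print(accepts("RBB"))    # False -- one lonely red
```

Every puzzle in the game amounts to picking the states and the transition table; the conveyors and gates are just this dictionary drawn on a factory floor.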

Fantastic Contraption

This one is old. It's been on the internet for a while now, but if you haven't seen Fantastic Contraption you should check it out. It's a very free-form game. You need to get the pink block into the goal, but to do so, you can build any contraption you desire out of wheels and sticks. It's a great time. Most impressive, of course, is the level that some kids have taken it to. Once you've played with the game a bit, be sure to search YouTube for Fantastic Contraption and marvel at the engineering insight some of these kids show. I like this game a lot, but it can get a bit tedious to get your design to work out just right.

Splitter 2

This one is neat. As a kid I read the His Dark Materials trilogy (which started with The Golden Compass). The second book in the series is The Subtle Knife, which stars a knife, the subtle knife in fact, that can cut through anything like butter, and I mean anything, including the fabric between worlds. This led to many daydreams as a kid, and I've spent a lot of time thinking about what a knife like that could actually do. Anyway, Splitter 2 is a puzzle game that works along those lines. You have to get a ball to the goal, but do so by cutting through pieces of wood on the stage. A neat game. Worth a look.

Phun

While I'm here, I can't help but mention Phun. While not strictly a game, Phun is a physics sandbox similar in style to Fantastic Contraption, but with a whole lot more features. This one is loads of fun to play with, and I think it could really serve as a learning tool for some classes. Worth the download. If you know of any other physics-type games, drop them in the comments. I'm always on the lookout for good ones.

Cryopreservation

I'd like to start a short series of posts on what I'm doing this summer. Like most of the first-year physics graduate students, I've found a research group at Cornell for the summer, and if everything goes well, I'll continue working with them after the summer is done. This is good for three reasons. First, I'm getting paid, so I can do things like pay rent. I'm not all about the money, but having a place to live is a big deal to me. Second, I'm getting a chance to explore a new area of research, with limited expectation of commitment on my part. Third, if everything works out, I've found the group I'll be working with for the next 4-6 years. I'd like to spend a few posts here on The Virtuosi discussing first the physics I'm considering, and then what I actually do. Today I'm going to talk about the most exciting-sounding piece of my work, cryopreservation.

What do we mean when we talk about cryopreservation? For many of us the first thought may be Han Solo frozen in carbonite. This is the basic idea. Cryopreservation is the freezing and thawing of biological samples such that they retain biological viability. At least, that's the idea. At the moment, cryopreservation is the freezing and thawing of biological samples while hoping that some of them remain biologically viable. Even though the methods have been in development since the 1950s, successes are limited even for such simple objects as spermatozoa or oocytes (sperm and eggs), let alone larger and much more complicated objects like Harrison Ford.

Why would we want to cryopreserve something? (And did I just invent a verb?) The idea is fairly simple. If we cool down biological samples, biological functions slow down or completely stop. The hope is that if we get samples sufficiently cold, we can suspend biological (and all other) functions indefinitely. This would allow indefinite preservation of many things.
Eggs and sperm for reproductive purposes (both human and other animals), plant seeds, blood (for transfusions), vaccines, drugs (no more pesky shelf life for pharmaceuticals until you unfreeze them). The perfect frozen strawberries. The list of possibilities is huge. And according to my dictionary, cryopreserve is a well-established verb.

Why is it so hard? It's easy to stick something in the freezer. That seems to work on my food. However, it doesn't work well on biological samples. If we just stick them in the freezer, many of the cells are damaged irreparably. We'd like to know why. To answer this we're going to need to take a step down to the cellular level. I'm not a biologist, so I'm not going to attempt to explain cell structure to you. For our current purposes it is sufficient to know that cells have an inside and an outside (then again, doesn't most everything?), separated by a semi-permeable membrane, and that there is water both inside and outside of the cells.

We need to take a brief detour now into the realm of water. Water is a fantastically amazing substance, and I'm not just saying that because we (and most, if not all, life on earth) would die without it. Water is fascinating from a pure physics perspective. Now, when I refer to water, I'm going to mean H2O in any phase. Colloquially we usually mean liquid water when we say water, ice when we mean crystalline frozen water, and steam when we mean water vapor. I'm going to call all of these things 'water' and specify the phase by saying 'frozen, liquid, vapor'. I will refer to frozen crystalline water as ice. So what's so fascinating about water? Well, first off, it is one of the only substances that expands when it crystallizes (forms ice). Most substances become more dense when they transform from a liquid to a solid. Water doesn't. This is why ice floats. There are many other fascinating aspects of water, for example, that liquid water is most dense at 4 C.
This is largely responsible for why fish survive the winter: ice forms on top of lakes and rivers, and the 4 C water stays at the bottom of the lake because it is the densest. But for our purposes, what we really need to know is that water expands on forming ice. When we freeze our cells, this expansion can cause massive problems. Everything in the cell besides the water contracts on cooling. The water crystallizes and expands (by about 9%). If you've ever frozen something liquid in a very full container and seen it split the container open, the idea is very similar. The intracellular water can crystallize and split open the cell membrane. Once you've split the cell membrane, there's no way (that I've seen) to successfully revive that cell. It turns out that ice formation outside the cell isn't such a big deal; it usually just pushes the cells around. The water inside the cells is what is most likely to damage our biological tissues when we freeze them. Early studies showed survival rates between 10^-6% and ~50%. Those are not good odds.

How do we prevent this damage? There are two methods currently in practice to prevent damage on cooling. The first of these is slow cooling. It was discovered that if you cool your sample at a rate of ~1 C/min, very little to no intracellular ice forms, though intercellular ice still forms. The water in and between cells is not pure water, but contains all kinds of things (proteins, salts, etc.). This means that it starts to crystallize not at 0 C, but somewhere lower. This is the reason we salt sidewalks in the winter (at least, in places where we get snow). The salt lowers the freezing point of water, causing the ice and snow to melt even if the temperature is below 0 C. Of course, if the temperature gets low enough it will still freeze. So our intracellular and intercellular water crystallizes below 0 C. It turns out that the intercellular water starts to crystallize before the intracellular water.
This means we have ice formation outside the cell while the water inside the cell is still liquid. Earlier I told you that we needed to know that the membrane of a cell was semi-permeable. This is why. Some water diffuses through the membrane. At room temperature, the liquid water outside the cells diffuses into the cell at the same rate that the liquid water inside the cell diffuses out. Now, if we start crystallizing the intercellular water, there will be less liquid water moving into the cell. But since the intracellular water is still liquid, it will still be diffusing out at the same rate. This results in a partial dehydration of the cell: more liquid water moving out than in. This is a basic example of osmotic pressure. With less liquid water in the cell, when it does crystallize, the ice crystals that form are smaller and less likely to damage the cell membrane. This method is not perfect, because extreme dehydration will also damage cells, due to cell shrinkage and extremely high solute concentration (this phenomenon is called osmotic shock).

The second method that is considered is ultrafast cooling. Here we must take another diversion into physics land. You may have noticed my emphasis on ice as crystalline water. It turns out that when you freeze water, forming a crystal is not the only option available to it (and we're not even going to discuss the variety of crystals it can form). It can also form a glass. Now, if you're not a physicist, you're probably not familiar with glass as a type of substance, but rather as the thing you put in your windows and drink out of. A glass, to a physicist, is essentially a solid without long-range order. The best way to think about it would be to imagine you took liquid water and stopped time for the water (not freezing it in the conventional sense). That's a glass. It looks like liquid water, has the same density as liquid water, but is a solid.
It's a very strange concept, and I'm going to save a more thorough discussion for a later post, because that takes me into a whole different research area I'm working on this summer. For our purposes, it is sufficient to know that we can reach a solid state of water at cold temperatures that has the same density as liquid water. Based on our discussion above, the advantages should be immediately obvious. Without the expansion of ice to worry about, the solidification of water is nowhere near as damaging to our cells. By vitrifying (turning to a glass) our liquid water, we should see much less damage to the cells. To achieve vitrification, you have to cool down the cells much faster (depending on the substance, >1000 C/min). How do we do this? The conventional method is to dunk the cells directly into liquid nitrogen, or put them into a very cold stream of gas (around liquid nitrogen temperatures). I should note that I'm skipping over a very large field of putting cryoprotectants into cells and then freezing them. However, the effect of cryoprotectants is to prevent ice crystallization, so this is just another route to vitrification; it makes vitrification easier to achieve by lowering the rate at which you have to cool the sample.

So that's it? It may seem like that's the solution: cool fast enough to vitrify the liquid water, and we'll get good cryopreservation of our cells. The rest is just technology development. Unfortunately, it's not that easy. It turns out that warming is a harder problem to solve than cooling. When we warm up glassy water, it turns back into supercooled liquid water (liquid water far below the freezing point). I'll discuss this in more detail in my next post on my research. So now we've got really cold water, and it will start forming ice unless we warm up fast enough to prevent that, very similar to how we had to cool down very quickly to prevent ice formation.
The problem is, experiments suggest that 'fast enough' is at least 10-100 times faster than cooling down. Not only that, but on cooling down, we're really trying to get very, very cold. On warming up, we don't want to get much over room temperature, or we'll fry the cells. So we have to warm really, really quickly, in an extremely well-controlled manner. So far, this has proved very challenging to do. A recent study with mouse oocytes achieved a ~90% success rate with their freezing/thawing cycles, and that is the best that I've seen. 90% is pretty good, but we'd like to do better.

So what's your research this summer? It turns out that my research isn't so much into cryopreservation. As I will talk about next post, it is more into the physics of crystallization and glass formation of aqueous substances, where there is a lot of fundamental physics still to be understood. However, one of the most exciting eventual applications is cryopreservation. We've already developed a fast cooling system, and we're working on a fast warming system that will allow us to study crystallization and glass formation, and hopefully extend the techniques we develop into the field of cryopreservation.

That's way cool! Yes. Yes it is. Even if that is a horrible pun.

Note: The question and answer format was inspired by Chad over at Uncertain Principles and his research blogging.
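One more aside: the sidewalk-salt effect from earlier can be made quantitative with the standard dilute-solution freezing-point depression formula, ΔT = i·Kf·m. The salt concentration below is an assumed, roughly physiological value I picked for illustration, not a measured cellular number.

```python
# Freezing-point depression: Delta_T = i * Kf * m
Kf = 1.86      # cryoscopic constant of water, K kg/mol
i = 2          # van 't Hoff factor for NaCl (dissociates into two ions)
m = 0.15       # molality in mol solute / kg water (assumed value)

delta_T = i * Kf * m
print(f"freezing point lowered by about {delta_T:.2f} C")
```

So a salty-ish solution freezes about half a degree low; the crowded soup inside and between cells, with proteins and other solutes piled on, sits somewhat lower still, which is why the intracellular water can stay liquid while ice grows outside.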

I was born on Wednesday

Probability is a tricky thing. There are a lot of nonsensical answers to be had. I just read an article about the recent Gathering for Gardner meeting. Gathering for Gardner is a unique meeting for mathematicians, magicians, and puzzle makers where they get together and talk about interesting things. The meetings were inspired by Martin Gardner, one of the awesomest dudes of our time, who unfortunately just passed away. The question put to the floor was the following:

"I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?"

Think about that for a moment. Not too hard though. The answer turns out to be surprising. Upon reading the question, I thought about it for a long time and managed to confuse myself entirely. Thinking I had gone crazy, I wrote a little python script to test the riddle, which only left me more convinced I had gone insane. I've spent most of the night thinking about it, and after making it halfway to crazy, I've come around and am momentarily convinced the puzzle makes perfect sense. I'm going to attempt to convince you it makes perfect sense, but I plan on doing it in steps so as to reduce the bewilderment.

Playing Cards

Forget the question. Let's play a game of cards. You shuffle a deck and deal me two cards:

I accidentally flip one of them over.

What's the probability that my other card is red? Well, that one's easy: it's about half. Sure, it's not exactly a half. The deck is finite and the draws are done without replacement, so knowing that the card showing is red means there are only 52/2 - 1 = 26 - 1 = 25 red cards left out of 52 - 1 = 51 cards, giving a probability of 25/51, or about 49%. But it's basically a half. Let's do over. Deal me two cards:

Darn, I flipped one of them over again:

What's the probability that my other card is red? About a half still. (Sure, this time it's really 26/51 = 51%.) Nothing mysterious going on. Do over again. Deal me two cards:

This time I'll ask a slightly trickier question. What's the probability that both my cards are red? Ah, well, it's about 1/2 * 1/2, or about 1/4 = 25%. (The real answer is 24.5%.) Alright, smarty pants. What's the probability that I have a red card and a black one? Well, that ought to be about 1/2. (Real answer: 51%.) All in all, I could have a red card, then a black one (RB), or a black one, then a red one (BR), or a red one then a red one (RR), or a black one then a black one (BB). Four distinct possibilities, each of which is equally likely, so the above two answers make complete sense. There is only one way in four to get both cards red, but two ways out of four to have both a red and a black. So far so good. Let's ask a different question. Now I'm going to get a bit obtuse. You deal me two cards. Now you ask me:

Hey Alemi, do you have a red card?

Meaning, do I have at least one red card. I respond, "Yes." Now, go with your gut. You know I have at least one red card. What do you reckon the color of the other one is? Probably black, you say? You'd be correct. Looking at our breakdown above, I could have gotten RR, RB, BR, or BB as my cards dealt. Each was equally likely, but now you know something else. You know that I have at least one red card, so we have only three possibilities left: I have either RR, RB, or BR, each of which was equally likely. So what's the probability that my other card is black? About 2/3, or 67%. (Real answer: 67.5%.) Alright, same situation. You deal me two cards, I reveal that I have at least one red one. What's the probability that my other card is red? Well, obviously 1/3, or 33% (actually 32.5%), since this is the opposite question to the one directly above and follows from the same reasoning. Fine. No problems. All of this makes sense.
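If you'd rather trust a computer than my counting, here's a quick Monte Carlo sketch of that last answer. Only color matters, so the deck is just 26 'R's and 26 'B's.

```python
import random

rng = random.Random(0)
deck = ["R"] * 26 + ["B"] * 26

have_red = other_black = 0
for _ in range(200_000):
    hand = rng.sample(deck, 2)      # deal two cards without replacement
    if "R" in hand:                 # condition: at least one red card
        have_red += 1
        if "B" in hand:             # ...and the other card is black
            other_black += 1

# Should land near 52/77, i.e. about 67.5%
print(other_black / have_red)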

Offspring

Instead of playing cards, let's return to offspring. Let's first look at a classic probability riddle.

I have exactly two children. At least one of them is a boy. What is the probability that the other one is a boy?

If I were to give you this question straightaway, most people would say the probability is 1/2, their reasoning being that boys and girls are equally likely. But having just been led through the playing cards, hopefully it now makes some sense that the true answer to this question is 1/3, or 33%. Originally my family could have been BB, BG, GB, or GG, each of which was equally likely. Telling you I have at least one boy means we are now dealing with only the situations BB, BG, or GB, still all equally likely, making the probability 1/3. Fine. Now, let's reexamine the true question at hand:

"I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?"

Is it a half? Is it 1/3? What do you reckon? At first thought, it seems like the Tuesday bit shouldn't enter into it at all, but on second thought, I've just revealed a lot more information than I did in the previous question. I've told you something specific about one of my children. This is analogous to when I accidentally flipped over one of my cards, revealing not only its color but its rank as well. Hopefully it makes sense that the probability ought to be much closer to a half than to a third. In fact the answer is 13/27 = 48.1%. With a little thought, you should be able to come up with that number yourself. Otherwise, see the article I mentioned at the beginning of this post. They have a nice breakdown at the bottom. Hopefully, if you've read this far, you should be wondering why this question was so mysterious to begin with, and if that's the case, I did my job. If you think the question is obvious, and think it would have been obvious even without the card analogy, try asking the boy-born-on-a-Tuesday question by itself to one of your friends. I guarantee they'll be bewildered. It's a fun problem, and one that illustrates just how strange and counterintuitive probability can be. If you want some other mind-twisting mathematical puzzles, try your hand at the Two Envelopes problem, Bertrand's box paradox, the Birthday problem, or everyone's favorite, the Monty Hall problem. Remember though, try your hand at the problem before reading the answer. Super fun bonus homework question: Let's do cards again. You deal me two cards, and I reveal that I have a red heart. What's the probability that my other card is red?
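For the skeptics, here is roughly the sort of python script I used to convince myself: a Monte Carlo sketch, assuming boys and girls and all seven birth days are equally likely.

```python
import random

rng = random.Random(42)
DAYS = list(range(7))    # 0 will stand in for Tuesday

matches = both_boys = 0
for _ in range(1_000_000):
    # Two children, each with an independent sex and day of birth.
    kids = [(rng.choice("BG"), rng.choice(DAYS)) for _ in range(2)]
    if ("B", 0) in kids:                           # at least one Tuesday boy
        matches += 1
        if kids[0][0] == "B" and kids[1][0] == "B":
            both_boys += 1

# Should land near 13/27, i.e. about 48.1%
print(both_boys / matches)
```

Change the condition to simply "a boy anywhere in kids" and the same script spits out the 1/3 answer from the classic riddle, which is a nice way to see that the Tuesday detail really is doing the work.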

Why is the Grass Green?

I was outside talking with Alemi last week and we were both startled to realize that the frozen white tundras of Ithaca had somehow transformed into fields of green. Apparently the snow was a temporary fixture that covered real live grass. Neato, gang! The joy at seeing green grass led quickly to surprise, confusion, and then anger. Why the heck is grass green? Well, things look a color when they reflect back that color. So grass is green because its pigments (chlorophyll) absorb only a certain range of the visible spectrum, reflecting back the greenish bits. But if I know anything about approximating the sun as a blackbody, I know that it has a peak output at around 500 nm (i.e. green) light. So what's going on? Why are plants blatantly rejecting the most abundant kind of light? Since my initial confusion rested on the assumption of the sun as a blackbody, I decided to take a closer look at the actual spectrum of the sun. Below is a graph showing the wavelength dependence of solar radiation incident on the top of the atmosphere and at sea level. Since plants typically don't live in space, we are most concerned with the sea-level plot. From this plot, it looks like the incident radiation from the sun is fairly level beyond about 450 nm or so. Just going on this graph alone, it looks like plants could just absorb reddish light and do alright for themselves. But do they? Let's take a look at the absorption spectrum of chlorophyll. As it turns out, there are a bunch of different "flavors" of chlorophyll (chlorophyll a, b, c, and d). As far as I could gather, only a and b are important (a is found in just about everything that plays the photosynthesis game, and b is found in plants and green algae). So we need to find the absorption spectra of chlorophyll a and b. After looking for a while at very qualitative drawings, I found this neato-toledo site, which actually gives real live data. Plotting the results, we get the figure below.
Comparing with our handy-dandy wavelength-to-color converter below, we see that there is a big peak in the absorption of both chlorophyll a and b in the dark blue, and lesser peaks in the red. So how do these absorption lines correspond to the incident light at sea level given above? Well, it's kind of tough to check by eye, but it looks like chlorophyll has its biggest peaks right below the plateau in the solar spectrum. Marking the wavelengths of the absorption peaks makes this clear (a = blue, b = green). It seems like plants are using a sub-optimal band of the spectrum! So how do we reconcile this? Well, let's first start at The Beginning. The first photosynthetic organisms developed and spent the first billion or so years of their existence living and evolving in the oceans. The solar spectrum we have been using so far is accurate only in air at sea level. Presumably it would be favorable for the organism to be able to survive at some finite depth in the ocean and not merely at the surface. Thus we must consider the effects of water on our incident solar radiation. A plot of the absorption spectrum of water is shown below (note the log-log scale). Lo and behold, the minimum absorption of visible light in water occurs towards the far end, in the blue. And this is exactly where our biggest absorption peak of chlorophyll is! Comparing our two graphs, we see that the ratio of incident blue light to incident green light at sea level is at worst about a third. But we see that for each meter traveled in water, green light is absorbed almost ten times more than blue light. Thus, an organism that lives a few meters underwater and wants to harness solar energy would probably do best to focus on that blue light.
[WARNING: The next bit is speculative and I haven't taken a bio class since high school] As best I can gather about chlorophyll from Wikipedia and other semi-reputable sites, chlorophyll a is found in just about anything that photosynthesizes, BUT chlorophyll b is found only in plants and green algae. And apparently, land plants are largely descended from green algae (which are aquatic, but typically found around the surface and the shoreline). Now take a look at the chlorophyll absorption spectra again. The chlorophyll b spectrum is sort of squished in more towards the middle (towards 550 nm maybe?) than chlorophyll a. In fact, on the graph where I have drawn lines on the solar spectrum at the chlorophyll peaks, we see that chlorophyll b just barely gets up to that plateau region. This suggests to me that chlorophyll a was working just fine for the early aquatic plants, but once they reached land and got out of all that water, it became an advantage to utilize light closer to the peak solar output. Thus, plants that had chlorophyll b in addition to a had a slight advantage over their b-less brethren. Or so I shall continue to shamelessly speculate (and, apparently, alliterate). Anyway, I thought that was kind of cool, but if I have made some horrible error or mangled some biology, please let me know!
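As a sanity check on the blackbody claim at the start of this post, Wien's displacement law gives the peak of the solar spectrum directly. The 5800 K surface temperature is the usual round figure, assumed here for the estimate.

```python
# Wien's displacement law: lambda_max = b / T
b = 2.898e-3          # Wien's displacement constant, m K
T = 5800.0            # approximate solar surface temperature, K

lam_max = b / T
print(f"peak wavelength ~ {lam_max * 1e9:.0f} nm")  # ~500 nm, blue-green
```

So the blackbody peak sits right around the blue-green boundary, which is close enough to "green" for the puzzle in this post to bite.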

Flying back

Those of us originating on the right side of the Atlantic Ocean are familiar with a little quirk of international flights: the flights home are shorter. Specifically, going from Tel Aviv to New York takes about one hour longer than going the other way around. This is an oddity, and the very first explanation that comes to mind is the rotation of the Earth. After all, our naive image of a plane going up in the air might be something a little like a rock being thrown up from a moving cart, and we would imagine the plane to pick up some relative speed by not rotating as fast as the Earth. Is this a factor in the plane's movement? This gives us a perfect chance to use the Earth Units we introduced a few weeks ago. Specifically, I'll use the Earth meter (equal to the radius of the Earth, which I'll dub e-m) and the Earth second (one day, e-s). First we want to figure out the velocity of the airplane compared to the ground. When it is grounded, the plane and the Earth's surface both have an angular velocity of 2π 1/e-s; they do one revolution per day. This means the plane's linear velocity is 2π e-m/e-s, and its angular velocity once it's airborne is $$2 \pi \cdot \frac{(1\; \rm{m_\oplus})}{(1\; \rm{m_\oplus}+A)} \;\rm{s_\oplus^{-1}},$$ where A is the altitude. That's the one number I'm going to pull out of thin air here; that being the thin air of the cabin where they always announce that we have attained a cruising altitude of 30,000 feet. In real people units, that's about 9,000 meters, or 9 kilometers - round it up to 10.
Going back to the Earth day post, 1 e-m is about 6380 km, so the angular velocity of the airplane is about 0.9984 (2π) 1/e-s, and relative to the ground it is $$0.0016 \cdot 2 \pi \;\rm{s_\oplus^{-1}}.$$ So, over a journey of about 0.5 e-s, the overall distance traveled due to this effect comes to about $$0.0008 \cdot 2\pi \;\rm{m_\oplus}.$$ Tel Aviv and New York are both in the mid-northern hemisphere and seven time zones apart, so a first-order estimate of the distance between them would be about $$\frac{7}{24}\cdot 2\pi \;\rm{m_\oplus} \approx 0.29\cdot 2\pi \;\rm{m_\oplus}.$$ Overall, it looks like this effect is negligible. Indeed, anyone who gives the matter a second thought will notice that the planes should go faster when traveling westwards, as the Earth spins eastwards toward the rising sun. Anyone who looks even further into the matter finds that eastbound and westbound planes simply take different routes across the Atlantic, leaving us with a rather more mundane and less exciting explanation. Still, I won't complain if it makes my flight any shorter. Now, if you'll excuse me, I have some beaches to catch up with.
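This kind of arithmetic is easy to botch in one's head, so here is the same estimate as a few lines of Python, using the same numbers as in the text (R = 6380 km, cruising altitude rounded up to 10 km, a half-day flight, seven time zones):

```python
R = 6380.0                    # Earth radius in km (1 e-m)
A = 10.0                      # cruising altitude in km, rounded up

omega = R / (R + A)           # airborne angular velocity, in units of 2*pi / e-s
drift = (1 - omega) * 0.5     # lag accumulated over a 0.5 e-s flight, in 2*pi * e-m
trip = 7.0 / 24.0             # NY-Tel Aviv distance estimate, in 2*pi * e-m

print(f"angular velocity ratio: {omega:.4f}")         # ~0.9984
print(f"lag as fraction of trip: {drift / trip:.4f}")  # well under 1%
```

The rotation "effect" shaves off a fraction of a percent of the trip at best, nowhere near the observed hour out of roughly eleven.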

Fishy Calculation Followup & New Contest

So, some of you may remember that I attempted to calculate how much the oceans would drop if you took out all of the fish in an earlier post. Well, the results came in a while ago, but I forgot to mention that I lost the contest. I was about two orders of magnitude off from the winning answer. In light of my failure, I'm going to try again in the newest contest. This time the question is a bit stranger:

How many buff hamsters would it take to completely power a mansion?

I encourage all of The Virtuosi readers to enter as well; it only takes a minute to come up with some number. Good luck, one and all.

Esoteric Physics I - The Hall Effect

What we usually do here at The Virtuosi is take an interesting problem and work out the physical principles behind what we're seeing, or pose a question and try to answer it. Now, I'm a big fan of this kind of thing, which is why I've done so much of it. But I worry that it might give a slightly skewed view of physics. Sure, physics explains things. That's why we do it. But not everything in physics is laser guns and solar sails. There are a lot of interesting physics phenomena that the general public will never hear about, because they're just too, well, esoteric. What I'm going to do is occasionally talk about such effects and, for some of them, give you applications of these strange effects you might see on a day-to-day basis. Today I'm going to examine the Hall effect. The Hall effect is simple, as these things go, once we understand the pieces. The first piece is that magnetic fields deflect moving electrically charged particles. I don't think I can give you a good simple reason for this; you're just going to have to trust me (for those interested, I'd argue that the relativistic transformation of a magnetic field is an electric field, and that will certainly deflect an electrically charged particle). This is a piece of the Lorentz force. The next piece we need to know is that oppositely charged particles attract. So a positively charged particle attracts a negatively charged particle. Knowing those two things, we can detail the Hall effect. Take the slab pictured above. We run an electric current through it. Conventionally we take current as moving positively charged particles. There is a magnetic field into the screen. This deflects the positive charges up the screen, as shown, with some upward force. Over some time, we will accumulate positive charges at the top. Because there is no net charge in our slab, this must leave a region of negative charge at the bottom.
These regions of charge attract each other, and cancel out the force from the magnetic field. This charge separation results in a voltage difference between the two sides of the slab, which is what we actually measure.

The Hall effect has some nifty physical consequences. I mentioned that conventionally we take current to be positive particles moving. A microscopic picture of our conductors tells us that, in general, electrons are what actually flow in an electric current. Now, it turns out that our magnetic field will deflect electrons moving opposite our current direction (negative charge moving backwards is the same as positive charge moving forwards) to the same side that our hypothetical positive particles were deflected to. This generates a charge differential with negative and positive charges on the opposite sides of the slab (shown below), which means the voltage is the negative of what we would have measured above! So we expect the measured Hall voltage to have a particular sign, which we can predict. That sign corresponds to negative particles (electrons) being the moving charge carriers in a substance.

It turns out that there are some substances (some semiconductors) where the sign of the Hall voltage is opposite what we expect from electrons. This means that in those substances the current is being carried by positive particles! I won't explain what that means here (I may address it in a later post), but I hope you can see why it would be fascinating. We expected to have electrons moving, and it turns out that something else is really doing the moving. The Hall effect is an experimental result that helped suggest a whole new way of thinking about conduction in materials.

Beyond being very interesting physics, there are some applications of this effect. It is an easy way to create a magnetic field sensor. Take a slab of material, run a current through it, and measure the voltage on the sides. Where do we use magnetic field sensors?
Well, they sometimes show up as a way to tell whether something is open or closed. Put a magnet in your lid, and a Hall effect sensor in the lip the lid rests on. When it is closed, you'll measure a voltage; when it is open, you won't. A little imagination, and you can see how this would be useful for all kinds of switches: hit the switch, move your magnet, and change your voltage. According to Wikipedia, Hall effect switches are used in things as diverse as paintball guns and go-kart speed controls. They can also be used to measure speed or acceleration in a rotating system: attach a magnet to the rotating object, put a sensor at a fixed location, and measure how the voltage in your sensor changes as the object sweeps past. There are many more applications, but this is just to give you a taste of how a seemingly esoteric physics concept may show up in your everyday life. Physics is not just the interesting problems we often work out on this blog; it's everywhere, in many different guises.
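For the numerically inclined, here's a little sketch of the sign argument (my own illustration, not part of the original post). It uses the standard Hall-voltage relation $V_H = I B / (n q t)$, where n is the carrier density, q the carrier charge, and t the slab thickness; every number below is made up purely to show that flipping the carrier sign flips the measured voltage.

```python
# Sign of the Hall voltage for different charge carriers.
# V_H = I * B / (n * q * t): same current, same field, but the sign of q
# (electrons vs. positive carriers) flips the sign of the measured voltage.

E_CHARGE = 1.6e-19  # elementary charge, C

def hall_voltage(current, b_field, n_carriers, q_carrier, thickness):
    """Hall voltage across the slab; its sign tracks the carrier charge."""
    return current * b_field / (n_carriers * q_carrier * thickness)

# Hypothetical slab: 1 A current, 1 T field, 1 mm thick, 10^28 carriers/m^3
v_electrons = hall_voltage(1.0, 1.0, 1e28, -E_CHARGE, 1e-3)
v_holes     = hall_voltage(1.0, 1.0, 1e28, +E_CHARGE, 1e-3)

print(f"electron carriers: V_H = {v_electrons:+.2e} V")
print(f"positive carriers: V_H = {v_holes:+.2e} V")
```

Equal magnitudes, opposite signs: exactly the experimental handle described above.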

Physics of Baseball: Batting

Summer is upon us, and that means that we here at The Virtuosi have started talking about baseball. In fact, Corky and I did some simple calculations that illuminate just how impressive batting in baseball is. We were interested in just how hard it is to hit a pitch with the bat. So we modeled hitting the ball with a rather simple approximation: a robot swinging a cylindrical bat horizontally, with some rotational speed, at a random height. The question then becomes: if the robot chooses a random height and a random time to swing, what are the chances that it gets a hit?

Spatial Resolution

So the first thing to consider is how much of the strike zone the bat takes up. In order to be a strike, the ball needs to be over home plate, which is 17 inches wide, and between the knees and the logo on the batter's jersey. Estimating this height as 0.7 m (28 inches or so), we have the area of the strike zone $$A_S = (17") \times (0.7 \text{ m}) = 0.3 \text{ m}^2$$ When you swing, how much of this area does the bat take up? Well, treat the bat as a cylinder with a diameter of 10 cm, and assume it sweeps the full width of the strike zone. Then the area that the bat takes up is $$A_B = (10\text{ cm}) \times (17" ) = 0.043 \text{ m}^2$$ So the fractional area that the bat covers during our idealized swing is $$\frac{A_B}{A_S} \approx 14\%$$ So already, if our robot is randomly guessing where inside the strike zone to place the bat, and assuming the pitch is a strike to begin with, it will be able to bunt successfully about 14% of the time.
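As a quick sanity check, the strike-zone arithmetic can be done in a few lines of Python (a sketch I've added; the numbers are the same estimates used above, with unit conversions so the mixed inches and meters don't trip us up):

```python
INCH = 0.0254  # meters per inch

plate_width = 17 * INCH   # home plate, m
zone_height = 0.7         # knees-to-logo estimate, m
bat_diameter = 0.10       # bat treated as a cylinder, m

strike_zone_area = plate_width * zone_height          # ~0.30 m^2
bat_swept_area = bat_diameter * plate_width           # ~0.043 m^2
spatial_fraction = bat_swept_area / strike_zone_area  # ~0.14

print(f"strike zone: {strike_zone_area:.3f} m^2")
print(f"bat sweep:   {bat_swept_area:.3f} m^2")
print(f"fraction:    {spatial_fraction:.0%}")
```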

Time Resolution

But getting a hit on a swing is different than getting a bunt. Not only do you have to have your bat at the right height, but you also need to time the swing correctly. Let's first look at how much time we are dealing with here. Most major league pitchers throw the ball at about 90 mph. The pitcher's mound is 60.5 feet from home plate. This means that the pitch is in the air for $$t = \frac{ 60.5 \text{ ft} }{ 90 \text{ mph} } \approx \frac{1}{2} \text{ second}$$ i.e. from the time the pitcher releases the ball to the time it crosses home plate is only about half a second. Compare this with human reaction times: my driver's ed course told me that human reaction times are typically a third of a second or so. Baseball happens fast! Alright, but we were interested in how well you have to time your swing. Successfully hitting the ball means that you've made contact with the ball such that it lands somewhere in the field, i.e. you've got about a 90 degree window of bat angles at the moment you make contact. How does this translate to time? We would need to know how fast you swing.
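The flight-time estimate above, spelled out (my own sketch, using the same 90 mph and 60.5 ft figures):

```python
MPH = 0.44704   # meters per second per mph
FOOT = 0.3048   # meters per foot

pitch_speed = 90 * MPH        # ~40.2 m/s
mound_distance = 60.5 * FOOT  # ~18.4 m

flight_time = mound_distance / pitch_speed  # time ball is in the air
print(f"pitch flight time: {flight_time:.2f} s")
```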

Estimating the speed of a swing

I don't know how fast you can swing a baseball bat, but I can estimate it. I know that if you land your swing just right, you have a pretty good shot at a home run. Fields are typically 300 feet long. So I can ask: if I launch a projectile at a 45 degree angle, how fast does it need to be going in order to make it 300 feet? We can solve this projectile problem if we remember some of our introductory physics. We decouple the horizontal and vertical motions of the ball. The ball travels 300 feet horizontally, so we know $$v_x t = 300 \text{ ft}$$ where t is the time the ball is in the air. Similarly, we know that gravity makes the ball fall, so as far as the vertical motion is concerned, in half the total flight time the vertical velocity must go from its initial value to zero, i.e. $$g \frac{t}{2} = v_y$$ where g is the acceleration due to gravity. Furthermore, I'm assuming that I am launching this projectile at a 45 degree angle, for which I know from trig that $$v_x = v_y = \frac{v}{\sqrt 2}$$ So I can stick these equations into one another and solve for the velocity needed to send the ball 300 feet: $$\frac{v^2}{ g} = 300 \text{ ft} \implies v^2 = (9.8 \text{ m/s}^2) \times (300 \text{ ft})$$ $$v \approx 30 \text{ m/s} \sim 67 \text{ mph}$$ So it looks like the ball needs to leave the bat going about 70 mph in order to clear the park. (This of course neglects air resistance, which ought to be important for baseballs.) Great, that tells us how fast the ball needs to be going when it leaves the bat, but how fast was the bat going in order to get the ball going that fast? Well, let's take the worst case and assume that the baseball-bat interaction is inelastic; I reckon that if I throw a baseball at about 100 mph towards a wooden wall, it doesn't bounce a whole lot. In that case, the bat needs to take the ball from approaching at 90 mph to leaving at 70 mph or so, i.e. the place where the ball hits the bat needs to be moving at about 160 mph.
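The projectile estimate, in code (my own sketch of the no-air-resistance, 45-degree-launch argument above; the final line follows the post's crude worst-case picture of simply adding the pitch and exit speeds):

```python
import math

FOOT = 0.3048   # meters per foot
MPH = 0.44704   # meters per second per mph
g = 9.8         # m/s^2

range_needed = 300 * FOOT  # ~91 m to clear the fence

# For a 45-degree launch with no air resistance, range = v^2 / g
v_launch = math.sqrt(g * range_needed)  # ~30 m/s
v_launch_mph = v_launch / MPH           # ~67 mph

# Crude worst-case picture from the post: bat point speed ~ 90 mph + ~70 mph
bat_speed_mph = 90 + v_launch_mph       # ~160 mph
print(f"ball off the bat: {v_launch_mph:.0f} mph; bat point: {bat_speed_mph:.0f} mph")
```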
That seems fast, but when you think about it, if a pitcher can pitch a ball at 90 mph, that means their hand is moving at 90 mph during the last bits of the pitch, so you expect that a batter can move their hands about that fast, and we have the added advantage of the bat being a lever.

Coming back to timing

So, we have an estimate for how fast the bat is going. Knowing this, and estimating the distance between the sweet spot and the pivot point of the bat as about 0.75 m, we can obtain the angular frequency of the bat: $$v = \omega r$$ $$\omega = \frac{ 160 \text{ mph} }{ 0.75 \text{ m} } \approx 100 \text{ rad/s}$$ So, if we need 90 degree resolution in our swing angle to hit the ball into the park, and the swing near the end is happening at about 100 rad/s, we need to get the timing down to within $$t = \frac{ \pi/2 \text{ rad}}{ 100 \text{ rad/s} } \sim 15 \text{ ms}$$ So we need to time our swing to within about 15 milliseconds to land the hit. If our robot randomly swung at some point during the duration of the pitch, it would only hit with probability $$\frac{\text{time to land hit} }{ \text{time of pitch}} = \frac{ 15 \text{ ms}}{500 \text{ ms}} \sim 3\%$$ or only 3% of the time. If we take both the spatial placement and the timing of the swing as independent, the probability that our robot gets a hit would be about $$p = 0.03 \times 0.14 = 0.004 = 0.4 \%$$ so our robot would only get a hit 1 time out of 250 tries. Suddenly hitting looks pretty impressive.
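Putting the timing argument into a few lines (my own sketch; the bat speed, lever arm, half-second pitch, and 14% spatial fraction are the estimates from the preceding sections):

```python
import math

MPH = 0.44704           # meters per second per mph
bat_speed = 160 * MPH   # ~71.5 m/s at the sweet spot
lever_arm = 0.75        # m, pivot to sweet spot

omega = bat_speed / lever_arm       # ~95 rad/s
hit_window = (math.pi / 2) / omega  # 90-degree arc -> ~16 ms

pitch_duration = 0.5                # s, from the earlier estimate
p_time = hit_window / pitch_duration  # ~3%
p_space = 0.14                        # spatial fraction from earlier
p_hit = p_time * p_space              # ~0.4%

print(f"timing window: {hit_window*1e3:.0f} ms, hit probability: {p_hit:.2%}")
```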

Experiment

Saying that the robot swings at some random time during the duration of the pitch is a pretty bad model. So I decided to do a little experiment to see how good people are at judging times on half-second scales. I had some friends of mine start a stopwatch and, while looking, try to stop it as close as they could to the half-second mark. Collecting their deviations, I obtained a standard deviation of about 41 milliseconds, which suggests a window of about 100 milliseconds over which people can reliably judge half-second intervals. Now, I have to admit, this wasn't done in any very rigorous way (I had them do it while walking to dinner), but it ought to give a rough estimate of the relevant time scale for landing a hit. So instead of comparing our 15 millisecond 'get a hit' window to the full half-second pitch duration, let's compare it to the 100 millisecond 'humans trying to judge when to hit' window. This gives us a temporal resolution of about $$p = \frac{ 15}{100} = 15 \%$$ So now we obtain an overall hit probability of $$p = 0.15 \times 0.14 = 0.021 = 2 \%$$ So it seems like a poor baseball player, more or less randomly swinging, should have a batting average of about 2%. Compare this with typical baseball batting averages of .250 or so, denoting a 25% probability of a hit. I think this is a much better estimate of how much better than random baseball players can do with training. So it looks like practice can improve your ability to do a task by about an order of magnitude. Either way, baseball is pretty darn impressive when you think about it.
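The improved estimate can be sketched the same way (my own code; the 41 ms standard deviation is the stopwatch measurement above, and the ±1 sigma window is my reading of the post's "about 100 milliseconds"):

```python
# Replace "random swing during the whole pitch" with the measured human
# judgment window: a ~41 ms standard deviation suggests a ~2*sigma ~ 100 ms
# window for reliably judging half-second intervals.
sigma = 0.041                  # s, measured standard deviation
judgment_window = 0.100        # s, the post's round figure (~2 * sigma)

hit_window = 0.015             # s, the 'get a hit' window from the timing section
p_time = hit_window / judgment_window  # ~15%
p_space = 0.14                         # spatial fraction from earlier
p_hit = p_time * p_space               # ~2%

print(f"hit probability with human timing: {p_hit:.1%}")
```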

Cell Phone Brain Damage: Part Deux

I thought I'd take another look at cell phone damage, coming at it from a different direction than my colleague. Mostly I just want to consider the energy of the radiation that cell phones produce, and compare that with the other relevant energy scales for molecules.

Cell Phone Energy

So, let's start with cell phones. I looked at my cell phone battery, and it looks like it is rated for 1 A at 3.5 V. So when it is running at its peak it should put out about 3.5 W of power in electromagnetic waves (assuming it reaches its rating and all of that energy is fully converted into radiation). But what form does this energy take? Well, it's electromagnetic radiation, so it's in the form of a bunch of photons. In order to determine the energy of each photon, we need to know the frequency of the radiation. Surfing around a bit on Wikipedia, I discovered that most cell phones operate in the 33 cm radio band, or somewhere between about 800-900 MHz. How much energy does each ~1 GHz photon have? We know that the energy of a photon is $$E = h \nu \sim 7 \times 10^{-25} \text{ J} \sim 4 \times 10^{-6} \text{ eV}$$ It will be convenient to know the photon energy in eV. 1 eV is the energy of a single electron accelerated through a potential of 1 volt, or $$1 \text{ eV} = (1 \text{ electron charge} ) \times ( 1 \text{ volt} ) = 1.6 \times 10^{-19} \text{ J}$$ So my cell phone is sending out signals using a bunch of photons, each of which has an energy of about 4 micro-eV. Let's consider the energy scales involved in most molecular processes and compare those scales with this energy.
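Checking the photon energy takes two lines (my own sketch, using the standard values of Planck's constant and the electron charge and the ~1 GHz frequency from above):

```python
H = 6.626e-34   # Planck's constant, J s
EV = 1.602e-19  # joules per eV

freq = 1e9  # Hz, ~1 GHz cell phone band
photon_energy_J = H * freq           # ~7e-25 J
photon_energy_eV = photon_energy_J / EV  # ~4e-6 eV

print(f"cell phone photon: {photon_energy_J:.1e} J = {photon_energy_eV:.1e} eV")
```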

Molecules

Great. We have a number. But what does it mean? A number in physics means little without some context. Let's consider what photons can do to molecules. I can think of three different processes: first, a photon could knock out an electron (i.e. ionize the molecule); second, the photon could make the molecule vibrate or wiggle; third, the photon could make the molecule rotate. Let's see if we can estimate the energies for these three different types of processes. First, let's collect some of the information we know about atoms and molecules so that we can continue our estimations. I know that most atoms are about an angstrom big, or $10^{-10}$ meters, and I know the charge of an electron and proton.

Ionization

What are typical molecular ionization energies? Well, we can try to estimate one. What's the energy stored in an electron and a proton about an angstrom apart? Remembering some of our electrostatics, we have $$E = \frac{ k q_1 q_2 }{ r} \sim (9 \times 10^9) \frac{ (1.6 \times 10^{-19} \text{ C} )^2 }{ (1\ \AA)} \sim 14 \text{ eV}$$ which is pretty darn close to the ionization energy of hydrogen at 13.6 eV. So I will claim that, since all atoms are about the same size, typical ionization energies across the board are about 10 eV, in order of magnitude.
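The same electrostatic estimate in code (my own sketch; standard constants, 1 angstrom separation as above):

```python
K = 8.99e9         # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19  # elementary charge, C
EV = 1.602e-19        # joules per eV
ANGSTROM = 1e-10      # m

# Electrostatic energy of an electron and proton one angstrom apart
energy_J = K * E_CHARGE**2 / ANGSTROM
energy_eV = energy_J / EV  # ~14 eV, vs. hydrogen's measured 13.6 eV

print(f"electrostatic energy at 1 angstrom: {energy_eV:.1f} eV")
```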

Vibration

What about making our molecules vibrate? What are the energies of molecular bonds? They ought to be quite similar to the ionization energies of molecules, but as we know they are a tad weaker. Bond energies for most molecules are on the order of a few eV; the oxygen-hydrogen bond in water, for example, has a binding energy of 5.2 eV. What does that have to do with vibration? Well, if we picture a material as a bunch of atoms all stuck together with springs, we can estimate the spring constant. Assuming a typical binding energy of 3 eV or so, and a typical atomic separation of 1 angstrom or so, we can estimate the spring constant from $$U = \frac{1}{2} k x^2$$ $$3 \text{ eV } \approx \frac{1}{2} k ( 1\ \AA )^2$$ $$k \approx 100 \text{ N/m}$$ Having estimated the spring constant, we can estimate how much energy there is in a quantum of atomic vibration, i.e. figure out the corresponding frequency from $$\omega = \sqrt{ k / m }$$ and quantize it in units of $\hbar$. We discover that a quantum of atomic vibration typically has an energy on the order of $$U = \hbar \omega = \hbar \sqrt{ \frac{ k }{ m } }$$ $$= (6.6 \times 10^{-16} \text{ eV s}) \sqrt{ \frac{ 100 \text{ N/m} }{ 2 \times \text{ mass of a proton } } } \sim 0.1 \text{ eV}$$ So molecular vibration energies are about a tenth of an electron volt.
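The vibration estimate, step by step (my own sketch; the 3 eV binding energy, 1 angstrom separation, and two-proton mass are the assumptions made above):

```python
import math

EV = 1.602e-19       # joules per eV
HBAR = 1.055e-34     # reduced Planck constant, J s
M_PROTON = 1.67e-27  # kg
ANGSTROM = 1e-10     # m

# Spring constant from U = (1/2) k x^2 with U ~ 3 eV at x ~ 1 angstrom
k = 2 * 3 * EV / ANGSTROM**2  # ~100 N/m

# Quantum of vibration: hbar * sqrt(k/m), with m ~ two proton masses
omega = math.sqrt(k / (2 * M_PROTON))
quantum_eV = HBAR * omega / EV  # ~0.1 eV

print(f"k ~ {k:.0f} N/m, vibrational quantum ~ {quantum_eV:.2f} eV")
```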

Rotation

I can also make molecules rotate. What is the energy of the lowest rotational mode of a molecule? Well, Bohr taught us that angular momentum is quantized in units of $h$, Planck's constant. Imagine a diatomic molecule, with two atoms separated by an angstrom. The energy of a rotating object can be written $$E = \frac{ L^2 }{2 I }$$ in analogy to the energy of a moving object, $$E = \frac{ p^2 }{2m}$$ where I is the moment of inertia of the molecule. We estimate $$I = 2 m r^2 \sim 2 \times (\text{mass of proton}) \times ( 1 \text{ \AA} )^2 \sim 3 \times 10^{-47} \text{ kg m}^2$$ So we can estimate the rotational energy for a small molecule: $$E = \frac{ h^2 }{ 2 m r^2 } \sim 80 \text{ meV}$$ This is for a small molecule, and the energy will only go down for larger molecules, as $I$ increases. So I will call typical rotational energies 1 meV for medium-sized molecules.
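The rotation estimate in code (my own sketch of the order-of-magnitude formula used above, with proton mass and a 1 angstrom separation):

```python
H = 6.626e-34        # Planck's constant, J s
EV = 1.602e-19       # joules per eV
M_PROTON = 1.67e-27  # kg
ANGSTROM = 1e-10     # m

# Lowest rotational energy scale, E ~ h^2 / (2 m r^2), as in the post's estimate
E_rot_J = H**2 / (2 * M_PROTON * ANGSTROM**2)
E_rot_meV = E_rot_J / EV * 1e3  # ~80 meV for a light diatomic

print(f"rotational energy scale: {E_rot_meV:.0f} meV")
```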

Heat

Another relevant energy scale when we are talking about brains is the energy due to the fact that our brain is rather warm. Body temperature is about 98 degrees Fahrenheit, or 37 degrees Celsius, or 310 kelvin. Statistical mechanics tells us that temperature sets an average energy for a system; in fact, the equipartition theorem tells us that when a body is in thermal equilibrium, every quadratic degree of freedom has $$\frac{1}{2} k_B T$$ of energy in it. For our brain that means $$E = \frac{1}{2} k_B T = 2 \times 10^{-21} \text{ J } = 13 \text{ meV}$$ i.e. just the fact that our brains are hot means that every degree of freedom in our brain already has 13 millielectron volts associated with it. Comparing to our results above, this is comparable to the rotational energies of molecules, but a tad less than their vibrational energies, which means we should expect most of the molecules in our head to already be rotating.
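And the thermal-energy scale (my own sketch, standard Boltzmann constant, 310 K body temperature as above):

```python
K_B = 1.381e-23  # Boltzmann constant, J/K
EV = 1.602e-19   # joules per eV

body_temp = 310.0  # K
thermal_energy_J = 0.5 * K_B * body_temp      # (1/2) k_B T per degree of freedom
thermal_energy_meV = thermal_energy_J / EV * 1e3  # ~13 meV

print(f"(1/2) k_B T at body temperature: {thermal_energy_meV:.0f} meV")
```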

Results

Going through some very rough calculations, we discovered that there are three obvious ways to get molecules hot and bothered: you can ionize them, make them wiggle, or make them rotate. There are typical energy scales for these things: ionization energies are about 10 eV, vibrational energies are about 0.1 eV, and rotational energies are about 0.001 eV. In addition, our brain already has about 13 meV of energy in every one of its degrees of freedom. And what was the energy of each cell phone photon? 0.000004 eV. Notice that this energy is a couple hundred times weaker than typical rotational energies, and some 3,300 times less than the natural thermal energy already in our brains. Now you can begin to understand why most physicists are not too worried about the effects of cell phones. The radiation from cell phones is just not on the kind of energy scale that affects molecules in ways that could potentially harm us. So I'm not too worried.

Solar Sails III (because two just isn't enough)

One thing that I've wanted to quantify since reading Intelligent Life in the Universe, an outstanding book by Carl Sagan and I.S. Shklovskii, is the idea of exogenesis. Exogenesis is the hypothesis that life formed elsewhere in the universe and was somehow transferred to earth in the form of some small seed or spore. Now since E.T. E. coli presumably do not have little tiny jetpacks or other means of active transport, they would need to traverse the cosmos in some passive way. One such way would be solar sailing. Way back in Solar Sails I, we derived equations describing the maximum speeds and time-of-travel for various distances for a given solar sail. Each of these equations was a function of the surface mass density of the sail, which is just the amount of mass per unit cross-sectional area. All we need to know is the cross-sectional area and mass of a given object and we can apply these equations to just about anything! Assume we have some spherical blob with the density of water (1 g/cm^3). The effective sigma of this blob would just be the mass divided by the cross-sectional area. In other words, $$\sigma = \frac{m}{Area} = \frac{\frac{4}{3}\pi r^3 \rho}{\pi r^2} = \frac{4}{3}\rho r .$$ Rearranging to get r in terms of the other variables, we have $$r = \frac{3\sigma}{4\rho} .$$ Plugging in our density of 1 g/cm^3 and a suitable sigma ($10^{-4}$ g/cm^2), we get $$r \le 0.75 \times 10^{-4} \text{ cm} = 0.75\ \mu\text{m} .$$ Check out this fun site to see what kind of critters can fit in this blob. From the previous post, we saw that for a sigma of $10^{-4}$ g/cm^2, our sail would get to the nearest stars on a timescale of order 10,000 years. Thus if our blob has a radius of less than about a micron, it could spread to hundreds of stars in around 10,000-100,000 years. Even if it would take millions of years, that would be almost no time at all on the cosmic scale. Just based on this calculation it all seems fairly feasible.
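The blob-radius arithmetic in code (my own sketch; sigma and rho are the values assumed above, in the same cgs units):

```python
sigma = 1e-4  # g/cm^2, surface mass density from the earlier solar sail posts
rho = 1.0     # g/cm^3, density of water

# From sigma = (4/3) * rho * r, solve for the radius: r = 3 sigma / (4 rho)
r_cm = 3 * sigma / (4 * rho)  # cm
r_micron = r_cm * 1e4         # 1 cm = 10^4 microns

print(f"maximum blob radius: {r_micron:.2f} microns")
```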
In making these calculations I have neglected several important aspects of the problem. First, in no way have I actually calculated any sort of probability of this happening. Additionally, I would have to see how likely it is for some blob to reach planetary escape velocity (presumably just by riding the tail of the Boltzmann distribution). Finally, and perhaps most important of all, I have not given any sort of motivation or mechanism by which some living body could survive hundreds of thousands of years in the vacuum of space with constant radiation exposure. But I have heard that some forms of life are totally extreme (especially if they drink this). Even though such a process seems possible, it certainly doesn't seem like the easiest way to get life on earth. I prefer the much more satisfying "amino acids + lightning + magic = life" model. But it does offer some interesting possibilities. Suppose we as people think that people are super awesome and therefore people should be everywhere. We do some bio magic and put whatever DNA we want into viruses, which we then pack into as many micron-sized spheres as we can make. We then point them at the nearest stars and have them disperse. What would the probability be that they land somewhere habitable? Are there any ethical considerations in doing this? Is it a galactic faux pas?